Web3 Leader Spotlight: Lucas Martin Calderon
This week, we caught up with Lucas Martin Calderon, Founder & CEO at Pentestify, which builds an automated post-deployment smart contract vulnerability detection and remediation solution. With over $40bn in total value locked in DeFi alone, it's vital that security sits at the very top of the agenda, and people like Lucas are leading the way. Dive in to hear more from him, and whilst you're here, register for our upcoming live Q&A with Lucas on the 23rd of August.
Can you share some real-world examples where AI and blockchain technologies have been combined to improve security in web3 applications?
Blockchain's transparency presents a vast playground for AI in security. At Pentestify, our tool, Neo, leverages this by using machine learning models to detect and prevent even subtle variations of common smart contract vulnerabilities that conventional static or dynamic tools miss. By analysing public on-chain data, it can detect issues such as reentrancy, timestamp dependence, and improper access controls with superior precision.
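To make the idea concrete, here is a minimal, purely illustrative sketch (not Pentestify's Neo, whose internals aren't described here): a contract's opcode-frequency "fingerprint" is compared against centroids of known-vulnerable and known-safe examples, a nearest-centroid stand-in for a trained ML classifier. The opcode traces and labels are invented for illustration.

```python
# Hypothetical sketch: nearest-centroid classification of opcode-frequency
# fingerprints, standing in for an ML vulnerability detector.
from collections import Counter

# Toy opcode traces; real features would come from EVM bytecode analysis.
VULNERABLE = [["CALL", "SSTORE", "CALL", "JUMP"], ["CALL", "CALL", "SSTORE"]]
SAFE = [["SSTORE", "CALL", "JUMP"], ["SSTORE", "JUMP", "JUMP"]]
OPCODES = ["CALL", "SSTORE", "JUMP"]

def features(trace):
    # Normalised opcode frequencies as a fixed-length vector.
    counts = Counter(trace)
    return [counts[op] / len(trace) for op in OPCODES]

def centroid(traces):
    vecs = [features(t) for t in traces]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def classify(trace):
    # Label the trace by its nearest class centroid (squared distance).
    f = features(trace)
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(f, c))
    return "vulnerable" if dist(centroid(VULNERABLE)) < dist(centroid(SAFE)) else "safe"

# An external-call-heavy trace lands nearest the vulnerable centroid.
print(classify(["CALL", "CALL", "SSTORE", "JUMP"]))  # → vulnerable
```

A production system would of course use far richer features (control-flow graphs, data-flow facts, transaction history) and a properly trained model, but the pipeline shape — feature extraction, then classification against learned patterns — is the same.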
How do you envision the collaboration between AI and blockchain technologies evolving in the next few years to further strengthen the security landscape in the web3 industry?
The future collaboration between AI and blockchain may revolve around encrypted data processing. This shift could involve AI models capable of training on, understanding, and inferring from encrypted data. By doing so, we could see more real-time security mechanisms, such as circuit breakers, and sophisticated protection layers that safeguard both the development process and live applications. It paves the way for advanced privacy-preserving analytics and fraud detection systems within the blockchain space.
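The "circuit breaker" mechanism mentioned above can be sketched in a few lines. This is an assumed, simplified illustration: a guard that trips once an anomaly score crosses a threshold and then blocks all further actions until operators intervene.

```python
# Hypothetical sketch of an on-chain-style circuit breaker: once tripped by a
# high anomaly score, it blocks every subsequent action.
class CircuitBreaker:
    def __init__(self, threshold: float):
        self.threshold = threshold
        self.tripped = False

    def check(self, anomaly_score: float) -> bool:
        # Returns True if the action may proceed; trips permanently otherwise.
        if anomaly_score >= self.threshold:
            self.tripped = True
        return not self.tripped

breaker = CircuitBreaker(threshold=0.9)
print(breaker.check(0.2))   # normal activity passes → True
print(breaker.check(0.95))  # anomalous activity trips the breaker → False
print(breaker.check(0.1))   # subsequent actions stay blocked → False
```

In the envisioned setting, the anomaly score would come from an AI model (ideally one able to operate on encrypted data), and the breaker would pause contract functions rather than a Python loop.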
With the advent of AI-powered cyberattacks, how can decentralised networks leverage AI-driven defences and anomaly detection techniques to protect against emerging threats?
AI's aptitude for anomaly detection in vast networks can be a robust defence against cyberattacks. For example, the recent Curve hack, caused by a vulnerability in the Vyper compiler, could have been averted by AI-driven defences. AI models trained on decentralised networks could quickly identify suspicious activities or vulnerability exploitation patterns, enabling immediate responses, like halting suspicious transactions or notifying security teams, thus closing the window of opportunity for potential attackers.
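As a toy illustration of the exploitation-pattern detection described above (an assumed example, not a description of any real deployment), a statistical baseline can flag transactions whose value deviates far from recent history — the kind of sudden pool drain seen in exploit scenarios:

```python
# Illustrative sketch: z-score anomaly flagging over transaction values, a
# simple stand-in for AI-driven exploitation-pattern detection.
from statistics import mean, stdev

def flag_anomalies(history, incoming, z_cutoff=3.0):
    # Flag incoming values more than z_cutoff standard deviations from the mean.
    mu, sigma = mean(history), stdev(history)
    return [tx for tx in incoming if abs(tx - mu) / sigma > z_cutoff]

# Typical pool withdrawals versus a sudden drain-sized transaction.
history = [10.0, 12.0, 9.0, 11.0, 10.5, 9.5]
print(flag_anomalies(history, [10.2, 500.0]))  # → [500.0]
```

A real defence would combine many signals (call patterns, gas usage, contract age, mempool context) and learned rather than fixed thresholds, but the response path is as described: flag, then halt or alert before the attacker's window closes.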
What are the potential risks and limitations of bringing AI and ML to smart contract security?
The integration of AI and ML in smart contract security isn't without challenges. During training, AI models can become targets themselves: knowledge of model weights may allow attackers to craft inputs that bypass AI-driven security controls, opening the door to adversarial attacks that evade detection. At Pentestify, we are exploring the potential of conducting model training, inference, and verification on-chain, using zero-knowledge proofs to maintain confidentiality and integrity and minimise such risks.
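A much-simplified sketch of the integrity half of that idea: committing to model weights with a hash lets anyone later detect tampering. (This is an illustrative stand-in only — real zero-knowledge proofs go further, proving properties of the weights or the inference without revealing them at all.)

```python
# Hypothetical sketch: a SHA-256 commitment to model weights for integrity
# verification. Unlike a zk proof, this hides nothing once weights are revealed.
import hashlib
import json

def commit(weights):
    # Deterministic serialisation, then a SHA-256 commitment.
    payload = json.dumps(weights, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

weights = {"layer1": [0.12, -0.98], "layer2": [1.5]}
commitment = commit(weights)

# Verification succeeds on the original weights, fails after tampering.
print(commit(weights) == commitment)                       # → True
tampered = {"layer1": [0.12, -0.98], "layer2": [1.499]}
print(commit(tampered) == commitment)                      # → False
```

Publishing such a commitment on-chain would at least guarantee that the model being queried is the model that was audited; the zero-knowledge machinery Pentestify is exploring would additionally keep the weights themselves confidential.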