Offensive Security with Machine Learning: Applications and a Blockchain Case Study
Offensive security adopts an attacker's mindset and techniques to strengthen defenses. The field has evolved to incorporate more sophisticated tooling, greater automation, and, most recently, large language models (LLMs). While early AI tools lacked the rigor security professionals demand, recent autonomous agents now outperform human teams in CTF competitions.
In this talk, we explore how recent advances in AI can be leveraged across the offensive workflow.
First, we examine techniques for enabling adversarial use of LLMs. Second, we survey recent advances in offensive applications of AI throughout the cyber kill chain. To ground these ideas, we conclude with a case study on automating exploit generation for blockchain smart contracts. Smart contracts, which underpin the decentralized finance ecosystem and collectively govern billions of dollars, are particularly exposed because they are immutable and open by design. We present our early work on agentic AI that assists smart contract auditors within their existing vulnerability detection workflow.
This talk aims to show how you, as a security practitioner, can begin leveraging AI methods to scale your existing workflows, while also grounding your understanding of the evolving capabilities adversaries have at their disposal.