Offensive Security with Machine Learning: Applications and a Blockchain Case Study
Offensive security involves adopting an attacker's mindset and techniques to strengthen defenses. The field, rooted in penetration testing, has evolved alongside emerging technologies, incorporating more sophisticated tooling, increased automation, and, most recently, techniques built on large language models (LLMs). While early AI techniques lacked the rigor demanded by security professionals, recent developments have produced autonomous agents that place above human teams in capture-the-flag (CTF) competitions.
In this talk, we explore how recent advancements in AI can be leveraged in the offensive workflow.
We will examine current approaches in different stages of the cyber kill chain and how we can modify guardrailed models for offensive use cases through methods such as adversarial models and abliteration.
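As a preview of the abliteration technique mentioned above, the core idea can be sketched in a few lines: estimate a "refusal direction" from the difference of mean hidden activations on refused versus answered prompts, then project that direction out of a weight matrix so the layer can no longer write along it. The sketch below is a toy with random data and illustrative shapes, not a real model's weights.

```python
import numpy as np

# Toy sketch of abliteration (illustrative data, not a real model).
rng = np.random.default_rng(0)
d = 8  # hidden size (toy)

# Hypothetical activations collected at one layer for two prompt sets.
acts_refused = rng.normal(size=(32, d)) + 2.0   # prompts the model refuses
acts_complied = rng.normal(size=(32, d))        # prompts it answers

# Refusal direction: normalized difference of mean activations.
r = acts_refused.mean(axis=0) - acts_complied.mean(axis=0)
r /= np.linalg.norm(r)

# Ablate: remove the component along r from a weight matrix W, so the
# layer's output has no component in the refusal direction.
W = rng.normal(size=(d, d))
W_abl = W - np.outer(r, r) @ W

# For any input x, the ablated output is orthogonal to r.
x = rng.normal(size=d)
print(abs(np.dot(r, W_abl @ x)) < 1e-9)
```

In practice this projection is applied across the relevant layers of a guardrailed open-weight model; the talk covers how that plays out at scale.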
To ground these ideas, we present a case study on automating exploit generation for blockchain security practitioners. Smart contracts, which underpin the decentralized finance ecosystem and collectively govern billions of dollars, are particularly vulnerable due to their immutable, on-chain nature. We demonstrate early results showing how autonomous agents can generate exploits, with the goal of augmenting the capabilities and findings of security auditors.
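To illustrate the class of bugs such agents hunt for, consider reentrancy, the flaw behind several major DeFi exploits. The toy below simulates it in plain Python rather than Solidity: a vault that makes its external call before updating balances, letting a caller re-enter `withdraw` and drain more than it deposited. All names and amounts here are illustrative.

```python
# Toy model of a reentrancy-vulnerable vault (not real Solidity).
class Vault:
    def __init__(self):
        self.balances = {}
        self.funds = 0

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.funds += amount

    def withdraw(self, who, callback):
        amount = self.balances.get(who, 0)
        if amount > 0:
            self.funds -= amount      # external call happens first...
            callback()                # ...so the caller can re-enter here
            self.balances[who] = 0    # ...and the balance is zeroed too late


class Attacker:
    def __init__(self, vault):
        self.vault = vault
        self.stolen = 0
        self.reentries = 0

    def drain(self):
        self.vault.deposit("attacker", 10)
        self.vault.withdraw("attacker", self.on_receive)

    def on_receive(self):
        self.stolen += 10
        if self.reentries < 2:        # re-enter before the state update
            self.reentries += 1
            self.vault.withdraw("attacker", self.on_receive)


vault = Vault()
vault.deposit("victim", 100)
attacker = Attacker(vault)
attacker.drain()
print(attacker.stolen)  # more than the 10 deposited
```

An exploit-generating agent works at a similar level of abstraction: propose a candidate interaction sequence, execute it against a forked or simulated chain state, and check whether an invariant (here, "you cannot withdraw more than you deposited") is violated.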
This talk aims to show how you, as a security practitioner, can begin leveraging AI methods to scale your existing workflows while also grounding your understanding of t