Ignore Previous Instructions: Embracing AI Red Teaming

Track: Artificial Intelligence
Abstract
In this talk, we will trace the journey of Red Teaming from its origins to its transformation into AI Red Teaming, highlighting its pivotal role in shaping the future of Large Language Models (LLMs) and beyond. Drawing on my firsthand experience developing and deploying the largest generative red teaming platform to date, I will share insightful anecdotes and real-world examples. We will explore how adversarial red teaming fortifies AI applications at every layer—protecting platforms, businesses, and consumers. This includes safeguarding the external application interface, reinforcing LLM guardrails, and enhancing the security of the LLMs' internal algorithms. Join me as we uncover the critical importance of adversarial strategies in securing the AI landscape.
David Campbell
David Campbell is a seasoned technology leader with nearly 20 years of experience in Silicon Valley's startup ecosystem, now spearheading Responsible AI initiatives at Scale AI. As the Lead AI Risk Engineer, David has been pivotal in developing a cutting-edge AI Red Teaming platform that marries ethical AI practices with rigorous security evaluations. His work, recognized by the U.S. Congress and highlighted by the White House, underscores his commitment to a safer AI ecosystem. With a deep background in Security, Core Infrastructure, and Platform Engineering, David actively drives discussions and initiatives that integrate responsible AI principles into practical security frameworks, aiming to nurture robust, ethical AI applications across industries.