AI and AppSec: How to avoid insecure AI-generated code
AI is fundamentally changing how software is written. Developers increasingly rely on code produced by generative AI tools, and application security experts are scrambling to understand and mitigate the resulting risks. To counter this emerging threat, the developer community must take proactive measures and foster a culture of continuous learning, which is essential to staying ahead in the ongoing battle for application security.

This webcast explores the intersection of AI and application security, helping you understand the new security challenges while emphasizing proactive measures to identify and address vulnerabilities. It will highlight the importance of:
- Integrating security from the outset of development
- Leveraging AI for efficient vulnerability detection
- Fostering collaboration between developers, security experts, and AI researchers
- Emphasizing ethical considerations and responsible AI practices to build resilient, trustworthy software systems in the face of evolving cyber threats
Speakers
Chris Wood,
Principal Application Security SME, Immersive Labs
Bill Brenner,
VP - Content Strategy, CyberRisk Alliance
Joye Purser,
PhD, CISSP, Field CISO, Veritas