The AI-focused Unprompted security conference took place in San Francisco in March 2026.
The conference was oversubscribed, with 700+ attendees in person and 1500+ virtual attendees.
They have a NotebookLM with all content online: https://notebooklm.google.com/notebook/78ee3710-1741-488d-af06-159f518e9510
Here are some of the talks I found particularly interesting:
- Nicholas Carlini - Anthropic - https://nicholas.carlini.com/ - used Claude Code to find vulnerabilities in the Linux kernel, resulting in 22 CVEs. He mentioned that in his entire career he had never before found a CVE in the Linux kernel. He believes that Claude is better than him today (https://x.com/tqbf/status/2029252008415248454), and that Claude, which was unusable for security work a year ago, will be better than any human security researcher within a year.
- Paul McMillan/Ryan Lopopolo - OpenAI - using Codex to write secure-by-default code
- Heather Atkins/Four - Google - using AI to find vulnerabilities (the Big Sleep project)
- Nicolas Lidzborski - Google - AI security at scale in Gmail, Docs, etc.
- Dan Guido - Trail of Bits - transforming Trail of Bits into an AI-native company. Not really a security talk, but an amazing business talk on how they migrated the company to use AI pervasively.
- Daniel Miessler - building his personal AI. Again, not really about security, but amazingly thought-provoking.
Anthropic, Google, and OpenAI all gave the impression that they freely spend tokens to increase productivity, i.e. token abundance. That set their talks apart from others that focused on breaking LLMs or certain more detailed security aspects.