Cybercrimeology
Public Interest Technology: Making Sense of Security in an AI World
Episode Summary
How should we think about security when many of the most important problems are not purely technical? Bruce Schneier joins us to discuss the challenge of communicating complex security ideas to non-technical audiences, the lasting relevance of security theatre, and why psychology, economics, and governance often matter as much as code. We also explore artificial intelligence, manipulated media, cybercrime, regulation, and the growing need for people who can bridge the worlds of engineering, policy, and society.
Episode Notes
- The conversation begins with Schneier's path into teaching public policy, despite never having planned a conventional academic career, and why translating technical subjects for non-technical students became rewarding work.
- He explains the importance of analogies and memorable language when communicating difficult concepts, noting that terms such as “security theatre” can help audiences quickly grasp complex ideas before deeper nuance is added.
- The original meaning of security theatre was intentionally critical, though he later came to recognize that symbolic security measures can sometimes provide reassurance and psychological value even when they do little to reduce objective risk.
- Effective cybersecurity often depends on economics, incentives, usability, and human behaviour, not simply technical controls. He reflects on the development of fields examining both the economics and psychology of security.
- The discussion examines AI-generated content, manipulated images, and why many current concerns about authenticity are extensions of older problems that existed throughout the history of photography and media.
- He warns that conversational AI systems can create parasocial relationships in which users relate to corporations as though they were trusted companions or advisers.
- On harmful technologies, he argues that regulation remains one of society’s most effective tools, comparing AI governance to safety regulation in aviation, pharmaceuticals, and food systems.
- He emphasizes the need for more “public interest technologists” capable of translating between engineering, policy, and social science perspectives.
- AI is likely to strengthen both cyber offenders and defenders. He suggests phishing and scams may scale through automation, while defenders will also use AI to detect fraud and patch vulnerabilities.
- The conversation also explores whether AI will deepen inequality through unequal access to computing power, with Schneier offering a more optimistic view: increasingly efficient models may broaden access rather than restrict it.
About our guest:
Bruce Schneier
https://www.schneier.com/
https://www.hks.harvard.edu/faculty/bruce-schneier
https://munkschool.utoronto.ca/
Papers or resources mentioned in this episode:
Schneier, B. (2003). Beyond fear: Thinking sensibly about security in an uncertain world. Copernicus Books.
Schneier, B. (2018). Click here to kill everybody: Security and survival in a hyper-connected world. W. W. Norton & Company.
Schneier, B. (2023). A hacker's mind: How the powerful bend society's rules, and how to bend them back. W. W. Norton & Company.
Schneier, B. (2025). Rewiring democracy: AI, governance, and the future of politics. MIT Press.
Other:
Public Interest Technology University Network
https://pitun.org/
Bruce Schneier Essays and Writing
https://www.schneier.com/essays/
The Cottingley Fairies
https://en.wikipedia.org/wiki/Cottingley_Fairies
Bicentennial Man (Film)
https://en.wikipedia.org/wiki/Bicentennial_Man_(film)
The Fifth Element (Film)
https://en.wikipedia.org/wiki/The_Fifth_Element
Thank you to the CICC (https://www.cicc-iccc.org) for enabling this interview.
Mental note, next time don’t bring water in a plastic bottle.