Practitioners’ Guide to Managing AI Security

The race to integrate AI into internal operations and to bring AI-based products and services to market is moving faster than almost anyone could have imagined. Some security leaders have expressed concern that, in the excitement over AI’s potential, critical security and assurance considerations are being overlooked.

Recognizing the disconnect between AI innovation and AI security, Global Resilience Federation convened a working group and asked KPMG to facilitate in-depth discussions among AI and security practitioners from more than 20 leading companies, think tanks, academic institutions, and industry organizations.

The output of this working group is the Practitioners’ Guide to Managing AI Security. The guide provides insights and considerations to strengthen collaboration between data scientists and AI security teams across five tactical areas identified by the working group: Securing AI, Risk & Compliance, Policy & Governance, AI Bill of Materials, and Trust & Ethics.