First published: 01 May 2026
Last updated: 01 May 2026

Content written for

Large organisations & infrastructure
Government

Agentic artificial intelligence (AI) systems offer powerful automation capabilities, but their autonomy, interconnected architecture and reliance on large language models introduce security, governance and accountability risks that differ from those associated with traditional software or generative AI.

Together with our international partners, we have developed this guidance to support government, critical infrastructure and industry stakeholders in understanding the key security challenges and risks posed by agentic AI. It provides practical guidance to help organisations that design, develop, deploy and operate agentic AI systems to make informed risk assessments and apply appropriate mitigations.

Organisations should adopt agentic AI systems carefully, deploying them incrementally and limiting them to low-risk tasks. Agentic AI deployments need to enforce strict privilege controls, continuous monitoring, strong identity management, human oversight and alignment with existing cyber security frameworks.
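To illustrate how several of these controls can combine in practice, the following is a minimal sketch, not an implementation from the publication: a hypothetical policy object that allowlists the tools an agent may invoke (privilege control), requires explicit human sign-off for higher-risk actions (human oversight), and records every decision (continuous monitoring). All class, tool and field names are invented for this example.

```python
# Minimal illustrative sketch (all names hypothetical): gating an agent's
# tool calls behind an allowlist and a human-approval step, with an audit
# trail of every authorisation decision.

from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    allowed_tools: set            # tools this agent may invoke at all
    needs_approval: set           # subset requiring explicit human sign-off
    audit_log: list = field(default_factory=list)

    def authorise(self, tool: str, human_approved: bool = False) -> bool:
        """Return True only if the call satisfies the policy."""
        if tool not in self.allowed_tools:
            decision = "denied: not allowlisted"
        elif tool in self.needs_approval and not human_approved:
            decision = "denied: awaiting human approval"
        else:
            decision = "allowed"
        self.audit_log.append((tool, decision))   # record for monitoring
        return decision == "allowed"

policy = ToolPolicy(
    allowed_tools={"read_ticket", "send_email"},
    needs_approval={"send_email"},
)
policy.authorise("read_ticket")                     # allowed: low-risk, allowlisted
policy.authorise("send_email")                      # denied until a human approves
policy.authorise("send_email", human_approved=True) # allowed with oversight
policy.authorise("delete_database")                 # denied: never allowlisted
```

In a real deployment these checks would sit in the agent's tool-dispatch layer and be backed by the organisation's existing identity and logging infrastructure, rather than an in-memory object.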

Learn more through the Careful adoption of agentic AI services publication.
