Agentic artificial intelligence (AI) systems offer powerful automation capabilities, but their autonomy, interconnected architecture and reliance on large language models introduce security, governance and accountability risks that differ from those associated with traditional software or generative AI.
Together with our international partners, we have developed this guidance to help government, critical infrastructure and industry stakeholders understand the key security challenges and risks posed by agentic AI. It offers practical advice to help organisations that design, develop, deploy and operate agentic AI systems make informed risk assessments and apply appropriate mitigations.
Organisations should adopt agentic AI systems carefully, deploying them incrementally and limiting them to low-risk tasks. Deployments should enforce strict privilege controls, continuous monitoring, strong identity management and human oversight, and align with existing cyber security frameworks.
Learn more in the Careful adoption of agentic AI services publication.