As AI solidifies its role in cybersecurity strategy, calls to regulate its use are growing louder – but policy action is still playing catch-up.
According to Darktrace’s State of AI Cybersecurity 2025 report, while 95% of surveyed organisations are either discussing or planning AI safety policies, only 45% have actually formalised one.
This gap between discussion and execution presents a critical vulnerability at a time when threat actors are scaling AI-powered attacks with increasing speed and precision.
Interestingly, the discrepancy is most pronounced at the extremes: the smallest and largest organisations are the least likely to have AI governance frameworks in place. For smaller firms, the challenge often lies in limited resources and in-house expertise.
For large enterprises, the obstacle is complexity – multiple departments, legacy systems, and competing priorities all slow cohesive policy development. In both cases, the absence of clear, enforceable AI usage rules can leave security teams exposed to data misuse, opaque model behaviour, and compliance risks.
What is also striking is how policy inertia persists despite a strong consensus on its necessity. The report shows widespread agreement that AI should be deployed in a way that enhances human oversight and keeps sensitive data in-house.
Yet in practice, few have implemented controls to guarantee these conditions.
This disconnect suggests a need for more prescriptive industry standards and practical tooling – automated auditing, explainable AI layers, and robust access governance – that lower the barriers to policy implementation.
Another dimension fuelling the policy lag is the competitive pressure to adopt AI capabilities quickly. As organisations race to integrate generative and predictive models into threat detection and response, there is a tendency to prioritise functionality over governance.
This “move fast” mindset, common in high-stakes tech environments, can delay the establishment of structured safeguards until after deployment.
Compounding this, many cybersecurity vendors are still developing best practices themselves, so the guidance available from suppliers is often fragmented or overly general, further complicating internal policy efforts.
Regulatory uncertainty also plays a significant role. While frameworks such as the EU’s AI Act and the US NIST AI Risk Management Framework are beginning to take shape, they have not yet reached the level of specificity or global alignment required to drive widespread adoption.
Organisations are left in a holding pattern, aware of looming compliance demands but unsure how to proactively align without overcommitting to standards that may shift.
This legal ambiguity discourages decisive action and highlights the need for industry coalitions or sector-specific benchmarks to bridge the policy vacuum with workable interim solutions.
If organisations are serious about building AI-driven security, they must also get serious about policy. Governance isn’t just about compliance; it’s the foundation for responsible innovation.