NIST released the preliminary draft of their Cyber AI Profile (IR 8596) in December 2025. If AI is anywhere in your environment (and at this point it probably is), it’s worth understanding what it says.
The Cyber AI Profile is not a new framework. It’s an overlay on CSF 2.0 that applies the framework most security teams are already working with to three specific problems: securing AI systems, using AI to enhance your defences, and defending against adversaries who are using AI to attack you. It sits alongside the CSF, AI RMF, and ISO 27001 rather than replacing them.
The challenge is that the document is dense and not easy to navigate. I’ve been working through it and built an Excel workbook to help make the recommendations more accessible. More on that at the end of this post.
The Profile maps recommendations across all six CSF 2.0 functions, each tagged with a priority of General, High, Moderate, or Foundational.
| Function | General | High | Moderate | Foundational | Total |
|---|---|---|---|---|---|
| GOVERN | 24 | 15 | 24 | 4 | 67 |
| IDENTIFY | 21 | 17 | 22 | 7 | 67 |
| PROTECT | 22 | 22 | 24 | 2 | 70 |
| DETECT | 11 | 7 | 14 | 1 | 33 |
| RESPOND | 13 | 7 | 12 | 5 | 37 |
| RECOVER | 8 | 1 | 6 | 3 | 18 |
| Total | 99 | 69 | 102 | 22 | 292 |
PROTECT has the most High-priority recommendations (22), followed by IDENTIFY (17) and GOVERN (15).
If you’re trying to work out where to focus first, PROTECT is the answer.
A few things come up repeatedly across the document regardless of which function or focus area you’re looking at.
AI accelerates the impact of gaps that already exist. Configuration drift, incomplete logging, weak change management. None of this is new, but AI means the consequences of ignoring these gaps arrive faster and at greater scale than before.
The fundamentals still matter most. Most of the High-priority recommendations aren’t exotic AI-specific controls. They’re established security practices that need to be deliberately extended to cover AI assets and AI-enabled threats.
Human-in-the-loop is a consistent requirement. The Profile doesn’t position AI as a replacement for human judgement. AI handles volume and speed; humans retain accountability for decisions and escalation (see the sketch after these themes).
Risk tolerance needs more frequent revisiting. The AI threat landscape is changing faster than most organisations update their risk appetite statements. The Profile treats risk reassessment as an ongoing activity rather than an annual one.
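To make the human-in-the-loop point concrete, here’s a minimal sketch of what that division of labour can look like in an alert triage workflow. The thresholds, field names, and actions are illustrative assumptions, not anything the Profile prescribes:

```python
# Minimal human-in-the-loop triage gate: the model handles volume,
# but anything high-impact or ambiguous queues for a human decision.
# Thresholds, fields, and actions are illustrative, not from the Profile.

from dataclasses import dataclass


@dataclass
class Alert:
    source: str
    ai_confidence: float  # model's confidence the alert is malicious, 0.0-1.0
    impact: str           # "low", "medium", or "high"


def triage(alert: Alert) -> str:
    # Low impact and the model is confident it's benign: safe to automate.
    if alert.impact == "low" and alert.ai_confidence < 0.2:
        return "auto-close"
    # High confidence and limited blast radius: contain automatically, log for review.
    if alert.ai_confidence >= 0.9 and alert.impact != "high":
        return "auto-contain"
    # Everything else: a human retains accountability for the decision.
    return "escalate-to-analyst"


print(triage(Alert(source="edr", ai_confidence=0.95, impact="high")))  # escalate-to-analyst
```

The point of the pattern isn’t the specific thresholds. It’s that the automated paths are the explicitly bounded ones, and the default is escalation to a person.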
These themes apply across all three focus areas:
Secure is about treating your AI stack as an asset class with its own attack surfaces, not just an add-on to your existing environment.
Defend is about incorporating AI to mature and enhance an organisation’s existing defences and controls. This is where the opportunities are, but only if the implementation is done carefully.
Thwart covers the actions organisations should take to account for the rising use of AI by threat actors. This is the least mature area for most organisations, and arguably where the most risk sits right now.
One of the Profile’s recommendations sounds straightforward: review your risk appetite and update it to account for AI-enabled threats. In practice it surfaces a contradiction most organisations haven’t resolved.
Security programmes typically operate with a low tolerance for cybersecurity incidents. But those same organisations have given their business units a high tolerance for AI experimentation: deploying tools quickly, often ahead of any security review. Those two positions are in direct conflict, and most organisations have simply let both coexist without acknowledging the gap.
Reconciling them practically means three things:

- Make the experimentation tolerance explicit in your risk appetite statement instead of leaving it implied.
- Put a proportionate security review in front of AI deployments so experimentation doesn’t bypass your controls entirely.
- Revisit the statement on a cadence that matches how quickly the AI threat landscape is moving, not an annual cycle.
The point isn’t that experimentation is wrong. It’s that your risk tolerance statement needs to reflect what the organisation is actually doing.
Most teams won’t have a purpose-built AI monitoring pipeline in place yet. The good news is you don’t need one to start closing the gap. With what you likely already have, you can:

- Mine your web proxy or DNS logs for traffic to known AI services to see what’s actually in use (a starter sketch follows below).
- Tag AI tools, models, and integrations in your existing asset inventory as you find them.
- Point your existing SIEM and DLP rules at sanctioned AI endpoints so sensitive data flows are at least logged.
None of this is a complete solution. But it moves you from having no visibility into AI interactions to having some, which is where most organisations need to start.
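As a starting point for the first item, here’s a minimal sketch that counts requests to known AI services in a proxy log export. The domain list, file name, and column names are assumptions; adjust them to match whatever your proxy actually produces:

```python
# Rough first pass at AI visibility: count who is talking to known AI
# services, based on a CSV export from your web proxy. The domain list
# and the 'user'/'host' column names are assumptions about your export.

import csv
from collections import Counter

# Illustrative and deliberately incomplete; maintain your own list.
AI_DOMAINS = {"openai.com", "anthropic.com", "gemini.google.com", "copilot.microsoft.com"}


def ai_traffic_summary(log_path: str) -> Counter:
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            # Match the domain itself or any subdomain of it.
            if any(host == d or host.endswith("." + d) for d in AI_DOMAINS):
                hits[(row["user"], host)] += 1
    return hits


for (user, host), count in ai_traffic_summary("proxy_export.csv").most_common(10):
    print(f"{user:<20} {host:<35} {count}")
```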
One thing worth calling out is that not all the recommendations carry the same weight. In the Thwart focus area especially, some entries read more like observations than actionable guidance.
Statements like “new risk tolerance recommendations may be needed with emerging AI-enabled threats” and “the scale and speed of AI-enabled attacks may challenge cybersecurity detection and monitoring systems” are directionally correct but don’t tell you what good looks like or how you’d know if you’d done enough.
The document openly acknowledges it’s a preliminary draft and invites the community to help fill the gaps, which is a useful reminder that even prominent cyber security bodies like NIST don’t have all the answers yet. The Secure and Defend focus areas are considerably more developed than Thwart, and if you’re using this document to build a programme, that’s where you’ll get the most traction right now.
The final version will be worth revisiting when it drops. In the meantime, we’re already working with customers to think through AI risks and build practical approaches to get ahead of them.
The document has 292 recommendations across six functions and three focus areas.
I built an Excel workbook that maps all of the recommendations by function, priority, and focus area so you can filter down to what matters for your context and use it as a starting point for a gap assessment against your existing controls.
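If you’d rather slice it programmatically than filter in Excel, something like this works too. It assumes pandas (plus openpyxl for .xlsx files); the file name and column names below are assumptions, so rename them to match the workbook:

```python
# Filter the Profile's recommendations down to a working shortlist.
# Requires pandas and openpyxl. The file name and column names are
# assumptions; rename them to match the workbook you're using.

import pandas as pd

recs = pd.read_excel("cyber_ai_profile_workbook.xlsx")

# Example: the High-priority PROTECT recommendations in the Secure focus
# area, i.e. the slice this post suggests starting with.
shortlist = recs[
    (recs["Function"] == "PROTECT")
    & (recs["Priority"] == "High")
    & (recs["Focus Area"] == "Secure")
]

print(f"{len(shortlist)} recommendations to assess first")
print(shortlist.to_string(index=False))
```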
If you work through it and want to talk about what you find, feel free to reach out.