

CSPM Tools Promise Remediation. Here's What They Actually Deliver.

PolicyCortex Team | February 25, 2026 | 7 min read
Tags: CSPM, autonomous remediation, CMMC continuous monitoring, cloud security posture management, compliance automation

Every CSPM vendor deck includes slides about remediation. The language is confident: "automated remediation," "one-click fixes," "guided workflows." The demos are clean. The integrations look seamless.

Then you deploy the tool in a defense contractor environment with CMMC continuous monitoring requirements and discover the gap between marketing language and actual capability.

This post breaks down the remediation spectrum honestly — from alert-only to fully autonomous — so you can evaluate what a CSPM tool actually delivers before you commit to it.

The Remediation Spectrum

Not all remediation is the same. There's a meaningful difference between a tool that tells you something is broken, a tool that tells you how to fix it, and a tool that fixes it. These distinctions matter enormously in a CMMC context, where continuous monitoring isn't optional and remediation timelines are contractually significant.

Tier 1: Alert-Only

The baseline. The tool scans your environment, identifies misconfigurations, and generates findings. It does nothing else.

Every CSPM tool on the market does this. Wiz does it well. Prisma Cloud does it well. Orca does it well. Alert quality varies — some tools produce more signal, fewer false positives, better contextual risk scoring — but the fundamental output is the same: a list of things that are wrong.

The workflow this creates: finding → ticket → assign → prioritize → queue → schedule → implement → verify → close. In organizations that measure remediation in sprints and have security finding backlogs in the hundreds, alert-only tools frequently deliver a median remediation time exceeding two weeks.

For CMMC Level 2 contractors, the relevant question is whether that timeline is acceptable for continuous monitoring. In most cases, it isn't.

Tier 2: Guided Remediation

A meaningful step up. The tool not only identifies the finding, it provides remediation guidance — typically a step-by-step instruction set, the CLI command to run, or the Terraform snippet to apply.

This reduces the cognitive load on the engineer who eventually picks up the ticket. It shortens implementation time, particularly for less experienced cloud engineers who might otherwise need to research the fix. It doesn't change the fundamental workflow bottleneck: a human still has to pick up the ticket, understand the context, and implement the fix.

Many enterprise CSPM tools are in this tier. They've built extensive remediation knowledge bases and integrated them into finding views. It's genuinely useful. It doesn't change the median remediation timeline by much, because the constraint isn't knowledge of how to fix something — it's prioritization, scheduling, and human bandwidth.

Tier 3: Runbook-Based Remediation

This is where vendors start using the word "automated" seriously, and where the definitions start to diverge.

Runbook-based remediation means the tool can execute predefined remediation scripts against specific finding types. You configure a runbook: "when you find an S3 bucket with public read access, run this Lambda function to set the bucket ACL to private." The tool executes the runbook automatically — no human click required.

This is real automation. For the specific finding types covered by configured runbooks, it can reduce remediation time from days to minutes. Several CSPM vendors offer this capability, either natively or through integrations with AWS Security Hub, Azure Policy, or third-party SOAR platforms.
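The runbook model can be sketched as a simple dispatcher that maps finding types to pre-written remediation functions. This is a minimal illustration, not any vendor's actual API; the cloud call is stubbed so the sketch is self-contained.

```python
# Minimal sketch of runbook-based remediation: finding types are mapped to
# pre-written remediation functions. All names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Finding:
    finding_type: str
    resource_id: str

def fix_public_s3_read(finding: Finding) -> str:
    # A real runbook would call the cloud API here (e.g. a Lambda that
    # applies a public-access block); stubbed for illustration.
    return f"blocked public read on {finding.resource_id}"

RUNBOOKS: Dict[str, Callable[[Finding], str]] = {
    "s3-public-read": fix_public_s3_read,
}

def remediate(finding: Finding) -> str:
    runbook = RUNBOOKS.get(finding.finding_type)
    if runbook is None:
        # The coverage gap: anything without a configured runbook
        # falls through to the manual ticket queue.
        return "queued-for-manual-review"
    return runbook(finding)

print(remediate(Finding("s3-public-read", "cui-docs-bucket")))
print(remediate(Finding("iam-wildcard-policy", "role/admin")))
```

Note how the structure itself produces the coverage-gap behavior described below: any finding type absent from the mapping is queued, no matter how urgent.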

The limitations are significant:

Coverage gaps. Runbooks cover the finding types you explicitly configured. Novel findings, edge cases, and configuration states that don't match the runbook's expected conditions fall through to the manual queue. In a typical defense contractor environment with 47 active findings across multiple control families, the coverage gap can be substantial.

Context blindness. Runbooks execute based on the presence of a finding, not on understanding of the environment. A runbook that blocks all public S3 access may be exactly right for a CUI storage bucket and completely wrong for a bucket legitimately serving public static assets. Without contextual reasoning, runbook automation creates a meaningful risk of breaking production workloads.

Maintenance burden. Runbooks are code. They need to be written, tested, version-controlled, updated when AWS changes APIs, and audited when your compliance requirements change. For most security teams, runbook maintenance becomes a project in itself.

No approval layer. Most runbook implementations execute immediately upon finding detection, with no human review. This is fast. It's also a significant risk in environments where a misconfigured runbook could take down a production service.
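The context-blindness limitation can be partially mitigated by having runbooks consult resource metadata before acting. A minimal sketch, assuming resources carry a data-classification tag (the tag name and its values are assumptions, not a standard):

```python
# Sketch of a tag-based guard against context blindness: the runbook checks
# a resource's declared purpose before locking it down. The
# "data-classification" tag and its values are illustrative assumptions.
def should_block_public_access(tags: dict) -> bool:
    # Buckets explicitly tagged as public static-asset hosts are exempt;
    # everything else, including untagged resources, fails closed.
    return tags.get("data-classification") != "public-static-assets"

assert should_block_public_access({"data-classification": "cui"})
assert should_block_public_access({})  # untagged: lock it down
assert not should_block_public_access({"data-classification": "public-static-assets"})
```

This only works if tagging discipline is real, which shifts the maintenance burden rather than eliminating it.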

Tier 4: Truly Autonomous Remediation

This is where marketing language and reality diverge most sharply.

True autonomous remediation means the system can reason about a compliance finding in context, determine the appropriate remediation action, validate that the action is safe to take, execute it, and verify the result — across the full range of finding types in your environment, not just the subset covered by pre-written runbooks.

Very few tools genuinely deliver this. Most vendors who use the word "autonomous" are describing sophisticated runbook automation.

The difference matters because the failure modes are different. A runbook-based system fails by not having coverage. A truly autonomous system has coverage but may reason incorrectly about context or safety. The safety architecture that governs autonomous decisions is critical — both for operational safety and for compliance audit purposes.

What "Automated Remediation" Usually Means in Vendor Pitches

When a CSPM vendor says their product offers "automated remediation," you need to ask clarifying questions before accepting that claim. Based on how the term is used across the market, it usually means one of three things:

1. They mean the notification is automated. The tool sends a Slack message or creates a Jira ticket automatically when a finding is detected. The remediation itself is manual. This is common and useful, but it's not remediation automation.

2. They mean they have a runbook library. The vendor ships a library of pre-built remediation scripts covering common finding types. You can configure these to run automatically or on-demand. Coverage is real but partial, and the context problem exists.

3. They mean they have a one-click remediation button. In the finding view, there's a button that says "Fix this." The button executes a predetermined action. It's still human-initiated (a click), still not autonomous, and still context-unaware.

None of these is false advertising exactly — they're all forms of automation. But they deliver very different outcomes in practice, and none of them is what the word "autonomous" implies.

The Practical Difference in CMMC Continuous Monitoring

CMMC Level 2 requires continuous monitoring — specifically, the ongoing assessment of implemented controls to ensure they remain effective. NIST 800-171A and the associated guidance make clear that continuous monitoring is not a periodic scan; it's an operational posture.

The median time to remediate a cloud misconfiguration through manual ticketing workflows is 18 days. For organizations implementing runbook-based automation for their most common finding types, that number drops to 2-5 days for covered findings, while findings outside runbook coverage still take 18+ days.

For truly autonomous remediation operating with appropriate safety controls, remediation time drops to under 4 minutes for the vast majority of finding types.

That difference is the difference between:

  • A compliance posture that's perpetually catching up with its own findings queue
  • A compliance posture that maintains a near-zero active findings state continuously

For CMMC assessors evaluating continuous monitoring implementation, the evidence looks very different. A contractor presenting 47 open findings that have been open for an average of 12 days is presenting evidence of inadequate continuous monitoring. A contractor presenting an automated finding history showing findings opened and closed within minutes is presenting evidence of a mature operational program.

CMMC-Specific Considerations

Beyond remediation speed, CMMC continuous monitoring has specific requirements that most CSPM tools weren't designed to address:

Evidence generation. CMMC assessors need to see documented evidence of compliance, not just current system state. What was the configuration on this date? When was this finding opened? When was it remediated? What action was taken? Most CSPM tools are built for real-time visibility, not historical compliance evidence. The audit trail for autonomous remediation actions is critical.

CUI boundary enforcement. CMMC applies specifically to systems processing CUI. A CSPM tool that monitors everything equally doesn't help you enforce different compliance standards within and outside your CUI boundary. The remediation system needs to understand scoping.

Human oversight requirements. For certain high-impact remediation actions, CMMC and internal governance policies require human review before execution. An autonomous remediation system operating in a CMMC environment needs a configurable approval layer, not a binary choice between full automation and manual process.
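One way to implement such a configurable approval layer is a risk-tier routing table: low-impact actions execute immediately, high-impact ones are held for human review, and anything unrecognized fails closed. The action names, tiers, and routing table below are illustrative, not a CMMC-mandated scheme.

```python
# Sketch of a configurable approval layer: each proposed action routes to
# auto-execution or human review by risk tier. All values are illustrative.
RISK_TIERS = {
    "s3-block-public-access": "low",
    "sg-revoke-open-ingress": "medium",
    "iam-role-delete": "high",
}

APPROVAL_POLICY = {
    "low": "auto-execute",
    "medium": "auto-execute-with-notification",
    "high": "require-human-approval",
}

def route(action_type: str) -> str:
    # Unknown action types fail closed into human review.
    tier = RISK_TIERS.get(action_type, "high")
    return APPROVAL_POLICY[tier]

print(route("s3-block-public-access"))
print(route("iam-role-delete"))
print(route("never-seen-before-action"))
```

Because the routing table is configuration, governance teams can tighten or relax it without touching the remediation logic itself.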

What to Look for When Evaluating CSPM Remediation Claims

When a vendor presents remediation capabilities, here are the questions that will separate genuine capability from marketing language:

Coverage breadth. What percentage of your NIST 800-171 finding types does the automated remediation actually cover? Ask for the specific list, not an estimate. Gaps matter.

Context awareness. How does the system handle remediation decisions that could break production workloads? Does it have any mechanism for understanding the purpose of a resource before modifying it? What is the rollback mechanism when a remediation creates a problem?

Approval workflow. Can you configure remediation to require human approval for specific action types or risk levels? Is the approval workflow integrated into your existing tools (Slack, Teams, email), or does it require a portal login?

Audit trail. What records does the system create when it takes a remediation action? Are those records in a format suitable for CMMC assessment evidence? Are they tamper-evident?

Failure handling. What happens when a remediation action fails partway through? What happens when the system encounters a finding type it hasn't seen before?

Write access model. How does the system obtain the permissions required to modify your cloud environment? What is the permission scope? How is that access governed and audited?
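On the audit-trail question, one common approach to tamper evidence is a hash chain over remediation records: each entry commits to the hash of the previous one, so any retroactive edit breaks the chain. A minimal sketch with illustrative field names:

```python
# Sketch of a tamper-evident audit trail: each record embeds a hash of the
# previous record, so retroactive edits are detectable. Field names are
# illustrative, not an assessment-evidence standard.
import hashlib
import json

def append_record(log: list, action: str, resource: str, timestamp: str) -> None:
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"action": action, "resource": resource,
            "timestamp": timestamp, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify_chain(log: list) -> bool:
    prev = "0" * 64
    for rec in log:
        if rec["prev_hash"] != prev:
            return False
        body = {k: rec[k] for k in ("action", "resource", "timestamp", "prev_hash")}
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

log = []
append_record(log, "s3-block-public-access", "cui-docs-bucket", "2026-02-25T10:00:00Z")
append_record(log, "sg-revoke-open-ingress", "sg-0abc", "2026-02-25T10:03:00Z")
assert verify_chain(log)
log[0]["resource"] = "tampered"  # a retroactive edit...
assert not verify_chain(log)     # ...breaks the chain and is detected
```

A production system would anchor the chain head in external storage (or use a write-once log service) so the whole log cannot simply be regenerated, but the principle is the same.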

The Write Access Problem

The question of write access is the one most vendors avoid discussing in early sales conversations, and it's the most important operational and security question in autonomous cloud remediation.

To fix a misconfiguration, the tool needs permission to change it. That means write access to your cloud environment. In a CUI-handling environment, granting broad write access to a third-party tool is a significant security decision.

The approaches to this problem vary:

Broad IAM roles. Simple to configure, creates significant risk. If the remediation tool is compromised or misconfigured, it has the keys to your environment.

Just-in-time access. The tool requests specific permissions for specific actions at remediation time. More secure, more complex to implement, introduces latency into the remediation workflow.

Policy-gated architecture. The tool's actions are constrained by policy enforcement that validates each proposed action against a defined policy set before execution — independent of the AI or automation layer making the decision.

The last approach — sometimes called a "safety sandwich" architecture — is the only one that provides both operational autonomy and a reliable safety guarantee. Without an independent policy gate, autonomous remediation in a high-stakes environment is a significant risk, regardless of how good the underlying AI reasoning is.
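The policy-gated pattern can be sketched as a deny-by-default check that sits between the automation layer and the cloud: every proposed action must pass an independently defined policy set before it is allowed to run. The policy set and action names below are illustrative assumptions.

```python
# Sketch of a policy-gated ("safety sandwich") execution path: every action
# proposed by the automation layer passes an independent, deny-by-default
# policy check before it may run. The policy set is illustrative.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    action_type: str
    resource_id: str

# Defined and maintained outside the AI/automation layer that proposes actions.
ALLOWED_ACTIONS = {"s3-block-public-access", "enable-bucket-encryption"}

def policy_gate(action: ProposedAction) -> bool:
    # Deny by default: only explicitly permitted action types pass.
    return action.action_type in ALLOWED_ACTIONS

def execute(action: ProposedAction) -> str:
    if not policy_gate(action):
        return "blocked-by-policy"
    # A real implementation would perform the cloud change here; stubbed.
    return f"executed {action.action_type} on {action.resource_id}"

print(execute(ProposedAction("s3-block-public-access", "cui-docs-bucket")))
print(execute(ProposedAction("iam-role-delete", "role/admin")))
```

The key property is that the gate does not trust the reasoning layer: even a confidently wrong AI decision cannot execute an action the policy set never permitted.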

The Honest Assessment

Most CSPM tools were built to provide visibility. Remediation was added later, either as a differentiator or because customers asked for it. The architecture of those tools — optimized for scanning, detection, and alerting — is often not well-suited to autonomous write operations.

A tool built from the ground up for autonomous remediation looks different: its policy engine is foundational, not bolted on; its write access model is purpose-built for safety; its audit trail is designed for evidence, not just logging; and its approval workflows are first-class features, not afterthoughts.

When you're evaluating CSPM tools for a CMMC environment, don't let remediation marketing obscure the underlying question: can this tool actually close findings in my environment, safely, at the speed my continuous monitoring requirements demand?

The answer is usually no. Knowing why — and what "yes" actually requires — puts you in a position to make a decision that fits your compliance reality, not your vendor's demo.

Ready to automate your cloud governance?

See how PolicyCortex replaces your disconnected compliance tools with one autonomous platform.
