Cutting Assessment Prep Time from Months to Weeks
Author: Leonard Esere, Senior Cybersecurity Engineer, AeoliTech
Date: April 2026
Classification: Public
Abstract
A CMMC Level 2 assessment against 110 NIST SP 800-171 Rev. 2 requirements is, at its core, an evidence problem. C3PAOs must evaluate each control objective using three methods: examine (documents and configurations), interview (personnel), and test (live demonstrations and system outputs). For a typical Defense Industrial Base organization, manually assembling the evidence package for 110 controls — screenshots, configuration exports, audit logs, policy documents, interview scripts — consumes 10–16 weeks of engineering and compliance staff time, and still results in gaps that surface during assessment and trigger costly delays.
Automated evidence collection changes this calculus entirely. By treating evidence as a continuous data product — collected by machines, structured in a vault, and mapped to control objectives — organizations compress assessment prep from months to weeks. This whitepaper explains the traditional evidence collection problem, what assessors actually accept, and how to build an automated evidence pipeline using Azure Policy compliance exports, Microsoft Sentinel workbooks, the Microsoft Graph API, AWS Config reports, and Microsoft Defender for Cloud log aggregation. It describes the pipeline architecture — from scheduled collection through an evidence vault mapped to all 110 control objectives — and illustrates the pattern with a real-world example drawn from the LANL ATO evidence vault approach.
Table of Contents
1. The Traditional Evidence Collection Problem
2. What Assessors Actually Accept: The CMMC Evidence Triad
3. The Shift to Automation: Core Principles
4. Azure Policy Compliance Exports
5. Microsoft Sentinel Workbooks as Evidence
6. Microsoft Graph API Evidence Pulls
7. AWS Config Reports and Audit Manager
8. Defender for Cloud Log Aggregation
9. The Automated Evidence Pipeline
10. Evidence Vault Architecture: The LANL Model
11. Mapping to 110 Control Objectives
12. Conclusion
13. About the Author
14. References
1. The Traditional Evidence Collection Problem
Ask any compliance manager who has prepared for a CMMC assessment what the most painful part was, and they will describe the same experience: an endless cycle of emailing engineers for screenshots, chasing system administrators for configuration exports, scheduling and documenting interviews, and then discovering — two days before the C3PAO arrives — that the evidence for control 3.3.1 is three months stale and the audit logging configuration has changed since the screenshot was taken.
The anatomy of traditional evidence collection:
| Activity | Time Required | Risk |
|---|---|---|
| Identify all in-scope systems | 1–2 weeks | Scope gaps discovered late |
| Request configuration exports from system owners | 2–4 weeks | Inconsistent formats; missing items |
| Take and annotate screenshots | 2–3 weeks | Screenshots become stale immediately |
| Document interview scripts and conduct interviews | 2–3 weeks | Interviewees unavailable; answers inconsistent with evidence |
| Map evidence to 110 control objectives | 2–4 weeks | Gaps discovered only after mapping |
| Remediate gaps and re-collect | 2–4 weeks | Compresses timeline dangerously |
| Total | 11–20 weeks | High risk of incomplete package |
The root causes are structural. Evidence collection is treated as a project that happens before assessments, not as a continuous operational process. Evidence is stored in ad-hoc SharePoint folders, email attachments, and personal drives rather than a structured vault. Control objective mapping is done manually from memory and outdated documentation. And the evidence collected at week one is already wrong by week eight.
The 2025 CMMC rule (32 CFR Part 170) made this worse by formalizing the annual affirmation requirement: organizations must affirm each year that all 110 security requirements remain in place. Annual affirmation demands annual evidence — which demands a sustainable collection process, not a twice-per-decade sprint.
2. What Assessors Actually Accept: The CMMC Evidence Triad
Understanding what a C3PAO actually needs is the foundation of effective automated evidence collection. NIST SP 800-171A — the assessment procedures guide — defines three assessment methods that apply to each security requirement:
| Method | Definition | What It Looks Like |
|---|---|---|
| Examine | Reviewing specifications, mechanisms, and activities | Policy documents, configuration exports, audit logs, screenshots, system inventories, SSP sections |
| Interview | Discussing with relevant individuals | Documented Q&A sessions, interview notes, role-based demonstrations |
| Test | Exercising mechanisms and observing outcomes | Live demonstrations, penetration test results, automated scan outputs, script execution logs |
For a CMMC Level 2 assessment, most controls require at least two of these three methods. Remote assessments (the norm post-2020) often require all three because examiners cannot directly observe physical configurations — they need technical evidence to corroborate what personnel say in interviews.
C3PAO Evidence Acceptability Matrix:
| Evidence Type | Examine | Test | Notes |
|---|---|---|---|
| Azure Policy compliance export (JSON/CSV) | ✓ | ✓ | Machine-generated; timestamped; difficult to fabricate |
| Defender for Cloud Regulatory Compliance screenshot | ✓ | | Must show assessment date; annotated to control ID |
| Sentinel workbook export (PDF/PNG) | ✓ | ✓ | Shows aggregated log analysis across time period |
| AWS Config conformance pack report | ✓ | ✓ | Timestamped; maps to specific Config rules |
| Microsoft Graph API output (JSON) | ✓ | ✓ | Programmatic query of live configuration state |
| CloudTrail/Diagnostic Log export | ✓ | ✓ | Demonstrates actual audit logging is operational |
| Conditional Access policy export | ✓ | | Shows policy definition; must pair with sign-in log showing enforcement |
| SPRS score history | ✓ | | Documents self-assessment posture over time |
| Automated scan report (vulnerability scanner) | ✓ | ✓ | Maps to 3.14.1 (identify and correct flaws) |
Machine-generated evidence — API outputs, compliance exports, log aggregations with cryptographic timestamps — is increasingly preferred by experienced C3PAOs because it is harder to fabricate and provides more context than a static screenshot. An Azure Policy compliance export showing every resource's compliance state against every relevant policy definition is more persuasive than a screenshot of a single green checkmark.
3. The Shift to Automation: Core Principles
Automated evidence collection rests on five engineering principles:
1. Evidence as data. Every piece of evidence is a structured data object: a JSON record, a CSV row, a signed log entry. It has a schema, a timestamp, a source system, a control mapping, and a hash. It lives in a versioned evidence vault, not a folder.
2. Continuous collection. Evidence is collected on a schedule — daily for high-priority controls, weekly for lower-frequency requirements. The vault always reflects the current state of the environment.
3. Source authority. Evidence is pulled directly from authoritative sources — Azure Resource Manager API, AWS Config service, Microsoft Graph — not from screenshots of dashboards that someone took by hand.
4. Control mapping embedded in collection. The collector knows which control each data point maps to. When evidence is stored, it is immediately tagged with the NIST 800-171 control ID (e.g., 3.1.1, 3.3.1) and the assessment method it satisfies (examine, test).
5. Human-readable summaries generated from structured data. The evidence vault produces both machine-readable artifacts (for assessors who want raw data) and human-readable summaries (for assessors who want narrative context) from the same underlying data.
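Principle 1 can be made concrete with a small sketch. The dataclass below is illustrative, not a prescribed schema: it shows an evidence record that carries its own control mapping, assessment method, timestamp, and integrity hash from the moment of collection.

```python
from dataclasses import dataclass
import datetime
import hashlib
import json

@dataclass
class EvidenceArtifact:
    """One evidence record in the vault: schema, timestamp, source, mapping, hash."""
    control_id: str          # NIST 800-171 requirement, e.g. "3.3.1"
    assessment_method: str   # "examine" or "test"
    source: str              # authoritative system the data came from
    collected_at: str        # ISO 8601 UTC timestamp
    data: dict               # the raw machine-generated evidence
    sha256: str = ""         # integrity fingerprint, computed from data

    def __post_init__(self):
        # Hash a canonical (sorted-key) serialization so the fingerprint
        # is stable regardless of dict ordering at collection time.
        if not self.sha256:
            canonical = json.dumps(self.data, sort_keys=True).encode()
            self.sha256 = hashlib.sha256(canonical).hexdigest()

artifact = EvidenceArtifact(
    control_id="3.3.1",
    assessment_method="examine",
    source="Azure Policy",
    collected_at=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    data={"complianceState": "Compliant", "resourceId": "/subscriptions/example"},
)
```

Because the hash is computed at creation, any later mutation of the stored artifact is detectable by recomputing it.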
4. Azure Policy Compliance Exports
Azure Policy's compliance results are the single richest source of examine and test evidence for Azure-hosted CUI systems. The Compliance API returns the compliance state of every resource against every assigned policy definition, with timestamps, policy definition IDs, and resource metadata.
Export via Azure CLI:
```bash
az policy state list \
  --policy-assignment "nist-800-171-r2" \
  --query "[].{Resource:resourceId, Policy:policyDefinitionName, State:complianceState, Timestamp:timestamp}" \
  --output json > nist-800-171-compliance-$(date +%Y%m%d).json
```
Export via REST API (for scheduled automation):
```bash
curl -X POST \
  "https://management.azure.com/subscriptions/{subscriptionId}/providers/Microsoft.PolicyInsights/policyStates/latest/summarize?api-version=2019-10-01" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"filter": "policySetDefinitionName eq \"nist-800-171-r2\""}'
```
The output includes:
- policyDefinitionId: Links to the specific rule evaluated
- resourceId: The specific resource assessed
- complianceState: Compliant, NonCompliant, or NotApplicable
- timestamp: ISO 8601 timestamp of the last evaluation
- effectDetails: What effect was triggered (Audit, Deny, etc.)
This data, exported on a weekly schedule and stored in the evidence vault, provides a continuous timeline of compliance state — precisely what an assessor needs to verify that controls have been maintained, not just configured on the day of the assessment.
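For illustration, a single record in such an export might look like the following (resource and policy identifiers are placeholders, not real values):

```json
{
  "policyDefinitionId": "/providers/Microsoft.Authorization/policyDefinitions/<definition-guid>",
  "resourceId": "/subscriptions/<subscriptionId>/resourceGroups/cmmc-rg/providers/Microsoft.Storage/storageAccounts/cuistorage",
  "complianceState": "Compliant",
  "timestamp": "2026-04-01T06:00:00Z",
  "effectDetails": "Audit"
}
```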
Continuous export to Log Analytics:
```bash
az security automations create \
  --name "cmmc-continuous-export" \
  --resource-group "cmmc-rg" \
  --scopes "[{\"resourceGroups\": \"/subscriptions/
  --sources "[{\"eventSource\": \"RegulatoryComplianceAssessment\"}]" \
  --actions "[{\"actionType\": \"Workspace\", \"workspaceResourceId\": \"
```
5. Microsoft Sentinel Workbooks as Evidence
Microsoft Sentinel's CMMC 2.0 solution — available in the Content Hub — includes pre-built workbooks that aggregate compliance data from Azure Policy, Defender for Cloud, and Microsoft 365 into visual dashboards mapped to CMMC control domains.
These workbooks serve a dual purpose: operational monitoring during normal operations, and evidence generation at assessment time. A Sentinel workbook exported as a PDF or high-resolution PNG, with a visible timestamp and control mapping, satisfies the examine assessment method for multiple controls simultaneously.
Key workbook sections for CMMC evidence:
| Workbook Section | Controls Addressed | Evidence Type |
|---|---|---|
| Access Control overview | 3.1.1 – 3.1.22 | Examine |
| Audit log status | 3.3.1 – 3.3.9 | Examine, Test |
| Configuration compliance | 3.4.1 – 3.4.9 | Examine |
| MFA enrollment status | 3.5.3, 3.5.4 | Examine, Test |
| Incident response metrics | 3.6.1 – 3.6.3 | Examine |
| Vulnerability scan results | 3.14.1 | Test |
Automating workbook exports:
```python
import requests
import datetime

def export_sentinel_workbook(workbook_id, subscription_id, token):
    url = f"https://management.azure.com/subscriptions/{subscription_id}/providers/microsoft.insights/workbooks/{workbook_id}?api-version=2022-04-01"
    headers = {"Authorization": f"Bearer {token}"}
    response = requests.get(url, headers=headers)
    # Save serialized workbook + timestamp as evidence artifact
    artifact = {
        "exported_at": datetime.datetime.utcnow().isoformat(),
        "workbook_id": workbook_id,
        "data": response.json(),
        "controls_addressed": ["3.1.1", "3.3.1", "3.5.3"]
    }
    return artifact
```
6. Microsoft Graph API Evidence Pulls
The Microsoft Graph API provides programmatic access to Entra ID (formerly Azure AD) configuration state — the authoritative source for identity, access, and authentication evidence. For CMMC controls in the Access Control (3.1.x) and Identification & Authentication (3.5.x) families, Graph API evidence is often the strongest available artifact.
Key Graph API queries for CMMC evidence:
3.5.3 (Multi-factor authentication):
```http
GET https://graph.microsoft.com/v1.0/reports/credentialUserRegistrationDetails
```
Returns per-user MFA registration status. Exported as JSON, this demonstrates the percentage of users enrolled in MFA — critical evidence for 3.5.3.
3.1.1 (Authorized user access — Conditional Access policies):
```http
GET https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
```
Returns all Conditional Access policies with their conditions, grant controls, and state (enabled, disabled, enabledForReportingButNotEnforced).
3.1.6 (Audit privileged function use):
```http
GET https://graph.microsoft.com/v1.0/auditLogs/directoryAudits?$filter=activityDateTime ge 2026-01-01T00:00:00Z and category eq 'RoleManagement'
```
Returns directory audit logs filtered to privileged role assignment activities.
3.5.4 (Replay-resistant authentication):
```http
GET https://graph.microsoft.com/v1.0/policies/authenticationMethodsPolicy
```
Returns the authentication methods policy — evidence that phishing-resistant authentication methods (FIDO2, Windows Hello for Business) are configured.
Automated Graph API evidence collection script:
```python
import msal
import requests
import json
import datetime
import hashlib

def collect_graph_evidence(tenant_id, client_id, client_secret, controls_map):
    """Collect evidence from Microsoft Graph API for specified controls."""
    # Authenticate
    app = msal.ConfidentialClientApplication(
        client_id,
        authority=f"https://login.microsoftonline.com/{tenant_id}",
        client_credential=client_secret
    )
    token = app.acquire_token_for_client(
        scopes=["https://graph.microsoft.com/.default"]
    )["access_token"]
    headers = {"Authorization": f"Bearer {token}"}
    evidence_bundle = []
    for control_id, endpoint in controls_map.items():
        response = requests.get(
            f"https://graph.microsoft.com/v1.0/{endpoint}",
            headers=headers
        ).json()
        artifact = {
            "control_id": control_id,
            "source": "Microsoft Graph API",
            "endpoint": endpoint,
            "collected_at": datetime.datetime.utcnow().isoformat(),
            "data": response,
            "sha256": hashlib.sha256(
                json.dumps(response, sort_keys=True).encode()
            ).hexdigest()
        }
        evidence_bundle.append(artifact)
    return evidence_bundle

controls = {
    "3.5.3": "reports/credentialUserRegistrationDetails",
    "3.1.1": "identity/conditionalAccess/policies",
    "3.5.4": "policies/authenticationMethodsPolicy"
}
```
The sha256 hash on each artifact is critical: it provides a cryptographic fingerprint that proves the evidence has not been modified since collection, satisfying assessors who question the integrity of automated evidence.
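A vault-side verification step is what makes that fingerprint useful in practice. The sketch below (assuming the artifact dict shape produced by the collector above) recomputes the canonical hash and compares it to the stored value:

```python
import hashlib
import json

def verify_artifact_integrity(artifact):
    """Return True if the artifact's data still matches its stored sha256.

    Assumes the {"data": ..., "sha256": ...} shape used by the collection
    script above; any post-collection modification changes the digest.
    """
    canonical = json.dumps(artifact["data"], sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest() == artifact["sha256"]
```

Running this check at vault-ingest time, and again when assembling the assessment package, gives the C3PAO a verifiable chain from collection to review.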
7. AWS Config Reports and Audit Manager
For organizations with AWS workloads in their CMMC boundary, AWS Config and AWS Audit Manager provide equivalent automated evidence capabilities.
AWS Config Conformance Pack Compliance Report:
```bash
aws configservice describe-conformance-pack-compliance \
  --conformance-pack-name "NIST-800-171-Pack" \
  --query "ConformancePackRuleCompliances[*].{Rule:ConfigRuleName, Status:ComplianceType}" \
  --output json > aws-config-compliance-$(date +%Y%m%d).json
```
AWS Audit Manager automated evidence collection:
AWS Audit Manager's NIST SP 800-171 Rev. 2 prebuilt framework automatically collects evidence from:
- AWS Config (resource configuration compliance)
- AWS Security Hub (security findings)
- AWS CloudTrail (API activity logs)
Evidence is organized by control set, timestamped, and stored in a dedicated S3 bucket. The Audit Manager assessment dashboard shows evidence collection status per control, gaps where evidence has not been collected, and assessment readiness score.
```bash
aws auditmanager create-assessment \
  --name "CMMC-NIST-800-171-Assessment-2026" \
  --assessment-reports-destination '{"destinationType":"S3","destination":"s3://cmmc-evidence-vault"}' \
  --scope '{"awsAccounts":[{"id":"
  --framework-id "
  --roles '[{"roleArn":"
```
AWS Control Tower and Security Hub integration:
For multi-account AWS environments (a common pattern for larger DIB prime contractors), AWS Control Tower enforces preventive guardrails at the organization level that generate Config rules across all member accounts. AWS Security Hub aggregates findings from all accounts into a single pane, with NIST 800-171 standard enabled for continuous assessment. Security Hub findings export to S3 via EventBridge for automatic vault ingestion.
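As a sketch of that export path, the snippet below pulls active failed findings from Security Hub with boto3. The filter follows the AwsSecurityFindingFilters shape; per-standard filtering and the EventBridge-to-S3 wiring are omitted, and the artifact fields mirror the vault schema used elsewhere in this paper rather than any AWS-defined format.

```python
import datetime

def build_nist_findings_filter():
    """Filter for findings that are active and failed a compliance check."""
    return {
        "ComplianceStatus": [{"Value": "FAILED", "Comparison": "EQUALS"}],
        "RecordState": [{"Value": "ACTIVE", "Comparison": "EQUALS"}],
    }

def export_securityhub_findings(region="us-east-1"):
    """Page through Security Hub findings and package them for the vault."""
    import boto3  # deferred so the filter helper stays testable without AWS
    client = boto3.client("securityhub", region_name=region)
    findings = []
    for page in client.get_paginator("get_findings").paginate(
        Filters=build_nist_findings_filter()
    ):
        findings.extend(page["Findings"])
    return {
        "source": "AWS Security Hub",
        "collected_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "finding_count": len(findings),
        "data": findings,
    }
```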
8. Defender for Cloud Log Aggregation
Microsoft Defender for Cloud serves as the central aggregation point for security signals across Azure, AWS (via Defender CSPM multi-cloud connector), and on-premises systems. Its Regulatory Compliance blade maps real-time findings to NIST 800-171 control IDs, providing a continuous compliance score per control family.
Streaming compliance data to Log Analytics:
Once Defender for Cloud's continuous export is configured (see Section 4), all regulatory compliance assessments stream to the Log Analytics workspace, queryable via KQL:
```kql
// Evidence for 3.3.1 (Audit logging) - confirm all resources have diagnostic logging enabled
SecurityRegulatoryCompliance
| where AssessmentName contains "diagnostic"
| where ReportedSeverity == "High"
| where ComplianceState == "FAILED"
| project ResourceId, AssessmentName, ComplianceState, TimeGenerated
| summarize FailingResources = count() by AssessmentName, bin(TimeGenerated, 1d)
| order by TimeGenerated desc
```
```kql
// Evidence for 3.1.1 (Access control) - track policy compliance trend over 30 days
SecurityRegulatoryCompliance
| where InitiativeName contains "NIST 800-171"
| where ControlName contains "3.1"
| summarize
    CompliantCount = countif(ComplianceState == "PASSED"),
    NonCompliantCount = countif(ComplianceState == "FAILED")
    by bin(TimeGenerated, 1d), ControlName
| extend CompliancePercentage = (CompliantCount * 100.0) / (CompliantCount + NonCompliantCount)
| project TimeGenerated, ControlName, CompliancePercentage
```
These KQL queries run on a schedule via Azure Monitor Workbooks or Logic Apps, and their outputs — JSON records with timestamps and compliance percentages per control — are written directly to the evidence vault.
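One way to drive those scheduled queries from the pipeline itself is the azure-monitor-query SDK. The sketch below packages a query result as a vault artifact; the workspace ID, the one-day timespan, and the control mapping are assumptions for illustration.

```python
import datetime

def rows_to_records(columns, rows):
    """Convert a Log Analytics result table into JSON-serializable records."""
    return [dict(zip(columns, row)) for row in rows]

def collect_kql_evidence(workspace_id, query, control_id):
    """Run a KQL query via the azure-monitor-query SDK and package the result.

    Sketch only: uses LogsQueryClient with DefaultAzureCredential and takes
    the first result table; adapt error handling for production use.
    """
    from azure.identity import DefaultAzureCredential  # deferred imports so the
    from azure.monitor.query import LogsQueryClient    # pure helper is testable

    client = LogsQueryClient(DefaultAzureCredential())
    result = client.query_workspace(
        workspace_id, query, timespan=datetime.timedelta(days=1)
    )
    table = result.tables[0]
    return {
        "control_id": control_id,
        "source": "Log Analytics (Defender for Cloud continuous export)",
        "collected_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "data": rows_to_records(list(table.columns), table.rows),
    }
```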
Defender for Endpoint integration:
For 3.14.x (System and Information Integrity) controls, Microsoft Defender for Endpoint's device compliance reports provide evidence that endpoint protection is deployed and active across all CUI-boundary devices:
```kql
DeviceInfo
| where OnboardingStatus == "Onboarded"
| summarize DevicesProtected = count() by OSPlatform
| extend ControlId = "3.14.2"
| extend AssessmentMethod = "test"
```
9. The Automated Evidence Pipeline
The individual collection mechanisms described above become maximally effective when assembled into a coherent pipeline with consistent scheduling, error handling, vault storage, and control mapping.
Pipeline Architecture:
```
Scheduled Triggers (Azure Logic Apps / GitHub Actions cron)
        ↓
Collection Workers
├── Azure Policy Export Worker  → policy-compliance-{date}.json
├── Graph API Worker            → identity-evidence-{date}.json
├── Sentinel Workbook Worker    → dashboard-exports-{date}.pdf
├── AWS Config Worker           → aws-config-{date}.json
└── Defender Log Worker         → defender-findings-{date}.json
        ↓
Evidence Normalizer
└── Applies: control_id, assessment_method, source, timestamp, sha256_hash
        ↓
Evidence Vault (Azure Blob Storage / SharePoint)
└── Folder structure: /evidence/{control_id}/{year}/{month}/{artifact}
        ↓
Compliance Dashboard (Power BI / Sentinel Workbook)
└── Shows: collection status, control coverage, last collection date, gaps
```
Scheduling Framework:
| Collection Type | Frequency | Rationale |
|---|---|---|
| Azure Policy compliance state | Daily | High change rate; daily drift detection |
| Graph API identity configuration | Weekly | Lower change rate; weekly is sufficient |
| Sentinel workbook exports | Weekly | Operational dashboard snapshots |
| AWS Config conformance pack | Daily | Matches Azure cadence for multi-cloud parity |
| Defender for Cloud regulatory compliance | Daily | Core continuous monitoring signal |
| Full evidence package generation | Monthly | Audit-ready bundle for stakeholder review |
Error handling and evidence quality gates:
```python
from datetime import datetime

def validate_evidence_artifact(artifact):
    """Validate evidence quality before vault storage."""
    required_fields = ["control_id", "source", "collected_at", "sha256", "data"]
    for field in required_fields:
        if field not in artifact:
            raise ValueError(f"Missing required field: {field}")
    # Verify data is not empty
    if not artifact["data"]:
        raise ValueError("Evidence artifact contains no data")
    # Verify timestamp is within collection window
    collected = datetime.fromisoformat(artifact["collected_at"])
    age = (datetime.utcnow() - collected).total_seconds() / 3600
    if age > 25:  # More than 25 hours old
        raise ValueError(f"Evidence artifact is stale: {age:.1f} hours old")
    return True
```
10. Evidence Vault Architecture: The LANL Model
The Los Alamos National Laboratory (LANL) ATO engagement provided an instructive model for evidence vault architecture at scale. LANL's environment spans multiple classification domains, legacy on-premises systems, and modern cloud infrastructure — a complexity representative of many large DIB prime contractors.
The key insight from this engagement was treating the evidence vault as an information product, not a file archive:
Structural design principles applied:
1. Control-indexed hierarchy: Evidence organized by NIST 800-171 control ID (/3.1.1/, /3.3.1/, etc.) rather than by system or date. Assessors navigate by control, not by collection date.
2. Evidence manifest per control: Each control folder contains a manifest.json that lists all evidence artifacts, their collection dates, their hashes, and their assessment method classification (examine/test).
3. Automated staleness detection: A daily job checks each control folder's manifest; if the most recent evidence is more than 30 days old, an alert fires and the gap is logged to the POA&M.
4. Human narrative overlay: Alongside machine-generated artifacts, each control folder contains a narrative.md authored by the compliance team that explains what the evidence demonstrates and how it maps to the specific control objective language in NIST SP 800-171A.
5. Immutable audit trail: Evidence artifacts are stored in Azure Blob Storage with immutability policies (WORM — Write Once Read Many) and Azure AD-based access controls that prevent deletion by anyone except a designated vault administrator with approval from the CISO.
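Under this model, a per-control manifest.json might look like the following (file names, dates, and digests are placeholders):

```json
{
  "control_id": "3.1.1",
  "last_updated": "2026-04-01T06:00:00Z",
  "artifacts": [
    {
      "file": "2026-04/azure-policy-compliance-20260401.json",
      "source": "Azure Policy",
      "assessment_method": "examine",
      "collected_at": "2026-04-01T06:00:00Z",
      "sha256": "<hex digest>"
    },
    {
      "file": "2026-04/conditional-access-policies-20260401.json",
      "source": "Microsoft Graph API",
      "assessment_method": "test",
      "collected_at": "2026-04-01T06:00:00Z",
      "sha256": "<hex digest>"
    }
  ]
}
```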
Evidence vault structure example:
```
evidence-vault/
├── 3.1.1/
│   ├── manifest.json
│   ├── narrative.md
│   ├── 2026-04/
│   │   ├── azure-policy-compliance-20260401.json
│   │   ├── conditional-access-policies-20260401.json
│   │   └── mfa-enrollment-report-20260401.json
│   └── 2026-03/
│       └── ...
├── 3.3.1/
│   ├── manifest.json
│   ├── narrative.md
│   └── 2026-04/
│       ├── audit-log-config-20260401.json
│       └── cloudtrail-enabled-20260401.json
└── ...
```
The LANL engagement demonstrated that organizations with this architecture reduce C3PAO evidence review time by 40–60% — assessors spend less time searching for evidence and more time evaluating it.
11. Mapping to 110 Control Objectives
NIST SP 800-171A's assessment procedures define specific objectives within each of the 110 requirements. A complete automated evidence collection strategy must map to objectives, not just requirements.
Sample control objective mapping:
| Control | Objective | Automated Evidence Source | Assessment Method |
|---|---|---|---|
| 3.1.1 | All users have accounts tied to real identities | Graph API: users list with UPNs | Examine |
| 3.1.1 | Accounts are authorized before access is granted | Entra ID access reviews export | Examine, Test |
| 3.1.1 | Shared/generic accounts are prohibited | Graph API: users with no manager attribute | Test |
| 3.3.1 | Audit logs are created for all systems | Defender for Cloud: diagnostic settings compliance | Examine |
| 3.3.1 | Audit logs are retained per policy | Log Analytics workspace retention settings export | Examine |
| 3.3.2 | Audit logs are reviewed | Sentinel alert rule definitions + incident history | Examine, Test |
| 3.5.3 | MFA is required for all privileged users | Graph API: MFA registration report + CA policy | Examine, Test |
| 3.13.8 | FIPS-validated cryptography for CUI at rest | Azure Policy: encryption compliance export | Examine, Test |
Coverage dashboard:
The evidence pipeline generates a coverage matrix at each run showing which objectives have automated evidence, which are partially covered, and which require manual collection:
| Control Family | Auto-Collected | Manual Required | Gap Count |
|---|---|---|---|
| Access Control (3.1.x) | 18/22 | 4/22 | 0 |
| Audit & Accountability (3.3.x) | 8/9 | 1/9 | 0 |
| Config Management (3.4.x) | 6/9 | 3/9 | 0 |
| ID & Authentication (3.5.x) | 9/11 | 2/11 | 0 |
| Physical Protection (3.10.x) | 0/6 | 6/6 | 6 |
Physical protection controls (facility access logs, camera footage) cannot be automated via cloud APIs and require manual collection. The gap column flags controls where neither automated nor manual evidence has been collected — these become immediate remediation priorities.
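Generating that matrix is straightforward once each control folder carries a manifest. The sketch below assumes the per-control manifest.json layout described in Section 10, with family groupings supplied by the caller; it counts controls that have at least one collected artifact and flags the rest as gaps.

```python
import json
from pathlib import Path

def build_coverage_matrix(vault_root, objectives_by_family):
    """Summarize per-family evidence coverage from vault manifests.

    objectives_by_family maps a family prefix (e.g. "3.1") to its control IDs.
    A control counts as covered when its manifest lists at least one artifact;
    gap_count here means "no evidence collected yet", automated or manual.
    """
    matrix = {}
    for family, controls in objectives_by_family.items():
        covered = 0
        for control_id in controls:
            manifest = Path(vault_root) / control_id / "manifest.json"
            if manifest.exists() and json.loads(manifest.read_text())["artifacts"]:
                covered += 1
        matrix[family] = {
            "auto_collected": covered,
            "total": len(controls),
            "gap_count": len(controls) - covered,
        }
    return matrix
```

Run after every pipeline cycle, this turns the coverage table above into a live report rather than a hand-maintained spreadsheet.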
12. Conclusion
Evidence collection is not a pre-assessment project — it is an ongoing operational discipline. Organizations that treat it as such arrive at CMMC assessments with structured, timestamped, machine-generated evidence packages that take hours to review, not weeks. Those that treat it as a manual sprint arrive with gaps, stale screenshots, and evidence that raises more questions than it answers.
The automated evidence pipeline described in this whitepaper — Azure Policy exports, Sentinel workbooks, Graph API pulls, AWS Config reports, and Defender log aggregation — provides continuous coverage of the machine-evaluable portion of NIST 800-171. Paired with the evidence vault architecture pioneered in the LANL ATO engagement, it transforms assessment preparation from a stressful annual crisis into a routine operational process.
AeoliTech's PolicyCortex platform implements this pipeline out of the box, including pre-built control-to-objective mappings for all 110 requirements, automated vault population, and C3PAO-ready evidence package generation. For organizations preparing for CMMC Level 2 assessment, it is the difference between a 16-week scramble and a 4-week structured review.
About the Author
Leonard Esere is a Senior Cybersecurity Engineer at AeoliTech with extensive experience designing and implementing security architectures for federal contractors and national laboratory environments. He holds a DoD Secret clearance and a DoE Q clearance, has contributed to security assessments at the MITRE Corporation, and led the Authorization to Operate (ATO) evidence vault architecture for a major LANL engagement. He also served as the security engineering lead for Frontier supercomputer PCI DSS compliance. Leonard specializes in translating complex regulatory requirements — NIST 800-171, CMMC, FedRAMP — into automated, scalable technical implementations.
References
1. NIST. Special Publication 800-171A: Assessing Security Requirements for Controlled Unclassified Information. June 2018. https://csrc.nist.gov/publications/detail/sp/800-171a/final
2. DoD CIO. About CMMC — Assessment Requirements by Level. https://dodcio.defense.gov/CMMC/about/
3. Microsoft. CMMC – Azure Compliance Offerings (workbook and analytics rules). https://learn.microsoft.com/en-us/azure/compliance/offerings/offering-cmmc
4. Microsoft. Leveraging Microsoft Graph to Automate Compliance Workflows. April 2026. https://techcommunity.microsoft.com/discussions/azurepurview/leveraging-microsoft-graph-to-automate-compliance-workflows-ms-purview/4509628
5. AWS. NIST SP 800-171 Rev 2 – AWS Audit Manager Framework. https://docs.aws.amazon.com/audit-manager/latest/userguide/NIST-800-171-r2-1.1.html
6. MAD Security. Preparing for CMMC's Evidence Triad: Interview, Examine, and Test. December 2025. https://madsecurity.com/madsecurity-blog/preparing-for-cmmc-evidence-triad-interview-examine-test
7. Microsoft. Cloud Secure Score in Microsoft Defender for Cloud. November 2025. https://learn.microsoft.com/en-us/azure/defender-for-cloud/secure-score-security-controls
8. Pivot Point Security. What Objective Evidence Will You Need for Your CMMC Assessment? https://www.pivotpointsecurity.com/what-objective-evidence-will-you-need-for-your-cmmc-assessment/
9. Microsoft. Regulatory Compliance details for NIST SP 800-171 R2 – Azure Policy. February 2026. https://learn.microsoft.com/en-us/azure/governance/policy/samples/nist-sp-800-171-r2
10. Core Business Solutions. CMMC Assessments: What to Expect. August 2025. https://www.thecoresolution.com/cmmc-assessments-what-to-expect
© 2026 AeoliTech. All rights reserved. Contact AeoliTech to schedule a CMMC readiness assessment and evidence vault setup engagement.