FedRAMP-authorized AI vendors
AI vendors with an active FedRAMP authorization, enabling use by U.S. federal government agencies.
Federal agencies cannot use cloud services that are not FedRAMP authorized at or above the impact level required for their data. The vendors below hold an authorization listed on the FedRAMP Marketplace. Note that the authorization level matters: FedRAMP Low, Moderate, and High map to different data sensitivities, and a vendor's authorization may not cover all of its products. Verify on the FedRAMP Marketplace that the specific service you intend to use is in scope of the authorization, and that the authorization is currently active (not expired or in remediation).
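The "at or above" rule is a simple ordering over the three impact levels. A minimal sketch (not an official FedRAMP tool; the function name and ranking dict are illustrative):

```python
# Impact levels form a strict ordering: Low < Moderate < High.
IMPACT_RANK = {"Low": 1, "Moderate": 2, "High": 3}

def authorization_covers(vendor_level: str, required_level: str) -> bool:
    """True if the vendor's FedRAMP level is at or above the level
    required for the data being handled."""
    return IMPACT_RANK[vendor_level] >= IMPACT_RANK[required_level]

print(authorization_covers("High", "Moderate"))  # True: High covers Moderate data
print(authorization_covers("Low", "Moderate"))   # False: Low does not cover Moderate data
```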
Vendors with FedRAMP authorization
- Amazon (AWS) · score 12.34 · low
- Salesforce · score 12.74 · low
- Adobe · score 13.74 · low
- IBM · score 14.11 · low
- Microsoft · score 14.68 · low
- SAP · score 16.63 · low
- Google DeepMind · score 18.85 · low
- Oracle · score 19.89 · low
- Palo Alto Networks · score 19.89 · low
- Nuance (Microsoft) · score 20.86 · moderate
- Workday · score 22.45 · moderate
- Mosaic (Databricks) · score 22.60 · moderate
- SentinelOne · score 22.96 · moderate
- Scale AI · score 23.30 · moderate
- Snowflake · score 24.36 · moderate
- ServiceNow · score 24.40 · moderate
- Datadog · score 24.41 · moderate
- Palantir · score 25.09 · moderate
- Databricks · score 25.40 · moderate
- Cloudflare · score 25.89 · moderate
- GitHub Copilot · score 27.12 · moderate
- Zendesk · score 30.94 · moderate
Buyer checklist
- Verify the FedRAMP Marketplace listing matches the product you intend to procure.
- Confirm the authorization level (Low / Moderate / High) meets your data sensitivity.
- Check whether the AI features you plan to use are in the authorization boundary.
- For sub-processors (upstream model APIs), confirm they are also authorized at the required level.
- Confirm StateRAMP equivalence if your buyer is a state or local government.
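The first three checklist items can be screened programmatically against a locally saved export of Marketplace listings. A hypothetical sketch: the field names (`provider`, `service`, `impact_level`, `status`) are illustrative assumptions, not the real Marketplace schema, so map them to whatever your export actually contains.

```python
# Hypothetical screening of a local list of Marketplace listings.
# Field names below are assumptions for illustration, not the real schema.
LEVEL_RANK = {"Low": 1, "Moderate": 2, "High": 3}

def eligible(listing: dict, required_level: str) -> bool:
    """A listing passes if its authorization is active ('Authorized')
    and its impact level is at or above the required level."""
    return (listing.get("status") == "Authorized"
            and LEVEL_RANK.get(listing.get("impact_level"), 0) >= LEVEL_RANK[required_level])

listings = [
    {"provider": "ExampleCloud", "service": "ExampleAI",
     "impact_level": "High", "status": "Authorized"},
    {"provider": "OtherVendor", "service": "OtherAI",
     "impact_level": "Low", "status": "Authorized"},
    {"provider": "ThirdVendor", "service": "ThirdAI",
     "impact_level": "Moderate", "status": "In Process"},
]

passing = [l["provider"] for l in listings if eligible(l, "Moderate")]
print(passing)  # ['ExampleCloud']
```

This screens only level and status; scope of the authorization boundary and sub-processor coverage still require manual review of the listing.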
Compliance is necessary but not sufficient. Holding a FedRAMP authorization is a meaningful baseline, but no certification covers AI-specific risk end to end. Layer it on top of vendor-specific diligence: sub-processor disclosure, training-data policy, model-card transparency, and dependency-chain mapping.