AI Security Checklist
Track your compliance with AI security frameworks
Document organizational policies for AI development and use
Establish acceptable risk levels for AI systems
Designate responsible parties for AI security
Create oversight body for AI ethical concerns
Document all data sources used for AI training
Validate training data for quality and bias
Apply encryption to all sensitive training data
Restrict access to training data based on role
Evaluate PII exposure in training data
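The PII-evaluation item above can be sketched as a simple pattern scan over training records. The regex set here is an illustrative assumption; a production pipeline would use a dedicated detector (e.g. locale-aware rules or a trained classifier) rather than three regexes.

```python
import re

# Hypothetical patterns for common PII types; extend per data domain.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def find_pii(text: str) -> dict[str, list[str]]:
    """Return all PII-like matches found in one training record."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        matches = pattern.findall(text)
        if matches:
            hits[label] = matches
    return hits
```

Records with non-empty results can then be quarantined, redacted, or excluded before training.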
Enforce a maximum input size for AI systems
Filter and sanitize all inputs before processing
Detect and block prompt injection attempts
Separate system and user content with clear boundaries
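The four input-handling items above can be combined into one gate: size cap, a heuristic injection screen, and explicit boundaries between system and user content. The marker phrases, size cap, and delimiter format are illustrative assumptions; real deployments layer keyword heuristics with model-based classifiers.

```python
# Assumed phrase list; keyword matching alone is easy to evade.
INJECTION_MARKERS = [
    "ignore previous instructions",
    "ignore all prior instructions",
    "disregard the system prompt",
]

MAX_INPUT_CHARS = 4000  # assumed cap; tune to the model's context budget

def screen_input(user_text: str) -> str:
    """Reject oversized or injection-flagged input."""
    if len(user_text) > MAX_INPUT_CHARS:
        raise ValueError("input exceeds maximum allowed size")
    lowered = user_text.lower()
    for marker in INJECTION_MARKERS:
        if marker in lowered:
            raise ValueError("possible prompt injection detected")
    return user_text

def build_prompt(system: str, user_text: str) -> str:
    """Keep system and user content in clearly delimited sections."""
    screened = screen_input(user_text)
    return f"<system>\n{system}\n</system>\n<user>\n{screened}\n</user>"
```

Whatever delimiter scheme is used, the system prompt should also instruct the model to treat everything inside the user section as data, not instructions.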
Filter harmful or inappropriate content from outputs
Validate AI outputs match expected schema
Detect and redact PII from AI responses
Include sources for AI-generated claims
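The output-side items above pair naturally: validate the model's structured response against the expected schema, then redact PII before it leaves the system. The field names (`answer`, `sources`) and the single email regex are illustrative assumptions.

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_pii(text: str) -> str:
    """Mask email addresses in model output; extend for other PII types."""
    return EMAIL_RE.sub("[REDACTED]", text)

def validate_answer(payload: dict) -> dict:
    """Check a model's JSON output against an assumed schema, then redact."""
    if not isinstance(payload.get("answer"), str):
        raise ValueError("missing or non-string 'answer' field")
    if not isinstance(payload.get("sources", []), list):
        raise ValueError("'sources' must be a list")
    payload["answer"] = redact_pii(payload["answer"])
    return payload
```

Schema validation doubles as a safety net: malformed output is rejected rather than passed downstream.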
Require authentication for AI system access
Restrict AI capabilities based on user role
Grant only the minimum permissions required (least privilege)
Prevent abuse through request rate limits
Track all model versions with metadata
Cryptographically sign model files
Test models against adversarial attacks
Scan models for vulnerabilities
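The model-integrity items above rest on verifiable digests. A minimal sketch: stream a SHA-256 over the model file, then sign the digest. HMAC is used here only for brevity; a real pipeline would use asymmetric signing (e.g. Sigstore) so verifiers never hold the signing secret.

```python
import hashlib
import hmac

def file_digest(path: str) -> str:
    """SHA-256 digest of a model file, streamed in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def sign_digest(digest: str, key: bytes) -> str:
    """HMAC over the digest; stands in for asymmetric signing."""
    return hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()

def verify(digest: str, signature: str, key: bytes) -> bool:
    """Constant-time comparison against the recomputed signature."""
    return hmac.compare_digest(sign_digest(digest, key), signature)
```

Digests recorded per model version also serve the version-tracking item: a checked-out artifact can be verified against its registered hash before loading.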
Specify allowed tools for each agent
Require approval for sensitive actions
Isolate agent tool execution environments
Enable immediate agent termination capability
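The agent-control items above can be enforced at a single choke point through which every tool call passes: an allowlist, an approval callback for sensitive actions, and a kill switch. The tool names and the `approve` callback are illustrative assumptions, not a specific framework's API.

```python
# Assumed tool sets for illustration.
ALLOWED_TOOLS = {"search", "calculator"}
SENSITIVE_TOOLS = {"send_email", "delete_file"}

class AgentHalted(Exception):
    pass

class ToolGate:
    def __init__(self, approve):
        self.approve = approve  # human-approval callback for sensitive tools
        self.halted = False     # kill switch

    def halt(self) -> None:
        """Immediately terminate the agent: all further calls raise."""
        self.halted = True

    def call(self, tool: str, run):
        """Run a tool only if allowed, approved, and not halted."""
        if self.halted:
            raise AgentHalted("agent terminated by operator")
        if tool not in ALLOWED_TOOLS | SENSITIVE_TOOLS:
            raise PermissionError(f"tool {tool!r} not on allowlist")
        if tool in SENSITIVE_TOOLS and not self.approve(tool):
            raise PermissionError(f"approval denied for {tool!r}")
        return run()
```

Isolation of the execution environment itself (containers, seccomp, network policy) sits below this layer and is not shown.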
Log all AI system interactions and decisions
Maintain tamper-evident audit logs
Set up alerts for anomalous AI behavior
Create response procedures for AI incidents
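A common way to make audit logs tamper-evident, sketched below, is hash chaining: each entry commits to the previous entry's hash, so editing or removing any record breaks verification from that point on. The record layout is an illustrative assumption.

```python
import hashlib
import json
import time

class AuditLog:
    """Append-only log where each entry hashes its predecessor."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def append(self, event: dict) -> None:
        record = {"ts": time.time(), "event": event, "prev": self._last_hash}
        digest = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        record["hash"] = digest
        self.entries.append(record)
        self._last_hash = digest

    def verify(self) -> bool:
        """Recompute the chain; any edited or dropped entry fails."""
        prev = "0" * 64
        for record in self.entries:
            if record["prev"] != prev:
                return False
            body = {k: v for k, v in record.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if expected != record["hash"]:
                return False
            prev = record["hash"]
        return True
```

In practice the chain head would also be anchored externally (e.g. periodically written to write-once storage) so truncation of the whole tail is detectable.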
Evaluate security of AI service providers
Verify origin of third-party models
Check for vulnerabilities in ML libraries
Maintain a software bill of materials (SBOM) for AI components
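For the SBOM item, the Python side of an ML environment can be inventoried directly from installed distribution metadata. This is a minimal sketch; real SBOMs use a standard format such as CycloneDX or SPDX and also cover models, datasets, and non-Python dependencies.

```python
from importlib import metadata

def python_sbom() -> list[dict]:
    """List installed Python distributions with name and version."""
    components = []
    for dist in metadata.distributions():
        components.append({
            "name": dist.metadata["Name"],
            "version": dist.version,
        })
    return sorted(components, key=lambda c: (c["name"] or "").lower())
```

The resulting list can be diffed between releases and fed to a vulnerability scanner, covering the adjacent items on ML-library vulnerabilities and third-party provenance.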