Learn how AI Companion protects your data, encrypts API keys, and ensures privacy when generating reply suggestions for your secure internal knowledge base assistant.
Why Security Matters When Choosing AI Vendors
When selecting an AI vendor for customer support, security is paramount. You're trusting them with:
- Customer conversations - Potentially sensitive support queries
- Company information - Your brand voice and internal knowledge
- API credentials - Keys that access AI services
- Knowledge base content - Your documentation and FAQs
Different AI vendors have different security approaches, data retention policies, and compliance certifications. SupportRetriever adds multiple layers of security on top of vendor protections to ensure your secure internal knowledge base assistant operates safely.
API Key Encryption
At-Rest Encryption
Your AI provider API keys are encrypted before storage:
- Encryption at rest - Keys encrypted using industry-standard encryption
- Secure storage - Encrypted keys stored in database
- No plaintext storage - Keys never stored in readable format
- Preview only - Settings show a masked preview (e.g., sk-proj-...abc123)
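A masked preview like the one above can be generated without ever decrypting the stored key. A minimal sketch in Python — the function name and the prefix/suffix lengths are illustrative assumptions, not SupportRetriever's actual implementation:

```python
def mask_api_key(key: str, prefix_len: int = 8, suffix_len: int = 6) -> str:
    """Return a masked preview of an API key, e.g. 'sk-proj-...abc123'.

    Only this preview is shown in the settings UI; the full key stays
    encrypted at rest. Lengths here are illustrative assumptions.
    """
    if len(key) <= prefix_len + suffix_len:
        return "..."  # too short to reveal any part safely
    return f"{key[:prefix_len]}...{key[-suffix_len:]}"
```

Keys shorter than the combined prefix and suffix lengths are fully masked, so the preview never leaks a majority of the secret.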
Key Management
- Encryption on save - Keys encrypted immediately when entered
- Decryption on use - Keys decrypted only when generating suggestions
- No key sharing - Keys never shared with third parties
- Secure transmission - Keys transmitted securely to AI providers
Sensitive Data Redaction
AI Companion automatically redacts sensitive information before sending data to AI providers:
Credit Card Numbers
Patterns detected:
1234-5678-9012-3456
1234 5678 9012 3456
1234567890123456
Action: Replaced with [REDACTED CARD]
Social Security Numbers
Patterns detected:
123-45-6789
Action: Replaced with [REDACTED SSN]
Password Information
Patterns detected:
- Lines containing password:, pwd:, or pass:
Action: Entire line replaced with [REDACTED PASSWORD LINE]
How Redaction Works
1. Before AI processing - Conversation history is scanned
2. Pattern detection - Sensitive patterns are identified
3. Redaction - Sensitive data is replaced with placeholders
4. AI processing - Only redacted content is sent to the provider
5. Response generation - The AI generates a suggestion without seeing sensitive data
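The redaction steps above can be sketched with a few regular expressions. SupportRetriever's exact patterns are not published, so the regexes below are assumptions based on the formats listed in this article:

```python
import re

# Assumed patterns, modeled on the formats documented above.
CARD_RE = re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b")   # 16-digit cards, with or without separators
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")          # 123-45-6789 format
PASSWORD_LINE_RE = re.compile(
    r"^.*\b(?:password|pwd|pass):.*$",                  # any line containing a password label
    re.IGNORECASE | re.MULTILINE,
)

def redact(text: str) -> str:
    """Replace sensitive patterns with placeholders before AI processing."""
    # Password lines go first: the entire line is removed, per the rule above.
    text = PASSWORD_LINE_RE.sub("[REDACTED PASSWORD LINE]", text)
    text = CARD_RE.sub("[REDACTED CARD]", text)
    text = SSN_RE.sub("[REDACTED SSN]", text)
    return text
```

Only the output of redact() would be forwarded to the AI provider; the original conversation text never leaves the application unfiltered.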
Data Sharing
What Data is Shared
With AI Providers:
- Conversation history (redacted)
- Customer messages (redacted)
- System prompt (includes company name)
- Knowledge base URL (if configured)
Not Shared:
- Your API keys (encrypted, never shared)
- Customer email addresses (not sent to AI)
- Account information (not included)
- Billing details (not shared)
Third-Party Services
SupportRetriever only shares data with:
- Your chosen AI provider - For generating suggestions
- No other third parties - Data not sold or shared
Data Retention
- Suggestions - Stored in database until dismissed or sent
- Configuration - Stored until you delete it
- API keys - Encrypted and stored until you update or delete
Billing Relationship
Direct Billing
You pay your AI provider directly:
- No SupportRetriever charges - We don't charge for AI usage
- Provider billing - You're billed by OpenAI, Anthropic, Grok, or Gemini
- Usage monitoring - Monitor costs in your provider account
- Separate accounts - Your provider account is separate from SupportRetriever
Cost Control
To manage costs:
- Set usage limits - In your provider account
- Monitor spending - Check provider dashboard regularly
- Set up alerts - Configure billing alerts with provider
- Review usage - Track API calls and costs
GDPR and Privacy Considerations
Data Processing
AI Companion processes:
- Conversation content - To generate suggestions
- Company information - From form title (for system prompt)
- Knowledge base content - When URL is configured
User Rights
You have the right to:
- Access your data - View configuration and suggestions
- Delete your data - Remove configuration anytime
- Export data - Suggestions can be exported
- Control processing - Enable/disable anytime
Customer Data
- Customer messages - Processed to generate suggestions
- Redaction applied - Sensitive data removed before processing
- No storage by AI providers - Check provider privacy policies
- Your responsibility - Ensure compliance with data protection laws
Security Best Practices
API Key Management
- Rotate regularly - Generate new keys at least once a year
- Use separate keys - Different keys for different services
- Set usage limits - In provider account
- Monitor usage - Check for unauthorized access
- Revoke compromised keys - Immediately if suspected breach
Account Security
- Strong passwords - Use unique, strong passwords
- Two-factor authentication - Enable on provider accounts
- Regular audits - Review API usage regularly
- Access control - Limit who can access AI Companion settings
Knowledge Base Security
- Public content only - Don't include private information
- Regular updates - Keep content current and accurate
- Access control - Ensure proper public access settings
- Content review - Review what's publicly accessible
Provider Privacy Policies
Each AI provider has its own privacy policy:
- OpenAI: openai.com/policies/privacy-policy
- Anthropic: anthropic.com/privacy
- Grok: x.ai/privacy
- Gemini: policies.google.com/privacy
Review provider policies to understand how they handle data.
Security Across AI Providers
When choosing an AI vendor for your secure knowledge base assistant, compare their security offerings:
Anthropic (Claude)
Security Strengths:
- No training on your data - API data explicitly not used for model training
- Enterprise privacy commitments - Contractual guarantees for data handling
- Compliance certifications - GDPR compliant, SOC 2 Type II certified
- AI safety focus - Strong emphasis on responsible AI and safety research
- Data retention - Short retention periods, clear policies
Best for: Teams prioritizing data privacy and regulatory compliance.
OpenAI (GPT-4o)
Security Strengths:
- API data not used for training - By default, API data excluded from training
- Enterprise options - Business agreements with additional protections
- Compliance certifications - GDPR compliant, SOC 2 certified
- API key management - Robust key rotation and access controls
- Audit logging - Comprehensive usage tracking
Best for: Teams needing enterprise agreements and established security track record.
Gemini (Google)
Security Strengths:
- Google Cloud security - Benefits from Google's infrastructure security
- Compliance - GDPR compliant, multiple certifications
- Data handling options - Varies by plan and configuration
- Integration security - Works with Google Workspace security features
Best for: Teams already using Google Cloud or Google Workspace.
Grok (xAI)
Security Strengths:
- Modern architecture - Built with recent security best practices
- Privacy policies - Review xAI documentation for current policies
- Enterprise options - Business agreements available
- Compliance progress - Certifications in development
Best for: Teams looking for newer alternatives with competitive features.
Comparing SupportRetriever Security vs. Building In-House
Building a Secure Knowledge Base Assistant In-House
Challenges:
- Encryption implementation - Building secure API key storage
- Data redaction - Implementing sensitive data filters
- Provider integration - Supporting multiple AI vendors
- Compliance - Ensuring GDPR, SOC 2 requirements
- Maintenance - Keeping security measures up-to-date
- Cost - Development time and ongoing maintenance
Using SupportRetriever
Built-in security features:
- Encrypted API keys - Industry-standard encryption at rest
- Automatic redaction - Credit cards, SSNs, passwords filtered
- Multi-vendor support - Switch providers without rebuilding
- Human-in-the-loop - No auto-sending, always human approval
- Direct billing - You pay providers directly, no data middleman
- Regular updates - Security patches and improvements
- Compliance ready - GDPR-friendly architecture
Time to secure deployment:
- In-house: Weeks to months of development
- SupportRetriever: Minutes to set up, security included
Compliance
Data Protection
AI Companion is designed with data protection in mind:
- Encryption - API keys encrypted at rest
- Redaction - Sensitive data removed before processing
- Minimal data - Only necessary data shared
- User control - You control what's processed
Your Responsibilities
As the data controller, you're responsible for:
- Legal compliance - Ensuring GDPR and local law compliance
- Customer consent - If required by law
- Data handling - Proper handling of customer data
- Privacy policies - Informing customers about AI usage (if required)
Security Features
Encryption
- API keys - Encrypted at rest
- Transmission - HTTPS for all communications
- Database - Encrypted storage
Access Control
- Authentication required - Must be logged in
- Permission checks - Only authorized users
- Audit logging - Actions logged for security
Data Protection
- Redaction - Automatic sensitive data removal
- Minimal sharing - Only necessary data sent
- Secure storage - Encrypted database storage
Reporting Security Issues
If you discover a security issue:
- Don't disclose publicly - Keep issue private
- Contact support - Report through SupportRetriever support
- Provide details - Include steps to reproduce
- Wait for response - Allow time for investigation
Related Topics
- How to Choose an AI Vendor for Secure Customer Support Assistants - Comprehensive security and vendor comparison
- Managing AI Companion - Update security settings
- Choosing an AI provider - Compare provider security features
- Setting up AI Companion - Secure setup process
- Troubleshooting AI Companion - Security-related issues
- Privacy and data handling - General privacy information
