AI Transparency & Ethics
Last Updated: October 9, 2025
Introduction
At backend.chat, we believe in transparent and responsible AI. This page explains how our AI Agent works, its capabilities and limitations, and our commitment to ethical AI practices.
Our AI is designed to assist, not replace, human judgment.
Questions? Email us at [email protected]
1. How Our AI Agent Works
1.1 Overview
backend.chat uses AI-powered automated responses to help businesses respond to customer questions faster and more accurately.
Key technology: Retrieval Augmented Generation (RAG)
- Retrieval: Search your knowledge base for relevant information
- Augmented: Provide that context to an AI model
- Generation: AI generates a response based on the context
Result: Accurate, grounded responses that reduce AI "hallucinations" (made-up information).
1.2 How RAG Works (Step-by-Step)
When a customer sends a message:
1. Message Received
   - Customer asks: "What are your business hours?"
   - Message arrives via chat widget
2. Intent Detection
   - AI analyzes the question to understand intent
   - Determines if this is a question that can be answered from the knowledge base
3. Knowledge Base Search (Retrieval)
   - Convert the question into a vector embedding (mathematical representation)
   - Search your knowledge base for similar embeddings (semantic search)
   - Retrieve the top 3-5 most relevant chunks of information
4. Context Assembly
   - Combine retrieved knowledge base chunks
   - Add conversation history (last 5 messages for context)
   - Add customer information (if available: name, previous conversations)
5. AI Response Generation
   - Send question + context to an LLM (Large Language Model: GPT or Claude)
   - LLM generates a response based ONLY on the provided context
   - AI also provides a confidence score (0.0 to 1.0)
6. Quality Checks
   - Check the confidence score (is the AI confident in its answer?)
   - Check for inappropriate content (profanity filter, harmful content)
   - Check for "I don't know" responses (escalate to a human if the AI doesn't know)
7. Response Delivery
   - If confidence ≥ threshold: Send AI response automatically (if auto-reply enabled)
   - If confidence < threshold: Escalate to a human agent for review
   - Log the interaction for analytics and improvement
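Below is a simplified, illustrative Python sketch of this pipeline. The helper functions, stubbed return values, and the 0.7 threshold are assumptions made for the example; they are not backend.chat's actual code.

```python
# Illustrative sketch of the RAG pipeline above; helpers are stubbed, not real backend.chat code.
from dataclasses import dataclass

@dataclass
class AIReply:
    text: str
    confidence: float  # 0.0 to 1.0, reported alongside the generated answer

def embed(text: str) -> list[float]:
    """Convert text into a vector embedding (stubbed for the example)."""
    return [0.0] * 1536

def search_knowledge_base(query_vector: list[float], top_k: int = 5) -> list[str]:
    """Semantic search: return the most relevant knowledge base chunks (stubbed)."""
    return ["Our business hours are 9am-5pm, Monday to Friday."]

def generate_reply(question: str, context: str) -> AIReply:
    """Ask the LLM to answer using ONLY the provided context (stubbed)."""
    return AIReply(text="We're open 9am-5pm, Monday to Friday.", confidence=0.92)

def handle_message(question: str, history: list[str], threshold: float = 0.7) -> str:
    # Steps 1-3: embed the question and retrieve the top matching chunks
    chunks = search_knowledge_base(embed(question), top_k=5)
    # Step 4: assemble context from retrieved chunks plus the last 5 messages
    context = "\n".join(chunks + history[-5:])
    # Step 5: generate a grounded response with a confidence score
    reply = generate_reply(question, context)
    # Steps 6-7: quality check, then auto-reply or escalate to a human agent
    if reply.confidence >= threshold and "i don't know" not in reply.text.lower():
        return reply.text
    return "A human agent will assist you shortly."

print(handle_message("What are your business hours?", history=[]))
```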
1.3 AI Models We Use
We support multiple AI providers:
| Provider | Models | Strengths | Use Cases |
|---|---|---|---|
| OpenAI | GPT-4o, GPT-4o-mini, GPT-4-turbo, GPT-3.5-turbo | Fast, accurate, well-tested | General customer support |
| Anthropic | Claude 3.5 Sonnet, Claude 3 Opus, Claude 3 Haiku | Safe, nuanced, longer context | Complex queries, detailed responses |
You choose which model to use in your organization settings.
Model selection tips:
- GPT-4o-mini / Claude 3 Haiku: Faster, cheaper, good for simple questions
- GPT-4o / Claude 3.5 Sonnet: More accurate, better for complex questions
- Claude 3 Opus: Best quality, highest cost, use for critical support
1.4 Vector Embeddings and Semantic Search
What are embeddings?
- Embeddings convert text into numbers (vectors) that capture meaning
- Similar concepts have similar vectors (e.g., "dog" and "puppy" are close in vector space)
How we use them:
- Each knowledge base document is split into chunks (paragraphs)
- Each chunk is converted to a 1536-dimension vector (using OpenAI's text-embedding-ada-002 model)
- Vectors are stored in our database (pgvector extension for PostgreSQL)
- When a customer asks a question, we convert it to a vector and search for similar vectors
Benefit: Semantic search understands meaning, not just keywords.
- Traditional search: "What time do you open?" → Must contain words "time" or "open"
- Semantic search: "When do you start?" → Finds "business hours" even without exact keywords
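To make the retrieval step concrete, here is a rough sketch of what a semantic search query against pgvector could look like in Python. The table and column names (`kb_chunks`, `content`, `embedding`), the connection string, and the use of the `openai` and `psycopg` packages are assumptions for illustration only.

```python
# Illustrative semantic search against a pgvector-backed knowledge base.
# Schema and connection details are hypothetical, not backend.chat's actual setup.
from openai import OpenAI
import psycopg

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def top_chunks(question: str, k: int = 5) -> list[str]:
    # Embed the question (text-embedding-ada-002 returns a 1536-dimension vector)
    vec = client.embeddings.create(
        model="text-embedding-ada-002", input=question
    ).data[0].embedding
    vec_literal = "[" + ",".join(str(x) for x in vec) + "]"  # pgvector text format

    # Order chunks by cosine distance (pgvector's <=> operator) and keep the top k
    with psycopg.connect("dbname=backend_chat") as conn:  # hypothetical database
        rows = conn.execute(
            "SELECT content FROM kb_chunks ORDER BY embedding <=> %s::vector LIMIT %s",
            (vec_literal, k),
        ).fetchall()
    return [row[0] for row in rows]
```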
2. AI Capabilities
What Our AI CAN Do
✅ Answer factual questions from your knowledge base
- "What are your business hours?"
- "How do I reset my password?"
- "What's your refund policy?"
✅ Provide step-by-step instructions
- "How do I install your product?"
- "What are the steps to cancel my subscription?"
✅ Retrieve relevant information
- "Tell me about your pricing plans"
- "What features are included in the Pro plan?"
✅ Understand context and follow-up questions
- Customer: "Do you offer refunds?"
- AI: "Yes, we offer a 30-day money-back guarantee."
- Customer: "How do I request one?"
- AI: (Understands "one" = refund and provides instructions)
✅ Detect when it doesn't know
- If no relevant information is found, AI says "I don't know" or escalates to a human
- Better to admit ignorance than make up an answer!
✅ Multilingual support (depending on model)
- GPT and Claude support 50+ languages
- Can respond in the language the customer uses
3. AI Limitations
What Our AI CANNOT Do (or Should Not Be Relied Upon For)
❌ Guarantee 100% accuracy
- AI can make mistakes, even with RAG
- Always have human oversight for critical questions
❌ Understand emotions perfectly
- AI can detect sentiment (positive/negative) but may miss sarcasm, nuance, or cultural context
❌ Make legal, medical, or financial decisions
- AI should NOT provide legal advice, medical diagnoses, or financial recommendations
- Always disclose "This is not professional advice" for such topics
❌ Access real-time or external data
- AI only knows what's in your knowledge base
- Can't check inventory, account balances, or external APIs (future feature: tool use)
❌ Handle highly complex or subjective queries
- "What's the best plan for my use case?" may require human judgment
- AI can provide options but shouldn't make the final decision
❌ Replace human empathy
- For angry, frustrated, or emotional customers, human agents are better
- AI can escalate based on sentiment detection
❌ Creative or open-ended tasks
- AI is great for factual Q&A but not for creative writing, brainstorming, or strategy
4. AI Safety and Ethics
4.1 Reducing Hallucinations
Problem: AI models can "hallucinate" (make up information that sounds plausible but is false).
Our solution: RAG (Retrieval Augmented Generation)
- AI only generates responses based on your knowledge base
- If no relevant information is found, AI says "I don't know"
- Reduces hallucinations by ~80% compared to plain LLMs
Additional safeguards:
- Confidence scores: Only send high-confidence responses
- Human review: Low-confidence responses go to human agents
- Feedback loop: Track which responses are helpful/not helpful
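For illustration, a grounding system prompt in this spirit might look like the sketch below; the wording and the helper function are hypothetical, not backend.chat's actual prompt.

```python
# Illustrative grounding prompt; wording is an example, not backend.chat's actual system prompt.
SYSTEM_PROMPT = """You are a customer support assistant.
Answer using ONLY the context provided below.
If the context does not contain the answer, reply exactly: "I don't know."
Do not invent facts, prices, policies, or dates.

Context:
{context}
"""

def build_messages(context: str, question: str) -> list[dict]:
    """Assemble the chat messages sent to the LLM for a grounded answer."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT.format(context=context)},
        {"role": "user", "content": question},
    ]
```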
4.2 Bias and Fairness
AI models can inherit biases from training data.
What we do:
- Model selection: Use OpenAI and Anthropic models with bias mitigation
- Prompt engineering: Design system prompts to encourage fair, neutral responses
- Knowledge base control: You control the content AI learns from (your own knowledge base)
- Monitoring: Track user feedback to identify biased or problematic responses
What you can do:
- Review AI responses for bias
- Provide feedback (thumbs up/down) on responses
- Update knowledge base to correct biases
- Set escalation rules for sensitive topics
4.3 Privacy and Data Protection
AI providers (OpenAI, Anthropic) see your conversation data.
Safeguards:
- No training on your data: We have agreements that prohibit OpenAI/Anthropic from using backend.chat customer data to train their models
- Encryption in transit: All data sent to AI providers is encrypted (TLS 1.3)
- Data retention: AI providers delete data after processing (per their API terms)
- GDPR compliance: Standard Contractual Clauses (SCCs) for EU data
Your control:
- Opt-out of AI: Disable AI Agent in settings (use human-only mode)
- Choose AI provider: Select OpenAI or Anthropic based on your privacy preferences
- Self-hosting: Host backend.chat yourself and use your own AI API keys
See our Privacy Policy and Subprocessor List for details.
4.4 Content Moderation
AI may encounter inappropriate content (spam, abuse, harmful content).
Filters:
- Profanity filter: Block offensive language
- Hate speech detection: Flag discriminatory content
- Harmful content: Detect violence, self-harm, illegal content
Actions:
- Block inappropriate responses: AI won't generate harmful content
- Escalate to human: Sensitive topics go to human agents
- Log incidents: Track and report violations of Acceptable Use Policy
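One way such a pre-send filter could be implemented is with a moderation endpoint. The sketch below uses OpenAI's moderation API as an example; the escalation and logging behavior shown is illustrative.

```python
# Illustrative pre-send content check using OpenAI's moderation endpoint.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def is_safe_to_send(text: str) -> bool:
    """Return False if the moderation model flags the text as harmful."""
    result = client.moderations.create(
        model="omni-moderation-latest", input=text
    ).results[0]
    return not result.flagged

draft = "Here is how to reset your password: ..."
if is_safe_to_send(draft):
    print(draft)  # deliver the AI response
else:
    print("Escalated to a human agent; incident logged for review.")  # illustrative handling
```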
5. Transparency and Disclosure
5.1 Disclosing AI Usage to Customers
Best practice: Inform customers when they're interacting with AI.
Recommended disclosures:
- "This chat may be assisted by AI"
- "You're chatting with our AI assistant. A human agent will join if needed."
- "Powered by AI. Ask a question to get started!"
Why disclose?
- Transparency: Customers deserve to know
- Trust: Honesty builds trust
- Legal compliance: Some jurisdictions may require AI disclosure (future laws)
Where to disclose:
- Chat widget greeting message
- Your website's privacy policy
- "Powered by backend.chat" branding (includes AI mention)
5.2 Human Escalation
Customers should always have access to a human agent.
Escalation triggers:
- Customer types "human" or "agent" or "talk to a person"
- AI confidence score below threshold
- Customer sentiment is negative (angry, frustrated)
- Topic is on escalation keyword list (e.g., "refund," "cancel," "complaint")
How it works:
- AI marks conversation for human review
- Notification sent to online agents
- Conversation reassigned to human agent
- Customer notified: "A human agent will assist you shortly."
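A simplified sketch of how these triggers could be combined is shown below; the phrase lists, sentiment labels, and 0.5 threshold are illustrative defaults, not backend.chat's exact logic.

```python
# Illustrative escalation check combining the triggers listed above.
HUMAN_REQUEST_PHRASES = {"human", "agent", "talk to a person"}   # explicit requests
ESCALATION_KEYWORDS = {"refund", "cancel", "complaint"}          # sensitive topics

def should_escalate(message: str, confidence: float, sentiment: str,
                    threshold: float = 0.5) -> bool:
    text = message.lower()
    if any(phrase in text for phrase in HUMAN_REQUEST_PHRASES):
        return True   # customer explicitly asked for a human
    if confidence < threshold:
        return True   # AI is not confident enough in its answer
    if sentiment == "negative":
        return True   # angry or frustrated customer
    if any(word in text for word in ESCALATION_KEYWORDS):
        return True   # topic is on the escalation keyword list
    return False

print(should_escalate("I want to cancel my plan", confidence=0.9, sentiment="neutral"))  # True
```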
5.3 Explainability
How did AI arrive at this answer?
In your dashboard, you can see:
- Knowledge base chunks used: Which documents AI referenced
- Confidence score: How confident AI was (0.0 to 1.0)
- AI provider and model: Which LLM generated the response
- Token usage: How much AI processing was used
- Tools called: Which tools AI used (knowledge base search, customer history, etc.)
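As a rough illustration, the record behind each AI response might look something like this; the field names and values are hypothetical, not the dashboard's actual schema.

```python
# Hypothetical shape of the metadata stored for one AI-generated response.
response_metadata = {
    "chunks_used": ["faq.md#business-hours", "policies.md#refunds"],  # knowledge base chunks referenced
    "confidence": 0.87,                                               # 0.0 to 1.0
    "provider": "anthropic",                                          # AI provider
    "model": "claude-3-5-sonnet",                                     # specific LLM
    "tokens": {"prompt": 912, "completion": 74},                      # token usage
    "tools_called": ["knowledge_base_search", "customer_history"],    # tools the AI used
}
```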
Future: "Explain this answer" feature (shows reasoning trace)
6. Your Control Over AI
6.1 AI Configuration
You have full control over AI behavior:
Settings you can configure:
- Enable/Disable AI: Turn AI on or off entirely
- Auto-reply: Should AI send responses automatically, or just suggest responses for human approval?
- Confidence threshold: Minimum confidence for auto-reply (e.g., 0.7 = 70%)
- Escalation threshold: Confidence below this triggers human escalation (e.g., 0.5 = 50%)
- Operating hours: AI only responds during business hours (escalate outside hours)
- System prompt: Customize AI's personality and instructions
- Temperature: Control randomness (0.0 = deterministic, 1.0 = creative)
- Max tokens: Limit response length
Advanced settings:
- Escalation keywords: Words that trigger human escalation (e.g., "refund," "cancel")
- Enabled tools: Choose which tools AI can use (knowledge base search, customer history, etc.)
- Model selection: Choose OpenAI or Anthropic, and which specific model
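As an example, a configuration that combines these settings might look like the sketch below; the keys and values are illustrative, not backend.chat's actual settings schema.

```python
# Hypothetical AI Agent configuration combining the settings described above.
ai_agent_config = {
    "enabled": True,
    "auto_reply": True,                    # send automatically vs. only suggest for approval
    "confidence_threshold": 0.7,           # minimum confidence for auto-reply
    "escalation_threshold": 0.5,           # below this, escalate to a human agent
    "operating_hours": {"start": "09:00", "end": "17:00", "timezone": "UTC"},
    "system_prompt": "You are a friendly, professional support assistant.",
    "temperature": 0.2,                    # low randomness for factual answers
    "max_tokens": 500,                     # limit response length
    "escalation_keywords": ["refund", "cancel"],
    "enabled_tools": ["knowledge_base_search", "customer_history"],
    "provider": "openai",                  # or "anthropic"
    "model": "gpt-4o-mini",
}
```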
6.2 Opt-Out of AI Training
Your data, your choice.
How we use your data:
- Improve backend.chat AI features
- Develop our own models (future)
- Create anonymized datasets for research
You can opt out:
- Organization-level: Settings > AI Agent > "Opt out of AI training"
- Email request: [email protected] (Subject: "Opt-Out of AI Training")
What happens when you opt out:
- Your conversation data is NOT used to train our models
- Your data may still be processed by OpenAI/Anthropic (but they don't train on it per our agreement)
- All other features continue to work normally
See our Privacy Policy for details.
6.3 Feedback and Improvement
Help us improve AI quality.
Feedback mechanisms:
- Thumbs up/down: Rate AI responses as helpful or not helpful
- Edit response: Correct AI's answer (shows us what the right answer should be)
- Report issue: Flag problematic responses (offensive, inaccurate, etc.)
How we use feedback:
- Identify low-quality responses
- Improve knowledge base (add missing info)
- Fine-tune AI prompts
- Train future models (if you haven't opted out)
7. Use Cases and Best Practices
7.1 When to Use AI
✅ Good use cases:
- FAQ automation: "What are your business hours?" "How do I reset my password?"
- Product information: "What features are in the Pro plan?" "Is this compatible with Mac?"
- Troubleshooting: "My widget isn't loading" (if you have troubleshooting guides in knowledge base)
- Policy questions: "What's your refund policy?" "Do you offer discounts?"
✅ Best results when:
- You have a comprehensive knowledge base
- Questions are factual and well-documented
- Customer queries are common and repetitive
7.2 When to Use Humans
👤 Better with human agents:
- Complex or subjective questions: "What's the best plan for my business?"
- Emotional customers: Angry, frustrated, or upset users
- Sales conversations: Upselling, negotiation, custom quotes
- Edge cases: Questions not covered in knowledge base
- High-stakes decisions: Cancellations, refunds, escalations
Hybrid approach (AI + Human):
- AI suggests a response → Human reviews and edits → Human sends
7.3 Best Practices
To get the most out of AI:
1. Build a comprehensive knowledge base
   - Add FAQs, troubleshooting guides, product docs
   - Keep it up-to-date (AI is only as good as your content)
   - Use clear, concise language
2. Set appropriate confidence thresholds
   - Start with 0.7 (70% confidence) for auto-reply
   - Adjust based on accuracy (if there are too many errors, increase to 0.8)
3. Review AI responses regularly
   - Check the dashboard for AI-generated responses
   - Look for patterns in low-confidence responses
   - Update the knowledge base to fill gaps
4. Use escalation keywords
   - Add words like "refund," "cancel," "speak to manager" to the escalation list
   - Ensures sensitive conversations go to humans
5. Customize the system prompt
   - Define AI's tone and style (e.g., "friendly and professional")
   - Add specific instructions (e.g., "Always ask for clarification if unsure")
6. Monitor and iterate
   - Track AI metrics: auto-reply rate, escalation rate, user feedback
   - Continuously improve based on data
8. Prohibited AI Uses
Do NOT use our AI Agent for:
❌ High-risk applications (without proper safeguards):
- Medical diagnosis or treatment advice
- Legal advice
- Financial advice (trading, investment)
- Safety-critical decisions
❌ Harmful or unethical purposes:
- Generating misinformation or disinformation
- Harassment or abuse
- Spam or phishing
- Impersonation (without disclosure)
- Bypassing content moderation
❌ Violating laws or regulations:
- Illegal activities
- Privacy violations (e.g., processing children's data without consent)
- Discrimination or bias
See our Acceptable Use Policy for the full list.
9. Future AI Features (Roadmap)
Coming soon:
Q2 2025:
- Tool use: AI can call external APIs (e.g., check inventory, account balance)
- Sentiment analysis: Better detection of customer emotion (escalate angry customers)
- Multi-turn conversations: AI remembers context across multiple messages
Q3 2025:
- Fine-tuning: Train AI on your specific domain (custom models)
- A/B testing: Test different AI prompts and models
- Advanced analytics: Detailed AI performance metrics
Q4 2025:
- Voice AI: AI-powered voice support (phone, voice messages)
- Multilingual knowledge base: Automatic translation of the knowledge base
- Proactive AI: AI suggests responses before the customer asks
10. Compliance and Regulations
10.1 AI Regulations We Monitor
We stay informed about emerging AI laws:
- EU AI Act (European Union, 2024)
  - Classifies AI systems by risk (minimal, limited, high, unacceptable)
  - Customer support AI is likely "limited risk" (transparency requirements)
  - We will comply with disclosure and transparency obligations
- US AI regulations (California AI Transparency Act, etc.)
  - Disclosure of AI-generated content
  - Consumer rights regarding automated decisions
- India AI regulations (future)
  - India is developing AI governance frameworks
  - We will monitor and comply
10.2 Responsible AI Principles
We commit to:
✅ Transparency: Clear disclosure of AI usage, how it works, and its limitations
✅ Fairness: Minimize bias, ensure equitable treatment
✅ Privacy: Protect user data, comply with GDPR/CCPA
✅ Safety: Prevent harmful outputs, content moderation
✅ Accountability: Human oversight, escalation to humans, audit trails
✅ Reliability: Reduce errors, monitor performance, continuous improvement
11. Contact and Feedback
Questions about AI? Email: [email protected]
Report AI issues:
- Inaccurate responses
- Biased or offensive content
- Privacy concerns
- Hallucinations or made-up information
Feedback: Help us improve! Share your suggestions at [email protected]
12. Additional Resources
- Privacy Policy – How we handle your data (including AI training)
- Terms of Service – AI Agent terms and disclaimers
- Acceptable Use Policy – Prohibited AI uses
- Subprocessor List – AI providers (OpenAI, Anthropic)
- Security & Compliance – How we protect your data
AI is a powerful tool, but it's not perfect. We believe in augmenting human intelligence, not replacing it. Use AI responsibly, keep humans in the loop, and always prioritize transparency and customer trust.
Questions? Feedback? Email [email protected]