LibreChat supports multiple AI providers. Configure them using environment variables and the librechat.yaml file.
## OpenAI

OpenAI API key. Set to `user_provided` to allow users to provide their own keys:

```bash
OPENAI_API_KEY=user_provided
```

Comma-separated list of available OpenAI models:

```bash
OPENAI_MODELS=gpt-5,gpt-5-codex,gpt-5-mini,o3-pro,o3,o4-mini,gpt-4o,gpt-4o-mini
```

Additional options:

- Custom reverse proxy URL for the OpenAI API
- Enable OpenAI debug logging
### Conversation Titles

Enable automatic conversation title generation.

`OPENAI_TITLE_MODEL` (string, default: `"gpt-4o-mini"`): Model used for generating conversation titles.

### Summarization

Enable conversation summarization.

`OPENAI_SUMMARY_MODEL` (string, default: `"gpt-4o-mini"`): Model used for conversation summarization.
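Putting the OpenAI options above together, a minimal `.env` sketch (the model choices are illustrative, not required values):

```bash
# Let users supply their own OpenAI keys
OPENAI_API_KEY=user_provided

# Restrict the model picker to a known-good subset
OPENAI_MODELS=gpt-4o,gpt-4o-mini

# Use an inexpensive model for titles and summaries
OPENAI_TITLE_MODEL=gpt-4o-mini
OPENAI_SUMMARY_MODEL=gpt-4o-mini
```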
## Anthropic (Claude)

Anthropic API key:

```bash
ANTHROPIC_API_KEY=user_provided
```

Comma-separated list of Anthropic models:

```bash
ANTHROPIC_MODELS=claude-sonnet-4-6,claude-opus-4-6,claude-opus-4-20250514,claude-3-7-sonnet-20250219,claude-3-5-sonnet-20241022,claude-3-5-haiku-20241022
```

A custom reverse proxy for the Anthropic API can also be configured.

### Anthropic via Google Vertex AI

Use Anthropic models through Google Vertex AI instead of the direct API, specifying the Google Vertex AI region for Anthropic models:
```yaml
endpoints:
  anthropic:
    streamRate: 20
    titleModel: claude-3.5-haiku
    vertex:
      region: "us-east5"
      serviceKeyFile: "/path/to/service-account.json"
      models:
        claude-opus-4.5:
          deploymentName: claude-opus-4-5@20251101
        claude-sonnet-4:
          deploymentName: claude-sonnet-4-20250514
```
## Google (Gemini)

Google API key (for the Gemini API / AI Studio).

Comma-separated list of Google models:

```bash
GOOGLE_MODELS=gemini-3.1-pro-preview,gemini-2.5-pro,gemini-2.5-flash,gemini-2.0-flash
```

Additional options:

- Custom reverse proxy for the Google API
- Pass the API key in the `Authorization` header instead of `X-goog-api-key` (some reverse proxies don't support the `X-goog-api-key` header)

### Vertex AI Configuration

Path to the Google Cloud service account JSON file:

```bash
GOOGLE_SERVICE_KEY_FILE=/path/to/service-account.json
```

- `GOOGLE_LOC` (string, default: `"us-central1"`): Google Cloud region for Vertex AI
- Alternative region for Gemini Image Generation
### Safety Settings

Google safety settings (apply to both Vertex AI and the Gemini API):

```bash
GOOGLE_SAFETY_SEXUALLY_EXPLICIT=BLOCK_ONLY_HIGH
GOOGLE_SAFETY_HATE_SPEECH=BLOCK_ONLY_HIGH
GOOGLE_SAFETY_HARASSMENT=BLOCK_ONLY_HIGH
GOOGLE_SAFETY_DANGEROUS_CONTENT=BLOCK_ONLY_HIGH
GOOGLE_SAFETY_CIVIC_INTEGRITY=BLOCK_ONLY_HIGH
```

Options: `BLOCK_NONE`, `BLOCK_ONLY_HIGH`, `BLOCK_MEDIUM_AND_ABOVE`, `BLOCK_LOW_AND_ABOVE`. For Vertex AI, `BLOCK_NONE` requires allowlist access or monthly invoiced billing.
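Combining the Vertex AI and safety options above, a `.env` sketch (the path and region are placeholders to adapt):

```bash
# Service account credentials and region for Vertex AI
GOOGLE_SERVICE_KEY_FILE=/path/to/service-account.json
GOOGLE_LOC=us-central1

# Relax safety filtering to block only high-probability content
GOOGLE_SAFETY_SEXUALLY_EXPLICIT=BLOCK_ONLY_HIGH
GOOGLE_SAFETY_HATE_SPEECH=BLOCK_ONLY_HIGH
GOOGLE_SAFETY_HARASSMENT=BLOCK_ONLY_HIGH
GOOGLE_SAFETY_DANGEROUS_CONTENT=BLOCK_ONLY_HIGH
GOOGLE_SAFETY_CIVIC_INTEGRITY=BLOCK_ONLY_HIGH
```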
## AWS Bedrock

AWS region for Bedrock:

```bash
BEDROCK_AWS_DEFAULT_REGION=us-east-1
```

Credentials:

- `BEDROCK_AWS_ACCESS_KEY_ID`: AWS access key ID
- `BEDROCK_AWS_SECRET_ACCESS_KEY`: AWS secret access key
- `BEDROCK_AWS_SESSION_TOKEN`: AWS session token (for temporary credentials)

Comma-separated list of Bedrock model IDs. If omitted, all known supported models are included:

```bash
BEDROCK_AWS_MODELS=anthropic.claude-sonnet-4-6,anthropic.claude-opus-4-6-v1,meta.llama3-1-8b-instruct-v1:0
```
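For example, a `.env` sketch for Bedrock with static IAM credentials (the key values are placeholders):

```bash
BEDROCK_AWS_DEFAULT_REGION=us-east-1
BEDROCK_AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
BEDROCK_AWS_SECRET_ACCESS_KEY=your-secret-key
# BEDROCK_AWS_SESSION_TOKEN is only needed with temporary credentials

# Limit the model list; omit entirely to include all known supported models
BEDROCK_AWS_MODELS=anthropic.claude-sonnet-4-6,meta.llama3-1-8b-instruct-v1:0
```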
### Bedrock Configuration (YAML)

```yaml
endpoints:
  bedrock:
    models:
      - "anthropic.claude-3-7-sonnet-20250219-v1:0"
      - "anthropic.claude-3-5-sonnet-20241022-v2:0"
    inferenceProfiles:
      "us.anthropic.claude-sonnet-4-20250514-v1:0": "${BEDROCK_INFERENCE_PROFILE_CLAUDE_SONNET}"
      "anthropic.claude-3-7-sonnet-20250219-v1:0": "arn:aws:bedrock:us-west-2:123456789012:application-inference-profile/abc123"
    guardrailConfig:
      guardrailIdentifier: "your-guardrail-id"
      guardrailVersion: "1"
      trace: "enabled"
```
## Azure OpenAI

Azure environment variables are **deprecated**. Use the `librechat.yaml` configuration instead; see the Custom Endpoints documentation for Azure OpenAI configuration.
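As a rough sketch of the general shape of a `librechat.yaml` Azure configuration (the group name, instance name, API version, and deployment name below are placeholders; consult the Custom Endpoints documentation for the authoritative schema):

```yaml
endpoints:
  azureOpenAI:
    groups:
      - group: "my-azure-group"        # placeholder group name
        apiKey: "${AZURE_API_KEY}"
        instanceName: "my-instance"    # placeholder Azure resource name
        version: "2024-02-15-preview"  # example API version
        models:
          gpt-4o:
            deploymentName: "gpt-4o-deployment"  # placeholder deployment
```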
## Assistants API

API key for OpenAI Assistants:

```bash
ASSISTANTS_API_KEY=user_provided
```

A custom base URL for the Assistants API can also be set.

Comma-separated list of models available for Assistants:

```bash
ASSISTANTS_MODELS=gpt-4o,gpt-4o-mini,gpt-4-turbo-preview
```
### Assistants Configuration (YAML)

```yaml
endpoints:
  assistants:
    disableBuilder: false
    pollIntervalMs: 3000
    timeoutMs: 180000
    supportedIds: ["asst_supportedAssistantId1", "asst_supportedAssistantId2"]
    retrievalModels: ["gpt-4-turbo-preview"]
    capabilities: ["code_interpreter", "retrieval", "actions", "tools", "image_vision"]
```
## Known AI Providers

These providers can be configured via environment variables and used with custom endpoints:

```bash
ANYSCALE_API_KEY=
APIPIE_API_KEY=
COHERE_API_KEY=
DEEPSEEK_API_KEY=
DATABRICKS_API_KEY=
FIREWORKS_API_KEY=
GROQ_API_KEY=
HUGGINGFACE_TOKEN=
MISTRAL_API_KEY=
OPENROUTER_KEY=
PERPLEXITY_API_KEY=
SHUTTLEAI_API_KEY=
TOGETHERAI_API_KEY=
UNIFY_API_KEY=
XAI_API_KEY=
```
See Custom Endpoints for configuration examples.
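As an illustration, one of the keys above (Groq here) is typically paired with a custom endpoint entry in `librechat.yaml`. The `baseURL` and model names below are assumptions to adapt to your provider:

```yaml
endpoints:
  custom:
    - name: "Groq"
      apiKey: "${GROQ_API_KEY}"
      baseURL: "https://api.groq.com/openai/v1/"  # assumed OpenAI-compatible base URL
      models:
        default: ["llama-3.1-8b-instant"]  # example model ID
        fetch: true                        # fetch the live model list from the provider
```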
## Agents Endpoint

```yaml
endpoints:
  agents:
    recursionLimit: 50
    maxRecursionLimit: 100
    disableBuilder: false
    maxCitations: 30
    maxCitationsPerFile: 7
    minRelevanceScore: 0.45
    capabilities: ["deferred_tools", "execute_code", "file_search", "actions", "tools"]
```

- `recursionLimit`: default recursion depth for agents
- `maxRecursionLimit`: maximum recursion depth for agents
- `maxCitations`: maximum total citations in agent responses
- `minRelevanceScore`: minimum relevance score for sources (0.0-1.0); set to `0.0` to show all sources without filtering