LibreChat supports multiple AI providers. Configure them using environment variables and the librechat.yaml file.

OpenAI

OPENAI_API_KEY
string
required
OpenAI API key
OPENAI_API_KEY=user_provided
Set to user_provided to allow users to provide their own keys
OPENAI_MODELS
string
Comma-separated list of available OpenAI models
OPENAI_MODELS=gpt-5,gpt-5-codex,gpt-5-mini,o3-pro,o3,o4-mini,gpt-4o,gpt-4o-mini
OPENAI_REVERSE_PROXY
string
Custom reverse proxy URL for OpenAI API
OPENAI_ORGANIZATION
string
OpenAI organization ID
DEBUG_OPENAI
boolean
default:"false"
Enable OpenAI debug logging
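A minimal `.env` sketch combining the options above; the reverse-proxy URL and organization ID are hypothetical placeholders, not defaults:

```bash
# OpenAI endpoint (.env)
OPENAI_API_KEY=user_provided               # or a real key, e.g. sk-...
OPENAI_MODELS=gpt-4o,gpt-4o-mini           # restricts the model picker
# OPENAI_REVERSE_PROXY=https://proxy.example.com/v1   # placeholder URL
# OPENAI_ORGANIZATION=org-xxxxxxxx                    # placeholder org ID
DEBUG_OPENAI=false
```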

Conversation Titles

TITLE_CONVO
boolean
default:"true"
Enable automatic conversation title generation
OPENAI_TITLE_MODEL
string
default:"gpt-4o-mini"
Model used for generating conversation titles

Summarization

OPENAI_SUMMARIZE
boolean
Enable conversation summarization
OPENAI_SUMMARY_MODEL
string
default:"gpt-4o-mini"
Model used for conversation summarization
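Taken together with the title settings above, a sketch enabling both features with the documented default model:

```bash
# Conversation titles and summarization (.env)
TITLE_CONVO=true
OPENAI_TITLE_MODEL=gpt-4o-mini
OPENAI_SUMMARIZE=true
OPENAI_SUMMARY_MODEL=gpt-4o-mini
```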

Anthropic (Claude)

ANTHROPIC_API_KEY
string
required
Anthropic API key
ANTHROPIC_API_KEY=user_provided
ANTHROPIC_MODELS
string
Comma-separated list of Anthropic models
ANTHROPIC_MODELS=claude-sonnet-4-6,claude-opus-4-6,claude-opus-4-20250514,claude-3-7-sonnet-20250219,claude-3-5-sonnet-20241022,claude-3-5-haiku-20241022
ANTHROPIC_REVERSE_PROXY
string
Custom reverse proxy for Anthropic API
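A minimal `.env` sketch for the Anthropic endpoint; the model list is a subset of the example above and the proxy URL is a placeholder:

```bash
# Anthropic endpoint (.env)
ANTHROPIC_API_KEY=user_provided
ANTHROPIC_MODELS=claude-3-5-sonnet-20241022,claude-3-5-haiku-20241022
# ANTHROPIC_REVERSE_PROXY=https://proxy.example.com   # placeholder URL
```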

Anthropic via Google Vertex AI

ANTHROPIC_USE_VERTEX
boolean
Use Anthropic models through Google Vertex AI instead of direct API
ANTHROPIC_VERTEX_REGION
string
default:"us-east5"
Google Vertex AI region for Anthropic models
endpoints:
  anthropic:
    streamRate: 20
    titleModel: claude-3.5-haiku
    vertex:
      region: "us-east5"
      serviceKeyFile: "/path/to/service-account.json"
      models:
        claude-opus-4.5:
          deploymentName: claude-opus-4-5@20251101
        claude-sonnet-4:
          deploymentName: claude-sonnet-4-20250514
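The same routing can also be toggled through the environment variables documented above, without the YAML block; a sketch:

```bash
# Route Anthropic models through Google Vertex AI (.env)
ANTHROPIC_USE_VERTEX=true
ANTHROPIC_VERTEX_REGION=us-east5   # documented default
```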

Google (Gemini)

GOOGLE_KEY
string
required
Google API key (for Gemini API/AI Studio)
GOOGLE_KEY=user_provided
GOOGLE_MODELS
string
Comma-separated list of Google models
GOOGLE_MODELS=gemini-3.1-pro-preview,gemini-2.5-pro,gemini-2.5-flash,gemini-2.0-flash
GOOGLE_REVERSE_PROXY
string
Custom reverse proxy for Google API
GOOGLE_AUTH_HEADER
boolean
Pass API key in Authorization header instead of X-goog-api-key
Some reverse proxies don’t support the X-goog-api-key header
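A minimal `.env` sketch for the Gemini API (AI Studio) path:

```bash
# Google Gemini endpoint (.env)
GOOGLE_KEY=user_provided
GOOGLE_MODELS=gemini-2.5-pro,gemini-2.5-flash
# GOOGLE_AUTH_HEADER=true   # only if your proxy rejects the X-goog-api-key header
```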

Vertex AI Configuration

GOOGLE_SERVICE_KEY_FILE
string
Path to Google Cloud service account JSON file
GOOGLE_SERVICE_KEY_FILE=/path/to/service-account.json
GOOGLE_LOC
string
default:"us-central1"
Google Cloud region for Vertex AI
GOOGLE_CLOUD_LOCATION
string
default:"global"
Alternative region for Gemini Image Generation
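For Vertex AI, service-account authentication replaces the API key; a sketch using the variables above:

```bash
# Gemini via Vertex AI (.env) -- service account instead of GOOGLE_KEY
GOOGLE_SERVICE_KEY_FILE=/path/to/service-account.json
GOOGLE_LOC=us-central1   # documented default
```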

Safety Settings

GOOGLE_SAFETY_*
string
Google Safety Settings (apply to both Vertex AI and Gemini API)
GOOGLE_SAFETY_SEXUALLY_EXPLICIT=BLOCK_ONLY_HIGH
GOOGLE_SAFETY_HATE_SPEECH=BLOCK_ONLY_HIGH
GOOGLE_SAFETY_HARASSMENT=BLOCK_ONLY_HIGH
GOOGLE_SAFETY_DANGEROUS_CONTENT=BLOCK_ONLY_HIGH
GOOGLE_SAFETY_CIVIC_INTEGRITY=BLOCK_ONLY_HIGH
Options: BLOCK_NONE, BLOCK_ONLY_HIGH, BLOCK_MEDIUM_AND_ABOVE, BLOCK_LOW_AND_ABOVE
For Vertex AI, BLOCK_NONE requires allowlist access or monthly invoiced billing

AWS Bedrock

BEDROCK_AWS_DEFAULT_REGION
string
required
AWS region for Bedrock
BEDROCK_AWS_DEFAULT_REGION=us-east-1
BEDROCK_AWS_ACCESS_KEY_ID
string
AWS access key ID
BEDROCK_AWS_SECRET_ACCESS_KEY
string
AWS secret access key
BEDROCK_AWS_SESSION_TOKEN
string
AWS session token (for temporary credentials)
BEDROCK_AWS_MODELS
string
Comma-separated list of Bedrock model IDs
BEDROCK_AWS_MODELS=anthropic.claude-sonnet-4-6,anthropic.claude-opus-4-6-v1,meta.llama3-1-8b-instruct-v1:0
If omitted, all known supported models are included
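A minimal `.env` sketch for Bedrock; the credential values are placeholders:

```bash
# AWS Bedrock endpoint (.env) -- credential values are placeholders
BEDROCK_AWS_DEFAULT_REGION=us-east-1
BEDROCK_AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
BEDROCK_AWS_SECRET_ACCESS_KEY=your-secret-key
# BEDROCK_AWS_SESSION_TOKEN=...   # only needed for temporary credentials
# BEDROCK_AWS_MODELS=...          # omit to include all known supported models
```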

Bedrock Configuration (YAML)

endpoints:
  bedrock:
    models:
      - "anthropic.claude-3-7-sonnet-20250219-v1:0"
      - "anthropic.claude-3-5-sonnet-20241022-v2:0"
    
    inferenceProfiles:
      "us.anthropic.claude-sonnet-4-20250514-v1:0": "${BEDROCK_INFERENCE_PROFILE_CLAUDE_SONNET}"
      "anthropic.claude-3-7-sonnet-20250219-v1:0": "arn:aws:bedrock:us-west-2:123456789012:application-inference-profile/abc123"
    
    guardrailConfig:
      guardrailIdentifier: "your-guardrail-id"
      guardrailVersion: "1"
      trace: "enabled"

Azure OpenAI

Azure environment variables are DEPRECATED. Use librechat.yaml configuration instead.
See the Custom Endpoints documentation for Azure OpenAI configuration.

Assistants API

ASSISTANTS_API_KEY
string
required
API key for OpenAI Assistants
ASSISTANTS_API_KEY=user_provided
ASSISTANTS_BASE_URL
string
Custom base URL for Assistants API
ASSISTANTS_MODELS
string
Comma-separated list of models available for Assistants
ASSISTANTS_MODELS=gpt-4o,gpt-4o-mini,gpt-4-turbo-preview

Assistants Configuration (YAML)

endpoints:
  assistants:
    disableBuilder: false
    pollIntervalMs: 3000
    timeoutMs: 180000
    supportedIds: ["asst_supportedAssistantId1", "asst_supportedAssistantId2"]
    retrievalModels: ["gpt-4-turbo-preview"]
    capabilities: ["code_interpreter", "retrieval", "actions", "tools", "image_vision"]

Known AI Providers

These providers can be configured via environment variables and used with custom endpoints:
ANYSCALE_API_KEY=
APIPIE_API_KEY=
COHERE_API_KEY=
DEEPSEEK_API_KEY=
DATABRICKS_API_KEY=
FIREWORKS_API_KEY=
GROQ_API_KEY=
HUGGINGFACE_TOKEN=
MISTRAL_API_KEY=
OPENROUTER_KEY=
PERPLEXITY_API_KEY=
SHUTTLEAI_API_KEY=
TOGETHERAI_API_KEY=
UNIFY_API_KEY=
XAI_API_KEY=
See Custom Endpoints for configuration examples.
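As a sketch, here is one of the keys above (Groq) wired into a custom endpoint in librechat.yaml. The baseURL and model name are illustrative assumptions and should be checked against the provider's own documentation:

```yaml
endpoints:
  custom:
    - name: "Groq"
      apiKey: "${GROQ_API_KEY}"          # reads the env var listed above
      baseURL: "https://api.groq.com/openai/v1"
      models:
        default: ["llama-3.1-8b-instant"]
        fetch: true                      # fetch the live model list if supported
      titleConvo: true
      titleModel: "current_model"
```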

Agents Endpoint

endpoints:
  agents:
    recursionLimit: 50
    maxRecursionLimit: 100
    disableBuilder: false
    maxCitations: 30
    maxCitationsPerFile: 7
    minRelevanceScore: 0.45
    capabilities: ["deferred_tools", "execute_code", "file_search", "actions", "tools"]
agents.recursionLimit
number
default:"25"
Default recursion depth for agents
agents.maxRecursionLimit
number
default:"25"
Upper bound on the recursion depth that agents can be configured to use
agents.maxCitations
number
default:"30"
Maximum total citations in agent responses
agents.minRelevanceScore
number
default:"0.45"
Minimum relevance score for sources (0.0-1.0)
Set to 0.0 to show all sources without filtering