Everruns supports multiple LLM providers through a unified abstraction layer. Configure providers to access different models and capabilities.
## Supported Providers

### OpenAI

The OpenAI provider supports two API protocols:

- **OpenAI (Responses API)** - Uses the Open Responses API protocol (https://www.openresponses.org/). Recommended for new projects.
- **OpenAI Completions** - Uses the Chat Completions API (`/v1/chat/completions`). For backward compatibility.

The provider also supports:
- GPT-4o, GPT-4o-mini models
- O-series reasoning models (o1, o1-mini, o1-pro, o3-mini)
- Vision capabilities (image inputs)
- Function/tool calling
- Extended thinking/reasoning
### Anthropic

The Anthropic provider supports:

- Claude Sonnet 4, Claude Opus 4 (latest)
- Claude 3.5 Sonnet, Claude 3.5 Haiku
- Claude 3 Opus, Claude 3 Sonnet, Claude 3 Haiku
- Vision capabilities
- Tool calling
- Extended thinking (interleaved thinking with tools)
### Google Gemini

The Gemini provider supports:

- Gemini Pro models
- Vision and multimodal capabilities
- Tool calling
## Adding a Provider

### Create provider

Navigate to Settings > LLM Providers and click Add Provider. Select the provider type and enter:

- **Name** - Display name for the provider
- **API Key** - Your API key from the provider
- **Base URL** (optional) - Custom endpoint URL for OpenAI-compatible APIs
### Sync models

After creating a provider, sync available models from the provider’s API. This discovers available models and adds them to your organization.

**Background sync:** Model discovery runs automatically every 24 hours for all active providers.
## Configuration Options

Options are provider-specific (OpenAI, Anthropic, Gemini). For OpenAI, set `base_url` to use OpenAI-compatible APIs; if no API key is configured, the `DEFAULT_OPENAI_API_KEY` environment variable is used as a development fallback.

## Model Synchronization
### Manual Sync

Trigger model discovery via the API. The sync result reports:

- `created` - New models discovered and added
- `updated` - Existing models updated (metadata refreshed)
- `stale` - Models no longer returned by the provider API
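A sketch of triggering a sync and reading its result. The sync route is a hypothetical example, not the documented endpoint:

```python
import json
from urllib import request

# Hypothetical route -- the actual sync endpoint is deployment-specific.
def build_sync_request(base: str, token: str, provider_id: str) -> request.Request:
    """Build the request that would trigger model discovery for one provider."""
    return request.Request(
        f"{base}/llm-providers/{provider_id}/sync",
        headers={"Authorization": f"Bearer {token}"},
        method="POST",
    )

def summarize_sync(result: dict) -> str:
    """Render the created/updated/stale counts a sync returns."""
    return (f"{result.get('created', 0)} created, "
            f"{result.get('updated', 0)} updated, "
            f"{result.get('stale', 0)} stale")

# Illustrative response values:
print(summarize_sync({"created": 3, "updated": 12, "stale": 1}))
# -> 3 created, 12 updated, 1 stale
```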
### Background Sync

Automatic model discovery runs every 24 hours for all active providers. Discovered models are:

- Created with `source: "discovered"`
- Updated with the latest metadata on each sync
- Marked as stale if not seen in the most recent sync, i.e. `last_seen_at < provider.last_synced_at`
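The staleness rule above can be sketched directly. The record shapes here are illustrative, not Everruns' actual schema:

```python
from datetime import datetime, timedelta, timezone

def stale_models(models: list[dict], last_synced_at: datetime) -> list[str]:
    """Return ids of models not seen in the most recent sync
    (last_seen_at < provider.last_synced_at)."""
    return [m["id"] for m in models if m["last_seen_at"] < last_synced_at]

now = datetime.now(timezone.utc)
models = [
    {"id": "gpt-4o",    "last_seen_at": now},                      # seen this sync
    {"id": "old-model", "last_seen_at": now - timedelta(days=3)},  # missing from recent syncs
]
print(stale_models(models, last_synced_at=now - timedelta(hours=1)))  # -> ['old-model']
```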
### Model Sources

Models can be added via:

- **Manual** - User-created via API or UI
- **Discovered** - Automatically synced from provider API
- **Predefined** - Seeded during system initialization
## Environment Variables

For development convenience, Everruns supports API key fallbacks. An environment variable is used when:

- A provider exists but has no API key configured
- Model sync needs to authenticate

Environment variable fallbacks are for development only. In production, always configure API keys via the UI or API.
## API Key Security

API keys are protected with envelope encryption:

- **Encryption at rest** - AES-256-GCM encryption before storage
- **Never exposed** - API responses show `api_key_set: true` but never return the key
- **Decryption on demand** - Keys are decrypted only during LLM calls
- **Memory lifetime** - Decrypted keys exist in worker memory only during the API call
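A minimal sketch of the envelope-encryption pattern using the third-party `cryptography` package. Everruns' actual key management (KEK storage, nonce handling, record layout) is not specified here; this only illustrates the general technique:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Envelope encryption: a data-encryption key (DEK) encrypts the API key,
# and a key-encryption key (KEK) encrypts (wraps) the DEK. Only the wrapped
# DEK and ciphertext are stored; the KEK lives outside the database.
kek = AESGCM.generate_key(bit_length=256)

def encrypt_api_key(api_key: str) -> dict:
    dek = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(nonce, api_key.encode(), None)
    dek_nonce = os.urandom(12)
    wrapped_dek = AESGCM(kek).encrypt(dek_nonce, dek, None)
    return {"ciphertext": ciphertext, "nonce": nonce,
            "wrapped_dek": wrapped_dek, "dek_nonce": dek_nonce}

def decrypt_api_key(record: dict) -> str:
    # Unwrap the DEK with the KEK, then decrypt the API key with the DEK.
    dek = AESGCM(kek).decrypt(record["dek_nonce"], record["wrapped_dek"], None)
    return AESGCM(dek).decrypt(record["nonce"], record["ciphertext"], None).decode()

record = encrypt_api_key("sk-secret")
assert decrypt_api_key(record) == "sk-secret"
```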
## Provider Status

Providers can be:

- **Active** - Available for model selection
- **Disabled** - Hidden from model selection; existing sessions are unaffected
## Custom Base URLs

Use custom endpoints for:

- Azure OpenAI
- Self-hosted OpenAI-compatible APIs
- Proxy services
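For illustration, a few `base_url` values one might configure. The hostnames and resource names are placeholders:

```python
# Placeholder hosts -- substitute your own resource/deployment names.
base_url_examples = {
    "azure": "https://my-resource.openai.azure.com",       # Azure OpenAI
    "self_hosted": "http://localhost:8000/v1",             # local OpenAI-compatible server
    "proxy": "https://llm-proxy.internal.example.com/v1",  # proxy service
}
```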
## Rate Limiting

Everruns automatically retries transient errors with exponential backoff:

- Max retries: 2
- Initial backoff: 1 second
- Max backoff: 60 seconds
- Jitter: ±25%
Retries apply to these HTTP status codes:

- `408` - Request Timeout
- `429` - Rate Limited
- `5xx` - Server Errors

Everruns honors `retry-after` headers from providers to respect rate limits.
## Next Steps

- **Configure models** - Manage available models
- **Create agents** - Build agents with configured models
- **Security** - Learn about API key encryption