Portkey enhances any OpenAI API-compliant project by adding enterprise-grade features like observability, reliability, rate limiting, access control, and budget management, all without requiring code changes. It is a drop-in replacement for your existing OpenAI-compatible applications. This guide explains how to integrate Portkey with minimal changes to your project settings. While OpenAI (or any other provider) gives you an API for AI model access, commercial usage often requires additional features such as:
- Advanced Observability: Real-time usage tracking for 40+ key metrics and logs for every request
- Unified AI Gateway: Single interface for 250+ LLMs with API key management
- Governance: Real-time spend tracking, budget limits, and RBAC across your AI systems
- Security Guardrails - PII detection, content filtering, and compliance controls
1. Getting Started with Portkey
Portkey allows you to use 250+ LLMs with your Project setup, with minimal configuration required. Let’s set up the core components in Portkey that you’ll need for integration.

Create Virtual Key
Virtual Keys store your LLM provider API keys securely in Portkey and provide:
- Budget limits for API usage
- Rate limiting capabilities
- Secure API key storage

Create Default Config
- Go to Configs in the Portkey dashboard
- Create a new config that references the Virtual Key from the previous step (a sketch is shown below)
- Save and note the Config name for the next step
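For reference, a minimal default config only needs to point at your Virtual Key. The snippet below is a sketch shown as a Python dict (the Configs UI accepts the equivalent JSON), and YOUR_VIRTUAL_KEY is a placeholder for the slug of the key you created above:

```python
# Minimal default config, shown as a Python dict for illustration.
# Paste the equivalent JSON into the Portkey Configs UI.
default_config = {
    "virtual_key": "YOUR_VIRTUAL_KEY",  # placeholder: slug of the Virtual Key created above
}
```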

Configure Portkey API Key
- Go to API Keys in Portkey and create a new API key
- Select your config from Step 2
- Generate and save your API key

2. Integrating Portkey with Your Project
You can integrate Portkey with any OpenAI API-compatible project through a simple configuration change. This integration enables advanced monitoring, security features, and analytics for your LLM applications. Here’s how you do it:
- Locate LLM Settings: Navigate to your project’s LLM settings page and find the OpenAI configuration section (usually labeled ‘OpenAI-Compatible’ or ‘Generic OpenAI’).
- Configure Base URL: Set the base URL to https://api.portkey.ai/v1
- Add API Key: Enter your Portkey API key in the appropriate field. You can generate this key from your Portkey dashboard under the API Keys section.
- Configure Model Settings: If your integration allows direct model configuration, you can specify the model in the LLM settings. Otherwise, create a configuration object, as sketched below.
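If your project lets you set these values programmatically instead of through a UI, the wiring looks roughly like this. This is a sketch assuming the official openai Python SDK; PORTKEY_API_KEY is a placeholder for the key you generated in the Getting Started section, and the default config attached to that key decides which provider and model actually serve the request:

```python
from openai import OpenAI

# Point the OpenAI-compatible client at Portkey's gateway.
client = OpenAI(
    api_key="PORTKEY_API_KEY",             # placeholder: your Portkey API key (not the provider key)
    base_url="https://api.portkey.ai/v1",  # Portkey's OpenAI-compatible endpoint
)

# The request is routed according to the default config attached to the API key.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello from Portkey!"}],
)
print(response.choices[0].message.content)
```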
3. Set Up Enterprise Governance for your Project
Why Enterprise Governance? When you are using any AI tool in an enterprise setting, you need to consider several governance aspects:
- Cost Management: Controlling and tracking AI spending across teams
- Access Control: Managing which teams can use specific models
- Usage Analytics: Understanding how AI is being used across the organization
- Security & Compliance: Maintaining enterprise security standards
- Reliability: Ensuring consistent service across all users
Enterprise Implementation Guide
Step 1: Implement Budget Controls & Rate Limits
Virtual Keys enable granular control over LLM access at the team/department level. This helps you:
- Set up budget limits
- Prevent unexpected usage spikes using rate limits
- Track departmental spending
Setting Up Department-Specific Controls:
- Navigate to Virtual Keys in the Portkey dashboard
- Create a new Virtual Key for each department with budget limits and rate limits
- Configure department-specific limits

Step 2: Define Model Access Rules
As your Project scales, controlling which teams can access specific models becomes crucial. Portkey Configs provide this control layer.
Access Control Features:
- Model Restrictions: Limit access to specific models
- Data Protection: Implement guardrails for sensitive data
- Reliability Controls: Add fallbacks and retry logic
Example Configuration:
Here’s a basic configuration to route requests to OpenAI, specifically using GPT-4o:
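The following is a sketch of such a config, shown as a Python dict (paste the equivalent JSON in the Configs UI). The virtual_key slug is a placeholder, and override_params is used here to pin the model; if your config schema differs, copy the exact shape from your Portkey dashboard:

```python
# Route all requests through an OpenAI Virtual Key and pin the model to GPT-4o.
gpt4o_config = {
    "virtual_key": "openai-virtual-key",  # placeholder: your OpenAI Virtual Key slug
    "override_params": {
        "model": "gpt-4o",                # force GPT-4o regardless of what the caller requests
    },
}
```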
Step 3: Implement Access Controls
Create user-specific API keys that automatically:
- Track usage per user/team with the help of virtual keys
- Apply appropriate configs to route requests
- Collect relevant metadata to filter logs
- Enforce access permissions
Step 4: Deploy & Monitor
After distributing API keys to your team members, your enterprise-ready Project setup is ready to go. Each team member can now use their designated API key with the appropriate access levels and budget controls. Apply your governance setup using the integration steps from the earlier sections, then monitor usage in the Portkey dashboard:
- Cost tracking by department
- Model usage patterns
- Request volumes
- Error rates
Enterprise Features Now Available
Your Project now has:
- Departmental budget controls
- Model access governance
- Usage tracking & attribution
- Security guardrails
- Reliability features
Portkey Features
Now that you have set up your enterprise-grade Project environment, let’s explore the comprehensive features Portkey provides to ensure secure, efficient, and cost-effective AI operations.

1. Comprehensive Metrics
Using Portkey you can track 40+ key metrics including cost, token usage, response time, and performance across all your LLM providers in real time. You can also filter these metrics based on custom metadata that you can set in your configs. Learn more about metadata here.
2. Advanced Logs
Portkey’s logging dashboard provides detailed logs for every request made to your LLMs. These logs include:
- Complete request and response tracking
- Metadata tags for filtering
- Cost attribution and much more…

3. Unified Access to 250+ LLMs
You can easily switch between 250+ LLMs. Call various LLMs such as Anthropic, Gemini, Mistral, Azure OpenAI, Google Vertex AI, AWS Bedrock, and many more by simply changing the virtual key in your default config object.
4. Advanced Metadata Tracking
Using Portkey, you can add custom metadata to your LLM requests for detailed tracking and analytics. Use metadata tags to filter logs, track usage, and attribute costs across departments and teams.
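A sketch of what this can look like from an OpenAI-compatible client, assuming the openai Python SDK and Portkey’s x-portkey-metadata request header; the metadata fields (_user, department) are illustrative placeholders you can replace with your own:

```python
import json
from openai import OpenAI

client = OpenAI(
    api_key="PORTKEY_API_KEY",             # placeholder: your Portkey API key
    base_url="https://api.portkey.ai/v1",
)

# Attach custom metadata to the request so it can be filtered in logs and analytics.
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Summarize this support ticket."}],
    extra_headers={
        "x-portkey-metadata": json.dumps({
            "_user": "user_123",          # illustrative: per-user attribution
            "department": "engineering",  # illustrative: team-level cost tracking
        })
    },
)
```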
5. Enterprise Access Management
- Budget Controls
- Single Sign-On (SSO)
- Organization Management
- Access Rules & Audit Logs
6. Reliability Features
- Fallbacks
- Conditional Routing
- Load Balancing
- Caching
- Smart Retries
- Budget Limits
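As an illustration of how these controls are expressed, a fallback setup lives in a config and looks roughly like the sketch below (shown as a Python dict; both virtual key slugs are placeholders):

```python
# Try OpenAI first; if the request fails, retry the same call against Anthropic.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "openai-virtual-key"},     # placeholder: primary provider
        {"virtual_key": "anthropic-virtual-key"},  # placeholder: fallback provider
    ],
}
```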
7. Advanced Guardrails
Protect your Project’s data and enhance reliability with real-time checks on LLM inputs and outputs. Leverage guardrails to:
- Prevent sensitive data leaks
- Enforce compliance with organizational policies
- PII detection and masking
- Content filtering
- Custom security rules
- Data compliance checks
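Guardrails are typically defined in the Portkey dashboard and then attached to requests through a config. The sketch below shows the rough shape; the input_guardrails/output_guardrails field names and the guardrail IDs are assumptions, so copy the exact snippet from your guardrail’s page in the dashboard:

```python
# Sketch: attach guardrail checks to prompts and responses via a config.
# Field names and IDs below are assumptions; use the exact values from your dashboard.
guarded_config = {
    "virtual_key": "openai-virtual-key",         # placeholder provider key
    "input_guardrails": ["pii-detection-id"],    # assumed: run PII checks on incoming prompts
    "output_guardrails": ["content-filter-id"],  # assumed: filter model responses
}
```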
FAQs
How do I update my Virtual Key limits after creation?
Open the Virtual Keys page in the Portkey dashboard, edit the key you want to change, and adjust its budget or rate limits.
Can I use multiple LLM providers with the same API key?
Yes. Create a Virtual Key for each provider and reference the one you want in your config; the same Portkey API key can then route requests to any of the 250+ supported LLMs.
How do I track costs for different teams?
- Create separate Virtual Keys for each team
- Use metadata tags in your configs
- Set up team-specific API keys
- Monitor usage in the analytics dashboard
What happens if a team exceeds their budget limit?
- Further requests will be blocked
- Team admins receive notifications
- Usage statistics remain available in dashboard
- Limits can be adjusted if needed

