Step 3: Cloud Provider Authentication Secrets
LiteLLM Proxy requires credentials to authenticate with cloud provider services. The recommended authentication method depends on where your Kubernetes cluster is hosted and which AI services you intend to use.
Cloud Authentication Methods
| Cluster Environment | AWS Bedrock | GCP Vertex AI | Azure OpenAI | GitHub Copilot |
|---|---|---|---|---|
| AWS (EKS Cluster) | IRSA (recommended) | Service account JSON key | Entra ID application | OAuth token file |
| GCP (GKE Cluster) | AWS user credentials | Service account JSON key | Entra ID application | OAuth token file |
| Azure (AKS Cluster) | AWS user credentials | Service account JSON key | Entra ID application | OAuth token file |
AWS Bedrock Authentication
Required only if you plan to use models from AWS Bedrock.
Option 1: IRSA (IAM Roles for Service Accounts) – Recommended for EKS
This method securely associates an IAM role with the LiteLLM Proxy's Kubernetes service account, avoiding the need to store static AWS credentials as secrets.
The required IAM Role ARN is automatically generated during the Terraform deployment. You can find it as EKS_AWS_ROLE_ARN in the deployment_outputs.env file.
To enable IRSA, replace %%EKS_AWS_ROLE_ARN%% with the EKS_AWS_ROLE_ARN value in your litellm/values-aws.yaml file:
```yaml
litellm-helm:
  serviceAccount:
    create: true
    annotations:
      eks.amazonaws.com/role-arn: '%%EKS_AWS_ROLE_ARN%%'
```
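The placeholder substitution can be scripted. The sketch below is hypothetical and self-contained: it writes stand-in deployment_outputs.env and litellm/values-aws.yaml files (in a real deployment both already exist) and assumes the outputs file uses plain KEY=value lines.

```shell
# Stand-in fixtures so the sketch runs on its own; your real files replace these.
mkdir -p litellm
printf 'EKS_AWS_ROLE_ARN=arn:aws:iam::123456789012:role/litellm-irsa\n' > deployment_outputs.env
printf "litellm-helm:\n  serviceAccount:\n    annotations:\n      eks.amazonaws.com/role-arn: '%%%%EKS_AWS_ROLE_ARN%%%%'\n" > litellm/values-aws.yaml

# Load the Terraform outputs and replace the placeholder in place (keeps a .bak copy).
source ./deployment_outputs.env
sed -i.bak "s|%%EKS_AWS_ROLE_ARN%%|${EKS_AWS_ROLE_ARN}|g" litellm/values-aws.yaml

# Show the substituted annotation.
grep 'role-arn' litellm/values-aws.yaml
```

The .bak backup makes it easy to diff or revert the values file if the substituted ARN looks wrong.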
Option 2: AWS User Credentials
Use this method if you are not running on EKS or prefer to use static credentials.
You must create the litellm-aws-auth secret manually before deploying the Helm chart.
Create the secret using the following command, replacing the placeholders with your actual credentials:
```bash
kubectl create secret generic litellm-aws-auth \
  --namespace litellm \
  --from-literal=AWS_ACCESS_KEY_ID="YOUR_AWS_ACCESS_KEY_ID" \
  --from-literal=AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET_ACCESS_KEY" \
  --type=Opaque
```
Then, ensure your litellm/values-aws.yaml file is configured to mount this secret:
```yaml
litellm-helm:
  # ... other components
  environmentSecrets:
    - litellm-aws-auth
```
Azure OpenAI Authentication
Required only if you plan to use models from Azure OpenAI.
Option 1: Azure Entra ID Application (Client Credentials)
Authentication is configured via an Azure Entra ID Application. The deployment process requires the following credentials:
- AZURE_TENANT_ID
- AZURE_CLIENT_ID
- AZURE_CLIENT_SECRET

All three values are available in the deployment_outputs.env file, which is generated automatically during the Terraform deployment.
When you run the installation script and select Azure as your cloud provider, you will be prompted to enter these values.
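You can sanity-check that all three values are present before starting the installer. The sketch below is hypothetical and self-contained: it writes a stand-in deployment_outputs.env with dummy values (your generated file replaces this) and fails fast if any variable is missing, assuming plain KEY=value lines.

```shell
# Stand-in fixture so the sketch runs on its own; your real file replaces this.
printf 'AZURE_TENANT_ID=tenant-id\nAZURE_CLIENT_ID=client-id\nAZURE_CLIENT_SECRET=client-secret\n' > deployment_outputs.env

source ./deployment_outputs.env
for var in AZURE_TENANT_ID AZURE_CLIENT_ID AZURE_CLIENT_SECRET; do
  # ${!var} is bash indirect expansion: the value of the variable named by $var
  if [ -z "${!var:-}" ]; then
    echo "Missing: $var" >&2
    exit 1
  fi
done
echo "All Azure credentials present"
```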
Option 2: Direct API key authentication
Documentation for configuring direct API key authentication will be added soon.
Google Vertex AI Authentication
Required only if you plan to use models from Google Vertex AI.
If you select GCP as your cloud provider during the automated installation, you must provide credentials for Vertex AI.
Prerequisite: Before running the script, ensure a valid gcp-service-account.json file is present in the root of the repository. This file is necessary for authentication.
During the script's execution, you will be prompted to enter the following value:
- VERTEX_PROJECT: Your Google Cloud project ID where Vertex AI is enabled.
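Before launching the installer you can verify the prerequisite file. The sketch below is hypothetical and self-contained: it writes a stub gcp-service-account.json (your real service-account key replaces this) and checks that the file exists and parses as JSON.

```shell
# Stub key file so the sketch runs on its own; your real key replaces this.
printf '{"type": "service_account", "project_id": "my-vertex-project"}\n' > gcp-service-account.json

if [ ! -f gcp-service-account.json ]; then
  echo "gcp-service-account.json not found in the repository root" >&2
  exit 1
fi
# json.tool exits non-zero on invalid JSON
python3 -m json.tool gcp-service-account.json > /dev/null && echo "gcp-service-account.json looks valid"
```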
GitHub Copilot Authentication
Required only if you plan to use models from GitHub Copilot.
Prerequisites: A GitHub account with an active Copilot subscription.
GitHub Copilot authenticates via an OAuth access token mounted as a file into the LiteLLM container.
Obtain GitHub Copilot Token
Create a script file get_copilot_token.sh. When run, it will:
- Display a verification URL and a one-time code
- Wait until you open the URL in a browser and enter the code using a GitHub account with access to a Copilot subscription
- Save the token automatically to a file named access-token
get_copilot_token.sh
```bash
#!/bin/bash
# Requires: curl and jq
CLIENT_ID="Iv1.b507a08c87ecfe98"

echo "Requesting device code..."
response=$(curl -s -X POST "https://github.com/login/device/code" \
  -H "accept: application/json" \
  -H "editor-version: Neovim/0.6.1" \
  -H "editor-plugin-version: copilot.vim/1.16.0" \
  -H "content-type: application/json" \
  -H "user-agent: GithubCopilot/1.155.0" \
  -d "{\"client_id\":\"$CLIENT_ID\",\"scope\":\"read:user\"}")

device_code=$(echo "$response" | jq -r '.device_code')
user_code=$(echo "$response" | jq -r '.user_code')
verification_uri=$(echo "$response" | jq -r '.verification_uri')

echo ""
echo "========================================="
echo "Please visit: $verification_uri"
echo "Enter code: $user_code"
echo "========================================="
echo ""
echo "Waiting for authentication..."

# Poll the token endpoint until the device code has been authorized.
while true; do
  sleep 10
  response=$(curl -s -X POST "https://github.com/login/oauth/access_token" \
    -H "accept: application/json" \
    -H "editor-version: Neovim/0.6.1" \
    -H "content-type: application/json" \
    -H "user-agent: GithubCopilot/1.155.0" \
    -d "{\"client_id\":\"$CLIENT_ID\",\"device_code\":\"$device_code\",\"grant_type\":\"urn:ietf:params:oauth:grant-type:device_code\"}")
  access_token=$(echo "$response" | jq -r '.access_token // empty')
  if [ -n "$access_token" ]; then
    echo ""
    echo "Authentication success!"
    echo "Access Token: $access_token"
    echo "$access_token" > access-token
    echo "Token saved to: access-token"
    break
  fi
  echo -n "."
done
```
Run the script. Follow the prompts to authenticate with GitHub. The token will be saved to access-token.
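Before creating the Kubernetes secret in the next step, it is worth confirming the token file was actually written. A hypothetical, self-contained sketch (the printf line writes a fake token so the example runs on its own; omit it when checking a real access-token file):

```shell
# Fake token so the sketch is self-contained; remove this line for a real check.
printf 'gho_exampletoken123\n' > access-token

# -s is true only if the file exists and is non-empty.
if [ ! -s access-token ]; then
  echo "access-token is missing or empty; re-run get_copilot_token.sh" >&2
  exit 1
fi
echo "access-token present"
```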
Mount the Token
Helm
Create a Kubernetes secret from the token file:
```bash
kubectl create secret generic litellm-github-copilot \
  --namespace litellm \
  --from-file=access-token=./access-token \
  --type=Opaque
```
Then configure your litellm/values.yaml to mount the secret and set the required environment variable:
```yaml
litellm-helm:
  # ... additional configuration fields
  volumes:
    - name: github-copilot-token
      secret:
        secretName: litellm-github-copilot
  volumeMounts:
    - name: github-copilot-token
      mountPath: "/app/github_copilot_custom/access-token"
      subPath: access-token
      readOnly: true
  envVars:
    # ... additional configuration fields
    GITHUB_COPILOT_TOKEN_DIR: "/app/github_copilot_custom"
```
Docker Compose
Optional – for local verification.
Mount the access-token file directly as a volume and set the environment variable:
docker-compose.yml
```yaml
services:
  postgres:
    image: pgvector/pgvector:pg17
    container_name: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=litellm
    ports:
      - "5432:5432"
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 10s
      timeout: 5s
      retries: 5
  litellm:
    image: ghcr.io/berriai/litellm-database:main-v1.81.0-stable
    volumes:
      - ./litellm_config.yaml:/app/config.yaml
      - ./access-token:/app/github_copilot_custom/access-token
    command:
      - "--config=/app/config.yaml"
    ports:
      - "4000:4000"
    environment:
      GITHUB_COPILOT_TOKEN_DIR: "/app/github_copilot_custom"
      DATABASE_URL: "postgresql://postgres:password@postgres:5432/litellm"
      STORE_MODEL_IN_DB: "True"
    depends_on:
      - postgres
```
Next Steps
Continue to LiteLLM Model Configuration.