LiteLLM CVE-2026-42208: Pre-Auth SQL Injection in the AI Gateway, Targeted Exploitation Within 36 Hours
Introduction
A critical pre-authentication SQL injection in LiteLLM, the open-source LLM gateway with more than 22,000 GitHub stars that fronts OpenAI, Anthropic, Bedrock, and others for thousands of teams, is being actively exploited. The bug — CVE-2026-42208 (CVSS 9.3, GHSA-r75f-5x8p-qvmc) — sits in the proxy's Authorization: Bearer verification path, which concatenates the bearer value straight into a SELECT against PostgreSQL. Sysdig's Threat Research Team observed the first deliberate exploitation 36 hours and seven minutes after the advisory landed in the GitHub Advisory Database, and the operator they captured went straight for the three tables that hold provider API keys, configured secrets, and the proxy's environment variables.
What Happened
LiteLLM's proxy intercepts every incoming AI API call, looks up the supplied virtual API key in the LiteLLM_VerificationToken PostgreSQL table, and then forwards the request to the right upstream provider. In versions >=1.81.16, <1.83.7, that lookup was implemented with raw string interpolation rather than a parameterised query. A bearer value containing a single quote breaks the SQL literal and lets an attacker append arbitrary statements. Crucially, the lookup happens before authentication is decided, so anything that can reach the proxy port can inject — no API key required.
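The vulnerability class is easy to reproduce. Below is a minimal sketch of interpolated vs. parameterised token lookup — sqlite3 standing in for PostgreSQL/Prisma, with illustrative table and column names; this is not LiteLLM's actual code:

```python
import sqlite3

# toy stand-in for the verification-token table (names are illustrative)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tokens (token TEXT, user TEXT)")
conn.execute("INSERT INTO tokens VALUES ('sk-legit', 'alice')")

def lookup_vulnerable(bearer: str):
    # BAD: the bearer value is pasted straight into the SQL literal
    sql = f"SELECT user FROM tokens WHERE token = '{bearer}'"
    return conn.execute(sql).fetchall()

def lookup_safe(bearer: str):
    # GOOD: the driver sends the value out-of-band as a bound parameter
    return conn.execute("SELECT user FROM tokens WHERE token = ?", (bearer,)).fetchall()

# a single quote breaks out of the literal; the tail makes the WHERE always true
payload = "x' OR '1'='1"
print(lookup_vulnerable(payload))  # [('alice',)] — match without a valid key
print(lookup_safe(payload))        # [] — the payload is treated as an ordinary string
```

Because the lookup runs before any auth decision, the vulnerable variant turns every reachable proxy port into a SQL console.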
Sysdig's honeypots saw two phases. In phase one, the operator ran a textbook UNION SELECT column-count discovery against three specific tables — LiteLLM_VerificationToken (virtual API keys), litellm_credentials (provider keys for OpenAI, Anthropic, Bedrock), and litellm_config (proxy environment variables, which usually carry the PostgreSQL DSN, the master key, callback webhook URLs, and cache backends). The probes used the exact Prisma-generated PostgreSQL identifier casing, meaning the operator had read the LiteLLM source and was not running a generic SQLmap spray. There were zero probes against benign tables like litellm_users or litellm_team.
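The column-count discovery Sysdig describes works by appending UNION SELECT clauses with a growing list of NULLs until the database stops raising a column-mismatch error. A self-contained sketch (sqlite3 standing in for PostgreSQL, illustrative schema; not the actual probe traffic):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# toy stand-in for the verification lookup; real table/column names differ
conn.execute("CREATE TABLE tokens (token TEXT, user TEXT)")

def probe(bearer: str) -> bool:
    """True if the injected lookup query executes without a SQL error."""
    try:
        # same interpolation flaw as the vulnerable lookup; SELECTs two columns
        conn.execute(f"SELECT token, user FROM tokens WHERE token = '{bearer}'")
        return True
    except sqlite3.OperationalError:
        return False

# append UNION SELECT with more and more NULLs until the column counts
# line up and the error disappears — revealing the SELECT's width
for n in range(1, 6):
    nulls = ", ".join(["NULL"] * n)
    if probe(f"x' UNION SELECT {nulls} --"):
        print(f"verification SELECT returns {n} columns")  # prints 2 here
        break
# with the width known, the NULLs are swapped for columns of the target tables
```

This is why the probes' use of exact Prisma-generated identifier casing mattered: the UNION step only works once the attacker knows which table and column names to substitute for the NULLs.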
In phase two, the same operator rotated source IPs (between 65.111.27.132 and 65.111.25.67, both inside the same autonomous system) and re-ran cleaner, more precise queries against the same three targets. The fix in LiteLLM 1.83.7 replaces the string concatenation with a parameterised query; for operators who cannot upgrade immediately, the maintainers point to general_settings.disable_error_logs: true as a workaround that closes the path the malicious input takes to reach the vulnerable query.
Why It Matters
A LiteLLM proxy is, by design, a key concentrator. It exists precisely so individual developers and applications do not have to hold an OpenAI or Anthropic key directly — instead, the proxy holds the upstream secrets and issues short-lived virtual keys downstream. That makes it the highest-value SQL injection target a typical AI engineering team operates: a single successful read on litellm_credentials hands an attacker the production OpenAI bill and conversational metadata for whatever the team has been building.
The 36-hour exploit-after-disclosure window is also the story. It is slower than the recent marimo Python notebook RCE that was hit in hours, but the targeting was tighter — Sysdig's capture showed the attacker had read the source, knew the schema, and knew which three tables to read first. AI infrastructure projects are now firmly in the "advisory-as-blueprint" tier where the public bug report is sufficient to drive a focused breach without ever publishing a PoC.
Who Is Affected
- Operators of LiteLLM versions 1.81.16 through 1.83.6, especially internet-reachable instances or instances exposed to internal networks where any host can talk to them.
- Any team whose LiteLLM proxy holds OpenAI, Anthropic, Bedrock, Azure OpenAI, Vertex, or other provider credentials — those credentials must now be assumed exposed if the proxy was reachable during the window.
- Anyone using LiteLLM to issue virtual API keys to applications: those keys are stored in LiteLLM_VerificationToken and were also reachable.
- By extension, every downstream service whose only authentication to your AI infrastructure is a LiteLLM virtual key, since those keys can be enumerated and reused.
How to Protect Yourself
Patch immediately, regardless of network exposure:
pip install --upgrade 'litellm>=1.83.7'
# or, for the proxy distribution
pip install --upgrade 'litellm[proxy]>=1.83.7'
# verify (parse the version — plain string comparison misorders e.g. 1.100 vs 1.83)
python -c "import sys, litellm; from packaging.version import Version; print(litellm.__version__); sys.exit(0 if Version(litellm.__version__) >= Version('1.83.7') else 1)"
If you ship LiteLLM as a container, repin to the patched tag and redeploy:
docker pull ghcr.io/berriai/litellm:main-stable
# pin to a digest in production after pulling
docker inspect --format='{{index .RepoDigests 0}}' ghcr.io/berriai/litellm:main-stable
If your proxy was internet-reachable at any point during the exposure window, treat it as compromised and rotate everything it touched:
# rotate every upstream provider key the proxy holds: OpenAI and Anthropic
# keys are revoked and reissued from their dashboards; Bedrock via IAM
aws iam create-access-key --user-name <litellm-iam-user>   # issue the replacement first
aws iam update-access-key --user-name <litellm-iam-user> --access-key-id <old-key-id> --status Inactive
# Vertex, Azure OpenAI, etc. via their respective consoles/CLIs
# rotate the LiteLLM master key (LITELLM_MASTER_KEY in the env/config), then
# regenerate every virtual key via the proxy's key-management API — the exact
# endpoint shape varies by version, so check your version's docs first
curl -X POST "$PROXY/key/regenerate" \
  -H "Authorization: Bearer $NEW_MASTER_KEY" \
  -d '{"key": "<virtual-key-to-rotate>"}'
# rotate the PostgreSQL credentials and the DSN stored in the env
psql -h db -U postgres -c "ALTER USER litellm WITH PASSWORD '<new>';"
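With many virtual keys, rotation is easier to script. A hedged sketch that just builds the regeneration calls — the `/key/regenerate` path and `{"key": ...}` payload are assumptions about LiteLLM's key-management API and vary across versions, so verify against your version's docs before wiring this to a real HTTP client:

```python
# build (method, url, headers, body) tuples for regenerating each virtual key;
# endpoint path and payload are ASSUMPTIONS — check your LiteLLM version's docs
def build_regenerate_calls(base_url, master_key, virtual_keys):
    calls = []
    for vk in virtual_keys:
        calls.append((
            "POST",
            f"{base_url.rstrip('/')}/key/regenerate",
            {"Authorization": f"Bearer {master_key}", "Content-Type": "application/json"},
            {"key": vk},
        ))
    return calls

for method, url, headers, body in build_regenerate_calls(
    "http://localhost:4000", "sk-new-master", ["sk-app-1", "sk-app-2"]
):
    print(method, url, body)
```

Separating call construction from execution makes it easy to dry-run the rotation and review exactly which keys will be touched before sending anything.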
If you cannot upgrade right now, apply the maintainer-recommended workaround and put a reverse proxy in front to drop obviously hostile bearer values:
# config.yaml on LiteLLM
general_settings:
  disable_error_logs: true
# Nginx in front of the LiteLLM proxy
map $http_authorization $litellm_block {
    default                   "0";
    "~*Bearer\s.*['();]"      "1";   # SQL string/statement metacharacters
    "~*Bearer\s.*--"          "1";   # SQL comment sequence
    "~*Bearer\s.*\bUNION\b"   "1";
    "~*Bearer\s.*\bSELECT\b"  "1";
}
server {
    listen 443 ssl;
    location / {
        if ($litellm_block) { return 401; }
        proxy_pass http://litellm-upstream;
    }
}
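Before deploying the Nginx rules, check the patterns against both hostile and legitimate bearer values — real virtual keys must not match. A quick Python harness with patterns equivalent to the map block (`re.I` stands in for Nginx's case-insensitive `~*`):

```python
import re

# case-insensitive patterns equivalent to the Nginx map block
BLOCK_PATTERNS = [
    re.compile(r"Bearer\s.*['();]", re.I),    # SQL metacharacters
    re.compile(r"Bearer\s.*--", re.I),        # SQL comment sequence
    re.compile(r"Bearer\s.*\bUNION\b", re.I),
    re.compile(r"Bearer\s.*\bSELECT\b", re.I),
]

def blocked(auth_header: str) -> bool:
    return any(p.search(auth_header) for p in BLOCK_PATTERNS)

print(blocked("Bearer sk-abc123XYZ"))                              # False: normal key passes
print(blocked("Bearer x' UNION SELECT NULL --"))                   # True: UNION probe
print(blocked("Bearer x'; DROP TABLE LiteLLM_VerificationToken"))  # True
```

One caveat: the `\bUNION\b` and `\bSELECT\b` rules would also block a key that happens to contain those words between non-alphanumeric separators; LiteLLM virtual keys are `sk-` plus alphanumerics, so in practice this does not bite, but test against your own key format.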
Hunt for exploitation in your existing logs. Even a single malformed bearer is high-confidence:
# proxy / LB access logs (only useful if your log format captures the Authorization header)
grep -E "Bearer [^ ]*(['();]|--)" /var/log/nginx/access.log* /var/log/litellm/*.log
# PostgreSQL slow/error log: look for syntax errors on the verification table
grep -E "LiteLLM_VerificationToken|syntax error|UNION" /var/log/postgresql/*.log
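Beyond one-off greps, tallying malformed bearers per source IP surfaces the IP-rotation behaviour Sysdig saw in phase two. A sketch that assumes a log format where the Authorization header is captured and the line starts with the client IP — adjust both regexes to your actual format:

```python
import re
from collections import Counter

# flag lines whose bearer value contains SQL metacharacters, tallied per source IP
# (assumes lines like: <ip> ... "Authorization: Bearer <value>" ...; adjust to yours)
SUSPICIOUS = re.compile(r"Bearer\s+(\S*(?:['();]|--)\S*)", re.I)
IP = re.compile(r"^(\d{1,3}(?:\.\d{1,3}){3})\b")

def hunt(lines):
    hits = Counter()
    for line in lines:
        if SUSPICIOUS.search(line):
            m = IP.match(line)
            hits[m.group(1) if m else "unknown"] += 1
    return hits

sample = [
    '10.0.0.5 - "POST /chat/completions" "Authorization: Bearer sk-normal123" 200',
    '65.111.27.132 - "POST /chat/completions" "Authorization: Bearer x\' UNION SELECT NULL --" 500',
    '65.111.25.67 - "POST /chat/completions" "Authorization: Bearer x\' UNION SELECT NULL,NULL --" 500',
]
print(dict(hunt(sample)))  # {'65.111.27.132': 1, '65.111.25.67': 1}
```

Two different IPs inside the same autonomous system each producing a small number of clean, malformed bearers is exactly the phase-two pattern described above, and even a single hit is high-confidence.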
Cross-reference billing on every upstream provider for the period since 1.81.16 was current. Anomalous usage from unknown source IPs (especially the two Sysdig captured) is a strong indicator your provider keys were exfiltrated and used.
Source
- https://www.sysdig.com/blog/cve-2026-42208-targeted-sql-injection-against-litellms-authentication-path-discovered-36-hours-following-vulnerability-disclosure
- https://www.bleepingcomputer.com/news/security/hackers-are-exploiting-a-critical-litellm-pre-auth-sqli-flaw/
- https://thehackernews.com/2026/04/litellm-cve-2026-42208-sql-injection.html
- https://github.com/BerriAI/litellm/releases/tag/v1.83.7-stable