Mar 12, 2026·Security·SecureStartKit Team

Exposed API Keys: How AI Tools Leak Your Secrets (And How to Lock Them Down)

Claude Code CVEs, Google's $82K API key incident, 5,000+ repos leaking ChatGPT keys. Learn how AI tools expose your secrets and how to lock them down in Next.js.


In February 2026, a developer reported an $82,000 bill after their Google Cloud API key was stolen. Their normal spend was $180 per month. The key had been embedded in client-side code for a Google Maps integration. It was never meant to be a secret. Then Google enabled the Generative Language API, and that same "harmless" key now granted access to Gemini endpoints [3].

This is not an isolated incident. It is the new normal.

Cyble Research found over 5,000 GitHub repositories and 3,000 live production websites leaking ChatGPT API keys through hardcoded source code and client-side JavaScript [2]. Check Point discovered two critical vulnerabilities in Claude Code (CVE-2025-59536, CVE-2026-21852) that allow attackers to exfiltrate API keys simply by opening a malicious repository [4]. Veracode reports that 45% of AI-generated code contains security flaws, and Aikido Security's 2026 report found that AI-generated code now causes one in five breaches.

If you are building a SaaS with Next.js, your API keys are the keys to your kingdom. A leaked Stripe secret key lets attackers issue refunds or access customer payment data. An exposed Supabase service role key bypasses all Row Level Security and gives full database access. This guide shows you exactly how keys get exposed and how to lock them down.

Table of Contents

  • The Three Ways AI Tools Leak Your Keys
  • The Google API Key Wake-Up Call
  • How Next.js Apps Leak Secrets
  • The NEXT_PUBLIC_ Trap
  • Securing Supabase and Stripe Keys
  • Safe Server Actions Pattern
  • Automated Secret Detection
  • There Are No Harmless Keys Anymore

The Three Ways AI Tools Leak Your Keys

AI coding tools create three distinct attack surfaces that did not exist two years ago.

1. Code generation leaks. Tools like Cursor, Copilot, and v0 generate code that hardcodes API keys directly in client components. They default to the fastest path, not the safest one. A vibe-coded prototype becomes production code, and the keys ship with it.

2. Tool-level vulnerabilities. Claude Code's CVE-2025-59536 allows arbitrary code execution through malicious project hooks. CVE-2026-21852 exfiltrates API keys during the project-load flow. Simply cloning and opening a crafted repository is enough to steal a developer's active Anthropic API key and redirect authenticated API traffic to an attacker's infrastructure.

3. Context window exposure. Developers paste environment variables, configuration files, and error logs containing keys directly into AI chat interfaces. These conversations may be logged, used for training, or accessible through API history endpoints.

The Google API Key Wake-Up Call

Google API keys were designed to be non-secrets. The official documentation told developers to embed them in client-side code for Maps, Fonts, and YouTube embeds. Thousands of applications followed this guidance.

Then Google enabled the Generative Language API. Truffle Security scanned for publicly exposed Google API keys and found nearly 3,000 embedded in client-side code. With the Generative Language API enabled on the associated project, these keys now grant access to Gemini endpoints. Attackers can read uploaded files, retrieve cached prompts, and run AI queries on the key owner's billing account [3].

One developer's bill jumped from $180 to $82,314 in 48 hours. The key was never rotated because it was never considered sensitive.

How Next.js Apps Leak Secrets

Next.js has a strict boundary between server and client code. Only variables prefixed with NEXT_PUBLIC_ are bundled into browser JavaScript. Everything else stays on the server. This is your primary line of defense.

But developers break this boundary constantly, especially when prototyping fast or using AI code generation tools. Here are the three most common leaks:

1. Hardcoded keys in client components. An AI assistant generates a component that calls an API directly from the browser. The API key is right there in the source code. Open DevTools, view the JavaScript bundle, extract the key.

2. Wrong environment variable prefix. A developer adds NEXT_PUBLIC_STRIPE_SECRET_KEY to their .env.local file because the client component "needs" it. That secret key is now in every visitor's browser.

3. Git history exposure. A key was briefly committed to .env before being moved to .env.local. The key is gone from the working directory but lives forever in git history. Automated scanners like TruffleHog and GitLeaks find these within minutes of a repository going public.
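
You can run the same hunt the scanners do. Git's pickaxe search walks every commit for a given string; the example below looks for the live Stripe key prefix (`sk_live_`) — swap in prefixes for whichever providers you use:

```shell
# Search every commit on every branch for content containing a live
# Stripe key prefix. -S lists commits where the string was added or
# removed; -p shows the matching diff; --all covers all refs.
git log --all -p -S 'sk_live_'
```

If this prints anything, rotating the key is the only real fix; rewriting history does not un-leak a secret that was ever public.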

The NEXT_PUBLIC_ Trap

This is the most dangerous pattern in Next.js development. The NEXT_PUBLIC_ prefix is an explicit instruction to bundle a variable into client-side JavaScript. Anything with this prefix is visible to every visitor.

Safe to prefix with NEXT_PUBLIC_:

  • Supabase anon key (designed for client use, protected by RLS)
  • Supabase project URL
  • Your app's public domain
  • Analytics IDs (Google Analytics, PostHog)

Never prefix with NEXT_PUBLIC_:

  • STRIPE_SECRET_KEY — gives full Stripe account access
  • SUPABASE_SERVICE_ROLE_KEY — bypasses all RLS policies
  • Any AI provider key (OpenAI, Anthropic, Google AI)
  • DATABASE_URL — direct database connection string
  • Email provider keys (Resend, SendGrid)
  • Webhook signing secrets
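
Put concretely, the split above maps onto a hypothetical `.env.local` like this (all values are placeholders):

```bash
# Safe to expose — bundled into client JavaScript on purpose
NEXT_PUBLIC_SUPABASE_URL=https://your-project.supabase.co
# Protected by Row Level Security
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOi...

# Server-only — no NEXT_PUBLIC_ prefix, never reaches the browser
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOi...
STRIPE_SECRET_KEY=sk_test_...
STRIPE_WEBHOOK_SECRET=whsec_...
GEMINI_API_KEY=AIza...
```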

If you need to use a secret key in response to a user action, use a Server Action. If you need it during page rendering, use a Server Component. Both execute on the server, keeping secrets out of the browser bundle.
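
You can also enforce the split mechanically. Here is a minimal sketch of a check you might run before `next build` — the function name and the name patterns are illustrative, not a standard API:

```typescript
// check-env.ts — hypothetical guard: flag NEXT_PUBLIC_ variables
// whose names suggest they hold secrets.
const FORBIDDEN = [/SECRET/i, /SERVICE_ROLE/i, /PRIVATE/i, /PASSWORD/i, /WEBHOOK/i];

function findLeakyPublicVars(env: Record<string, string | undefined>): string[] {
  return Object.keys(env).filter(
    (name) => name.startsWith('NEXT_PUBLIC_') && FORBIDDEN.some((p) => p.test(name))
  );
}

// Example: the secret-named public var is flagged; the anon key is not.
const leaks = findLeakyPublicVars({
  NEXT_PUBLIC_SUPABASE_ANON_KEY: 'public-anon-key',
  NEXT_PUBLIC_STRIPE_SECRET_KEY: 'sk_live_placeholder',
});
// leaks → ['NEXT_PUBLIC_STRIPE_SECRET_KEY']
```

In a real project you would call `findLeakyPublicVars(process.env)` in a prebuild script and fail the build when it returns anything.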

Securing Supabase and Stripe Keys

Your SaaS depends on third-party services. Apply the principle of least privilege to every connection.

Supabase has two keys — treat them differently:

  • The anon key is designed for client-side use. It is safe to expose because Row Level Security (RLS) policies control what each user can access. But RLS must be enabled and correctly configured on every table, or the anon key becomes a skeleton key.
  • The service role key bypasses all RLS. It must never touch client code. Use it only in Server Actions and API routes via createAdminClient().

Stripe has test and live keys — scope them:

  • Use restricted keys in development with only the permissions you need.
  • Never use your live secret key in local development. Use test mode keys.
  • Store webhook signing secrets (STRIPE_WEBHOOK_SECRET) server-side only. Always verify webhook signatures before processing events.
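
Signature verification is what makes that last point work. In production you would use Stripe's official `stripe.webhooks.constructEvent` helper; the sketch below shows the underlying idea with Node's crypto module — an HMAC over the timestamped payload, compared in constant time:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Sketch of HMAC webhook verification (the idea behind Stripe's v1 scheme).
// Prefer the official SDK helper in real code.
function verifySignature(
  payload: string,
  timestamp: string,
  signature: string,
  secret: string
): boolean {
  const expected = createHmac('sha256', secret)
    .update(`${timestamp}.${payload}`)
    .digest('hex');
  // Constant-time comparison prevents timing attacks on the check.
  const a = Buffer.from(expected, 'hex');
  const b = Buffer.from(signature, 'hex');
  return a.length === b.length && timingSafeEqual(a, b);
}
```

An attacker who does not hold the signing secret cannot produce a valid signature, so forged events are rejected before any business logic runs.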

Google Cloud keys — restrict by API and IP:

  • Lock each key to specific APIs. A key for Maps should not have access to Gemini.
  • Set application restrictions (HTTP referrers for client keys, IP addresses for server keys).
  • Audit which APIs are enabled on each project quarterly [3].

Safe Server Actions Pattern

The safest place for an API key is a Server Action. The key lives in process.env on the server and never appears in any client bundle.

Here is the pattern for safely calling an external API:

// actions/ai.ts
'use server'

import { z } from 'zod'

const promptSchema = z.object({
  message: z.string().min(1).max(500),
})

export async function generateResponse(data: z.infer<typeof promptSchema>) {
  // 1. Validate input on the server
  const parsed = promptSchema.safeParse(data)
  if (!parsed.success) {
    return { error: 'Invalid prompt' }
  }

  // 2. Access the API key safely
  const apiKey = process.env.GEMINI_API_KEY
  if (!apiKey) {
    throw new Error('Server configuration error')
  }

  // 3. Make the API request
  const response = await fetch('https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'x-goog-api-key': apiKey, // The key never leaves the server
    },
    body: JSON.stringify({
      contents: [{ parts: [{ text: parsed.data.message }] }]
    })
  })

  if (!response.ok) {
    return { error: 'Failed to generate response' }
  }

  const result = await response.json()
  return { data: result }
}

The client calls this function, but the browser never sees the API key. Validation happens on the server before the external request is made. This prevents attackers from bypassing client-side checks and abusing your billing quota.

The same pattern applies to every sensitive operation in your SaaS: Stripe checkout, email sending, database writes, AI calls. The rule is simple: if a key is sensitive, it belongs in a Server Action, never in a client component.

Automated Secret Detection

Manual code review does not catch leaked keys reliably. You need automated scanning at three levels:

Pre-commit scanning. Tools like git-secrets or detect-secrets catch keys before they enter your git history. Once a key is committed, even to a private repo, it is compromised — git history is forever.
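
If you use the pre-commit framework, wiring in a scanner is a few lines of config. A hypothetical `.pre-commit-config.yaml` for detect-secrets (the `rev` pin is illustrative — check the project for the current release):

```yaml
repos:
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']
```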

CI pipeline scanning. Integrate TruffleHog or GitHub's built-in secret scanning into your CI pipeline. These tools scan every pull request for API keys, connection strings, and tokens [1]. GitHub Advanced Security's push protection blocks commits containing detected secrets before they are pushed.

Runtime monitoring. Set up billing alerts on every cloud provider. The Google API key incident ($82K in 48 hours) would have been caught by a simple billing alert set at 2x normal spend. Stripe, Supabase, OpenAI, and Anthropic all offer usage alerts.

Automatic invalidation is coming. Platforms like Octopus Deploy will begin automatically invalidating API keys detected in public GitHub repositories in 2026 [1]. Google Cloud already defaults to auto-disabling exposed service account keys. This protects you from attackers, but it also means a leaked key will immediately break your production application. Better to catch it before it ships.

There Are No Harmless Keys Anymore

The Google API key incident proves that a key's risk profile changes without warning. A read-only mapping key today might access AI billing endpoints tomorrow. A "test" key might have production permissions you forgot about.

The security model that SecureStartKit enforces is not paranoia. It is the minimum viable approach:

  • Backend-only data access. The Supabase service role key never touches client code. All database writes go through Server Actions that validate input with Zod before executing.
  • No exposed secrets. Only the Supabase anon key and project URL use the NEXT_PUBLIC_ prefix. Every other key lives exclusively in server-side environment variables.
  • Webhook verification. Stripe webhook signatures are verified before processing any event. An attacker cannot forge a checkout completion.

Audit your .env files today. Search your codebase for NEXT_PUBLIC_ and verify that every prefixed variable is genuinely safe to expose. Check your git history for accidentally committed secrets. Set up billing alerts on every third-party service.

The cost of one leaked key is measured in thousands of dollars and months of incident response. The cost of getting this right is an afternoon.

Built for developers who care about security

SecureStartKit ships with these patterns out of the box.

Backend-only data access, Zod validation on every input, RLS enabled, Stripe webhooks verified. One purchase, lifetime updates.

View Pricing · See the template in action

References

  1. Automatic API Key Invalidation Coming In 2026 — Octopus blog (octopus.com)
  2. When AI Secrets Go Public: The Rising Risk of Exposed ChatGPT API Keys — Cyble (cyble.com)
  3. Google API Keys Exposed: How Gemini Changed Security — VulnU (vulnu.com)
  4. RCE and API Token Exfiltration Through Claude Code Project Files — Check Point Research (research.checkpoint.com)
  5. Top 10 Security Vulnerabilities in AI-Generated Code (2026 Edition) — getshipready.com
  6. The AI Code Security Crisis of 2026: What Every CTO Needs to Know — growexx.com
