Mar 23, 2026 · Tutorial · SecureStartKit Team

How to Rate Limit Next.js Server Actions (Before They Get Abused)

Server Actions are public HTTP endpoints that anyone can call directly. Here's how to add rate limiting before your login, checkout, and contact form actions get abused.


Every 'use server' function in your Next.js app is a public HTTP POST endpoint. That's not a bug; it's how Server Actions work. The problem is that most developers treat them like ordinary TypeScript functions and forget they're callable from outside the application entirely.

Anyone who discovers the action ID can call your login action, your checkout action, or your contact form with as many requests as their connection allows. Without rate limiting, you're one script away from brute-forced passwords, Stripe checkout spam, and a serverless bill that keeps climbing while you sleep.

This guide covers why Server Actions need rate limiting, where the simple in-memory approach breaks down, and how to wire up a production-ready solution with Upstash Redis.

Table of Contents

  • The Attack Surface
  • What Rate Limiting Actually Prevents
  • The In-Memory Approach and Its Limits
  • Why In-Memory Breaks in Serverless
  • The Key Problem: Shared Keys vs. Caller Keys
  • Production Rate Limiting with Upstash
  • Choosing Your Identifier: IP vs. User ID
  • Which Actions to Protect
  • Putting It Together

The Attack Surface

Server Actions look like function calls in your component code, but they compile to POST requests against a generated endpoint. The action ID ships in the client bundle. It shows up in the Next.js action manifest, which any visitor to your site can download.

Calling a Server Action from outside your app requires two things: the action ID and the request shape. Both are discoverable. The Next-Action request header carries the action ID. The payload is serialized form data or JSON depending on how you invoke the action. Someone motivated enough can reconstruct both from your page source.
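To make the request shape concrete, here is a sketch of what a replayed call looks like when constructed by hand. Every value below is a placeholder; real action IDs live in the client bundle and regenerate on each build:

```typescript
// Sketch of a replayed Server Action request built from outside the app.
// URL, action ID, and field names are placeholders, not real values.
const replay = new Request('https://example.com/login', {
  method: 'POST',
  // The Next-Action header is what routes the POST to a specific action.
  headers: { 'Next-Action': 'a1b2c3-placeholder' },
  body: new URLSearchParams({ email: 'a@example.com', password: 'guess-1' }),
})

console.log(replay.method, replay.headers.get('next-action'))
```

Nothing in this request requires access to your codebase; the header name and payload encoding are observable in the browser's network tab.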

Next.js does add two mitigations by default: encrypted action IDs that regenerate on each build, and dead-code elimination that strips unused actions from the bundle [1]. Neither prevents repeated calls to a visible action. They raise the bar slightly. They don't enforce limits.

The bottom line: every Server Action your users can trigger is something an attacker can trigger too, as many times as they want, unless you stop them.

What Rate Limiting Actually Prevents

The threat model breaks into three categories.

Credential brute force. Your login action accepts an email and password. Without rate limiting, an attacker can run through a wordlist indefinitely. Supabase's auth layer has its own rate limits, but those are account-level, not endpoint-level. An attacker burning through credentials still consumes your serverless invocations for every attempt.

Cost amplification. Serverless functions cost money per invocation. A checkout action that calls Stripe's API on each request is an expensive target. A few hundred requests per second isn't a sophisticated attack. It's a for loop with curl. The financial damage isn't from the attacker succeeding at anything. It's from your infrastructure dutifully processing every request.

Spam and third-party throttling. Public contact forms, waitlist signups, and password reset requests are free to call without a rate limiter. The result is junk in your database, deliverability problems from your email provider, and API throttling from Stripe, Supabase, and Resend.

None of these require sophisticated tools. A shell script is enough.

The In-Memory Approach and Its Limits

The simplest rate limiter is a Map that tracks request counts per key and resets them after a time window. Here's the pattern:

// lib/rate-limit.ts
const rateLimitStore = new Map<string, { count: number; resetTime: number }>()

export async function rateLimit(
  key: string,
  limit: number,
  windowSeconds: number
): Promise<{ success: boolean; remaining: number }> {
  const now = Date.now()
  const windowMs = windowSeconds * 1000
  const current = rateLimitStore.get(key)

  if (!current || now > current.resetTime) {
    rateLimitStore.set(key, { count: 1, resetTime: now + windowMs })
    return { success: true, remaining: limit - 1 }
  }

  if (current.count >= limit) {
    return { success: false, remaining: 0 }
  }

  current.count++
  return { success: true, remaining: limit - current.count }
}

Used in a Server Action:

// actions/auth.ts
export async function login(data: LoginInput) {
  const { success } = await rateLimit('login', 5, 60)
  if (!success) {
    return { error: 'Too many attempts. Please try again later.' }
  }
  // ...auth logic
}

This works in development. On Vercel, it has two problems that combine badly.

Why In-Memory Breaks in Serverless

Vercel runs Server Actions as serverless functions. Each cold start creates a fresh process with an empty Map. Two concurrent requests can land on two different instances, each with a counter starting from zero.

In practice, your limit of "5 requests per minute" becomes "5 requests per minute per active serverless instance." If Vercel runs 10 instances under a traffic burst, the effective limit is 50 requests from the attacker's perspective. The limiter is there. The limit you configured isn't being enforced.
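A small simulation makes this failure concrete. Two independent copies of the Map-based counter stand in for two serverless instances, with requests alternating between them the way a load balancer might distribute them:

```typescript
// Each "instance" gets its own private Map, mirroring separate
// serverless processes that share no memory.
type Entry = { count: number; resetTime: number }

function makeInstance(limit: number, windowMs: number) {
  const store = new Map<string, Entry>() // per-instance memory
  return (key: string): boolean => {
    const now = Date.now()
    const cur = store.get(key)
    if (!cur || now > cur.resetTime) {
      store.set(key, { count: 1, resetTime: now + windowMs })
      return true
    }
    if (cur.count >= limit) return false
    cur.count++
    return true
  }
}

// Two instances, each enforcing "5 per minute" independently.
const instances = [makeInstance(5, 60_000), makeInstance(5, 60_000)]
let allowed = 0
for (let i = 0; i < 12; i++) {
  if (instances[i % 2]('login:203.0.113.7')) allowed++
}
console.log(allowed) // 10 requests pass a "5 per minute" limit
```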

The second problem: in-memory state disappears on cold start, and Vercel aggressively cold-starts functions after periods of inactivity. A patient attacker waits for the function to go cold and starts fresh. The counter resets for them at no cost.

For local development and low-traffic apps running on a single persistent server, in-memory rate limiting is reasonable. For anything deployed to a serverless platform under real traffic, you need shared state.

The Key Problem: Shared Keys vs. Caller Keys

There's a third problem independent of the serverless issue: the key you're using.

// Global key — one attacker hitting the limit blocks everyone
const { success } = await rateLimit('login', 5, 60)

A key of just 'login' is global across the entire action on that instance. The first user to make 5 login attempts triggers the limit for every other user trying to log in during that window. That's an accidental denial-of-service condition you introduced yourself.

The fix is to include the caller's identity in the key:

// Per-IP key — limits apply to the specific caller, not all callers
const { success } = await rateLimit(`login:${ip}`, 5, 60)

With a per-IP or per-user key, exceeding the limit only affects the caller who triggered it. A user who gets rate limited doesn't block anyone else.
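The difference between the two keys is easy to demonstrate with a toy counter (the IP addresses below are documentation-range examples):

```typescript
// Toy counter: global key vs per-caller key.
const counts = new Map<string, number>()
const allow = (key: string, limit: number): boolean => {
  const n = (counts.get(key) ?? 0) + 1
  counts.set(key, n)
  return n <= limit
}

// Attacker burns 5 attempts against the shared 'login' key…
for (let i = 0; i < 5; i++) allow('login', 5)
// …and an unrelated user is now blocked through no fault of their own.
console.log('global key, victim allowed:', allow('login', 5))

// With per-IP keys, the attacker only exhausts their own bucket.
for (let i = 0; i < 5; i++) allow('login:198.51.100.9', 5)
console.log('per-ip key, victim allowed:', allow('login:203.0.113.7', 5))
```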

Production Rate Limiting with Upstash

Upstash Redis solves the shared-state problem. It's an HTTP-based Redis service designed for serverless and edge runtimes, where persistent TCP connections are unreliable or unavailable. The @upstash/ratelimit package provides rate limiting logic on top of it, and it's what the Next.js documentation recommends for serverless deployments [2].

Install both packages:

npm install @upstash/ratelimit @upstash/redis

Create a Redis database in the Upstash console, then add the credentials to your environment:

UPSTASH_REDIS_REST_URL=https://...
UPSTASH_REDIS_REST_TOKEN=...

Replace your in-memory implementation:

// lib/rate-limit.ts
import { Ratelimit } from '@upstash/ratelimit'
import { Redis } from '@upstash/redis'

export const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, '10 s'),
  analytics: true,
  prefix: 'rl',
})

The prefix namespaces all rate limit keys in Redis so they don't collide with other data your app stores. The analytics: true flag enables request tracking in the Upstash dashboard.

Sliding window is the right default algorithm for Server Actions [3]. Fixed window counters reset at predictable boundaries, which lets an attacker time bursts at the window edge: 5 requests at the end of one window plus 5 at the start of the next equals 10 requests in a short span, despite the limit being 5. Sliding window calculates the limit over a rolling period, making boundary timing irrelevant.
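The boundary-timing weakness is easy to reproduce with a toy fixed-window counter and a fake clock. This illustrates the algorithm class, not Upstash's implementation:

```typescript
// Toy fixed-window counter with a controllable clock. Limit: 5 per 10 s.
let now = 0 // fake time in ms

function fixedWindowAllow(
  state: { windowStart: number; count: number },
  limit: number,
  windowMs: number
): boolean {
  if (now - state.windowStart >= windowMs) {
    // Fixed windows reset on predictable boundaries.
    state.windowStart = now - (now % windowMs)
    state.count = 0
  }
  if (state.count >= limit) return false
  state.count++
  return true
}

const state = { windowStart: 0, count: 0 }
let allowed = 0
// 5 requests just before the boundary at t = 10 000 ms…
now = 9_900
for (let i = 0; i < 5; i++) if (fixedWindowAllow(state, 5, 10_000)) allowed++
// …and 5 more just after it.
now = 10_100
for (let i = 0; i < 5; i++) if (fixedWindowAllow(state, 5, 10_000)) allowed++
console.log(allowed) // 10 requests allowed within 200 ms of each other
```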

Choosing Your Identifier: IP vs. User ID

The identifier you pass to ratelimit.limit() determines who gets rate limited.

Use IP for public actions. Login, signup, password reset, and contact forms are reachable before authentication. The only stable identifier available is the request IP. In Server Actions, you get it from the x-forwarded-for header, which Vercel sets on every request:

import { headers } from 'next/headers'

const headerList = await headers()
const ip = (headerList.get('x-forwarded-for') ?? '127.0.0.1').split(',')[0]
const { success } = await ratelimit.limit(`login:${ip}`)

Split on the comma and take the first value. The x-forwarded-for header can contain a chain of proxy addresses. You want the originating client address, not an intermediate proxy.
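As an illustration, here is how a multi-hop chain parses (addresses are from the documentation ranges); the .trim() is cheap insurance in case an entry carries whitespace:

```typescript
// x-forwarded-for is a comma-separated chain: originating client first,
// then each proxy that forwarded the request.
const header = '203.0.113.7, 10.0.0.1, 172.16.0.2' // example chain
const clientIp = header.split(',')[0].trim()
console.log(clientIp) // 203.0.113.7
```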

Use user ID for authenticated actions. After authentication, the user's ID is a stable, non-spoofable identifier. IP addresses can be shared across offices, VPNs, and NAT gateways, or rotated rapidly on mobile networks. User ID is more accurate and avoids penalizing legitimate users who share an IP:

const user = await getUser()
if (!user) redirect('/login')

const { success } = await ratelimit.limit(`checkout:${user.id}`)

Including the action name in the key keeps each action's counter independent. checkout:${user.id} and profile-update:${user.id} don't share a bucket, so a user hitting the checkout limit doesn't affect their ability to update their profile.

Which Actions to Protect

Not every action carries the same risk. Here's a prioritized view based on threat profile:

| Action | Identifier | Suggested limit | Risk |
| --- | --- | --- | --- |
| login | IP | 5 per 60s | Brute force |
| signup | IP | 3 per 60s | Spam accounts |
| resetPassword | IP | 3 per 60s | Email spam |
| createCheckoutSession | User ID | 10 per 60s | Cost amplification |
| createPortalSession | User ID | 10 per 60s | Stripe abuse |
| Contact form | IP | 3 per 60s | Spam |
| updateProfile | User ID | 20 per 60s | Low risk, cheap to protect |

Auth actions get the tightest limits because the payload is small and the attack value is high. Billing actions get more room because authenticated users have few legitimate reasons to hammer a checkout button, but they still need limits against cost amplification. Profile updates are low-risk but protecting them is two lines of code, so there's no reason not to.

One category that often gets missed: any action that sends an email. Password reset, email verification, invite links. Each invocation costs a transactional email credit and risks getting your domain flagged for spam.
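One way to keep these numbers from drifting across files is a single registry that every action reads from. This is a sketch, not an @upstash/ratelimit API; the ACTION_LIMITS name and shape are invented here:

```typescript
// Hypothetical central registry of per-action limits, mirroring the
// table above: max requests per window, window length, and which
// identifier the Redis key should be built from.
type LimitRule = { limit: number; windowSeconds: number; keyBy: 'ip' | 'userId' }

const ACTION_LIMITS: Record<string, LimitRule> = {
  login: { limit: 5, windowSeconds: 60, keyBy: 'ip' },
  signup: { limit: 3, windowSeconds: 60, keyBy: 'ip' },
  resetPassword: { limit: 3, windowSeconds: 60, keyBy: 'ip' },
  createCheckoutSession: { limit: 10, windowSeconds: 60, keyBy: 'userId' },
  createPortalSession: { limit: 10, windowSeconds: 60, keyBy: 'userId' },
  contactForm: { limit: 3, windowSeconds: 60, keyBy: 'ip' },
  updateProfile: { limit: 20, windowSeconds: 60, keyBy: 'userId' },
}

console.log(ACTION_LIMITS.login.limit, ACTION_LIMITS.login.keyBy) // 5 ip
```

Each action then builds its key as `${actionName}:${identifier}` from its rule, so adding a new action means adding one line here rather than hard-coding numbers in the action body.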

Putting It Together

Here's what a production-ready login action looks like with all three concerns addressed: per-IP key, Upstash shared state, and Zod validation chained after the rate check:

// actions/auth.ts
'use server'

import { headers } from 'next/headers'
import { ratelimit } from '@/lib/rate-limit'
import { createServerClientWithCookies } from '@/lib/supabase/server'
import { loginSchema, type LoginInput } from '@/lib/schemas/auth'
import { redirect } from 'next/navigation'
import { revalidatePath } from 'next/cache'

export async function login(data: LoginInput, redirectTo?: string) {
  const headerList = await headers()
  const ip = (headerList.get('x-forwarded-for') ?? '127.0.0.1').split(',')[0]

  const { success } = await ratelimit.limit(`login:${ip}`)
  if (!success) {
    return { error: 'Too many attempts. Please try again later.' }
  }

  const parsed = loginSchema.safeParse(data)
  if (!parsed.success) {
    return { error: parsed.error.errors[0].message }
  }

  const supabase = await createServerClientWithCookies()
  const { error } = await supabase.auth.signInWithPassword({
    email: parsed.data.email,
    password: parsed.data.password,
  })

  if (error) {
    return { error: error.message }
  }

  revalidatePath('/', 'layout')
  const next =
    redirectTo?.startsWith('/') && !redirectTo.startsWith('//')
      ? redirectTo
      : '/dashboard'
  redirect(next)
}

And a checkout action upgraded from unprotected to rate-limited:

// actions/billing.ts
'use server'

import { ratelimit } from '@/lib/rate-limit'
import { getUser, createAdminClient } from '@/lib/supabase/server'
import { getStripe } from '@/lib/stripe/client'
import { checkoutSchema } from '@/lib/schemas/billing'
import { redirect } from 'next/navigation'
import { z } from 'zod'

export async function createCheckoutSession(
  data: z.infer<typeof checkoutSchema>
) {
  const user = await getUser()
  if (!user) redirect('/login?next=/#pricing')

  const { success } = await ratelimit.limit(`checkout:${user.id}`)
  if (!success) {
    return { error: 'Too many checkout attempts. Please wait and try again.' }
  }

  const parsed = checkoutSchema.safeParse(data)
  if (!parsed.success) {
    return { error: 'Invalid input' }
  }

  // ...Stripe checkout creation
}

A few things worth noting about the ordering. Rate limiting runs before Zod validation, which runs before any database or third-party API call. The order matters: reject abusive requests before spending compute on them. For authenticated actions, the auth check runs before the rate limit check because you need the user ID to construct the key. The error messages are intentionally vague. "Too many checkout attempts" reveals nothing about your parameters or window sizes.

The most common failure mode isn't skipping rate limiting entirely. It's adding a Map-based limiter in development, shipping to Vercel, and assuming the limit holds in production. Serverless instances don't share memory. Your per-action limit may be silently inactive without any indication something is wrong.

For a broader view of the hardening steps that belong alongside rate limiting, the Next.js Security Hardening Checklist covers eleven other production security steps. And if you're also thinking about route-level protection, proxy.ts authentication with Supabase covers the layer that sits above your Server Actions and handles redirect-based access control.

SecureStartKit ships with rate limiting already in place on auth actions, with the architecture ready to upgrade from in-memory to Upstash when traffic warrants it. The same ratelimit.limit() call extends to every action in the template with two lines.


References

  1. Next.js Security: Server Components and Server Actions (nextjs.org)
  2. Upstash Ratelimit SDK Overview (upstash.com)
  3. Rate Limiting Algorithms (upstash.com)
