Apr 28, 2026 · Tutorial · SecureStartKit Team

Secure File Uploads in Next.js + Supabase Storage [2026]

Most Supabase upload tutorials skip RLS on the bucket and trust the client. Here's how to upload securely in Next.js with Server Actions, signed URLs, and validation.

A secure Supabase file upload in Next.js has three pieces: a private bucket, RLS policies on the storage.objects table that match your access pattern, and either a Server Action that uploads with service_role or a short-lived signed upload URL your server issues per file. Skip any of them and your bucket is either public to anyone with a URL or open to abuse from the browser.

Most upload tutorials get one of the three wrong. They upload from the browser with the anon key and paper over storage.objects with a copy-pasted allow-all policy, so the bucket looks private while any user holding the anon key can read every file. Or they ship service_role to the browser to "make uploads work," which gives any visitor full read and write access to your project. Or they accept arbitrary filenames from the client and end up with path traversal in their object keys.

This guide covers the bucket setup that actually denies by default, the two upload patterns you'll use in 95% of SaaS apps, the validation rules that block the four classes of bad upload, and how we wire it into SecureStartKit.

Table of Contents

  • Why Supabase file uploads fail in production
  • The bucket setup: private by default, RLS on storage.objects
  • Pattern 1: Upload via Server Action
  • Pattern 2: Signed Upload URLs (recommended)
  • Validating uploads: size, type, and the magic byte check
  • Serving private files with signed read URLs
  • Five mistakes that turn private buckets public
  • How we handle uploads in SecureStartKit

Why Supabase file uploads fail in production

Supabase Storage sits on top of Postgres. Every object you upload becomes a row in the storage.objects table, and every read or write goes through RLS just like any other table [1]. That detail is the source of most upload security failures, because developers configure their normal tables carefully but treat the storage bucket as if it lives somewhere else.

The failures cluster into four shapes:

  • Public bucket with private content. The dashboard's "make public" toggle is convenient and fatal. A public bucket means any URL of the form /storage/v1/object/public/<bucket>/<path> is readable by the entire internet. If your filename pattern is guessable (receipts/user-id-1.pdf), the whole bucket is scrapeable.
  • Private bucket with permissive (or disabled) RLS. With RLS enabled and no policies, Postgres denies everything, so client uploads simply fail. The common workaround is worse: disabling RLS on storage.objects, or pasting an allow-all policy (using (true)) to get uploads working. Either one leaves every object readable by any user holding the anon key, which the SDK ships in your client bundle.
  • Service role in the browser. A common "fix" when uploads fail is to drop the service role key into the client. That key bypasses RLS entirely. Once it's in your bundle, anyone can read, write, or delete any object in any bucket.
  • Client-supplied paths. Letting the user pick the object key (bucket.upload(file.name, file)) opens path traversal and overwrite attacks. A user uploads ../profiles/<other-user-id>/avatar.jpg and rewrites someone else's avatar.

Each fix is straightforward. The architecture below assumes a private bucket, RLS policies on storage.objects, server-issued paths, and either Server Actions or signed upload URLs for the actual transfer.

The bucket setup: private by default, RLS on storage.objects

Create the bucket through the SDK or the dashboard, but always with public: false:

-- Run in the Supabase SQL Editor
insert into storage.buckets (id, name, public)
values ('user-uploads', 'user-uploads', false);
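
If you'd rather create the bucket from code, the JS SDK exposes the same switch plus the bucket-level constraints covered later. A minimal sketch using a service_role client; the env var names are the usual defaults, adjust to your setup:

// scripts/create-bucket.ts — run server-side only; service_role never ships to the browser
import { createClient } from '@supabase/supabase-js'

const admin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY!,
)

const { error } = await admin.storage.createBucket('user-uploads', {
  public: false, // private: no /object/public/ URLs
  fileSizeLimit: 26214400, // 25MB, mirrors the SQL constraint shown later
  allowedMimeTypes: ['image/jpeg', 'image/png', 'image/webp', 'application/pdf'],
})
if (error) throw error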

A private bucket isn't enough on its own. The storage.objects table needs RLS policies that name who can do what to which paths. The pattern most SaaS apps want is "users can read and write objects under their own user ID prefix":

-- Allow authenticated users to read their own objects
create policy "user_uploads_select_own"
on storage.objects for select
to authenticated
using (
  bucket_id = 'user-uploads'
  and (storage.foldername(name))[1] = (select auth.uid())::text
);

-- Allow authenticated users to insert into their own folder
create policy "user_uploads_insert_own"
on storage.objects for insert
to authenticated
with check (
  bucket_id = 'user-uploads'
  and (storage.foldername(name))[1] = (select auth.uid())::text
);

-- Allow authenticated users to delete their own objects
create policy "user_uploads_delete_own"
on storage.objects for delete
to authenticated
using (
  bucket_id = 'user-uploads'
  and (storage.foldername(name))[1] = (select auth.uid())::text
);

Three details that aren't obvious from the docs.

First, storage.foldername(name) returns the folder segments of the object name as a text array, so for abc-123/avatar.png it returns {abc-123} and element [1] is the user ID prefix. That's why all your object keys should start with {user_id}/. If you let the user pick arbitrary paths, this policy collapses.

Second, wrapping auth.uid() in (select auth.uid()) triggers the same initPlan caching that any other RLS policy needs. Without it, the function fires once per row the planner inspects, and listing 10,000 objects in a folder takes seconds instead of milliseconds. The same performance rules from the RLS policies guide apply to storage.objects because it's just another Postgres table.

Third, service_role bypasses these policies entirely. That's intentional. Your Server Actions use service_role to do administrative things like generate thumbnails, scan uploads, or move files between buckets. The policies above gate the API surface that the client touches, not the surface your backend touches.
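
A quick way to sanity-check the policies from an integration test. This sketch assumes a client authenticated as one user and a second user's ID to probe; supabase and otherUserId are hypothetical test fixtures:

// With a session for user A, listing user B's folder should come back empty.
// RLS filters rows on SELECT rather than raising an error.
const { data, error } = await supabase.storage
  .from('user-uploads')
  .list(`${otherUserId}/`)

console.assert(error === null && data?.length === 0, "policy leak: saw another user's objects")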

Pattern 1: Upload via Server Action

The simplest secure pattern uploads through a Server Action. The browser sends a FormData body to your Next.js server, the Server Action validates it, and the server forwards the bytes to Supabase using service_role.

// actions/upload.ts
'use server'

import { z } from 'zod'
import { createAdminClient, getUser } from '@/lib/supabase/server'

const MAX_BYTES = 5 * 1024 * 1024 // 5MB
const ALLOWED_MIME = ['image/jpeg', 'image/png', 'image/webp']

const uploadSchema = z.object({
  file: z
    .instanceof(File)
    .refine((f) => f.size > 0, 'File is empty')
    .refine((f) => f.size <= MAX_BYTES, 'File exceeds 5MB')
    .refine((f) => ALLOWED_MIME.includes(f.type), 'Unsupported file type'),
})

export async function uploadAvatar(formData: FormData) {
  const user = await getUser()
  if (!user) {
    return { error: 'Not authenticated' }
  }

  const parsed = uploadSchema.safeParse({ file: formData.get('file') })
  if (!parsed.success) {
    return { error: parsed.error.issues[0].message }
  }

  const { file } = parsed.data
  const ext = file.name.split('.').pop()?.toLowerCase() ?? 'bin'
  const objectKey = `${user.id}/avatar-${Date.now()}.${ext}`

  const admin = createAdminClient()
  const { error } = await admin.storage
    .from('user-uploads')
    .upload(objectKey, file, {
      contentType: file.type,
      upsert: false,
    })

  if (error) {
    return { error: error.message }
  }

  return { ok: true, path: objectKey }
}

Three things this code does that most tutorials don't.

The Server Action authenticates the caller before reading the file. getUser() reads the session from cookies, so this won't run for an anonymous request. The pattern matches the rest of the Server Actions + Zod validation guide: every Server Action that mutates state runs an auth check first.

The object key is generated server-side from the authenticated user ID and a timestamp. The user's submitted filename is used only to extract the extension, never the path. This blocks the "user uploads ../other-user/avatar.png" attack where the path traverses out of the caller's folder.

The Zod schema validates size and MIME type before a single byte is forwarded to Supabase. Most tutorials skip this and rely on Supabase's bucket-level limits, which run after the upload completes. Your Server Action is already paying the bandwidth and the function execution time by then.
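
The getUser() and createAdminClient() helpers are imported but not shown. A minimal sketch of what they can look like with @supabase/ssr; the cookie handling is simplified and the env var names are the usual defaults, so treat it as a starting point rather than the kit's exact code:

// lib/supabase/server.ts (sketch)
import { cookies } from 'next/headers'
import { createServerClient } from '@supabase/ssr'
import { createClient } from '@supabase/supabase-js'

export async function getUser() {
  const cookieStore = await cookies()
  const supabase = createServerClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
    {
      cookies: {
        getAll: () => cookieStore.getAll(),
        setAll: () => {}, // simplified: no session refresh inside these actions
      },
    },
  )
  // getUser() verifies the JWT with Supabase Auth; don't trust getSession() for authz
  const { data, error } = await supabase.auth.getUser()
  return error ? null : data.user
}

export function createAdminClient() {
  // service_role bypasses RLS: server-only, never imported by client components
  return createClient(
    process.env.SUPABASE_URL!,
    process.env.SUPABASE_SERVICE_ROLE_KEY!,
    { auth: { persistSession: false } },
  )
}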

The corresponding form is plain HTML. No client-side JavaScript needed:

// components/avatar-form.tsx
import { uploadAvatar } from '@/actions/upload'

export function AvatarForm() {
  return (
    <form action={uploadAvatar}>
      <input type="file" name="file" accept="image/*" required />
      <button type="submit">Upload avatar</button>
    </form>
  )
}

There's one limit that catches people. Next.js caps Server Action request bodies at 1MB by default [4], and Vercel rejects function request bodies over roughly 4.5MB regardless of plan. Files past the limit fail with a 413 before your Server Action runs. You can raise the Next.js cap by setting serverActions.bodySizeLimit in next.config.ts, but anything over a few MB is the wrong fit for this pattern. That's where signed upload URLs come in.
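
Raising the cap is one line of config. A sketch for next.config.ts; the 4mb value is an example, and Vercel's platform cap still applies on top of it:

// next.config.ts
import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  experimental: {
    serverActions: {
      // default is '1mb'; Vercel still rejects bodies over ~4.5MB before this runs
      bodySizeLimit: '4mb',
    },
  },
}

export default nextConfig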

Pattern 2: Signed Upload URLs (recommended)

For files larger than a few MB, or for any flow where you don't want bytes routed through your Vercel functions, signed upload URLs are the right tool. The client uploads directly to Supabase Storage. Your server's only job is to authenticate the user and hand them a single-use URL that's scoped to one specific object key.

This is the pattern we use by default. It's faster (no round-trip through your server), cheaper (no Vercel bandwidth on the upload path), and just as secure as the Server Action pattern when you validate the metadata before issuing the URL.

The Server Action that issues the URL:

// actions/upload.ts
'use server'

import { z } from 'zod'
import { createAdminClient, getUser } from '@/lib/supabase/server'

const MAX_BYTES = 25 * 1024 * 1024 // 25MB
const ALLOWED_MIME = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf']

const requestUrlSchema = z.object({
  filename: z.string().min(1).max(200),
  size: z.number().int().positive().max(MAX_BYTES),
  mimeType: z.enum(ALLOWED_MIME as [string, ...string[]]),
})

export async function requestUploadUrl(input: z.infer<typeof requestUrlSchema>) {
  const user = await getUser()
  if (!user) {
    return { error: 'Not authenticated' }
  }

  const parsed = requestUrlSchema.safeParse(input)
  if (!parsed.success) {
    return { error: parsed.error.issues[0].message }
  }

  const { filename, mimeType } = parsed.data
  const ext = filename.split('.').pop()?.toLowerCase() ?? 'bin'
  const objectKey = `${user.id}/${crypto.randomUUID()}.${ext}`

  const admin = createAdminClient()
  const { data, error } = await admin.storage
    .from('user-uploads')
    .createSignedUploadUrl(objectKey)

  if (error || !data) {
    return { error: error?.message ?? 'Failed to issue upload URL' }
  }

  return {
    ok: true,
    uploadUrl: data.signedUrl,
    token: data.token,
    path: data.path,
    mimeType,
  }
}

The client takes the returned token and uploads with uploadToSignedUrl [3]:

// components/upload-button.tsx
'use client'

import { createBrowserClient } from '@supabase/ssr'
import { requestUploadUrl } from '@/actions/upload'

const supabase = createBrowserClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
)

export function UploadButton() {
  async function onChange(e: React.ChangeEvent<HTMLInputElement>) {
    const file = e.target.files?.[0]
    if (!file) return

    const result = await requestUploadUrl({
      filename: file.name,
      size: file.size,
      mimeType: file.type,
    })

    if ('error' in result) {
      alert(result.error)
      return
    }

    const { error } = await supabase.storage
      .from('user-uploads')
      .uploadToSignedUrl(result.path, result.token, file, {
        contentType: result.mimeType,
        upsert: false,
      })

    if (error) {
      alert(error.message)
      return
    }

    // Trigger a Server Action to record the upload in your DB
  }

  return <input type="file" onChange={onChange} />
}
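
The final comment in that handler matters: the storage row alone isn't enough, you'll usually want an application-table record too. A hypothetical recordUpload action; the uploads table and its columns are illustrative, not part of the SDK:

// actions/record-upload.ts (hypothetical; table name and columns are illustrative)
'use server'

import { createAdminClient, getUser } from '@/lib/supabase/server'

export async function recordUpload(path: string) {
  const user = await getUser()
  if (!user) return { error: 'Not authenticated' }

  // Reject paths outside the caller's own prefix, even from "our" client code
  if (!path.startsWith(`${user.id}/`)) return { error: 'Invalid path' }

  const admin = createAdminClient()
  const { error } = await admin.from('uploads').insert({
    user_id: user.id,
    object_key: path,
    scanned: false, // flipped later by the background magic byte check
  })

  return error ? { error: error.message } : { ok: true }
}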

Why this is safe. The browser never has service_role and never picks the object key. The token is single-use and scoped to one path. The client-supplied filename and MIME type are validated by Zod on the server, then the server picks the actual storage path. Even if a malicious client modifies the request body, they can't escape their user prefix, can't exceed the size limit, and can't upload a banned MIME type.

Why this is fast. The actual bytes flow from the browser straight to Supabase's storage layer over a single TLS connection. Your Vercel function runs for the few milliseconds it takes to issue the token, not for however long it takes the user to upload a 20MB PDF on a hotel WiFi connection.

There's a footgun worth naming. createSignedUploadUrl requires service_role to call. If you use the regular client SDK with the anon key, the call fails. That's why the Server Action wraps it. Don't get clever and try to call createSignedUploadUrl from the browser by elevating privileges. Once service_role is in the bundle, your project is the next exposed-API-keys breach story.

Validating uploads: size, type, and the magic byte check

Zod validates what the client says about the file. That's a first line, not the only line. The actual bytes arriving at Supabase can be anything. A request that claims mimeType: 'image/png' and a 200KB size can ship a 200KB ELF binary or a malformed image with embedded JavaScript.

The pragmatic defense for a SaaS template is layered:

Layer 1: Zod on the metadata. Block obvious nonsense before issuing a token. Size, MIME type, filename length, allowed extensions. Cheap and runs first.

Layer 2: Bucket-level constraints. Set fileSizeLimit and allowedMimeTypes when creating the bucket. Supabase enforces these on the upload itself, so even a token that escapes your validation can't be used to upload a 1GB file:

update storage.buckets
set
  file_size_limit = 26214400, -- 25MB
  allowed_mime_types = array['image/jpeg', 'image/png', 'image/webp', 'application/pdf']
where id = 'user-uploads';

Layer 3: Magic byte verification on read. The MIME type the client sent is a label. The actual file content is the truth. For files where that distinction matters (executables disguised as images, polyglot files), inspect the leading bytes server-side after upload. The file-type library on npm reads the first 4100 bytes of a buffer and tells you what the file actually is, regardless of what the client claimed.

import { fileTypeFromBuffer } from 'file-type'

// Download the object and sniff its real type from the leading bytes
const { data: blob } = await admin.storage
  .from('user-uploads')
  .download(objectKey)

if (blob) {
  const detected = await fileTypeFromBuffer(new Uint8Array(await blob.arrayBuffer()))
  if (!detected || !ALLOWED_MIME.includes(detected.mime)) {
    await admin.storage.from('user-uploads').remove([objectKey])
    // record this attempt, rate-limit the user
  }
}

Run this in a background job after the upload completes, not on the upload path itself. You don't want a 500ms file-type check blocking every avatar upload. For high-risk content (anything you'll execute, render with JavaScript, or pass to another service), do it inline.
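
On Vercel, one lightweight way to defer the check without standing up a queue is waitUntil from @vercel/functions, which keeps the function alive after the response is sent. A sketch that reuses the helpers and constants from earlier sections:

// jobs/verify-upload.ts (sketch; assumes createAdminClient and the MIME list from above)
import { waitUntil } from '@vercel/functions'
import { fileTypeFromBuffer } from 'file-type'
import { createAdminClient } from '@/lib/supabase/server'

const ALLOWED_MIME = ['image/jpeg', 'image/png', 'image/webp', 'application/pdf']

export function queueMagicByteCheck(objectKey: string) {
  // Runs after the Server Action's response is flushed, so uploads stay fast
  waitUntil(verifyMagicBytes(objectKey))
}

async function verifyMagicBytes(objectKey: string) {
  const admin = createAdminClient()
  const { data: blob } = await admin.storage.from('user-uploads').download(objectKey)
  if (!blob) return
  const detected = await fileTypeFromBuffer(new Uint8Array(await blob.arrayBuffer()))
  if (!detected || !ALLOWED_MIME.includes(detected.mime)) {
    await admin.storage.from('user-uploads').remove([objectKey])
  }
}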

Layer 4: Rate limit the upload endpoint. Even with all of the above, an attacker can issue 10,000 token requests per second and chew through your Supabase Storage quota or your Vercel function budget. The same Upstash + sliding-window pattern from the Server Actions rate-limiting guide applies to upload endpoints: count requests per user ID, not per IP, since users sit behind shared NAT.
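
A sketch of that layer with @upstash/ratelimit, keyed on the authenticated user ID; the 20-per-minute budget is an arbitrary example:

// lib/rate-limit.ts (sketch; assumes UPSTASH_REDIS_REST_URL/TOKEN env vars)
import { Ratelimit } from '@upstash/ratelimit'
import { Redis } from '@upstash/redis'

export const uploadLimiter = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(20, '1 m'), // 20 token requests per user per minute
  prefix: 'upload-url',
})

// Inside requestUploadUrl, right after the auth check:
// const { success } = await uploadLimiter.limit(user.id)
// if (!success) return { error: 'Too many upload requests, try again shortly' }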

Serving private files with signed read URLs

A private bucket means nothing has a public URL. To show a user their own avatar, you generate a short-lived signed URL [2]:

// In a Server Component or Server Action
const admin = createAdminClient()
const { data } = await admin.storage
  .from('user-uploads')
  .createSignedUrl(`${user.id}/avatar.png`, 3600) // 1 hour

return <img src={data?.signedUrl} alt="Avatar" />

Three rules, each learned from a production bug.

Don't cache signed URLs in your database. They expire. Cache the object key ({user_id}/avatar.png), regenerate the signed URL on every render, and let Next.js's RSC cache hold the page output for the natural cache window. If you cache the URL itself, the page works for 60 minutes and then breaks.

Don't issue 10-year signed URLs as a workaround. A signed URL is a bearer token. Anyone who copies the URL out of a browser screenshot or a shared link gets unfettered access for the URL's lifetime. Keep the expiration short (minutes to hours, not days) and regenerate on demand.

Use signed URLs in the Next.js Image component cautiously. <Image> rewrites the source through /_next/image for optimization, which means Vercel caches the optimized output keyed on the source URL. If the source URL changes every render (because it's a fresh signed URL), you bust the optimizer's cache on every load. The fix is to set a stable cache key (the object path) and rotate the underlying signed URL only when the file changes. Or skip the optimizer for uploaded content and serve the signed URL directly.
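
If you do keep uploaded content behind <Image>, the Supabase host also has to be allow-listed. A sketch for next.config.ts, assuming the default <project-ref>.supabase.co storage domain:

// next.config.ts (images excerpt)
import type { NextConfig } from 'next'

const nextConfig: NextConfig = {
  images: {
    remotePatterns: [
      {
        protocol: 'https',
        hostname: '*.supabase.co', // or your project's custom storage domain
        pathname: '/storage/v1/object/sign/**',
      },
    ],
  },
}

export default nextConfig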

For files that don't need access control (public marketing assets), use a separate public bucket. Mixing public and private content in one bucket is an audit nightmare that ends with someone enabling "public" on the wrong bucket.

Five mistakes that turn private buckets public

The same handful of mistakes appear in every audit:

  • Using the anon key on the client to upload directly. This works only if your storage.objects policies allow it, which means you've written an INSERT policy keyed on auth.uid(). That policy is correct, but every audit we've run on customer codebases finds at least one bucket missing it. Default to the signed upload URL pattern above so the security doesn't depend on whether you remembered to write the policy.
  • Setting the bucket to public "just to test." Public buckets enumerate. Anyone who learns one valid path (from a cached page, a Slack screenshot, a leaked log) can probe adjacent paths. Once a bucket is public, you can't safely make it private later because the URLs are already cached by browsers and CDNs.
  • Trusting Content-Type from the client. A client can ship Content-Type: image/png with arbitrary bytes. If you render the file with HTML or pass it to a parser, the actual content matters more than the label. Magic byte checks (Layer 3 above) close this gap.
  • Letting the client pick the object key. This is the path traversal vector. Always concatenate {user_id}/{random_id}.{validated_ext} server-side, never using file.name as the path; a helper sketch follows this list.
  • Forgetting RLS on storage.objects because the bucket is private. Private buckets and storage RLS are separate switches. Marking a bucket private only disables the public URL pattern; actual access control is the policies on storage.objects. Disable RLS there, or leave a template allow-all policy in place, and every authenticated user with a Supabase client can read the bucket. Both switches have to be set.
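
A small helper makes the safe-key rule hard to get wrong. A sketch; the extension allow-list is an assumption to adapt per bucket:

// lib/object-key.ts (sketch)
const SAFE_EXT = new Set(['jpg', 'jpeg', 'png', 'webp', 'pdf'])

// The key is derived from the authenticated user ID and a random UUID;
// the client-supplied filename contributes only a validated extension
export function safeObjectKey(userId: string, clientFilename: string): string {
  const ext = clientFilename.split('.').pop()?.toLowerCase() ?? ''
  return `${userId}/${crypto.randomUUID()}.${SAFE_EXT.has(ext) ? ext : 'bin'}`
}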

The full deploy-time check for buckets, policies, and key handling is in our SaaS security checklist, and the pattern for storage RLS lives in the Supabase RLS policy generator alongside the ownership and multi-tenant policies for your normal tables.

How we handle uploads in SecureStartKit

The architecture in SecureStartKit defaults to signed upload URLs issued from a Server Action that runs getUser(), validates with Zod against a per-bucket schema, generates the object key from the user ID and a UUID, and writes the resulting path to a regular Postgres table that does have RLS policies. The storage.objects row is the file. The application table row is the metadata: who uploaded what, when, what it's for, and whether it's been scanned. Authorization decisions ride on the application table, not on storage RLS guesswork.

That structure does two things at once. It keeps the upload path off our Vercel functions, which keeps cost predictable. And it puts every authorization check in the same place as the rest of the app's authorization, which keeps the surface auditable. The full picture of how this slots into authentication, validation, and the rest of the security stack lives in the Next.js security hardening checklist.

The rule that matters is the same as it is for any other piece of the stack: assume the client is hostile, validate on the server, never put service_role in the browser, and write the RLS policy even when you think you don't need it.

Frequently Asked Questions

How do you upload files securely in Next.js with Supabase Storage?
A secure upload uses three pieces: a private bucket, RLS policies on the storage.objects table that match your access pattern, and either a Server Action that uploads with the service_role key or a short-lived signed upload URL your server issues per file. The client never holds the service_role key, and the server picks the object path from the authenticated user ID.
When should you use a signed upload URL instead of a Server Action?
Use signed upload URLs for files larger than the Server Action body limit (1MB by default in Next.js, with Vercel capping request bodies around 4.5MB no matter the config) or any flow where you don't want bytes routing through your functions. The browser uploads directly to Supabase, and your function only runs long enough to issue the token. For small avatars and documents, a Server Action is simpler.
Do you need RLS on storage.objects if the bucket is private?
Yes. A private bucket and RLS are separate switches: marking the bucket private only disables the public URL pattern. With no policies on storage.objects, RLS denies everything and client uploads fail, which tempts people into allow-all policies that open the bucket to every authenticated user. Write SELECT, INSERT, and DELETE policies on storage.objects that restrict access to the calling user's path.
What is the largest file you can upload through a Next.js Server Action?
Next.js's default Server Action body limit is 1MB. You can raise it by setting serverActions.bodySizeLimit in next.config.ts, but Vercel rejects function request bodies over roughly 4.5MB, so for anything over a few MB the right pattern is a signed upload URL where the client uploads directly to Supabase and your function only issues the token.
Can you use the Supabase anon key to upload files from the browser?
Only if you've written an INSERT policy on storage.objects that allows it. The anon key works against bucket APIs the same way it works against any other table: RLS gates the access. The safer default is to use signed upload URLs issued from a Server Action, which removes the dependency on remembering to write the storage.objects policy correctly.


References

  1. Storage | Supabase Docs (supabase.com)
  2. Storage Access Control | Supabase Docs (supabase.com)
  3. createSignedUploadUrl | Supabase JavaScript Reference (supabase.com)
  4. Server Actions and Mutations | Next.js (nextjs.org)
