Apr 15, 2026·Tutorial·SecureStartKit Team

Supabase RLS Policies That Actually Work [2026 Guide]

Most Supabase RLS tutorials stop at 'enable RLS.' Here's how to write policies for ownership, multi-tenant access, admin roles, and fast queries.



A working Supabase RLS policy has four parts: the operation (SELECT, INSERT, UPDATE, or DELETE), the role it applies to (usually authenticated), a USING clause that filters which existing rows are visible, and a WITH CHECK clause that validates new or modified rows. Miss any of them and your policy either blocks legitimate users or leaves the table exposed.

Most Supabase tutorials stop after "enable RLS." That's the easy part. The hard part is writing policies that match real SaaS access patterns: users editing only their own rows, tenants isolated from each other, admins who can read everything, and queries that don't collapse under load once your table has a million rows.

This guide covers the four-part anatomy, the three policy patterns you'll use in 90% of SaaS apps, the performance traps that turn an RLS query from 0.1ms into 11 seconds, and why we architect around RLS entirely in SecureStartKit.

Table of Contents

  • Why RLS policies fail in production
  • The four-part anatomy of an RLS policy
  • Pattern 1: Ownership (users access their own rows)
  • Pattern 2: Multi-tenant (teams isolated from each other)
  • Pattern 3: Role-based (admins read everything)
  • The INSERT gotcha: WITH CHECK is not optional
  • RLS performance: the optimizations that matter
  • How to test policies correctly
  • The backend-only alternative

Why RLS policies fail in production

Supabase auto-generates a REST API from your Postgres schema [1]. Anyone with your project URL and anon key (both of which ship in the browser bundle) can query any table that doesn't have RLS enforced. That's the class of failure behind the Lovable breach and the 170+ apps that got hacked in early 2026. The fix isn't a mystery: it's writing policies that hold up under the queries your users actually make.

The failures split into three buckets:

  • No policies at all. The table has RLS enabled, which denies everything, so the developer disables RLS "just to test something" and ships it that way.
  • Policies that don't cover the access pattern. A SELECT policy exists but no INSERT policy, so users can read but not write. Or the reverse. Developers flip RLS off to unblock themselves.
  • Policies that work but crash under load. The policy is correct. The query takes 11 seconds on a 100k-row table because auth.uid() is being called once per row instead of once per query [2].

The rest of this post walks through each one with real code you can paste.

The four-part anatomy of an RLS policy

Every Supabase policy follows the same shape:

create policy "policy name here"
on public.table_name
for [ select | insert | update | delete | all ]
to [ authenticated | anon | public | service_role ]
using ( /* which rows are visible */ )
with check ( /* which new/modified rows are allowed */ );

FOR names the operation. Use one of select, insert, update, delete, or all. Separate policies per operation are easier to read and debug than a single for all catchall.

TO names the role. Almost always authenticated. Using public (the default when you omit TO) runs the policy against every role including anon, which is both slower and riskier. Explicitly targeting authenticated short-circuits the policy for anonymous users and yields roughly a 99.78% evaluation speedup in Supabase's own benchmarks [2].

USING is the filter for rows that already exist. It applies to SELECT, UPDATE, and DELETE. If the expression returns false for a row, that row is invisible to the operation.

WITH CHECK is the filter for rows being inserted or modified. It applies to INSERT and UPDATE. If it returns false, the write is rejected. For UPDATE, both USING (can the user touch this row at all?) and WITH CHECK (is the new version allowed?) run.

The two clauses exist for different jobs. USING asks "can you see this row?" WITH CHECK asks "is this row valid for you to write?" Confuse them and you'll end up with policies that let a user read their row and then rewrite it with someone else's user_id.

Pattern 1: Ownership (users access their own rows)

This is the bread and butter of SaaS. Each row belongs to one user, identified by a user_id column that maps to auth.users(id). Users can read, write, and delete their own rows, and nothing else.

Start with the schema:

create table public.notes (
  id uuid primary key default gen_random_uuid(),
  user_id uuid not null references auth.users(id) on delete cascade,
  title text not null,
  body text,
  created_at timestamptz default now()
);

alter table public.notes enable row level security;

Now the four policies, one per operation:

-- Read: users see only their own notes
create policy "notes_select_own"
on public.notes for select
to authenticated
using ( (select auth.uid()) = user_id );

-- Insert: new row must belong to the caller
create policy "notes_insert_own"
on public.notes for insert
to authenticated
with check ( (select auth.uid()) = user_id );

-- Update: can only modify own rows, and can't reassign ownership
create policy "notes_update_own"
on public.notes for update
to authenticated
using ( (select auth.uid()) = user_id )
with check ( (select auth.uid()) = user_id );

-- Delete: can only delete own rows
create policy "notes_delete_own"
on public.notes for delete
to authenticated
using ( (select auth.uid()) = user_id );

Three details worth pausing on.

First, (select auth.uid()) isn't a stylistic choice. Wrapping auth.uid() in a subquery triggers Postgres to evaluate it once per statement via an initPlan, then cache the result [2]. Without the wrapper, auth.uid() fires for every row the planner inspects. On a 100k-row table this takes a policy from milliseconds into seconds. The Supabase team has measured specific cases where an admin check went from 11,000ms to 7ms just by adding the select wrapper [2].

Second, UPDATE needs both USING and WITH CHECK. USING stops the user from touching other people's rows. WITH CHECK stops them from rewriting their own row with someone else's user_id. If you omit WITH CHECK, a user could update their own note and set its user_id to another account's id, silently donating the row to another account.

Third, there's no service_role policy because service_role bypasses RLS entirely. That's intentional: your server-side code uses service_role to do things the user can't do directly, like sync a Stripe webhook or cascade a delete.

Pattern 2: Multi-tenant (teams isolated from each other)

Multi-tenant SaaS adds a layer. Rows belong to an organization or team, and every user in that team sees the team's rows. Users in other teams never see anything.

The schema needs a tenant column plus a membership table:

create table public.teams (
  id uuid primary key default gen_random_uuid(),
  name text not null
);

create table public.team_members (
  team_id uuid not null references public.teams(id) on delete cascade,
  user_id uuid not null references auth.users(id) on delete cascade,
  role text not null default 'member',
  primary key (team_id, user_id)
);

create table public.projects (
  id uuid primary key default gen_random_uuid(),
  team_id uuid not null references public.teams(id) on delete cascade,
  name text not null
);

alter table public.team_members enable row level security;
alter table public.projects enable row level security;

The naive policy is a subquery:

create policy "projects_select_team"
on public.projects for select
to authenticated
using (
  team_id in (
    select team_id from public.team_members
    where user_id = (select auth.uid())
  )
);

This works, but it pays a subquery cost per row on large tables. The faster approach wraps the membership lookup in a security definer function, which Postgres can inline and cache:

create or replace function public.user_team_ids()
returns setof uuid
language sql
stable
security definer
set search_path = public
as $$
  select team_id from public.team_members
  where user_id = (select auth.uid())
$$;

revoke execute on function public.user_team_ids from public, anon;
grant execute on function public.user_team_ids to authenticated;

Then the policy becomes:

create policy "projects_select_team"
on public.projects for select
to authenticated
using ( team_id in (select public.user_team_ids()) );

The function runs once per query, caches the caller's team IDs, and every row check becomes an IN comparison against a small fixed set. Benchmarks show this pattern taking a 178,000ms join down to 12ms on medium-sized tables [2].

The security definer and locked-down search path matter. security definer means the function runs with its owner's privileges (typically postgres), which is what lets it read team_members even when RLS on that table would block the caller. Setting search_path = public and revoking execute from public and anon closes the hijack surface where an attacker creates a function in a temp schema with the same name.

The membership table itself needs policies too. A member can see their own membership row and nothing else:

create policy "team_members_select_self"
on public.team_members for select
to authenticated
using ( user_id = (select auth.uid()) );

Writes to team_members should go through Server Actions with the service_role client, not client-side inserts. An RLS policy that lets authenticated users insert their own membership row is a policy that lets any user join any team they want.
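
If you do decide you need client-side member management anyway, the safer shape gates the write on the caller already being an admin of that team. A sketch under assumptions (the is_team_admin helper and the policy name are illustrative, not part of the schema above):

```sql
-- Hypothetical helper: does the caller hold the 'admin' role in this team?
-- security definer so it can read team_members regardless of RLS on it.
create or replace function public.is_team_admin(t uuid)
returns boolean
language sql
stable
security definer
set search_path = public
as $$
  select exists (
    select 1 from public.team_members
    where team_id = t
      and user_id = (select auth.uid())
      and role = 'admin'
  )
$$;

revoke execute on function public.is_team_admin from public, anon;
grant execute on function public.is_team_admin to authenticated;

-- Only team admins can add members, and only as plain members;
-- promotions still go through service_role on the server.
create policy "team_members_insert_by_admin"
on public.team_members for insert
to authenticated
with check ( public.is_team_admin(team_id) and role = 'member' );
```

Even with this in place, invites and removals are usually better off in Server Actions, where you can also send the email and write the audit log in the same transaction of logic.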

Pattern 3: Role-based (admins read everything)

A common requirement: admins can see and edit every row; regular users only their own. The wrong way is to read a role column from public.profiles inside the policy. That forces a lookup for every policy check on every row. The right way is to put the role in the JWT as a custom claim and read it from there.

Set the claim when the user signs up (or promote an existing user via a Server Action using service_role). Then reference it through auth.jwt():

create policy "notes_select_admin_or_own"
on public.notes for select
to authenticated
using (
  ((select auth.jwt()) -> 'app_metadata' ->> 'role') = 'admin'
  or (select auth.uid()) = user_id
);

Two things to verify. app_metadata is the right bucket for security claims because it can only be written by service_role, not by the user. user_metadata is user-writable and must never be used for authorization decisions. And the role value gets embedded in the JWT at sign-in time, so promoting someone to admin doesn't take effect until they get a fresh access token (usually on the next refresh).
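
For promoting an existing user, one server-side option (a sketch; run it from a migration or a service_role connection, never from the client, and the placeholder UUID is illustrative) is to merge the claim into auth.users.raw_app_meta_data, the column Supabase serializes into the JWT's app_metadata:

```sql
-- Sketch: grant the admin claim to one user. Takes effect when the
-- user next obtains a fresh access token, not immediately.
update auth.users
set raw_app_meta_data =
  coalesce(raw_app_meta_data, '{}'::jsonb) || '{"role": "admin"}'::jsonb
where id = '<user uuid>';
```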

Role-in-JWT policies stay fast because reading a JWT claim is constant-time. Role-in-table policies incur a join per row check, which is why they buckle under load.
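
Because the claim rides inside the token, you can confirm what a session actually carries by decoding the JWT's payload segment. This is plain base64url decoding with no signature verification, so it's for inspection only, never for authorization. A minimal Node sketch with a hand-built demo token:

```typescript
// Decode the middle segment of a JWT and parse it as JSON.
// Inspection only: this does NOT verify the signature.
function decodeJwtPayload(token: string): Record<string, any> {
  const [, payload] = token.split('.');
  return JSON.parse(Buffer.from(payload, 'base64url').toString('utf8'));
}

// Demo token with a fake header and signature, just to show the shape.
const demoClaims = { sub: 'user-123', app_metadata: { role: 'admin' } };
const demoToken =
  'eyJhbGciOiJIUzI1NiJ9.' +
  Buffer.from(JSON.stringify(demoClaims)).toString('base64url') +
  '.fake-signature';

const role = decodeJwtPayload(demoToken)?.app_metadata?.role;
console.log(role); // prints "admin"
```

Pointing this at a real session's access_token is a quick way to confirm a freshly promoted user has actually picked up the new role after a token refresh.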

The INSERT gotcha: WITH CHECK is not optional

INSERT policies take WITH CHECK, not USING. Postgres rejects a USING clause on a FOR INSERT policy at create time, so this particular mix-up at least fails loudly. The quieter traps are the policies that create successfully but check the wrong thing:

-- WRONG: Postgres refuses this outright with
-- ERROR: only WITH CHECK expression allowed for INSERT
create policy "broken_insert"
on public.notes for insert
to authenticated
using ( (select auth.uid()) = user_id );

-- RIGHT: WITH CHECK runs against the row being inserted
create policy "notes_insert_own"
on public.notes for insert
to authenticated
with check ( (select auth.uid()) = user_id );

Three specific failure modes to watch for:

  • Having no INSERT policy at all blocks every insert: with RLS enabled, Postgres default-denies writes, and developers disable RLS to "fix" the block.
  • Omitting WITH CHECK on UPDATE lets authenticated users rewrite their row's user_id to any value. Always supply both clauses on UPDATE.
  • Using auth.uid() is not null as the whole check means any authenticated user can insert rows with any user_id, which is an authorization bypass dressed as an authentication check.
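
For completeness, the third failure mode as SQL. It creates without complaint and passes casual testing, because inserting your own rows still works:

```sql
-- WRONG: this only re-checks authentication. Every signed-in user
-- passes, so anyone can insert rows carrying any user_id. The fix is
-- the ownership check from Pattern 1:
-- with check ( (select auth.uid()) = user_id )
create policy "notes_insert_any"
on public.notes for insert
to authenticated
with check ( (select auth.uid()) is not null );
```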

Supabase's own AI prompt for generating RLS policies [3] is worth keeping open as a reference while you write policies, because the correct clause on the correct operation is the single most common mistake in RLS review. Our free Supabase RLS Policy Generator outputs the same four-part patterns for ownership, multi-tenant, and admin access if you'd rather start from a known-good template than hand-write the SQL.

RLS performance: the optimizations that matter

Four changes account for most of the real-world speedup. Apply them in this order:

1. Wrap auth.uid() and auth.jwt() in a subquery. (select auth.uid()) instead of auth.uid(). This is free. It's compatible with every policy. Do it everywhere.

2. Add TO authenticated (or whichever role applies). Without TO, policies evaluate for the anon role too, which adds overhead to every query. Restricting to authenticated is a 99%+ speedup on the anonymous path [2].

3. Index every column referenced in a policy. If the policy filters on user_id, run create index on public.notes (user_id). Supabase measured this taking a large-table query from 171ms to under 0.1ms [2]. Missing policy indexes is the single biggest performance killer in production.

4. Replace subqueries with security definer functions when joining across tables. A join inside a policy runs per row. A security definer function runs once and caches. The 178,000ms to 12ms benchmark is real for any policy that joins a membership table [2].

A good rule: any RLS policy that mentions a column should have that column indexed, and any RLS policy that touches another table should live behind a helper function.
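
Applied to the tables in this guide, that rule works out to three indexes (the if not exists guards are an assumption, there to keep migrations idempotent):

```sql
-- One index per column that appears in a policy predicate.
create index if not exists notes_user_id_idx
  on public.notes (user_id);
create index if not exists projects_team_id_idx
  on public.projects (team_id);
-- Backs the user_team_ids() helper. The composite primary key
-- (team_id, user_id) leads on team_id, so it doesn't help
-- lookups by user_id alone.
create index if not exists team_members_user_id_idx
  on public.team_members (user_id);
```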

How to test policies correctly

Three rules, each of which cost somebody a breach:

Never test policies from the Supabase SQL Editor. The SQL Editor runs queries as the postgres superuser, which bypasses RLS. Your policies could be completely broken and your SQL Editor tests would all pass. Every time.

Test from a real authenticated session via the client SDK. Spin up a test user, sign them in, and run queries with the anon key plus their session. That's the actual code path your users hit. Anything else is theater.

Test the negative cases. Can user A read user B's rows? Can user A write a row with user B's user_id? Can an unauthenticated session read anything? Write those as assertions, not as "I tried it once and it failed."

A quick test harness in a Server Action looks like this:

// actions/admin-rls-check.ts
'use server'

import { createAdminClient } from '@/lib/supabase/server'
import { createClient } from '@supabase/supabase-js'

export async function rlsSmokeTest() {
  const admin = createAdminClient()

  // Create two test users
  const { data: userA } = await admin.auth.admin.createUser({
    email: 'a@test.local', password: 'test-password', email_confirm: true,
  })
  const { data: userB } = await admin.auth.admin.createUser({
    email: 'b@test.local', password: 'test-password', email_confirm: true,
  })

  // Seed a note for user A using service_role
  await admin.from('notes').insert({
    user_id: userA.user!.id,
    title: 'User A private note',
  })

  // Attempt to read it as user B using the anon key + their session
  const anon = createClient(
    process.env.NEXT_PUBLIC_SUPABASE_URL!,
    process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!,
  )
  await anon.auth.signInWithPassword({ email: 'b@test.local', password: 'test-password' })

  const { data, error } = await anon.from('notes').select('*')

  // Expectation: data is empty, no error. User B sees no rows.
  return { leaked: data && data.length > 0, error: error?.message }
}

Run this against staging after every schema change. If leaked ever comes back true, your policies are broken and you ship nothing until they're fixed. For the broader deploy-time verification list, the SaaS Security Checklist covers RLS alongside the other database and auth checks you'd want running before production.

The backend-only alternative

Here's the uncomfortable part. Every RLS policy you write is a security surface you have to maintain. Every new table is four more policies. Every refactor of your access model means auditing every policy that references the changed columns. On a large schema, the policy sprawl becomes its own failure mode, and at some point an engineer in a hurry writes using (true) on a table "just to unblock a deploy."

The alternative we use in SecureStartKit: enable RLS on every table with no policies at all, and funnel every read and write through Server Actions using a service_role client. RLS stays on as a failsafe (a leaked anon key from an env var misconfiguration returns zero rows), but authorization lives in one place: the Server Action code, where it's type-checked, version-controlled, and testable with the same tools you use for the rest of your application logic.

That's the schema you'll see if you buy the template: RLS enabled, zero policies, every mutation through a Server Action. The anon key can't read your data even if someone grabs it, because there's nothing for it to read. If you do need client-side reads for a specific table (realtime subscriptions, for instance), you add policies to that one table with the patterns from this guide, not to the whole schema by default.
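
In that architecture, a typical read looks like this sketch (createAdminClient matches the import used in the test harness above; getSessionUser is an assumed helper standing in for your own session resolution — the point is that authorization is an explicit where clause in code, not a policy):

```typescript
// actions/get-notes.ts
'use server'

import { createAdminClient, getSessionUser } from '@/lib/supabase/server'

export async function getMyNotes() {
  // Authenticate: resolve the caller from the request's session.
  const user = await getSessionUser()
  if (!user) throw new Error('Not authenticated')

  // Authorize: service_role bypasses RLS, so the ownership filter
  // lives here, where it's type-checked and unit-testable.
  const admin = createAdminClient()
  const { data, error } = await admin
    .from('notes')
    .select('id, title, body, created_at')
    .eq('user_id', user.id)

  if (error) throw new Error(error.message)
  return data
}
```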

Which approach is right depends on your app. If you're building something like Notion where users directly edit rows in real time, RLS policies with the optimizations from this post are the right choice. If you're building a more conventional SaaS with CRUD flowing through your own API, backend-only data access removes an entire category of bug. The Next.js Security Hardening Checklist and the Supabase Authentication guide cover the rest of the stack that has to sit on top of whichever one you pick.

Either way, the rule that matters is the same: enable RLS on every table, know exactly what your access model is, and test it from a real authenticated session before you call it done.

Built for developers who care about security

SecureStartKit ships with these patterns out of the box.

Backend-only data access, Zod validation on every input, RLS enabled, Stripe webhooks verified. One purchase, lifetime updates.


References

  1. Row Level Security | Supabase Docs — supabase.com
  2. RLS Performance and Best Practices — github.com
  3. AI Prompt: Database: Create RLS policies | Supabase Docs — supabase.com
