Update
Update rows with Drizzle-style builders
In this guide, we'll learn how to update rows using the ORM's Drizzle-style update() builder. You'll see basic updates, returning clauses, paginated execution, and async batching for large workloads.
Basic Update
Let's start with a mutation that renames a user by id:
import { z } from 'zod';
import { eq } from 'better-convex/orm';
import { publicMutation } from '../lib/crpc';
import { users } from '../schema';

export const renameUser = publicMutation
  .input(z.object({ userId: z.string(), name: z.string() }))
  .mutation(async ({ ctx, input }) => {
    await ctx.orm
      .update(users)
      .set({ name: input.name })
      .where(eq(users.id, input.userId));
  });

Important: update() without .where(...) throws unless you call .allowFullScan(). See Querying Data for details on allowFullScan.
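For the rare case where you really do want to touch every row, the opt-in looks like the sketch below. This is illustrative only: the verified field is a hypothetical column, and the exact position of .allowFullScan() in the chain is assumed to be flexible.

// Intentional full-table update: opt in explicitly with .allowFullScan()
// (the `verified` field is a hypothetical example column)
await ctx.orm
  .update(users)
  .set({ verified: false })
  .allowFullScan();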
Returning
Use .returning() to get back the updated rows. You can return all fields or pick specific columns:
const updated = await ctx.orm
  .update(users)
  .set({ name: 'Mr. Dan' })
  .where(eq(users.id, userId))
  .returning();

const ids = await ctx.orm
  .update(users)
  .set({ name: 'Mr. Dan' })
  .where(eq(users.id, userId))
  .returning({ id: users.id });

Safety Limits
The ORM collects matching rows in bounded pages before applying writes. The key defaults are:
- mutationBatchSize: 100
- mutationMaxRows: 1000
- mutationLeafBatchSize: 900 (async FK fan-out)
If matched rows exceed mutationMaxRows, the update throws. You can customize these values in your schema:
export default defineSchema({ users, posts }, {
  defaults: {
    mutationBatchSize: 200,
    mutationMaxRows: 5000,
  },
});

For the full list of configurable defaults, see Schema Definition -- Runtime Defaults.
Paginated Update Execution
For large workloads that exceed safety limits, you can process updates page-by-page. This follows Convex's batching pattern and avoids one large transaction.
Here's how to process updates across multiple pages. This requires an index on the filtered field:
// Schema: index('by_role').on(t.role) on the users table
const page1 = await ctx.orm
  .update(users)
  .set({ role: 'member' })
  .where(eq(users.role, 'pending'))
  .paginate({ cursor: null, limit: 100 });

if (!page1.isDone) {
  const page2 = await ctx.orm
    .update(users)
    .set({ role: 'member' })
    .where(eq(users.role, 'pending'))
    .paginate({ cursor: page1.continueCursor, limit: 100 });
}

Each page returns:
- continueCursor -- cursor for the next batch
- isDone -- true when no more pages remain
- numAffected -- rows updated in this page
- page -- returned rows (only when .returning() is used)
Note: paginate() currently supports single-range index plans. Multi-probe filters (inArray, some OR patterns, complement ranges) are not yet supported in paged mutation mode.
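To drain every page without growing a single transaction, a common pattern is to have each invocation process one page and hand the cursor back to the caller (or to a scheduled follow-up), re-invoking until isDone is true. Here is a minimal sketch in the same crpc style as above; the mutation name, input shape, and returned object are illustrative, not a prescribed API:

export const promotePendingUsers = publicMutation
  .input(z.object({ cursor: z.string().nullable() }))
  .mutation(async ({ ctx, input }) => {
    // Process one page per invocation; the caller re-invokes with continueCursor.
    const page = await ctx.orm
      .update(users)
      .set({ role: 'member' })
      .where(eq(users.role, 'pending'))
      .paginate({ cursor: input.cursor, limit: 100 });

    return { continueCursor: page.continueCursor, isDone: page.isDone };
  });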
Async Batched Update
When an update can affect large sets of rows, use async mode. The first batch runs in the current mutation, then remaining batches are scheduled automatically.
You can enable async mode in three ways:
- Per call: .execute({ mode: 'async' })
- Convenience alias: .executeAsync()
- Global default: defineSchema(..., { defaults: { mutationExecutionMode: 'async' } }) -- see the sketch after this list
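For example, to make async execution the project-wide default and tune the batch size and scheduling delay in one place, the schema defaults from Safety Limits above can be extended like this (a sketch; the values shown are the same illustrative numbers used elsewhere in this guide):

export default defineSchema({ users, posts }, {
  defaults: {
    mutationExecutionMode: 'async',
    mutationBatchSize: 200,
    mutationAsyncDelayMs: 0,
  },
});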
Here's an async update with a custom batch size:
const firstBatch = await ctx.orm
  .update(users)
  .set({ role: 'member' })
  .where(eq(users.role, 'pending'))
  .returning({ id: users.id })
  .execute({ mode: 'async', batchSize: 200, delayMs: 0 });

Key behaviors to keep in mind:
- execute() in async mode returns the same shape as sync mode (with .returning(), you get rows from the first batch only)
- Remaining batches are scheduled asynchronously
- Async APIs (execute({ mode: 'async' }) / executeAsync()) cannot be combined with .paginate()
- batchSize resolves as: per-call batchSize > defaults.mutationBatchSize > 100
- delayMs resolves as: per-call delayMs > defaults.mutationAsyncDelayMs > 0
- Async FK update fan-out (onUpdate: 'cascade', set null, set default) uses mutationLeafBatchSize
Important: Async execution requires wiring ormFunctions and scheduledMutationBatch in your ORM setup. See Mutations -- Async Wiring Setup for the setup steps.
Drizzle Differences
A few SQL-only Drizzle features are not applicable in Convex, and a few .set(...) behaviors differ:
- limit, orderBy, UPDATE ... FROM, and WITH clauses are not supported
- undefined values passed to .set(...) are ignored (treated as "not provided"). If every value is undefined, the update is a no-op.
- To explicitly remove a field, use unsetToken: .set({ nickname: unsetToken }) (shallow: it unsets the top-level field only), as shown in the sketch after this list
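For example, to leave name untouched when the client omits it while explicitly clearing nickname, an update could look like the following sketch. The import path for unsetToken is an assumption (shown alongside eq for illustration), and the nickname field is a hypothetical column:

import { eq, unsetToken } from 'better-convex/orm'; // unsetToken import path is an assumption

await ctx.orm
  .update(users)
  .set({
    name: input.name,      // if undefined, this field is simply ignored
    nickname: unsetToken,  // explicitly removes the top-level nickname field
  })
  .where(eq(users.id, input.userId));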
Note: Unique constraints, foreign keys, and RLS policies are enforced at runtime for ORM mutations. Direct native Convex writes like ctx.db.patch(...) bypass these checks (and are intentionally not exposed on ctx.orm).
You now have everything you need to update data, from simple field changes to large-scale async batching.