API
Complete API reference for the ORM with functions, operators, and types
This is the full API surface for the ORM. The goal is to match Drizzle’s relational query builder while adapting to Convex’s document model.
Schema
convexTable()
convexTable<TName extends string, TColumns>(
name: TName,
columns: TColumns,
extraConfig?: (t) => [
index('by_field').on(t.someField),
uniqueIndex('by_unique').on(t.someField),
check('check_name', gt(t.someField, 0)),
searchIndex('by_text').on(t.title),
vectorIndex('embedding_vec').on(t.embedding).dimensions(1536),
]
): ConvexTable<TableConfig<TName, TColumns>>
Example
const users = convexTable('users', {
name: text().notNull(),
email: text().notNull(),
age: integer(),
});
Auto‑generated fields
- id: Id<TName>
- createdAt: number (default)
- createdAt: Date when your table defines createdAt: timestamp().notNull().defaultNow()
- _creationTime is internal-only; public ORM APIs always use createdAt
- Use timestamp() for point-in-time fields and date() for calendar-only fields
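For example, a minimal sketch of a table that opts into the Date-typed createdAt:
const posts = convexTable('posts', {
  title: text().notNull(),
  createdAt: timestamp().notNull().defaultNow(), // rows read back with createdAt typed as Date
});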
convexTable.withRLS()
convexTable.withRLS(name, columns, (t) => [
rlsPolicy('policy_name', {
for: 'select',
using: (ctx) => eq(t.ownerId, ctx.viewerId),
}),
])
Enables RLS for the table. If RLS is enabled and no permissive policies apply, access is denied by default.
index() / uniqueIndex()
import { index, uniqueIndex } from 'better-convex/orm';
const posts = convexTable(
'posts',
{
title: text().notNull(),
authorId: id('users'),
},
(t) => [
index('by_author').on(t.authorId),
uniqueIndex('by_title').on(t.title),
]
);
unique()
import { unique } from 'better-convex/orm';
const users = convexTable(
'users',
{
firstName: text(),
lastName: text(),
},
(t) => [unique('full_name').on(t.firstName, t.lastName)]
);
Use .nullsNotDistinct() to treat null as a value (PG parity).
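For example (chaining the modifier after .on(...) is an assumption):
(t) => [unique('full_name').on(t.firstName, t.lastName).nullsNotDistinct()]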
foreignKey()
import { foreignKey } from 'better-convex/orm';
const memberships = convexTable(
'memberships',
{
userSlug: text().notNull(),
},
(t) => [foreignKey({ columns: [t.userSlug], foreignColumns: [users.slug] })]
);
Use .onUpdate() / .onDelete() to configure foreign key actions.
Actions are enforced at runtime by ORM mutations.
For cascade / set null / set default, add an index on the referencing columns.
Without an index, the ORM will throw when it detects referencing rows.
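For example, a sketch of a cascading delete setup; the index name is illustrative and the string-argument form of .onDelete() is assumed:
const memberships = convexTable(
  'memberships',
  {
    userSlug: text().notNull(),
  },
  (t) => [
    // Index the referencing column so cascade can locate child rows efficiently.
    index('by_user_slug').on(t.userSlug),
    foreignKey({ columns: [t.userSlug], foreignColumns: [users.slug] }).onDelete('cascade'),
  ]
);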
check()
import { check, convexTable, gt, integer } from 'better-convex/orm';
const users = convexTable(
'users',
{
age: integer(),
},
(t) => [check('adult_age', gt(t.age, 18))]
);
Checks are enforced at runtime by ORM mutations and follow SQL-like NULL semantics (the check fails only when the expression is false).
searchIndex()
import { searchIndex } from 'better-convex/orm';
const posts = convexTable(
'posts',
{
title: text().notNull(),
authorId: id('users'),
},
(t) => [searchIndex('by_title').on(t.title).filter(t.authorId)]
);
Use .staged() to stage a search index for later activation.
vectorIndex()
import { vector, vectorIndex } from 'better-convex/orm';
const posts = convexTable(
'posts',
{
authorId: id('users'),
embedding: vector(1536).notNull(),
},
(t) => [
vectorIndex('embedding_vec')
.on(t.embedding)
.dimensions(1536)
.filter(t.authorId),
]
);
Vector indexes require .dimensions(n) and can be staged with .staged().
deletion()
import { deletion } from 'better-convex/orm';
const users = convexTable(
'users',
{
slug: text().notNull(),
deletionTime: integer(),
},
() => [deletion('scheduled', { delayMs: 60_000 })]
);
Sets table-level default delete mode:
- deletion('hard')
- deletion('soft')
- deletion('scheduled', { delayMs? })
Per-query delete mode methods still take precedence: .hard(), .soft(), .scheduled({ delayMs }).
vector()
vector(1536)
vector('embedding', 768)
Column Modifiers
const users = convexTable('users', {
slug: text().notNull(),
});
text().notNull()
text().default('member')
text().$type<'admin' | 'member'>()
integer().$defaultFn(() => Date.now())
integer().$onUpdateFn(() => Date.now())
text().unique()
text().unique('handle_unique', { nulls: 'not distinct' })
id('users').references(() => users.id)
text().references(() => users.slug, { onDelete: 'cascade' })
rlsPolicy()
rlsPolicy(name, {
as: 'permissive' | 'restrictive',
for: 'all' | 'select' | 'insert' | 'update' | 'delete',
to: 'public' | 'current_user' | RlsRole | string | Array<...>,
using: (ctx, t) => FilterExpression<boolean>,
withCheck: (ctx, t) => FilterExpression<boolean>,
})
using controls read/update/delete visibility. withCheck controls insert/update
values. When omitted, withCheck falls back to using.
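For example, a sketch of an update policy that constrains both the rows that can be targeted and the values that can be written (field names are illustrative):
rlsPolicy('own_posts_update', {
  for: 'update',
  using: (ctx, t) => eq(t.authorId, ctx.viewerId), // which existing rows the update may target
  withCheck: (ctx, t) => eq(t.authorId, ctx.viewerId), // what the written row must satisfy
})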
rlsRole()
rlsRole('admin', { createRole: true }).existing();
defineRelations()
defineRelations(schema, (r) => ({
users: { posts: r.many.posts() },
posts: {
author: r.one.users({ from: r.posts.authorId, to: r.users.id }),
},
}))
Database
createOrm()
const orm = createOrm({ schema: relations });
const db = orm.db(ctx);
Creates a unified ORM setup object. Use orm.db(ctx) to build a typed query/mutation client.
createdAt is always public in ORM rows. Its type is:
- number for the system alias
- Date when your schema defines createdAt: timestamp().notNull().defaultNow()
For async mutation batching, pass scheduler function refs and export generated handlers:
import { createOrm } from 'better-convex/orm';
import { internal } from './_generated/api';
import { internalMutation } from './functions';
const orm = createOrm({
schema: relations,
ormFunctions: internal.orm,
internalMutation,
});
export const { scheduledMutationBatch, scheduledDelete } = orm.api();
Recommended: create orm once in your context layer and attach orm.db(ctx) to ctx.orm.
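One way to wire that up, as a sketch that assumes convex-helpers' custom function wrappers:
import { customCtx, customQuery } from 'convex-helpers/server/customFunctions';
import { query } from './_generated/server';

// Every handler built with ormQuery receives ctx.orm alongside the usual Convex ctx.
export const ormQuery = customQuery(
  query,
  customCtx((ctx) => ({ orm: orm.db(ctx) }))
);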
RLS options
const ormDb = orm.db(ctx, {
rls: {
ctx,
roleResolver: (ctx) => ctx.roles ?? [],
},
vectorSearch: ctx.vectorSearch,
})
Use ctx.orm.skipRules (or ormDb.skipRules) for an explicit bypass of RLS.
Pass vectorSearch when you want ORM findMany({ vectorSearch }) support.
db.system
db.system exposes raw Convex system-table reads for _storage and _scheduled_functions.
await db.system.get(jobId);
await db.system.query('_scheduled_functions').collect();
await db.system.query('_storage').take(20);
Scope
- Raw passthrough methods like get and query
- Not ORM query-builder APIs (findMany, findFirst)
- Not relation loading (with) or ORM-specific helpers/rules
Query Builder
findMany()
await db.query.users.findMany({
where: { role: 'admin' },
orderBy: { createdAt: 'desc' },
limit: 10,
offset: 0,
with: { posts: true },
columns: { name: true },
});
Options
- where: object filter, callback expression ((table, ops) => ...), or callback predicate via ops.predicate(...)
- search: { index, query, filters? } (full‑text search mode; only on tables with search indexes)
- vectorSearch: { index, vector, limit, includeScore?, filter? } (vector similarity mode; only on tables with vector indexes)
- orderBy: { field: 'asc' | 'desc' }
- limit, offset
- cursor: string | null (cursor pagination; first page is null)
- maxScan: number (only valid with cursor pagination; caps scan-fallback paths like predicate/object post-filter and index-union pagination)
- allowFullScan: opt‑in for unsized non-cursor scans and relation fan-out overrides (not supported with cursor pagination; not used with search or vectorSearch)
- with: relation loading
- columns: post‑fetch projection
- extras: post‑fetch computed fields
Sizing rules (non-paginated)
- Provide limit, or
- Use cursor pagination (cursor + limit), or
- Configure defineSchema(..., { defaults: { defaultLimit } }), or
- Use allowFullScan: true to opt into unbounded collection.
For with + many() relations, set relation limit or defaults.defaultLimit; otherwise use allowFullScan.
Relation loading also enforces fan-out key caps: if unique relation lookup keys exceed defaults.relationFanOutMaxKeys (default 1000), the query throws unless allowFullScan is set.
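A sketch of both knobs; the shape of defineSchema's first argument and the per-relation limit option are assumptions:
const schema = defineSchema(
  { users, posts },
  { defaults: { defaultLimit: 100, relationFanOutMaxKeys: 1000 } }
);

// Per-relation limit when loading a many() relation:
await db.query.users.findMany({
  limit: 20,
  with: { posts: { limit: 10 } },
});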
Search mode constraints
- orderBy is not allowed (results stay in Convex relevance order)
- function where (callback/predicate) is not allowed
- relation-based where is not allowed
- object where on base table fields is allowed (post-search filter)
- with is allowed
Vector mode constraints
- limit is required inside vectorSearch (1..256)
- orderBy is not allowed
- cursor is not allowed
- maxScan is not allowed
- where is not allowed
- .withIndex(...) is not allowed
- offset is not allowed
- top-level limit is not allowed (use vectorSearch.limit)
- includeScore: true adds _score to each returned row
- with is allowed
findMany({ search })
await db.query.posts.findMany({
search: {
index: 'text_search',
query: 'galaxy',
filters: { type: 'news' },
},
cursor: null,
limit: 20,
});
findMany({ vectorSearch })
await db.query.posts.findMany({
vectorSearch: {
index: 'embedding_vec',
vector: args.embedding,
limit: 10,
includeScore: true,
filter: (q) => q.eq('type', 'news'),
},
with: { author: true },
});
findFirst()
await db.query.users.findFirst({ where: { email: 'a@b.com' } });
await db.query.posts.findFirst({
search: { index: 'text_search', query: 'galaxy' },
});
findFirstOrThrow()
Like findFirst(), but throws if no row exists.
await db.query.users.findFirstOrThrow({ where: { email: 'a@b.com' } });
await db.query.posts.findFirstOrThrow({
search: { index: 'text_search', query: 'galaxy' },
});
findMany({ cursor, limit })
await db.query.posts.findMany({
where: { published: true },
orderBy: { createdAt: 'desc' },
cursor: null,
limit: 20,
});
select() composition (advanced)
const page = await db.query.messages
.withIndex('by_from_to')
.select()
.union([
{ where: { from: me, to: them } },
{ where: { from: them, to: me } },
])
.interleaveBy(['createdAt', 'id'])
.filter(async (m) => !m.deletedAt)
.map(async (m) => ({ ...m, preview: m.body.slice(0, 120) }))
.paginate({ cursor: null, limit: 20 });
Composition stage output is inferred in order:
const page = await db.query.messages
.select()
.map(async (m) => ({ ...m, preview: m.body.slice(0, 120) }))
.filter(async (m) => m.preview.length > 0)
.paginate({ cursor: null, limit: 20 });
page.page[0].preview; // string
flatMap infers relation-targeted output:
const expanded = await db.query.users
.select()
.flatMap('posts', { includeParent: false })
.paginate({ cursor: null, limit: 20 });
expanded.page[0].text; // post field
Use select() when you need pre-pagination transforms (filter, map, distinct, relation flatMap) or SQL-like unions/interleaving.
findMany({ pageByKey }) (advanced)
const page = await db.query.messages.findMany({
pageByKey: {
index: 'by_channel',
order: 'asc',
startKey: previousEndKey,
targetMaxRows: 100,
},
});
pageByKey returns:
{
page: T[];
indexKeys: (Value | undefined)[][];
hasMore: boolean;
}
Use this mode when you need deterministic key boundaries (explicit adjacency control) instead of opaque cursor tokens.
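For example, a sketch of walking a table in key order; it assumes the last entry of indexKeys can be passed back as the next startKey:
let startKey;
let hasMore = true;
while (hasMore) {
  const page = await db.query.messages.findMany({
    pageByKey: { index: 'by_channel', order: 'asc', startKey, targetMaxRows: 100 },
  });
  // process page.page ...
  startKey = page.indexKeys[page.indexKeys.length - 1];
  hasMore = page.hasMore;
}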
Query Filters (object where)
Core operators
- eq, ne, gt, gte, lt, lte
- between, notBetween
- in, notIn
- isNull, isNotNull
- AND, OR, NOT
String operators (post‑fetch)
- like, ilike, notLike, notIlike
- startsWith, endsWith, contains
Not supported (SQL subquery operators)
- exists, notExists (require SQL subqueries, which Convex does not support at runtime)
- Use relation existence filters instead (where: { posts: true } or where: { NOT: { posts: true } }), as shown below
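For example:
// Users that have at least one post:
await db.query.users.findMany({ where: { posts: true }, limit: 20 });
// Users with no posts:
await db.query.users.findMany({ where: { NOT: { posts: true } }, limit: 20 });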
Index compilation
- Index-backed by default when possible.
- Currently index-compiled: eq, ne, gt, gte, lt, lte, between, notBetween, in, notIn, isNull, isNotNull, startsWith, like('prefix%'), and same-field equality OR.
- Object/callback expression filters can run without .withIndex(...); if not index-compiled, they run as bounded scans.
- Post-fetch operators are typed to require explicit .withIndex(...) (contains, endsWith, ilike, notLike, notIlike, array operators, NOT, RAW).
Workarounds for remaining full-scan operators
- arrayContains / arrayContained / arrayOverlaps: use an inverted/join table indexed by element.
- contains: use a search index or a tokenized denormalized field.
- endsWith: store and index a reversed-string column, then use startsWith (see the sketch below).
- ilike / notIlike: normalize to lowercase in a dedicated indexed field.
- notLike: index a positive pre-filter, then post-filter with notLike.
- RAW: narrow with indexed pre-filters and call .withIndex(...).
- Predicate where: call .withIndex(name, range?) and use maxScan for bounded scans (cursor pagination only).
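A sketch of the reversed-string workaround; the column and index names are illustrative, and it assumes the callback where exposes startsWith:
const users = convexTable(
  'users',
  {
    email: text().notNull(),
    emailReversed: text().notNull(), // e.g. 'moc.elpmaxe@ada', kept in sync on writes
  },
  (t) => [index('by_email_reversed').on(t.emailReversed)]
);

// "email ends with '@example.com'" becomes an index-backed prefix match:
const reversedSuffix = [...'@example.com'].reverse().join('');
await db.query.users.findMany({
  where: (t, ops) => ops.startsWith(t.emailReversed, reversedSuffix),
  limit: 50,
});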
Mutations
insert()
await db.insert(users).values({ name: 'Ada', email: 'ada@example.com' });
update()
await db
.update(users)
.set({ name: 'Ada Lovelace' })
.where(eq(users.id, userId));
set() semantics
- undefined values passed to .set(...) are ignored (no-op).
- To remove a top-level field, use unsetToken: .set({ field: unsetToken }), as sketched below.
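A minimal sketch, assuming unsetToken is exported from 'better-convex/orm' and that a nickname field exists:
import { unsetToken } from 'better-convex/orm';
await db
  .update(users)
  .set({ nickname: unsetToken }) // removes the top-level nickname field
  .where(eq(users.id, userId));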
Full-scan guardrail
- update() without where() throws unless you call .allowFullScan().
- Non‑id filters require .allowFullScan() only when no index is available.
- Matched rows are capped by mutationMaxRows (default 1000), collected in mutationBatchSize pages (default 100).
- Async fan-out uses mutationLeafBatchSize (default 900) for non-recursive FK actions.
- Async fan-out enforces mutationMaxBytesPerBatch (default 2_097_152 measured bytes) and mutationScheduleCallCap (default 100).
- Override defaults via defineSchema(..., { defaults: { mutationMaxRows, mutationBatchSize, mutationLeafBatchSize, mutationMaxBytesPerBatch, mutationScheduleCallCap, mutationExecutionMode, mutationAsyncDelayMs } }).
- For large workloads, use .paginate({ cursor, limit }) to execute one batch per mutation call (see the sketch below).
- Use execute({ mode: 'async', batchSize?, delayMs? }) (or .executeAsync(...)) to run the first batch inline and schedule continuation.
- Explicit async APIs cannot be combined with .paginate().
- Paged mode returns { continueCursor, isDone, numAffected } (+ page when using .returning()).
- Async execute() returns the same shape as sync execute() (void or first-batch returning() rows).
- Paged mode currently supports single-range index plans only.
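A sketch of one-batch-per-call paging; it assumes a standard Convex mutation wrapper, ctx.orm attached as recommended above, and illustrative posts fields:
import { v } from 'convex/values';
import { mutation } from './_generated/server';
import { eq } from 'better-convex/orm';

export const archiveAuthorPosts = mutation({
  args: { authorId: v.id('users'), cursor: v.union(v.string(), v.null()) },
  handler: async (ctx, args) => {
    const result = await ctx.orm
      .update(posts)
      .set({ archived: true }) // illustrative field
      .where(eq(posts.authorId, args.authorId))
      .paginate({ cursor: args.cursor, limit: 100 });
    // The caller (client or scheduler) passes continueCursor back until isDone.
    return { continueCursor: result.continueCursor, isDone: result.isDone };
  },
});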
await db
.update(users)
.set({ name: 'Ada Lovelace' })
.where(eq(users.email, email));
// If email is not indexed:
// .allowFullScan();
delete()
await db.delete(users).where(eq(users.id, userId));
Full-scan guardrail
- delete() without where() throws unless you call .allowFullScan().
- Non‑id filters require .allowFullScan() only when no index is available.
- Matched rows are capped by mutationMaxRows (default 1000), collected in mutationBatchSize pages (default 100).
- Async fan-out uses mutationLeafBatchSize (default 900) for non-recursive FK actions.
- Async fan-out enforces mutationMaxBytesPerBatch (default 2_097_152 measured bytes) and mutationScheduleCallCap (default 100).
- Override defaults via defineSchema(..., { defaults: { mutationMaxRows, mutationBatchSize, mutationLeafBatchSize, mutationMaxBytesPerBatch, mutationScheduleCallCap, mutationExecutionMode, mutationAsyncDelayMs } }).
- For large workloads, use .paginate({ cursor, limit }) to execute one batch per mutation call.
- Use execute({ mode: 'async', batchSize?, delayMs? }) (or .executeAsync(...)) to run the first batch inline and schedule continuation (see the sketch below).
- Explicit async APIs cannot be combined with .paginate() or a resolved scheduled delete mode (.scheduled(...) or table-level deletion('scheduled')).
- Paged mode returns { continueCursor, isDone, numAffected } (+ page when using .returning()).
- Async execute() returns the same shape as sync execute() (void or first-batch returning() rows).
- Paged mode currently supports single-range index plans only.
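A minimal sketch of asynchronous continuation for a large delete; the logs table, its indexed createdAt field, and the cutoff value are illustrative:
// Deletes the first batch inline, then schedules the rest via scheduledMutationBatch.
await db
  .delete(logs)
  .where(lt(logs.createdAt, cutoff))
  .execute({ mode: 'async', batchSize: 100 });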
await db
.delete(users)
.where(eq(users.email, email));
// If email is not indexed:
// .allowFullScan();
Delete helpers
// Soft delete (sets deletionTime)
await db.delete(users).where(eq(users.id, userId)).soft();
// Force hard delete even if table default is soft/scheduled
await db.delete(users).where(eq(users.id, userId)).hard();
// Cascade behavior for FK actions
await db.delete(users).where(eq(users.id, userId)).cascade({ mode: 'hard' });
// Scheduled delete (soft now, hard later)
await db
.delete(users)
.where(eq(users.id, userId))
.scheduled({ delayMs: 60_000 });
Scheduled deletes require passing { scheduler, scheduledDelete } when building ctx.orm
and exposing scheduledDeleteFactory(schema, edges, scheduledMutationBatchRef) as an internal mutation.
The scheduled worker validates the original deletionTime token before hard-deleting, so clearing/changing deletionTime cancels a pending scheduled delete.
Async mutation batches (execute({ mode: 'async' }) / .executeAsync() for update/delete) require passing
{ scheduler, scheduledMutationBatch } when building ctx.orm and exposing
scheduledMutationBatchFactory(schema, edges, internalRef) as an internal mutation.
returning()
const [user] = await db
.insert(users)
.values({ name: 'Ada', email: 'ada@example.com' })
.returning({ id: users.id, email: users.email });
onConflictDoUpdate()
await db
.insert(users)
.values({ email: 'ada@example.com', name: 'Ada' })
.onConflictDoUpdate({
target: users.email,
set: { name: 'Ada Lovelace' },
});
If the conflict target has no index, call .allowFullScan() to opt in:
await db
.insert(users)
.values({ email: 'ada@example.com', name: 'Ada' })
.onConflictDoUpdate({ target: users.email, set: { name: 'Ada' } })
.allowFullScan();
Filter Expression Helpers
Use these in mutation where(...) clauses:
eq(field, value)
ne(field, value)
gt(field, value)
gte(field, value)
lt(field, value)
lte(field, value)
between(field, min, max)
notBetween(field, min, max)
inArray(field, values)
notInArray(field, values)
and(...filters)
or(...filters)
not(filter)
isNull(field)
isNotNull(field)
between is inclusive of endpoints.
notBetween matches values strictly outside the range.
Type Helpers
InferSelectModel<typeof users>
InferInsertModel<typeof users>
users.$inferSelect
users.$inferInsert
Introspection Helpers
getTableColumns(users) // includes id/createdAt
getTableConfig(users) // indexes/unique/fks/rls/checks
Notes
- columns projection is post‑fetch
- String operators are post‑fetch
- extras are computed post‑fetch
- findMany({ cursor, limit }) uses the first orderBy field for cursor ordering