
Batching

Use the BatchGet and BatchWrite commands to perform batched read and write operations in DynamoDB. These are the Document Builder commands for the underlying BatchGetItem and BatchWriteItem operations.

The BatchGet command enables you to retrieve multiple items by primary key in a single operation:

const batchGet = new BatchGet({
  keys: [
    { userId: '123', todoId: '456' },
    { userId: '789', todoId: '101' },
  ],
});
const { items } = await todoEntity.send(batchGet);
console.log(items);

If you need the read to be strongly consistent for all items, you can set the consistent parameter to true:

const batchGet = new BatchGet({
  keys: [
    { userId: '123', todoId: '456' },
    { userId: '789', todoId: '101' },
  ],
  consistent: true,
});
const { items } = await todoEntity.send(batchGet);

Use the BatchProjectedGet command when you only need specific attributes from multiple items.

Because the projected items are a subset of your entity schema, BatchProjectedGet requires a Zod schema defining the shape of the projected items.

const batchProjectedGet = new BatchProjectedGet({
  keys: [
    { userId: '123', todoId: '456' },
    { userId: '789', todoId: '101' },
  ],
  projection: ['title', 'completed'],
  projectionSchema: z.object({
    title: z.string(),
    completed: z.boolean(),
  }),
});
const { items } = await todoEntity.send(batchProjectedGet);
// items is typed as Array<{ title: string; completed: boolean; }>
console.log(items);

The BatchWrite command enables you to put (create or replace) and/or delete multiple items in a single operation:

const batchWrite = new BatchWrite({
  items: [
    { userId: '123', todoId: '456', title: 'Take out the trash', completed: false },
    { userId: '789', todoId: '101', title: 'Buy groceries', completed: true },
  ],
  deletes: [
    { userId: '111', todoId: '222' },
  ],
});
const { unprocessedPuts, unprocessedDeletes } = await todoEntity.send(batchWrite);

Both BatchGet and BatchWrite commands may return unprocessed keys or items if the operation exceeds provisioned throughput limits. You can retry these unprocessed items in subsequent batch operations.

BatchGet will return unprocessedKeys, while BatchWrite will return unprocessedPuts and unprocessedDeletes.
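As a sketch of this retry pattern at the entity level, the helper below is hypothetical and not part of the library: it re-sends unprocessed keys with exponential backoff until none remain, and the injected send callback stands in for an entity.send(new BatchGet({ keys })) call.

```typescript
// Hypothetical retry helper (not part of the library). The `send` callback
// abstracts over an entity.send(new BatchGet({ keys })) call so the loop
// stays library-agnostic and easy to test.
type BatchGetResult<K, I> = { items: I[]; unprocessedKeys?: K[] };

async function retryBatchGet<K, I>(
  keys: K[],
  send: (keys: K[]) => Promise<BatchGetResult<K, I>>,
  maxAttempts = 5,
): Promise<I[]> {
  const collected: I[] = [];
  let pending = keys;
  for (let attempt = 0; attempt < maxAttempts && pending.length > 0; attempt++) {
    if (attempt > 0) {
      // Exponential backoff between retries: 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, 100 * 2 ** (attempt - 1)));
    }
    const { items, unprocessedKeys } = await send(pending);
    collected.push(...items);
    pending = unprocessedKeys ?? [];
  }
  if (pending.length > 0) {
    throw new Error(`${pending.length} keys still unprocessed after ${maxAttempts} attempts`);
  }
  return collected;
}
```

The same shape works for BatchWrite retries by swapping unprocessedKeys for unprocessedPuts and unprocessedDeletes.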

Use the TableBatchWrite and TableBatchGet commands to perform batch operations across multiple entity types in a single DynamoDB request. These are the Document Builder table-level commands executed via table.send().

Before passing entity operations to a table-level command, use entity.prepare() to bind each batch command to its entity. This returns a prepared group that carries the entity’s schema, key builders, and the requested operation.

// Prepare a batch write for users
const userWrites = userEntity.prepare(new BatchWrite({
  items: [
    { userId: 'u1', name: 'Alice', email: 'alice@example.com' },
    { userId: 'u2', name: 'Bob', email: 'bob@example.com' },
  ],
  deletes: [
    { userId: 'u3', email: 'charlie@example.com' },
  ],
}));

// Prepare a batch get for orders
const orderGets = orderEntity.prepare(new BatchGet({
  keys: [{ orderId: 'o1' }, { orderId: 'o2' }],
}));

Pass prepared write groups to TableBatchWrite and execute via table.send():

const { unprocessedPuts, unprocessedDeletes } = await myTable.send(
  new TableBatchWrite({
    writes: [
      userEntity.prepare(new BatchWrite({
        items: [
          { userId: 'u1', name: 'Alice', email: 'alice@example.com' },
          { userId: 'u2', name: 'Bob', email: 'bob@example.com' },
        ],
      })),
      orderEntity.prepare(new BatchWrite({
        items: [{ orderId: 'o1', userId: 'u1', total: 99.99, status: 'pending' }],
        deletes: [{ orderId: 'o0' }],
      })),
    ],
  }),
);

The result fields unprocessedPuts and unprocessedDeletes are typed tuples that match the order of the input writes array. This means each index is typed to its corresponding entity:

const [userUnprocessedPuts, orderUnprocessedPuts] = unprocessedPuts;
// userUnprocessedPuts: User[] | undefined
// orderUnprocessedPuts: Order[] | undefined
const [userUnprocessedDeletes, orderUnprocessedDeletes] = unprocessedDeletes;
// userUnprocessedDeletes: Partial<User>[] | undefined
// orderUnprocessedDeletes: Partial<Order>[] | undefined

Pass prepared get groups to TableBatchGet and execute via table.send():

const { items, unprocessedKeys } = await myTable.send(
  new TableBatchGet({
    gets: [
      userEntity.prepare(new BatchGet({
        keys: [
          { userId: 'u1', email: 'alice@example.com' },
          { userId: 'u2', email: 'bob@example.com' },
        ],
      })),
      orderEntity.prepare(new BatchGet({
        keys: [{ orderId: 'o1' }, { orderId: 'o2' }],
      })),
    ],
  }),
);

const [users, orders] = items;
// users: User[]
// orders: Order[]

You can request strongly consistent reads either at the command level or on individual entity groups.

Command-level consistent (recommended) — set it once on TableBatchGet to apply to the entire request. This completely overrides any consistent setting on individual groups:

const { items } = await myTable.send(
  new TableBatchGet({
    consistent: true, // Forces ConsistentRead: true for all groups, regardless of group-level settings
    gets: [
      userEntity.prepare(new BatchGet({
        keys: [{ userId: 'u1', email: 'alice@example.com' }],
      })),
      orderEntity.prepare(new BatchGet({
        keys: [{ orderId: 'o1' }],
      })),
    ],
  }),
);

Group-level consistent — when consistent is not set on the command, set it on individual entity groups. Because DynamoDB requires a single consistent-read setting per table in a batch request, if any group requests consistency, the entire batch request will use ConsistentRead: true.

const { items } = await myTable.send(
  new TableBatchGet({
    gets: [
      userEntity.prepare(new BatchGet({
        keys: [{ userId: 'u1', email: 'alice@example.com' }],
        consistent: true, // Makes the entire batch request consistent
      })),
      orderEntity.prepare(new BatchGet({
        keys: [{ orderId: 'o1' }],
      })),
    ],
  }),
);

All entity groups in a TableBatchWrite or TableBatchGet must belong to the same table that table.send() is called on. If any entity references a different table, a DocumentBuilderError is thrown at runtime before any DynamoDB request is made.

// This will throw a DocumentBuilderError at runtime
await tableA.send(new TableBatchGet({
  gets: [
    entityOnTableA.prepare(new BatchGet({ keys: [...] })),
    entityOnTableB.prepare(new BatchGet({ keys: [...] })), // ❌ Wrong table
  ],
}));

If DynamoDB cannot process all items due to throughput limits, unprocessed items are returned mapped back to original domain objects — not raw DynamoDB key-decorated items. You can pass them directly into a subsequent batch operation for retry.

const { unprocessedPuts, unprocessedDeletes, unprocessedKeys } = await myTable.send(...);

// Retry unprocessed puts from index 0 (users)
const [userUnprocessedPuts] = unprocessedPuts;
if (userUnprocessedPuts?.length) {
  await sleep(exponentialBackoff());
  await myTable.send(new TableBatchWrite({
    writes: [
      userEntity.prepare(new BatchWrite({ items: userUnprocessedPuts })),
    ],
  }));
}
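The sleep and exponentialBackoff helpers in the snippet above are not provided by the library; one reasonable minimal implementation, using full jitter as commonly recommended for DynamoDB retries, might look like:

```typescript
// Promise-based sleep: resolves after `ms` milliseconds.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

// Full-jitter exponential backoff: a random delay between 0 and
// min(baseMs * 2^attempt, capMs). Attempt numbering starts at 0.
function exponentialBackoff(attempt = 0, baseMs = 100, capMs = 5_000): number {
  return Math.random() * Math.min(baseMs * 2 ** attempt, capMs);
}
```

In a retry loop you would pass the current attempt number, e.g. await sleep(exponentialBackoff(attempt)).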

The BatchGet command expects the following input config:

{
  keys: Array<Partial<Schema>>;
  consistent?: boolean;
  skipValidation?: boolean;
  timeoutMs?: number;
  abortController?: AbortController;
  returnConsumedCapacity?: ReturnConsumedCapacity;
}
keys (required): Array<Partial<Schema>>
An array of primary keys for the items to retrieve. Each key should contain the attributes that make up the primary key. If using computed primary keys, only include the attributes used by your key builder functions.

consistent?: boolean
If set to true, DynamoDB will ensure a strongly consistent read for all items. Defaults to false.

skipValidation?: boolean
If set to true, schema validation is bypassed entirely. Defaults to false.

timeoutMs?: number
Number of milliseconds to wait before the operation times out and auto-cancels.

abortController?: AbortController
If you need to abort the command's operation, use the abort controller to signal cancellation.

returnConsumedCapacity?: ReturnConsumedCapacity
Determines the level of detail about provisioned throughput consumption that is returned in the response. Valid values are TOTAL, INDEXES, and NONE.
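Note that the underlying DynamoDB BatchGetItem API accepts at most 100 keys per request, and BatchWriteItem at most 25 put/delete requests. If your version of the library does not chunk large arrays for you, a generic helper such as the hypothetical chunk below (not part of the library) can split them before sending:

```typescript
// Generic chunking helper (not part of the library): splits an array into
// groups of at most `size` elements, e.g. 100 keys per BatchGetItem request
// or 25 put/delete requests per BatchWriteItem request.
function chunk<T>(array: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < array.length; i += size) {
    out.push(array.slice(i, i + size));
  }
  return out;
}
```

Each chunk can then be sent as its own BatchGet or BatchWrite command, with unprocessed results retried as described above.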

The BatchGet command returns the following result:

{
  items: Schema[];
  unprocessedKeys?: Array<Partial<Schema>>;
  responseMetadata?: ResponseMetadata;
  consumedCapacity?: ConsumedCapacity;
}
items: Schema[]
An array of retrieved items. Will be an empty array if no items were found.

unprocessedKeys?: Array<Partial<Schema>>
If present, contains the keys that were not processed due to provisioned throughput limits. These keys can be used in a subsequent batch get request to retry retrieval.

responseMetadata?: ResponseMetadata
Metadata about the response from DynamoDB.

consumedCapacity?: ConsumedCapacity
Information about the capacity units consumed by the operation, if requested via the returnConsumedCapacity config.

The BatchProjectedGet command expects the following input config:

{
  keys: Array<Partial<Schema>>;
  projection: string[];
  projectionSchema: ZodObject;
  consistent?: boolean;
  skipValidation?: boolean;
  timeoutMs?: number;
  abortController?: AbortController;
  returnConsumedCapacity?: ReturnConsumedCapacity;
}
keys (required): Array<Partial<Schema>>
An array of primary keys for the items to retrieve. Each key should contain the attributes that make up the primary key. If using computed primary keys, only include the attributes used by your key builder functions.

projection (required): string[]
An array of attribute names to include in the returned items.

projectionSchema (required): ZodObject
A Zod schema defining the shape of the projected items.

consistent?: boolean
If set to true, DynamoDB will ensure a strongly consistent read for all items. Defaults to false.

skipValidation?: boolean
If set to true, schema validation is bypassed entirely. Defaults to false.

timeoutMs?: number
Number of milliseconds to wait before the operation times out and auto-cancels.

abortController?: AbortController
If you need to abort the command's operation, use the abort controller to signal cancellation.

returnConsumedCapacity?: ReturnConsumedCapacity
Determines the level of detail about provisioned throughput consumption that is returned in the response. Valid values are TOTAL, INDEXES, and NONE.

The BatchProjectedGet command returns the following result:

{
  items: ProjectionSchema[];
  unprocessedKeys?: Array<Partial<ProjectionSchema>>;
  responseMetadata?: ResponseMetadata;
  consumedCapacity?: ConsumedCapacity;
}
items: ProjectionSchema[]
An array of retrieved items. Will be an empty array if no items were found. Unlike the standard BatchGet, items are typed according to the provided projection schema.

unprocessedKeys?: Array<Partial<ProjectionSchema>>
If present, contains the keys that were not processed due to provisioned throughput limits. These keys can be used in a subsequent batch get request to retry retrieval.

responseMetadata?: ResponseMetadata
Metadata about the response from DynamoDB.

consumedCapacity?: ConsumedCapacity
Information about the capacity units consumed by the operation, if requested via the returnConsumedCapacity config.

The BatchWrite command expects the following input config:

{
  items?: Array<Schema>;
  deletes?: Array<Partial<Schema>>;
  returnItemCollectionMetrics?: ReturnItemCollectionMetrics;
  skipValidation?: boolean;
  timeoutMs?: number;
  abortController?: AbortController;
  returnConsumedCapacity?: ReturnConsumedCapacity;
}

At least one of items or deletes must be provided.

items?: Array<Schema>
An array of items to put (create or replace) in the table.

deletes?: Array<Partial<Schema>>
An array of primary keys for items to delete from the table. Each key should contain the attributes that make up the primary key. If using computed primary keys, only include the attributes used by your key builder functions.

returnItemCollectionMetrics?: ReturnItemCollectionMetrics
Determines whether item collection metrics are returned. Valid values are SIZE and NONE.

skipValidation?: boolean
If set to true, schema validation is bypassed entirely. Defaults to false.

timeoutMs?: number
Number of milliseconds to wait before the operation times out and auto-cancels.

abortController?: AbortController
If you need to abort the command's operation, use the abort controller to signal cancellation.

returnConsumedCapacity?: ReturnConsumedCapacity
Determines the level of detail about provisioned throughput consumption that is returned in the response. Valid values are TOTAL, INDEXES, and NONE.

The BatchWrite command returns the following result:

{
  unprocessedPuts?: Array<Schema>;
  unprocessedDeletes?: Array<Partial<Schema>>;
  itemCollectionMetrics?: ItemCollectionMetrics;
  responseMetadata?: ResponseMetadata;
  consumedCapacity?: ConsumedCapacity;
}
unprocessedPuts?: Array<Schema>
If present, contains the put items that were not processed due to provisioned throughput limits. These items can be used in a subsequent batch write request to retry the put operations.

unprocessedDeletes?: Array<Partial<Schema>>
If present, contains the delete keys that were not processed due to provisioned throughput limits. These keys can be used in a subsequent batch write request to retry the delete operations.

itemCollectionMetrics?: ItemCollectionMetrics
Information about item collection metrics, if requested via returnItemCollectionMetrics.

responseMetadata?: ResponseMetadata
Metadata about the response from DynamoDB.

consumedCapacity?: ConsumedCapacity
Information about the capacity units consumed by the operation, if requested via the returnConsumedCapacity config.

The TableBatchWrite command expects the following input config:

{
  writes: PreparedBatchWrite[];
  returnItemCollectionMetrics?: ReturnItemCollectionMetrics;
  skipValidation?: boolean;
  timeoutMs?: number;
  abortController?: AbortController;
  returnConsumedCapacity?: ReturnConsumedCapacity;
}
writes (required): PreparedBatchWrite[]
An array of prepared batch write groups, each created via entity.prepare(new BatchWrite({ ... })). All entities must belong to the same table.

returnItemCollectionMetrics?: ReturnItemCollectionMetrics
Determines whether item collection metrics are returned. Valid values are SIZE and NONE.

skipValidation?: boolean
If set to true, schema validation is bypassed entirely. Defaults to false.

timeoutMs?: number
Number of milliseconds to wait before the operation times out and auto-cancels.

abortController?: AbortController
If you need to abort the command's operation, use the abort controller to signal cancellation.

returnConsumedCapacity?: ReturnConsumedCapacity
Determines the level of detail about provisioned throughput consumption that is returned in the response. Valid values are TOTAL, INDEXES, and NONE.

The TableBatchWrite command returns the following result:

{
  unprocessedPuts: Array<Schema[] | undefined>; // tuple, one entry per write group
  unprocessedDeletes: Array<Partial<Schema>[] | undefined>; // tuple, one entry per write group
  itemCollectionMetrics?: ItemCollectionMetrics;
  responseMetadata?: ResponseMetadata;
  consumedCapacity?: ConsumedCapacity;
}
unprocessedPuts: Array<Schema[] | undefined>
A tuple (one entry per write group, in input order) of put items that were not processed due to throughput limits. undefined for a given group means no unprocessed puts for that entity.

unprocessedDeletes: Array<Partial<Schema>[] | undefined>
A tuple (one entry per write group, in input order) of delete keys that were not processed due to throughput limits. undefined for a given group means no unprocessed deletes for that entity.

itemCollectionMetrics?: ItemCollectionMetrics
Information about item collection metrics, if requested via returnItemCollectionMetrics.

responseMetadata?: ResponseMetadata
Metadata about the response from DynamoDB.

consumedCapacity?: ConsumedCapacity
Information about the capacity units consumed by the operation, if requested via the returnConsumedCapacity config.

The TableBatchGet command expects the following input config:

{
  gets: PreparedBatchGet[];
  consistent?: boolean;
  skipValidation?: boolean;
  timeoutMs?: number;
  abortController?: AbortController;
  returnConsumedCapacity?: ReturnConsumedCapacity;
}
gets (required): PreparedBatchGet[]
An array of prepared batch get groups, each created via entity.prepare(new BatchGet({ ... })). All entities must belong to the same table.

consistent?: boolean
When set, overrides the consistent setting on all individual entity groups: true forces strongly consistent reads for the entire request, while false forces eventually consistent reads even if a group sets consistent: true. When omitted, falls back to per-group logic: if any group sets consistent: true, the entire request uses ConsistentRead: true.

skipValidation?: boolean
If set to true, schema validation is bypassed entirely. Defaults to false.

timeoutMs?: number
Number of milliseconds to wait before the operation times out and auto-cancels.

abortController?: AbortController
If you need to abort the command's operation, use the abort controller to signal cancellation.

returnConsumedCapacity?: ReturnConsumedCapacity
Determines the level of detail about provisioned throughput consumption that is returned in the response. Valid values are TOTAL, INDEXES, and NONE.

The TableBatchGet command returns the following result:

{
  items: Array<Schema[]>; // tuple, one entry per get group
  unprocessedKeys: Array<Partial<Schema>[] | undefined>; // tuple, one entry per get group
  responseMetadata?: ResponseMetadata;
  consumedCapacity?: ConsumedCapacity;
}
items: Array<Schema[]>
A tuple (one entry per get group, in input order) of retrieved items for each entity. Each group's items are validated against that entity's schema.

unprocessedKeys: Array<Partial<Schema>[] | undefined>
A tuple (one entry per get group, in input order) of keys that were not processed due to throughput limits. undefined for a given group means no unprocessed keys for that entity. These keys can be used in a subsequent batch get request.

responseMetadata?: ResponseMetadata
Metadata about the response from DynamoDB.

consumedCapacity?: ConsumedCapacity
Information about the capacity units consumed by the operation, if requested via the returnConsumedCapacity config.
The commands used on this page are imported from the following entry points:

import { BatchGet } from 'dynamo-document-builder/commands/batch-get';
import { BatchProjectedGet } from 'dynamo-document-builder/commands/batch-projected-get';
import { BatchWrite } from 'dynamo-document-builder/commands/batch-write';
import { TableBatchWrite } from 'dynamo-document-builder/commands/table-batch-write';
import { TableBatchGet } from 'dynamo-document-builder/commands/table-batch-get';