Batching
Use the BatchGet and BatchWrite commands to perform batch read and write operations in DynamoDB. These are the Document Builder commands for the BatchGetItem and BatchWriteItem operations.
Batch Reads
The BatchGet command enables you to retrieve multiple items by primary key in a single operation:
```ts
const batchGet = new BatchGet({
  keys: [
    { userId: '123', todoId: '456' },
    { userId: '789', todoId: '101' },
  ],
});

const { items } = await todoEntity.send(batchGet);
console.log(items);
```

Consistent Reads
If you need the read to be strongly consistent for all items, you can set the consistent parameter to true:
```ts
const batchGet = new BatchGet({
  keys: [
    { userId: '123', todoId: '456' },
    { userId: '789', todoId: '101' },
  ],
  consistent: true,
});

const { items } = await todoEntity.send(batchGet);
```

Batch Projected Gets
Use the BatchProjectedGet command when you only need specific attributes from multiple items.
Because the returned items are a subset of your entity schema, batch projected gets require you to provide a Zod schema defining the shape of the projected items.
```ts
const batchProjectedGet = new BatchProjectedGet({
  keys: [
    { userId: '123', todoId: '456' },
    { userId: '789', todoId: '101' },
  ],
  projection: ['title', 'completed'],
  projectionSchema: z.object({
    title: z.string(),
    completed: z.boolean(),
  }),
});

const { items } = await todoEntity.send(batchProjectedGet);
// items is typed as Array<{ title: string; completed: boolean; }>
console.log(items);
```

Batch Writes
The BatchWrite command enables you to put (create or replace) and/or delete multiple items in a single operation:
```ts
const batchWrite = new BatchWrite({
  items: [
    { userId: '123', todoId: '456', title: 'Take out the trash', completed: false },
    { userId: '789', todoId: '101', title: 'Buy groceries', completed: true },
  ],
  deletes: [
    { userId: '111', todoId: '222' },
  ],
});
```

Unprocessed Items
Both BatchGet and BatchWrite commands may return unprocessed keys or items if the operation exceeds provisioned throughput limits. You can retry these unprocessed items in subsequent batch operations.
BatchGet will return unprocessedKeys, while BatchWrite will return unprocessedPuts and unprocessedDeletes.
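The retry loop itself is left to you. A minimal generic sketch might look like the following; `runBatch` is a hypothetical callback standing in for one round of `entity.send(...)` that returns whatever came back unprocessed, and the backoff constants are illustrative, not prescribed by the library:

```ts
// Retry a batch operation until nothing is left unprocessed, with
// exponential backoff and jitter between attempts. Returns whatever
// is still unprocessed once maxAttempts is exhausted.
async function retryUnprocessed<K>(
  keys: K[],
  runBatch: (keys: K[]) => Promise<K[]>,
  maxAttempts = 5,
): Promise<K[]> {
  let pending = keys;
  for (let attempt = 0; attempt < maxAttempts && pending.length > 0; attempt++) {
    if (attempt > 0) {
      // Exponential backoff with jitter, capped at 2 seconds
      const delay = Math.min(100 * 2 ** attempt, 2_000) * Math.random();
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
    pending = await runBatch(pending);
  }
  return pending; // keys still unprocessed after all attempts
}
```

In practice `runBatch` would send a BatchGet and return its unprocessedKeys, or send a BatchWrite and return its unprocessedPuts/unprocessedDeletes.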
Multi-Entity Batching
Use the TableBatchWrite and TableBatchGet commands to perform batch operations across multiple entity types in a single DynamoDB request. These are the Document Builder table-level commands executed via table.send().
Preparing Entities
Before passing entity operations to a table-level command, use entity.prepare() to bind each batch command to its entity. This returns a prepared group that carries the entity’s schema, key builders, and the requested operation.
```ts
// Prepare a batch write for users
const userWrites = userEntity.prepare(new BatchWrite({
  items: [
    { userId: 'u1', name: 'Alice', email: 'alice@example.com' },
    { userId: 'u2', name: 'Bob', email: 'bob@example.com' },
  ],
  deletes: [
    { userId: 'u3', email: 'charlie@example.com' },
  ],
}));

// Prepare a batch get for orders
const orderGets = orderEntity.prepare(new BatchGet({
  keys: [{ orderId: 'o1' }, { orderId: 'o2' }],
}));
```

Multi-Entity Batch Writes
Pass prepared write groups to TableBatchWrite and execute via table.send():
```ts
const { unprocessedPuts, unprocessedDeletes } = await myTable.send(
  new TableBatchWrite({
    writes: [
      userEntity.prepare(new BatchWrite({
        items: [
          { userId: 'u1', name: 'Alice', email: 'alice@example.com' },
          { userId: 'u2', name: 'Bob', email: 'bob@example.com' },
        ],
      })),
      orderEntity.prepare(new BatchWrite({
        items: [{ orderId: 'o1', userId: 'u1', total: 99.99, status: 'pending' }],
        deletes: [{ orderId: 'o0' }],
      })),
    ],
  }),
);
```

The result fields unprocessedPuts and unprocessedDeletes are typed tuples that match the order of the input writes array. This means each index is typed to its corresponding entity:
```ts
const [userUnprocessedPuts, orderUnprocessedPuts] = unprocessedPuts;
// userUnprocessedPuts: User[] | undefined
// orderUnprocessedPuts: Order[] | undefined

const [userUnprocessedDeletes, orderUnprocessedDeletes] = unprocessedDeletes;
// userUnprocessedDeletes: Partial<User>[] | undefined
// orderUnprocessedDeletes: Partial<Order>[] | undefined
```

Multi-Entity Batch Gets
Pass prepared get groups to TableBatchGet and execute via table.send():
```ts
const { items, unprocessedKeys } = await myTable.send(
  new TableBatchGet({
    gets: [
      userEntity.prepare(new BatchGet({
        keys: [
          { userId: 'u1', email: 'alice@example.com' },
          { userId: 'u2', email: 'bob@example.com' },
        ],
      })),
      orderEntity.prepare(new BatchGet({
        keys: [{ orderId: 'o1' }, { orderId: 'o2' }],
      })),
    ],
  }),
);

const [users, orders] = items;
// users: User[]
// orders: Order[]
```

Consistent Reads
You can request strongly consistent reads either at the command level or on individual entity groups.
Command-level consistent (recommended): set it once on TableBatchGet to apply to the entire request. This completely overrides any consistent setting on individual groups:
```ts
const { items } = await myTable.send(
  new TableBatchGet({
    consistent: true, // Forces ConsistentRead: true for all groups, regardless of group-level settings
    gets: [
      userEntity.prepare(new BatchGet({
        keys: [{ userId: 'u1', email: 'alice@example.com' }],
      })),
      orderEntity.prepare(new BatchGet({
        keys: [{ orderId: 'o1' }],
      })),
    ],
  }),
);
```

Group-level consistent: when consistent is not set on the command, set it on individual entity groups. Because DynamoDB requires a single consistent-read setting per table in a batch request, if any group requests consistency, the entire batch request will use ConsistentRead: true.

```ts
const { items } = await myTable.send(
  new TableBatchGet({
    gets: [
      userEntity.prepare(new BatchGet({
        keys: [{ userId: 'u1', email: 'alice@example.com' }],
        consistent: true, // Makes the entire batch request consistent
      })),
      orderEntity.prepare(new BatchGet({
        keys: [{ orderId: 'o1' }],
      })),
    ],
  }),
);
```

Table Validation
All entity groups in a TableBatchWrite or TableBatchGet must belong to the same table that table.send() is called on. If any entity references a different table, a DocumentBuilderError is thrown at runtime before any DynamoDB request is made.
```ts
// This will throw a DocumentBuilderError at runtime
await tableA.send(new TableBatchGet({
  gets: [
    entityOnTableA.prepare(new BatchGet({ keys: [...] })),
    entityOnTableB.prepare(new BatchGet({ keys: [...] })), // ❌ Wrong table
  ],
}));
```

Retrying Unprocessed Items
If DynamoDB cannot process all items due to throughput limits, unprocessed items are returned mapped back to original domain objects, not raw DynamoDB key-decorated items. You can pass them directly into a subsequent batch operation for retry.
```ts
const { unprocessedPuts, unprocessedDeletes, unprocessedKeys } = await myTable.send(...);

// Retry unprocessed puts from index 0 (users)
const [userUnprocessedPuts] = unprocessedPuts;
if (userUnprocessedPuts?.length) {
  await sleep(exponentialBackoff());
  await myTable.send(new TableBatchWrite({
    writes: [
      userEntity.prepare(new BatchWrite({ items: userUnprocessedPuts })),
    ],
  }));
}
```

Batch Get Command Config
The BatchGet command expects the following input config:
```ts
{
  keys: Array<Partial<Schema>>;
  consistent?: boolean;
  skipValidation?: boolean;
  timeoutMs?: number;
  abortController?: AbortController;
  returnConsumedCapacity?: ReturnConsumedCapacity;
}
```

| Property | Type | Description |
|---|---|---|
| keys (required) | Array<Partial<Schema>> | An array of primary keys for the items to retrieve. Each key should contain the attributes that make up the primary key. If using computed primary keys, only include the attributes used by your key builder functions. |
| consistent? | boolean | If set to true, DynamoDB will ensure a strongly consistent read for all items. Defaults to false. |
| skipValidation? | boolean | If set to true, schema validation is bypassed entirely. Defaults to false. |
| timeoutMs? | number | Number of milliseconds to wait before the operation times out and auto-cancels. |
| abortController? | AbortController | If you need to abort the command's operation, you can use the abort controller to signal cancellation. |
| returnConsumedCapacity? | ReturnConsumedCapacity | Determines the level of detail about provisioned throughput consumption that is returned in the response. Valid values are TOTAL, INDEXES, and NONE. |
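The timeoutMs and abortController options both resolve to cancelling the in-flight call. The sketch below illustrates the underlying AbortController pattern with a hypothetical fakeSend standing in for entity.send(batchGet); in real usage you would instead pass the controller in the command config (abortController: controller) or just set timeoutMs:

```ts
// Stand-in for entity.send(batchGet): resolves slowly unless aborted first.
function fakeSend(signal: AbortSignal): Promise<string> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => resolve('done'), 5_000); // simulated slow batch
    signal.addEventListener('abort', () => {
      clearTimeout(timer);
      reject(new Error('aborted'));
    });
  });
}

const controller = new AbortController();
setTimeout(() => controller.abort(), 50); // cancel after 50 ms, akin to timeoutMs: 50

fakeSend(controller.signal).catch((err: Error) => {
  console.log(err.message); // 'aborted'
});
```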
Batch Get Command Result
The BatchGet command returns the following result:
```ts
{
  items: Schema[];
  unprocessedKeys?: Array<Partial<Schema>>;
  responseMetadata?: ResponseMetadata;
  consumedCapacity?: ConsumedCapacity;
}
```

| Property | Type | Description |
|---|---|---|
| items | Schema[] | An array of retrieved items. Will be an empty array if no items were found. |
| unprocessedKeys? | Array<Partial<Schema>> | If present, contains the keys that were not processed due to provisioned throughput limits. These keys can be used in a subsequent batch get request to retry retrieval. |
| responseMetadata? | ResponseMetadata | Metadata about the response from DynamoDB. |
| consumedCapacity? | ConsumedCapacity | Information about the capacity units consumed by the operation, if requested via the returnConsumedCapacity config. |
Batch Projected Get Command Config
The BatchProjectedGet command expects the following input config:
```ts
{
  keys: Array<Partial<Schema>>;
  projection: string[];
  projectionSchema: ZodObject;
  consistent?: boolean;
  skipValidation?: boolean;
  timeoutMs?: number;
  abortController?: AbortController;
  returnConsumedCapacity?: ReturnConsumedCapacity;
}
```

| Property | Type | Description |
|---|---|---|
| keys (required) | Array<Partial<Schema>> | An array of primary keys for the items to retrieve. Each key should contain the attributes that make up the primary key. If using computed primary keys, only include the attributes used by your key builder functions. |
| projection (required) | string[] | An array of attribute names to include in the returned items. |
| projectionSchema (required) | ZodObject | A Zod schema defining the shape of the projected items. |
| consistent? | boolean | If set to true, DynamoDB will ensure a strongly consistent read for all items. Defaults to false. |
| skipValidation? | boolean | If set to true, schema validation is bypassed entirely. Defaults to false. |
| timeoutMs? | number | Number of milliseconds to wait before the operation times out and auto-cancels. |
| abortController? | AbortController | If you need to abort the command's operation, you can use the abort controller to signal cancellation. |
| returnConsumedCapacity? | ReturnConsumedCapacity | Determines the level of detail about provisioned throughput consumption that is returned in the response. Valid values are TOTAL, INDEXES, and NONE. |
Batch Projected Get Command Result
The BatchProjectedGet command returns the following result:
```ts
{
  items: ProjectionSchema[];
  unprocessedKeys?: Array<Partial<ProjectionSchema>>;
  responseMetadata?: ResponseMetadata;
  consumedCapacity?: ConsumedCapacity;
}
```

| Property | Type | Description |
|---|---|---|
| items | ProjectionSchema[] | An array of retrieved items. Will be an empty array if no items were found. Unlike the standard BatchGet command, items are typed and validated against the provided projectionSchema. |
| unprocessedKeys? | Array<Partial<ProjectionSchema>> | If present, contains the keys that were not processed due to provisioned throughput limits. These keys can be used in a subsequent batch get request to retry retrieval. |
| responseMetadata? | ResponseMetadata | Metadata about the response from DynamoDB. |
| consumedCapacity? | ConsumedCapacity | Information about the capacity units consumed by the operation, if requested via the returnConsumedCapacity config. |
Batch Write Command Config
The BatchWrite command expects the following input config:
```ts
{
  items?: Array<Schema>;
  deletes?: Array<Partial<Schema>>;
  returnItemCollectionMetrics?: ReturnItemCollectionMetrics;
  skipValidation?: boolean;
  timeoutMs?: number;
  abortController?: AbortController;
  returnConsumedCapacity?: ReturnConsumedCapacity;
}
```

At least one of items or deletes must be provided.

| Property | Type | Description |
|---|---|---|
| items | Array<Schema> | An array of items to put (create or replace) in the table. |
| deletes | Array<Partial<Schema>> | An array of primary keys for items to delete from the table. Each key should contain the attributes that make up the primary key. If using computed primary keys, only include the attributes used by your key builder functions. |
| returnItemCollectionMetrics | ReturnItemCollectionMetrics | Determines whether item collection metrics are returned. Valid values are SIZE and NONE. |
| skipValidation? | boolean | If set to true, schema validation is bypassed entirely. Defaults to false. |
| timeoutMs? | number | Number of milliseconds to wait before the operation times out and auto-cancels. |
| abortController? | AbortController | If you need to abort the command's operation, you can use the abort controller to signal cancellation. |
| returnConsumedCapacity? | ReturnConsumedCapacity | Determines the level of detail about provisioned throughput consumption that is returned in the response. Valid values are TOTAL, INDEXES, and NONE. |
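As an illustration of the diagnostic options above, metrics and capacity reporting can be requested together. This is a sketch only; the option names come from the config table, the string values assume the usual DynamoDB enum literals ('SIZE'/'NONE', 'TOTAL'/'INDEXES'/'NONE'), and the item shape reuses the todo entity from earlier examples:

```ts
const batchWrite = new BatchWrite({
  items: [
    { userId: '123', todoId: '456', title: 'Take out the trash', completed: false },
  ],
  returnItemCollectionMetrics: 'SIZE', // request item collection size metrics
  returnConsumedCapacity: 'TOTAL',     // request total consumed capacity
});
```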
Batch Write Command Result
The BatchWrite command returns the following result:
```ts
{
  unprocessedPuts?: Array<Schema>;
  unprocessedDeletes?: Array<Partial<Schema>>;
  itemCollectionMetrics?: ItemCollectionMetrics;
  responseMetadata?: ResponseMetadata;
  consumedCapacity?: ConsumedCapacity;
}
```

| Property | Type | Description |
|---|---|---|
| unprocessedPuts? | Array<Schema> | If present, contains the put items that were not processed due to provisioned throughput limits. These items can be used in a subsequent batch write request to retry the put operations. |
| unprocessedDeletes? | Array<Partial<Schema>> | If present, contains the delete keys that were not processed due to provisioned throughput limits. These keys can be used in a subsequent batch write request to retry the delete operations. |
| itemCollectionMetrics? | ItemCollectionMetrics | Information about item collection metrics, if requested via the returnItemCollectionMetrics config. |
| responseMetadata? | ResponseMetadata | Metadata about the response from DynamoDB. |
| consumedCapacity? | ConsumedCapacity | Information about the capacity units consumed by the operation, if requested via the returnConsumedCapacity config. |
TableBatchWrite Command Config
The TableBatchWrite command expects the following input config:
```ts
{
  writes: PreparedBatchWrite[];
  returnItemCollectionMetrics?: ReturnItemCollectionMetrics;
  skipValidation?: boolean;
  timeoutMs?: number;
  abortController?: AbortController;
  returnConsumedCapacity?: ReturnConsumedCapacity;
}
```

| Property | Type | Description |
|---|---|---|
| writes (required) | PreparedBatchWrite[] | An array of prepared batch write groups, each created via entity.prepare(new BatchWrite(...)). |
| returnItemCollectionMetrics | ReturnItemCollectionMetrics | Determines whether item collection metrics are returned. Valid values are SIZE and NONE. |
| skipValidation? | boolean | If set to true, schema validation is bypassed entirely. Defaults to false. |
| timeoutMs? | number | Number of milliseconds to wait before the operation times out and auto-cancels. |
| abortController? | AbortController | If you need to abort the command's operation, you can use the abort controller to signal cancellation. |
| returnConsumedCapacity? | ReturnConsumedCapacity | Determines the level of detail about provisioned throughput consumption that is returned in the response. Valid values are TOTAL, INDEXES, and NONE. |
TableBatchWrite Command Result
The TableBatchWrite command returns the following result:
```ts
{
  unprocessedPuts: Array<Schema[] | undefined>; // tuple, one entry per write group
  unprocessedDeletes: Array<Partial<Schema>[] | undefined>; // tuple, one entry per write group
  itemCollectionMetrics?: ItemCollectionMetrics;
  responseMetadata?: ResponseMetadata;
  consumedCapacity?: ConsumedCapacity;
}
```

| Property | Type | Description |
|---|---|---|
| unprocessedPuts | Array<Schema[] \| undefined> | A tuple (one entry per write group in input order) of put items that were not processed due to throughput limits. |
| unprocessedDeletes | Array<Partial<Schema>[] \| undefined> | A tuple (one entry per write group in input order) of delete keys that were not processed due to throughput limits. |
| itemCollectionMetrics? | ItemCollectionMetrics | Information about item collection metrics, if requested via the returnItemCollectionMetrics config. |
| responseMetadata? | ResponseMetadata | Metadata about the response from DynamoDB. |
| consumedCapacity? | ConsumedCapacity | Information about the capacity units consumed by the operation, if requested via the returnConsumedCapacity config. |
TableBatchGet Command Config
The TableBatchGet command expects the following input config:
```ts
{
  gets: PreparedBatchGet[];
  consistent?: boolean;
  skipValidation?: boolean;
  timeoutMs?: number;
  abortController?: AbortController;
  returnConsumedCapacity?: ReturnConsumedCapacity;
}
```

| Property | Type | Description |
|---|---|---|
| gets (required) | PreparedBatchGet[] | An array of prepared batch get groups, each created via entity.prepare(new BatchGet(...)). |
| consistent | boolean | When set, overrides the consistent setting on all individual entity groups and applies to the entire request. |
| skipValidation? | boolean | If set to true, schema validation is bypassed entirely. Defaults to false. |
| timeoutMs? | number | Number of milliseconds to wait before the operation times out and auto-cancels. |
| abortController? | AbortController | If you need to abort the command's operation, you can use the abort controller to signal cancellation. |
| returnConsumedCapacity? | ReturnConsumedCapacity | Determines the level of detail about provisioned throughput consumption that is returned in the response. Valid values are TOTAL, INDEXES, and NONE. |
TableBatchGet Command Result
The TableBatchGet command returns the following result:
```ts
{
  items: Array<Schema[]>; // tuple, one entry per get group
  unprocessedKeys: Array<Partial<Schema>[] | undefined>; // tuple, one entry per get group
  responseMetadata?: ResponseMetadata;
  consumedCapacity?: ConsumedCapacity;
}
```

| Property | Type | Description |
|---|---|---|
| items | Array<Schema[]> | A tuple (one entry per get group in input order) of retrieved items for each entity. Each group’s items are validated against that entity’s schema. |
| unprocessedKeys | Array<Partial<Schema>[] \| undefined> | A tuple (one entry per get group in input order) of keys that were not processed due to throughput limits. |
| responseMetadata? | ResponseMetadata | Metadata about the response from DynamoDB. |
| consumedCapacity? | ConsumedCapacity | Information about the capacity units consumed by the operation, if requested via the returnConsumedCapacity config. |
Tree-shakable Imports
```ts
import { BatchGet } from 'dynamo-document-builder/commands/batch-get';
import { BatchProjectedGet } from 'dynamo-document-builder/commands/batch-projected-get';
import { BatchWrite } from 'dynamo-document-builder/commands/batch-write';
import { TableBatchWrite } from 'dynamo-document-builder/commands/table-batch-write';
import { TableBatchGet } from 'dynamo-document-builder/commands/table-batch-get';
```