# TypeScript SDK

Installation, configuration, and API reference for the Ledger TypeScript SDK.
## Installation

```sh
pnpm add @ontopix/ledger
```
The package is published to Ontopix's private CodeArtifact registry. Configure your registry before installing — see your team's CodeArtifact setup guide.
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `LEDGER_QUEUE_URL` | (required outside sandbox) | SQS FIFO queue URL |
| `LEDGER_ENV` | `dev` | Environment tag added to every event |
| `LEDGER_AWS_REGION` | `eu-central-1` | AWS region for the SQS client |
| `LEDGER_SANDBOX_MODE` | `false` | `true` writes events to stdout instead of SQS |
| `LEDGER_LOG_LEVEL` | `INFO` | Log level |
`LEDGER_QUEUE_URL` is validated at module load time. If it is missing and sandbox mode is off, the module throws immediately.
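The load-time check described above can be pictured as follows. This is an illustrative sketch, not the SDK's actual source; `validateLedgerConfig` is a hypothetical helper standing in for logic the module runs on import.

```typescript
// Hypothetical sketch of the SDK's load-time validation (not the real source).
function validateLedgerConfig(
  env: Record<string, string | undefined>
): { sandbox: boolean; queueUrl?: string } {
  const sandbox = env.LEDGER_SANDBOX_MODE === "true";
  const queueUrl = env.LEDGER_QUEUE_URL;
  if (!sandbox && !queueUrl) {
    // Fail fast: a misconfigured producer should crash at import, not at first event.
    throw new Error("LEDGER_QUEUE_URL is required when LEDGER_SANDBOX_MODE is not 'true'");
  }
  return { sandbox, queueUrl };
}
```

Failing at import rather than at first send means a misconfigured service never silently loses billing events.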
## Sandbox Mode

Set `LEDGER_SANDBOX_MODE=true` for local development. Events are printed to stdout as JSON — no SQS or AWS credentials required.

```sh
LEDGER_SANDBOX_MODE=true LEDGER_ENV=dev node my_script.js
```
## API Reference

### `track()` — async

```ts
import { track } from "@ontopix/ledger";

await track({
  service: "audit-service",
  operation: "transcript",
  units: 1,
  unitType: "requests",
  vendor: "elevenlabs",
  model: "scribe_v2",
  tenant_id: tenantId,
  job_id: jobId,
});
```
Signature:

```ts
interface TrackParams {
  service: string;
  operation: string;
  units: number;
  unitType: string;
  timestamp?: Date;
  idempotencyKey?: string;
  /** Extra dimensions — flat keys, validated against reserved names. */
  [key: string]: string | number | Date | undefined;
}

async function track(params: TrackParams): Promise<void>;
```
- `service`, `operation`, `units`, and `unitType` are required.
- `timestamp` defaults to `new Date()`.
- `idempotencyKey` defaults to a random UUID.
- Any additional properties on the params object become event dimensions (e.g. `vendor`, `tenant_id`). Values are coerced to strings.
- Never throws. Exceptions are caught and logged — a billing write failure will never propagate to your business logic.
Note: the TypeScript SDK uses camelCase for SDK parameters (`unitType`, `idempotencyKey`) but the wire format uses snake_case (`unit_type`, `idempotency_key`). Dimension keys are passed through as-is — use snake_case for consistency with the data model.
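The never-throws guarantee can be pictured as a catch-all wrapper around the actual send. This is a hedged sketch of the pattern, not the SDK's real implementation; `fireAndForget`, `AsyncSend`, and the logger parameter are illustrative names.

```typescript
// Illustrative sketch of a fire-and-forget contract (not the SDK source):
// every failure is caught and handed to a logger instead of propagating.
type AsyncSend = (event: Record<string, unknown>) => Promise<void>;

function fireAndForget(send: AsyncSend, logError: (err: unknown) => void): AsyncSend {
  return async (event) => {
    try {
      await send(event);
    } catch (err) {
      logError(err); // a billing write failure never reaches business logic
    }
  };
}
```

The trade-off is deliberate: losing one usage event is cheaper than failing a customer request, which is also why reserved-name violations surface only in logs.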
### `record()` — decorator factory

For functions where the entire execution is one billable unit.

```ts
import { record } from "@ontopix/ledger";

const aggregate = record({
  service: "stats-service",
  operation: "aggregate",
  unitType: "api_call",
  dimensionsFrom: ["tenant_id"],
})(async function aggregate(tenantId: string, data: unknown[]) {
  // ...
});
```
Signature:

```ts
interface RecordOptions {
  service: string;
  operation: string;
  unitType: string;
  units?: number;            // default: 1.0
  dimensionsFrom?: string[]; // extract dimensions from function arguments by position
  [key: string]: unknown;    // static dimensions
}

function record(options: RecordOptions): <T>(fn: T) => T;
```
- `dimensionsFrom` extracts values from function arguments by position: the i-th name in the array labels the i-th argument.
- Any extra properties on `options` are passed as static dimensions on every call.
- `track()` is called after the wrapped function resolves successfully.
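The semantics above can be sketched as a small higher-order function. This is an illustrative reimplementation under stated assumptions, not the SDK's source: `makeRecord` and `sink` are hypothetical names, with `sink` standing in for the internal call to `track()`.

```typescript
// Hypothetical sketch of a record()-style wrapper (not the SDK source).
type Sink = (event: Record<string, unknown>) => Promise<void>;

interface RecordSketchOptions {
  service: string;
  operation: string;
  unitType: string;
  units?: number;
  dimensionsFrom?: string[];
  [key: string]: unknown;
}

function makeRecord(sink: Sink) {
  return (options: RecordSketchOptions) =>
    <A extends unknown[], R>(fn: (...args: A) => Promise<R>) =>
      async (...args: A): Promise<R> => {
        const result = await fn(...args); // if fn throws, tracking is skipped entirely
        const { dimensionsFrom = [], units = 1, ...rest } = options;
        // the i-th name in dimensionsFrom labels the i-th positional argument
        const dims = Object.fromEntries(dimensionsFrom.map((name, i) => [name, args[i]]));
        await sink({ ...rest, units, ...dims });
        return result;
      };
}
```

Note that tracking runs only after the awaited function resolves, which matches the documented "on success" behavior.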
### `recording()` — manual tracking

Use when units are only known after the work completes.

```ts
import { recording } from "@ontopix/ledger";

const rec = recording({
  service: "audit-service",
  operation: "enrich",
  unitType: "tokens",
  vendor: "openai",
  tenant_id: tenantId,
});

const result = await openaiClient.chat.completions.create({ ... });
rec.units = result.usage.total_tokens;
await rec.done();
```
Signature:

```ts
interface RecordingOptions {
  service: string;
  operation: string;
  unitType: string;
  units?: number;         // default: 0.0
  [key: string]: unknown; // dimensions
}

interface RecordingContext {
  units: number;          // read/write — set before calling done()
  done(): Promise<void>;  // sends the event via track()
}

function recording(options: RecordingOptions): RecordingContext;
```
- Set `rec.units` during or after your operation.
- Call `rec.done()` only on success. If the operation fails, simply skip `done()` — no event is recorded.
## End-to-End Example: Audit Pipeline

A pipeline that transcribes audio, enriches the transcript, audits it, and stores results — all tracked per `workspace_id`.
```ts
import { track, record } from "@ontopix/ledger";

const SERVICE = "audit-service";

async function runPipeline(audioUrl: string, workspaceId: string, jobId: string) {
  // ── Step 1: Transcribe audio (ElevenLabs) ────────────────────────
  // Cost is known upfront: 1 request. Use track().
  const transcript = await elevenlabsClient.transcribe(audioUrl, { model: "scribe_v1" });
  await track({
    service: SERVICE,
    operation: "transcribe",
    units: 1,
    unitType: "requests",
    vendor: "elevenlabs",
    model: "scribe_v1",
    workspace_id: workspaceId,
    job_id: jobId,
  });

  // ── Step 2: Enrich transcript (OpenAI gpt-5-mini) ────────────────
  // Each token type is a separate resource consumed at a different cost.
  // Emit one event per unitType so each can be queried and aggregated
  // independently in Timestream (e.g. SUM(units) WHERE unit_type = 'input_tokens').
  const enrichment = await openaiClient.chat.completions.create({
    model: "gpt-5-mini",
    messages: [{ role: "user", content: `Enrich: ${transcript.text}` }],
  });
  const enrichDims = {
    service: SERVICE, operation: "enrich", vendor: "openai",
    model: "gpt-5-mini", workspace_id: workspaceId, job_id: jobId,
  } as const;
  const eu = enrichment.usage;
  await track({ units: eu.prompt_tokens, unitType: "input_tokens", ...enrichDims });
  await track({ units: eu.completion_tokens, unitType: "output_tokens", ...enrichDims });
  await track({ units: eu.prompt_tokens_details?.cached_tokens ?? 0, unitType: "input_cached_tokens", ...enrichDims });

  // ── Step 3: Audit the enriched transcript (OpenAI gpt-5) ─────────
  // Same pattern — one event per token type.
  const audit = await openaiClient.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: `Audit: ${enrichment.choices[0].message.content}` }],
  });
  const auditDims = {
    service: SERVICE, operation: "audit", vendor: "openai",
    model: "gpt-5", workspace_id: workspaceId, job_id: jobId,
  } as const;
  const au = audit.usage;
  await track({ units: au.prompt_tokens, unitType: "input_tokens", ...auditDims });
  await track({ units: au.completion_tokens, unitType: "output_tokens", ...auditDims });
  await track({ units: au.prompt_tokens_details?.cached_tokens ?? 0, unitType: "input_cached_tokens", ...auditDims });

  // ── Step 4: Store results ─────────────────────────────────────────
  // Cost is fixed: 1 write. The decorator handles tracking on success.
  return await storeAuditResults(workspaceId, audit);
}

// record() wraps the function — tracking fires automatically on success.
const storeAuditResults = record({
  service: SERVICE,
  operation: "aggregate",
  unitType: "writes",
  dimensionsFrom: ["workspace_id"],
})(async function storeAuditResults(workspaceId: string, auditResponse: unknown) {
  // ... db write logic ...
  return { status: "stored", workspace_id: workspaceId };
});
```
This pipeline produces 6 Ledger events for the two LLM steps (3 token types each), plus 1 for transcription and 1 for the final store, 8 total per run. All carry the same `workspace_id` dimension for attribution, and every event except the store also carries `job_id` for per-job traceability.
## Idempotency

Every event carries an `idempotencyKey` used as the SQS `MessageDeduplicationId`. By default, a random UUID is generated per call.

For deterministic deduplication (e.g. retries), provide your own key:
```ts
await track({
  service: "audit-service",
  operation: "transcript",
  units: 1,
  unitType: "requests",
  idempotencyKey: `${jobId}:transcript`,
  vendor: "elevenlabs",
  job_id: jobId,
});
```
SQS FIFO deduplicates messages with the same key within a 5-minute window.
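Any string that is stable across retries of the same logical work will do as a key. One possible helper (hypothetical, not part of the SDK) hashes the identifying parts so the key stays short and uniform even when job IDs are long:

```typescript
// Illustrative helper (not part of the SDK): derive a stable key from the
// pieces that identify one logical billing event.
import { createHash } from "node:crypto";

function stableIdempotencyKey(jobId: string, operation: string, unitType: string): string {
  return createHash("sha256").update(`${jobId}:${operation}:${unitType}`).digest("hex");
}
```

Retries of the same job, operation, and unit type then produce the same key, so SQS FIFO drops the duplicates within its deduplication window.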
## Reserved Dimension Names

The following keys cannot be used as dimension properties:

`service`, `operation`, `units`, `unit_type`, `timestamp`, `environment`, `idempotency_key`, `schema_version`

Passing a reserved name causes the event to be silently dropped: the error is caught and logged by the fire-and-forget handler rather than thrown.
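Because the drop is silent, a dimension typo can cost you events without any visible failure. A small client-side guard (illustrative, not part of the SDK) can surface reserved keys loudly in development before the call ever reaches the fire-and-forget handler:

```typescript
// Illustrative pre-flight check (not part of the SDK): throw on reserved
// dimension names instead of discovering dropped events later.
const RESERVED_DIMENSIONS = new Set([
  "service", "operation", "units", "unit_type", "timestamp",
  "environment", "idempotency_key", "schema_version",
]);

function assertNoReservedDimensions(dims: Record<string, unknown>): void {
  for (const key of Object.keys(dims)) {
    if (RESERVED_DIMENSIONS.has(key)) {
      throw new Error(`"${key}" is a reserved Ledger dimension name`);
    }
  }
}
```

Run it against your extra dimension properties in tests or a development-only code path; in production you may prefer to log rather than throw, matching the SDK's own never-throw stance.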