# Ledger
Consumption tracking SDK for recording billable work across Ontopix services via SQS FIFO queues.
Ledger is Ontopix's consumption tracking service. It provides a multi-language SDK (Python, TypeScript, Go) for services to explicitly record billable work — vendor API calls, internal operations, or any unit of work with a cost that needs to be attributed to a tenant, client, or product. Events are fire-and-forget: the SDK sends a message to an SQS FIFO queue and never raises, so billing writes cannot fail your business logic.
## Documentation
| Page | Description |
|---|---|
| Python SDK | sync (`track`) + async (`atrack`), `@record` decorator, context managers |
| TypeScript SDK | async `track()`, `record()` decorator, `recording()` helper |
| Go SDK | `ledger.Track()`, `ledger.NewRecording()` pattern |
## How to Think About Tracking
Every vendor call or billable operation in your pipeline should produce a Ledger event. The key decision is which API pattern to use, based on when you know the cost.
Consider an audit pipeline scoped to a `workspace_id`:

| Step | Vendor | Operation | Cost known when? | Pattern | `unit_type` |
|---|---|---|---|---|---|
| Transcribe audio | ElevenLabs `scribe_v1` | `transcribe` | Before the call (1 request) | `track()` × 1 | `requests` |
| Enrich transcript | OpenAI `gpt-5-mini` | `enrich` | After the call (token counts in response) | `track()` × 3 | `input_tokens`, `output_tokens`, `input_cached_tokens` |
| Audit transcript | OpenAI `gpt-5` | `audit` | After the call (token counts in response) | `track()` × 3 | `input_tokens`, `output_tokens`, `input_cached_tokens` |
| Store results | — | `aggregate` | Before the call (1 write) | `@record` / `record()` × 1 | `writes` |
Rules of thumb:

- Cost known upfront (1 request, 1 write) → `track()` — fire and forget.
- Cost known after (tokens, characters) → `track()` after the call, or `recording()` / `arecording()` when all dimensions are known upfront.
- Entire function = one billable unit → `@record` decorator — tracking happens automatically on success.
- Always pass `workspace_id` (or `tenant_id`) as a dimension so consumption can be attributed.
- Different resources = different events — each token type (`input_tokens`, `output_tokens`, `input_cached_tokens`) is a separate `track()` call with its own `unit_type`, because they have different costs and need independent aggregation in Timestream.
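The "different resources = different events" rule can be sketched as follows. This is a minimal illustration using a stand-in `track` function that mimics sandbox mode (printing events as JSON); the real call is `ledger.track`, and the `dimensions` keyword, field names, and token counts here are assumptions — check the Python SDK page for the exact signature.

```python
import json

def track(**event):
    # Stand-in for ledger.track() in sandbox mode: write the event
    # to stdout as JSON instead of sending it to SQS.
    print(json.dumps(event, sort_keys=True))

# Hypothetical token counts read from an LLM response after the call.
usage = {"input_tokens": 1200, "output_tokens": 450, "input_cached_tokens": 300}

# One event per token type: each gets its own unit_type so it can be
# priced and aggregated independently.
for unit_type, units in usage.items():
    track(
        service="audit-pipeline",
        operation="enrich",
        units=units,
        unit_type=unit_type,
        dimensions={"workspace_id": "ws_123", "model": "gpt-5-mini"},
    )
```

Three events, not one event with three counts — downstream aggregation never has to split a mixed record apart.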
Each SDK page includes this pipeline as a complete, runnable example.
## Environment Variables
| Variable | Default | Description |
|---|---|---|
| `LEDGER_QUEUE_URL` | (required outside sandbox) | SQS FIFO queue URL. Validated at import/load time — fails fast if missing. |
| `LEDGER_ENV` | `dev` | Environment tag added to every event. Defaults to `dev` to prevent accidental prod writes. |
| `LEDGER_AWS_REGION` | `eu-central-1` | AWS region for the SQS client. |
| `LEDGER_SANDBOX_MODE` | `false` | When `true`, events are written to stdout as JSON instead of SQS. No AWS credentials needed. |
| `LEDGER_LOG_LEVEL` | `INFO` | Log verbosity (`DEBUG`, `INFO`, `WARNING`, `ERROR`). |
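Outside the sandbox, the variables above might be combined like this — the account ID and queue name in the URL are placeholders, not real values:

```shell
# Placeholder account ID and queue name — substitute your own FIFO queue.
export LEDGER_QUEUE_URL="https://sqs.eu-central-1.amazonaws.com/123456789012/ledger-events.fifo"
export LEDGER_ENV=prod
export LEDGER_AWS_REGION=eu-central-1
export LEDGER_LOG_LEVEL=WARNING
```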
## Sandbox Quickstart
Try Ledger locally without any AWS infrastructure:
```bash
export LEDGER_SANDBOX_MODE=true
export LEDGER_ENV=dev
```

```python
import ledger

ledger.track(service="my-service", operation="test", units=1, unit_type="requests")
# {"service": "my-service", "operation": "test", "units": 1, ...} → printed to stdout
```
Or run the built-in demo (50 simulated events across 3 services):
```bash
task demo
```