How to Track Agentic AI Workflows: Task & Customer Attribution
Learn how to track multi-step AI agent workflows with task_id and customer_id. Group LLM calls, measure total costs per task, and attribute AI spend to customers.
AI agents are changing how we build applications. A single user request might trigger 10, 20, or even 50 LLM calls. Without proper tracking, you have no idea what these workflows cost or which customers are driving your AI bill.
This guide shows you how to track agentic AI workflows using two simple parameters, task_id and customer_id, that transform your AI observability from chaos to clarity.
The Agentic AI Tracking Problem
Traditional API tracking works request-by-request. But AI agents don't work that way:
- Multiple LLM calls per task: An agent might call GPT-4o to plan, GPT-4o-mini to execute subtasks, and Claude to verify
- Variable workflows: Different inputs lead to different numbers of LLM calls
- Nested operations: Agents call other agents, creating deep call hierarchies
- Customer attribution: You need to know which customer triggered which costs
The Solution: task_id and customer_id
Two parameters solve the agentic tracking problem:
task_id
Groups all LLM calls that belong to the same workflow. See total cost, token usage, and call sequence for each task.
customer_id
Attributes AI costs to specific customers. Essential for usage-based billing and understanding customer unit economics.
Implementation: Tracking Agentic Workflows
Step 1: Generate a Task ID
Create a unique ID for each user request or workflow:
import { v4 as uuidv4 } from 'uuid';
// Generate a task ID for each workflow
function startAgentTask(customerId: string) {
  return {
    taskId: uuidv4(),
    customerId,
    startTime: Date.now()
  };
}

Step 2: Pass IDs to Your LLM Client
When wrapping your LLM client, include both identifiers:
import { Orbit } from '@with-orbit/sdk';
import OpenAI from 'openai';
const orbit = new Orbit({ apiKey: process.env.ORBIT_API_KEY });
async function runAgentWorkflow(userMessage: string, customerId: string) {
  const task = startAgentTask(customerId);

  // Create a wrapped client with task context
  const openai = orbit.wrapOpenAI(new OpenAI(), {
    feature: 'ai-agent',
    task_id: task.taskId,
    customer_id: task.customerId
  });

  // All LLM calls now tracked together
  const plan = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: `Plan: ${userMessage}` }]
  });

  const result = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: `Execute: ${plan.choices[0].message.content}` }]
  });

  return result;
}

Step 3: Track Across Multiple Providers
Real agents often use multiple LLM providers. The same pattern works across all of them:
import Anthropic from '@anthropic-ai/sdk';
import { GoogleGenerativeAI } from '@google/generative-ai';

async function multiProviderAgent(userRequest: string, customerId: string) {
  const task = startAgentTask(customerId);

  // OpenAI for planning
  const openai = orbit.wrapOpenAI(new OpenAI(), {
    feature: 'agent-planner',
    task_id: task.taskId,
    customer_id: task.customerId
  });

  // Anthropic for execution
  const anthropic = orbit.wrapAnthropic(new Anthropic(), {
    feature: 'agent-executor',
    task_id: task.taskId,
    customer_id: task.customerId
  });

  // Gemini for verification
  const gemini = orbit.wrapGemini(new GoogleGenerativeAI(key), {
    feature: 'agent-verifier',
    task_id: task.taskId,
    customer_id: task.customerId
  });

  // All calls grouped by task_id, attributed to customer
  const plan = await openai.chat.completions.create({ ... });
  const result = await anthropic.messages.create({ ... });
  const verified = await gemini.generateContent({ ... });

  return verified;
}

What You Can Track
With task_id and customer_id in place, you get visibility into:
Per-Task Metrics
- Total cost per task: Sum of all LLM calls in the workflow
- Token breakdown: Input vs output tokens, by model
- Call sequence: Which LLM calls happened in what order
- Error tracking: Did any step in the workflow fail?
Per-Customer Metrics
- Total AI spend per customer: Enable usage-based billing
- Average cost per request: Understand unit economics
- Usage patterns: Which customers use AI features most
- High-cost customers: Identify accounts that need attention
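To make the aggregation concrete, here is a minimal sketch of rolling exported call records up into per-task and per-customer totals. The CallRecord shape and the totalsBy helper are illustrative, not Orbit types; substitute whatever shape your export or webhook actually returns.

```typescript
// Illustrative record shape -- not an Orbit type
interface CallRecord {
  taskId: string;
  customerId: string;
  costUsd: number;
}

// Sum costs grouped by either identifier
function totalsBy(
  records: CallRecord[],
  key: 'taskId' | 'customerId'
): Map<string, number> {
  const totals = new Map<string, number>();
  for (const r of records) {
    totals.set(r[key], (totals.get(r[key]) ?? 0) + r.costUsd);
  }
  return totals;
}
```

Calling totalsBy(records, 'customerId') gives each customer's total AI spend for the export window; grouping by 'taskId' gives the cost of each workflow run.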
Common Agentic Patterns
Pattern 1: ReAct Agent
The classic Reason + Act pattern with iterative tool use:
async function reactAgent(query: string, customerId: string) {
  const task = startAgentTask(customerId);

  const openai = orbit.wrapOpenAI(new OpenAI(), {
    feature: 'react-agent',
    task_id: task.taskId,
    customer_id: task.customerId
  });

  let context = query;
  const maxIterations = 10;

  for (let i = 0; i < maxIterations; i++) {
    const thought = await openai.chat.completions.create({
      model: 'gpt-4o',
      messages: [{ role: 'user', content: `Reason about: ${context}` }]
    });

    if (thought.choices[0].message.content.includes('FINAL ANSWER')) {
      break;
    }

    // Each iteration tracked as part of same task
    const action = await openai.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: `Act on: ${thought.choices[0].message.content}` }]
    });

    context = action.choices[0].message.content;
  }

  return context;
}

Pattern 2: Multi-Agent System
Multiple specialized agents working together:
async function multiAgentSystem(task: string, customerId: string) {
  const taskContext = startAgentTask(customerId);

  // Researcher agent
  const researcher = orbit.wrapOpenAI(new OpenAI(), {
    feature: 'researcher-agent',
    task_id: taskContext.taskId,
    customer_id: taskContext.customerId
  });

  // Writer agent
  const writer = orbit.wrapAnthropic(new Anthropic(), {
    feature: 'writer-agent',
    task_id: taskContext.taskId,
    customer_id: taskContext.customerId
  });

  // Critic agent
  const critic = orbit.wrapOpenAI(new OpenAI(), {
    feature: 'critic-agent',
    task_id: taskContext.taskId,
    customer_id: taskContext.customerId
  });

  const research = await researcher.chat.completions.create({ ... });
  const draft = await writer.messages.create({ ... });
  const feedback = await critic.chat.completions.create({ ... });
  const final = await writer.messages.create({ ... });

  // All 4 calls grouped under one task_id
  return final;
}

Pattern 3: Document Processing Pipeline
async function processDocument(doc: Document, customerId: string) {
  const task = startAgentTask(customerId);

  const openai = orbit.wrapOpenAI(new OpenAI(), {
    feature: 'doc-processor',
    task_id: task.taskId,
    customer_id: task.customerId
  });

  // Chunk the document
  const chunks = splitIntoChunks(doc, 4000);

  // Process each chunk (all tracked under same task)
  const summaries = await Promise.all(
    chunks.map(chunk =>
      openai.chat.completions.create({
        model: 'gpt-4o-mini',
        messages: [{ role: 'user', content: `Summarize: ${chunk}` }]
      })
    )
  );

  // Final synthesis
  const final = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{
      role: 'user',
      content: `Synthesize these summaries: ${summaries.map(s => s.choices[0].message.content).join('\n')}`
    }]
  });

  // Dashboard shows: 1 task, N+1 LLM calls, total cost
  return final;
}

Usage-Based Billing with customer_id
customer_id enables accurate usage-based billing for AI features:
// In your API endpoint
app.post('/api/ai-task', async (req, res) => {
  const { userId, request } = req.body;

  // Get customer ID from your billing system
  const customerId = await getCustomerId(userId);

  // Run the AI task with customer attribution
  const result = await runAgentWorkflow(request, customerId);

  // Orbit tracks costs per customer automatically
  // Query via API or dashboard for billing
  res.json(result);
});

Best Practices
1. Use Meaningful Task IDs
UUIDs work, but including context can help debugging:
const taskId = `${feature}-${Date.now()}-${uuidv4().slice(0,8)}`;
// Example: "doc-summarizer-1705849200000-a1b2c3d4"

2. Set Task IDs Early
Create the task_id at the start of the request, before any LLM calls. This ensures all calls in the workflow are captured.
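One way to enforce this is a small wrapper that creates the context before running the workflow body. withTask is a hypothetical helper, not part of any SDK; it simply guarantees that no LLM call inside the callback can run before a task_id exists.

```typescript
import { randomUUID } from 'node:crypto';

interface TaskContext {
  taskId: string;
  customerId: string;
}

// Create the context first, then run the workflow inside it,
// so every call in fn inherits the same pair of IDs
async function withTask<T>(
  customerId: string,
  fn: (ctx: TaskContext) => Promise<T>
): Promise<T> {
  const ctx: TaskContext = { taskId: randomUUID(), customerId };
  return fn(ctx);
}
```

A workflow entry point then becomes withTask(customerId, ctx => ...), with ctx.taskId passed to every wrapped client created inside.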
3. Propagate Through Async Boundaries
When using async patterns, ensure task context flows through:
// Pass context explicitly to child functions
async function childOperation(openai: OpenAI, task: { taskId: string; customerId: string }) {
  // Reuse the same wrapped client (or re-wrap with the same IDs)
  // so the child's calls stay grouped under the parent task
}

4. Handle Errors Gracefully
Even failed tasks should be tracked; they still cost tokens:
try {
  await runAgentWorkflow(request, customerId);
} catch (error) {
  // Error is logged, but costs are still tracked
  console.error('Task failed:', error);
}

Track Agentic Workflows with Orbit
Orbit makes agentic workflow tracking simple. Add task_id and customer_id to your LLM calls and get full visibility into multi-step AI workflows.
- Group LLM calls by task_id
- Attribute costs per customer_id
- See call sequences and total costs
- Works with OpenAI, Anthropic, Gemini
Related Articles
How to Track OpenAI API Costs in Your Application
Step-by-step tutorial on tracking OpenAI API costs in production. Monitor GPT-4o usage, track spending by feature, and get real-time cost visibility.
How to Monitor AI API Usage and Billing
Set up AI API monitoring to track your usage and costs. Monitor spending across OpenAI, Anthropic, and Gemini with real-time dashboards.
How to Track OpenAI API Costs by Feature
Learn how to track OpenAI API costs at the feature level, not just totals. Understand which parts of your app are driving spend.