How to Track OpenAI API Costs by Feature
Learn how to track OpenAI API costs at the feature level, not just totals. Understand which parts of your app are driving spend.
You're building AI features. Your OpenAI bill just hit $3,000 this month. Your boss asks: "Which feature is using all this?"
You open the OpenAI dashboard. Total tokens: 45 million. Total cost: $3,000. But which feature? The chatbot? The document analyzer? The code assistant? You have no idea.
This is the reality for most teams shipping AI in production. And it's a problem that gets worse as you scale.
The $10,000 Mystery
I talked to a startup founder last month. They had three AI-powered features in their SaaS product:
- A customer support chatbot
- An email draft generator
- A meeting notes summarizer
Their AI costs grew from $500/month to $10,000/month in six months. When they finally dug into the data (after building custom logging), they discovered something surprising: the meeting notes feature—used by only 5% of their users—was consuming 70% of their AI budget.
Why Vendor Dashboards Fall Short
OpenAI, Anthropic, and Google all provide usage dashboards. They show you:
- Total API calls
- Total tokens consumed
- Total cost
- Usage by model
What they don't show you:
- Cost per feature in your product
- Which features are efficient vs. wasteful
- Cost trends by feature over time
- Error rates by feature
It's like a restaurant knowing their total food costs but having no idea which dishes are profitable.
The Manual Approach (And Why It Hurts)
Many teams try to solve this by adding logging:
```javascript
import OpenAI from 'openai';

const openai = new OpenAI();

// Every API call gets wrapped with logging
async function callOpenAI(prompt, feature) {
  const start = Date.now();
  const response = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: prompt }]
  });
  const latency = Date.now() - start;

  // Log to your database (`db` is whatever client you already use)
  await db.insert({
    feature,
    tokens: response.usage.total_tokens,
    cost: calculateCost(response.usage),
    latency,
    timestamp: new Date()
  });

  return response;
}
```

This works, but it creates new problems:
- Maintenance burden: You're now building and maintaining analytics infrastructure
- Inconsistent implementation: Different developers implement logging differently
- Dashboard gap: You have data, but no easy way to visualize it
- Cost calculation complexity: Pricing changes, models change, math gets complicated
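That last point deserves emphasis: even the `calculateCost` helper hand-waved above has to encode a per-model pricing table and keep it in sync with the provider. A minimal sketch (the prices below are illustrative only; always check the provider's current price list):

```javascript
// Hypothetical per-1M-token price table -- illustrative numbers,
// not the provider's authoritative pricing.
const PRICING = {
  'gpt-4o':      { input: 2.50, output: 10.00 },
  'gpt-4o-mini': { input: 0.15, output: 0.60 }
};

// `usage` is the shape the OpenAI SDK returns on `response.usage`:
// { prompt_tokens, completion_tokens, total_tokens }
function calculateCost(usage, model = 'gpt-4o') {
  const price = PRICING[model];
  if (!price) throw new Error(`No pricing entry for model: ${model}`);
  return (
    (usage.prompt_tokens / 1_000_000) * price.input +
    (usage.completion_tokens / 1_000_000) * price.output
  );
}
```

Every pricing change and every new model means another edit to that table, multiplied across every service that does its own logging.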
A Better Approach: SDK-Based Tracking
The cleanest solution is to use an SDK that wraps your AI client and handles tracking automatically. Here's what it looks like:
```javascript
import { Orbit } from '@with-orbit/sdk';
import OpenAI from 'openai';

// Initialize once
const orbit = new Orbit({ apiKey: process.env.ORBIT_API_KEY });

// Wrap your client with a feature tag
const chatClient = orbit.wrapOpenAI(new OpenAI(), {
  feature: 'customer-support-chat',
  environment: 'production'
});

// Use it exactly like the normal OpenAI client
const response = await chatClient.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: userMessage }]
});
```

Every API call is automatically tagged, tracked, and visible in a dashboard. No custom logging code. No database management. No dashboard building.
What Good Feature-Level Tracking Looks Like
With proper tracking in place, you can answer questions like:
- Which feature is driving the most spend?
- What does a single request to each feature cost?
- Is a feature's cost growing faster than its usage?

But cost is just the beginning. You also get:
- Error rates by feature: Is one feature failing more than others?
- Latency by feature: Which features are slow?
- Cost per request: How efficient is each feature?
- Trends over time: Are costs going up or down?
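If you are rolling your own instead of using an SDK, turning raw call logs into those per-feature numbers is a straightforward aggregation. A sketch, assuming each logged event has the shape `{ feature, cost, latency, error }` (field names here are hypothetical, matching the manual-logging example earlier):

```javascript
// Aggregate logged events into per-feature stats:
// call count, total cost, cost per request, error rate, average latency.
function summarizeByFeature(events) {
  const stats = {};
  for (const e of events) {
    if (!stats[e.feature]) {
      stats[e.feature] = { calls: 0, cost: 0, errors: 0, totalLatency: 0 };
    }
    const s = stats[e.feature];
    s.calls += 1;
    s.cost += e.cost;
    if (e.error) s.errors += 1;
    s.totalLatency += e.latency;
  }
  for (const s of Object.values(stats)) {
    s.costPerRequest = s.cost / s.calls;
    s.errorRate = s.errors / s.calls;
    s.avgLatency = s.totalLatency / s.calls;
  }
  return stats;
}
```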
Real Decisions You Can Make
Once you have feature-level data, you can make informed decisions:
- Optimize the expensive features first: Focus your prompt engineering where it matters most
- Switch models strategically: Maybe your summarizer works fine with GPT-4o-mini instead of GPT-4o
- Set feature-level budgets: Alert when a feature exceeds its expected cost
- Prove ROI to stakeholders: Show exactly what each feature costs vs. the value it delivers
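The budget idea, for example, can start as something as simple as comparing month-to-date spend against a per-feature limit. A sketch (the feature names and budget numbers below are made up):

```javascript
// Hypothetical monthly budgets, in dollars, per feature tag.
const BUDGETS = {
  'customer-support-chat': 1500,
  'email-drafts': 500,
  'meeting-notes': 300
};

// Given month-to-date spend per feature, return the features over budget.
function checkBudgets(spendByFeature) {
  const alerts = [];
  for (const [feature, budget] of Object.entries(BUDGETS)) {
    const spend = spendByFeature[feature] || 0;
    if (spend > budget) {
      alerts.push({ feature, spend, budget, over: spend - budget });
    }
  }
  return alerts;
}
```

Run it on a schedule and pipe the result into Slack or email, and you will hear about a runaway feature in hours instead of at the end of the billing cycle.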
Getting Started
If you're ready to see where your AI spend is actually going:
- Audit your current features: List every place your app calls an AI API
- Decide on feature names: Use clear, consistent names like "chat-assistant", "doc-summarizer"
- Implement tracking: Either build your own or use a tool like Orbit
- Review weekly: Make cost reviews part of your regular engineering process
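Step 2 is easy to get wrong in a larger codebase: if one developer tags calls `'doc-summarizer'` and another `'docSummarizer'`, your dashboard splits one feature in two. One way to prevent that is to define the tags in a single module (the names below are examples):

```javascript
// Feature tags defined once so every call site uses the same string.
const FEATURES = Object.freeze({
  CHAT: 'chat-assistant',
  DOC_SUMMARIZER: 'doc-summarizer',
  EMAIL_DRAFTS: 'email-drafts'
});

// Reject unknown tags early so a typo never pollutes your analytics.
function assertKnownFeature(name) {
  if (!Object.values(FEATURES).includes(name)) {
    throw new Error(`Unknown feature tag: ${name}`);
  }
  return name;
}
```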
How Orbit Helps
Orbit is a lightweight SDK that wraps your AI clients and gives you instant visibility into per-feature costs, latency, and errors. No proxies, no request interception—just clean tracking that works.
- One-line SDK integration
- Real-time cost tracking per feature
- Free tier: 10,000 events/month
- Works with OpenAI, Anthropic, and Gemini
Related Articles
How to Track OpenAI API Costs in Your Application
Step-by-step tutorial on tracking OpenAI API costs in production. Monitor GPT-4o usage, track spending by feature, and get real-time cost visibility.
How to Track Agentic AI Workflows: Task & Customer Attribution
Learn how to track multi-step AI agent workflows with task_id and customer_id. Group LLM calls, measure total costs per task, and attribute AI spend to customers.
How to Monitor AI API Usage and Billing
Set up AI API monitoring to track your usage and costs. Monitor spending across OpenAI, Anthropic, and Gemini with real-time dashboards.