Integration Setup
This guide explains how to integrate your LLM-based application with our analytics platform. By following the steps below, you'll be able to track and analyze usage metrics from your application.
1. Get LLM Usage Information
To track usage, you first need to collect the input, output, and cached token counts from your LLM provider. You can find the list of supported providers and LLM models in our documentation. Below is an example for the OpenAI provider:
Example: Retrieving Usage Metrics from OpenAI
When you make a request to OpenAI's chat/completions or completions endpoint, the response includes usage information.
Python Example:
from openai import OpenAI

client = OpenAI()

completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "developer", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)

# The response object includes a `usage` field with the token counts.
print(completion)
JavaScript Example:
import OpenAI from 'openai';

const openai = new OpenAI();

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [{ role: 'developer', content: 'You are a helpful assistant.' }],
    model: 'gpt-4o',
    store: true,
  });

  // The response object includes a `usage` field with the token counts.
  console.log(completion);
}

main();
Response Example
{
  ...
  "usage": {
    "prompt_tokens": 19,        // inputTokens
    "completion_tokens": 10,    // outputTokens
    "total_tokens": 29,
    "prompt_tokens_details": {
      "cached_tokens": 0        // inputTokensCached
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  }
}
In these examples, prompt_tokens refers to the number of tokens in the input prompt, completion_tokens refers to the number of tokens in the generated output, and cached_tokens (if provided) refers to the number of cached tokens present in the prompt.
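For example, you can read these values off the response object and map them to the field names our events API expects (a minimal sketch; the attribute names match the OpenAI Python SDK response shown above, and prompt_tokens_details may be absent for models without prompt caching, hence the guard):

# Extract token counts from the OpenAI response and map them to
# the field names used by the events API in step 2.
usage = completion.usage

input_tokens = usage.prompt_tokens       # -> inputTokens
output_tokens = usage.completion_tokens  # -> outputTokens

# cached_tokens is only present for models that support prompt caching,
# so fall back to 0 (or omit the field) when it is missing.
details = getattr(usage, "prompt_tokens_details", None)
input_tokens_cached = details.cached_tokens if details else 0  # -> inputTokensCached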
2. Send LLM Usage Events to Our Platform
Once you have the token information, send it to our analytics platform using the POST /v1/events endpoint.
For detailed information about the API endpoint, payload schema, and error handling, refer to the Events API Documentation.
Endpoint Details
POST /v1/events HTTP/1.1
Host: api.affordai.io
Authorization: Bearer <your-application-token>
Content-Type: application/json

{
  "appId": "your-application-id",    // unique application id
  "eventType": "text_gen",           // currently only "text_gen" is supported
  "consumerId": "user@example.com",  // any user identifier, or null for anonymous requests
  "inputTokens": 500,
  "inputTokensCached": 100,          // optional; for models that support token caching
  "outputTokens": 300
}
Example Code for Sending Events:
Python Example:
import requests

# Event payload
payload = {
    "appId": "your-application-id",
    "eventType": "text_gen",
    "consumerId": "user@example.com",  # Optional
    "inputTokens": 150,
    "inputTokensCached": 100,  # Optional
    "outputTokens": 350
}

# Headers
headers = {
    "Authorization": "Bearer <your-application-token>",
    "Content-Type": "application/json"
}

# Send the request
response = requests.post("https://api.affordai.io/v1/events", json=payload, headers=headers)

# Check the response
if response.status_code == 202:
    print("Event successfully recorded.")
else:
    print(f"Failed to record event: {response.status_code}, {response.json()}")
JavaScript Example:
const axios = require('axios');

const payload = {
  appId: 'your-application-id',
  eventType: 'text_gen',
  consumerId: 'user@example.com', // Optional
  inputTokens: 150,
  inputTokensCached: 100, // Optional
  outputTokens: 350,
};

const headers = {
  Authorization: 'Bearer <your-application-token>',
  'Content-Type': 'application/json',
};

axios
  .post('https://api.affordai.io/v1/events', payload, { headers })
  .then((response) => {
    console.log('Event successfully recorded.');
  })
  .catch((error) => {
    // error.response is undefined for network errors, so guard the access.
    console.error(
      'Failed to record event:',
      error.response?.status,
      error.response?.data
    );
  });
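Putting both steps together, the sketch below calls OpenAI, reads the usage block, and forwards it to the events endpoint. It reuses the placeholder appId and application token from the examples above, and should run on your server so the token stays private:

from openai import OpenAI
import requests

client = OpenAI()

# Step 1: call the LLM and read the usage block from the response.
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello!"}],
)
usage = completion.usage
details = getattr(usage, "prompt_tokens_details", None)

# Step 2: forward the token counts to the analytics platform.
payload = {
    "appId": "your-application-id",
    "eventType": "text_gen",
    "consumerId": "user@example.com",
    "inputTokens": usage.prompt_tokens,
    "inputTokensCached": details.cached_tokens if details else 0,
    "outputTokens": usage.completion_tokens,
}
response = requests.post(
    "https://api.affordai.io/v1/events",
    json=payload,
    headers={
        "Authorization": "Bearer <your-application-token>",
        "Content-Type": "application/json",
    },
)
response.raise_for_status()  # the endpoint returns 202 Accepted on success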
Best Practices
- Secure Your Token: Never expose your Bearer token client-side. Store it securely on your server and send events from server-side code.
- Validate Payloads: Ensure the payload conforms to the schema above to avoid validation errors.
- Monitor Errors: Log and handle error responses from the API so that failed events can be detected and retried (see the sketch below).
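As an illustration of the last point, a small wrapper that retries transient failures might look like this (a sketch only; the retry count and backoff values are arbitrary choices, not platform requirements, and you may prefer a library such as tenacity):

import time
import requests

def send_event(payload, headers, retries=3, backoff_seconds=1.0):
    """Send an event, retrying transient failures with exponential backoff."""
    for attempt in range(retries):
        try:
            response = requests.post(
                "https://api.affordai.io/v1/events",
                json=payload, headers=headers, timeout=5,
            )
            if response.status_code == 202:
                return True
            # 4xx responses indicate payload or auth problems: log and give up.
            if 400 <= response.status_code < 500:
                print(f"Event rejected: {response.status_code}, {response.text}")
                return False
        except requests.RequestException as exc:
            print(f"Network error on attempt {attempt + 1}: {exc}")
        time.sleep(backoff_seconds * 2 ** attempt)
    return False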
By following these steps, you can seamlessly integrate your LLM-based application with our analytics platform and start gaining valuable insights.