# OpenAI Adapter

`@accordkit/provider-openai` decorates the official OpenAI JavaScript SDK. It emits normalized AccordKit events (`message`, `tool_call`, `usage`, `tool_result`, `span`) without changing how you call `chat.completions.create` or the newer beta namespaces.
## Install

```shell
pnpm add @accordkit/tracer @accordkit/provider-openai openai
```
## Wrap the client

```typescript
import OpenAI from 'openai';
import { FileSink, Tracer } from '@accordkit/tracer';
import { withOpenAI } from '@accordkit/provider-openai';

const tracer = new Tracer({ sink: new FileSink() });

const client = withOpenAI(
  new OpenAI({ apiKey: process.env.OPENAI_API_KEY }),
  tracer,
);
```
`withOpenAI` returns a proxy that mirrors the SDK. Re-wrapping the same client is idempotent: the adapter remembers the proxy and returns it instead of wrapping again.
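The idempotency guarantee can be pictured with a small sketch. This is an illustration of the pattern, not the adapter's actual source: a `WeakMap` caches each client's proxy, so wrapping twice (or wrapping the proxy itself) returns the same object.

```typescript
// Cache of client -> proxy so repeated wrapping is a no-op.
const wrappedClients = new WeakMap<object, object>();

function wrapOnce<T extends object>(client: T, instrument: (c: T) => T): T {
  const existing = wrappedClients.get(client);
  if (existing) return existing as T; // already wrapped: return the cached proxy

  const proxy = instrument(client);
  wrappedClients.set(client, proxy);
  wrappedClients.set(proxy, proxy); // wrapping the proxy itself also returns it
  return proxy;
}
```

Because the cache is a `WeakMap`, it holds no strong reference to the client, so discarded clients (and their proxies) can still be garbage-collected.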
## Emitted events

| Event | When it fires | Key fields |
|---|---|---|
| `message` | Prompts (before the request) and assistant completions | `role`, `content`, `model`, `requestId` |
| `tool_call` | Assistant requests a function/tool | `tool`, parsed input, `$ext.id` |
| `usage` | OpenAI reports token accounting | `inputTokens`, `outputTokens`, `$ext.totalTokens` |
| `tool_result` | Request completes (success or error) | `ok`, `latencyMs`, output summary |
| `span` | Surrounds each API invocation | `operation`, `durationMs`, `status`, `attrs.model` |
All events share the same trace context so downstream tooling can correlate them easily.
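Shared trace context is what makes correlation cheap: a downstream consumer can bucket a flat event stream by trace id. A minimal sketch (the `traceId` field name is an assumption for illustration; see the event model for the exact JSONL shape):

```typescript
interface TraceEvent {
  traceId: string;
  type: string;
}

// Group a flat event stream so one request's message, usage, tool_result,
// and span records end up in the same bucket.
function groupByTrace<E extends TraceEvent>(events: E[]): Map<string, E[]> {
  const traces = new Map<string, E[]>();
  for (const event of events) {
    const bucket = traces.get(event.traceId) ?? [];
    bucket.push(event);
    traces.set(event.traceId, bucket);
  }
  return traces;
}
```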
## Options

```typescript
withOpenAI(client, tracer, {
  enableResponsesApi: true,
  enableImagesApi: true,
  enableAudioApi: false,
  provider: 'openai',
  operationName: 'my-app.chat',
  emitPrompts: true,
  emitResponses: true,
  emitToolCalls: true,
  emitUsage: true,
  emitToolResults: true,
  emitSpan: true,
});
```
| Option | Default | Effect |
|---|---|---|
| `enableResponsesApi` | `false` | Wrap the beta responses namespace so `responses.create` emits AccordKit events. |
| `enableImagesApi` | `false` | Instrument `images.generate` to emit `tool_result` and `span` events (no binary payloads). |
| `enableAudioApi` | `false` | Instrument `audio.speech`/`transcriptions`/`translations`. |
| `provider` | `'openai'` | Provider label attached to every emitted event. |
| `operationName` | `'openai.chat.completions.create'` | Name recorded on `tool_result` and `span` events. |
| `emitPrompts` | `true` | Emit `message` events for system/user prompts. |
| `emitResponses` | `true` | Emit `message` events for assistant replies. |
| `emitToolCalls` | `true` | Emit `tool_call` events for tool/function requests. |
| `emitUsage` | `true` | Emit `usage` events when the SDK returns usage. |
| `emitToolResults` | `true` | Emit `tool_result` events summarising success/error and latency. |
| `emitSpan` | `true` | Emit a `span` around every request. |
## Streaming behaviour

Streaming responses are detected automatically. The adapter waits for `finalChatCompletion()` (or an equivalent helper) and then emits the same completion, usage, tool result, and span events you get for non-streaming calls. Intermediate deltas are not emitted individually yet, so the stream API remains unchanged for your application.
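The pass-through idea described above can be sketched in a few lines. This is illustrative only, not the adapter's implementation: deltas flow to the application unchanged while the observer accumulates them and fires a single callback once the stream ends.

```typescript
// Observe a stream without altering what the caller sees: yield every delta
// through unchanged, and invoke onComplete with the full text at the end.
async function* observeStream(
  stream: AsyncIterable<string>,
  onComplete: (full: string) => void,
): AsyncGenerator<string> {
  let buffer = '';
  for await (const delta of stream) {
    buffer += delta; // accumulate for the single final event
    yield delta;     // pass through unchanged to the application
  }
  onComplete(buffer); // emit once, after the stream finishes
}
```

The consumer's `for await` loop is unaffected; only the completion callback is new, mirroring how the adapter emits its events only after the final result is known.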
## Redacting sensitive fields

Use tracer middleware to scrub content before it leaves your process.
```typescript
import { HttpSink, Tracer, type TraceMiddleware } from '@accordkit/tracer';

const redactEmails: TraceMiddleware = (event) => {
  if (event.type === 'message' && typeof event.content === 'string') {
    event.content = event.content.replace(/\S+@\S+/g, '[redacted]');
  }
  return event;
};

const tracer = new Tracer({
  sink: new HttpSink({ endpoint: 'https://example.com/ingest' }),
  middlewares: [redactEmails],
});
```
Middleware runs on every emitted event, regardless of whether it originated from your code or the adapter.
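Conceptually, middlewares form a pipeline: each one receives the previous one's output, in registration order. A simplified sketch of that chaining (an assumption about the shape of the internals, not the actual `Tracer` source):

```typescript
type Middleware<E> = (event: E) => E;

// Feed the event through each middleware in order; the last result is emitted.
function applyMiddlewares<E>(event: E, middlewares: Middleware<E>[]): E {
  return middlewares.reduce((current, mw) => mw(current), event);
}
```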
## Next steps
- Review the event model to see the exact JSONL shapes.
- Explore sinks to decide where events should be delivered.
- Check the `examples/openai-examples` project for runnable CLIs.