feat(audio): add tracking for audio transcriptions in OpenAI client #400
base: master
Conversation
Pull request overview
This PR adds tracking support for OpenAI audio transcriptions via a new $ai_transcription event. The implementation follows the existing pattern used for embeddings, treating audio-to-text as a distinct operation from text-to-text transformations.
- Introduces `WrappedAudio` and `WrappedTranscriptions` classes for both sync and async OpenAI clients
- Captures transcription metadata including model, input file name, output text, latency, and optional properties like language and audio duration
- Supports privacy mode, groups, and custom properties consistent with other AI tracking features
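The wrapper pattern described above can be sketched in plain Python. This is a minimal illustration, not the PR's actual implementation: the `capture_event` stub stands in for the real PostHog client, and the exact property names (`$ai_model`, `$ai_latency`, `$ai_input`, `$ai_output`, `$ai_language`) are assumptions modeled on PostHog's other AI tracking events.

```python
import time

# Events recorded by the stub below; the real SDK would send these to PostHog.
CAPTURED = []


def capture_event(event_name, properties):
    """Stub for PostHog's capture call; records events locally for this sketch."""
    CAPTURED.append((event_name, properties))


class WrappedTranscriptions:
    """Proxies transcription calls and emits an $ai_transcription event.

    A sketch of the pattern the PR describes, not the code in
    posthog/ai/openai/openai.py.
    """

    def __init__(self, inner, privacy_mode=False):
        self._inner = inner  # the underlying client.audio.transcriptions object
        self._privacy_mode = privacy_mode

    def create(self, **kwargs):
        start = time.monotonic()
        response = self._inner.create(**kwargs)
        latency = time.monotonic() - start

        file_obj = kwargs.get("file")
        properties = {
            "$ai_model": kwargs.get("model"),
            "$ai_latency": latency,
            # Privacy mode omits input/output content from the event.
            "$ai_input": None if self._privacy_mode else getattr(file_obj, "name", None),
            "$ai_output": None if self._privacy_mode else getattr(response, "text", None),
        }
        if kwargs.get("language"):
            properties["$ai_language"] = kwargs["language"]
        capture_event("$ai_transcription", properties)
        return response
```

The async variant would follow the same shape with `async def create` awaiting the inner call.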
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| posthog/ai/openai/openai.py | Adds WrappedAudio and WrappedTranscriptions classes to track transcription usage in the sync OpenAI client |
| posthog/ai/openai/openai_async.py | Adds async versions of WrappedAudio and WrappedTranscriptions to track transcription usage in the async OpenAI client |
| posthog/test/ai/openai/test_openai.py | Adds comprehensive test coverage for transcription tracking including basic usage, duration tracking, language parameter, groups, privacy mode, and async support |
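The test approach summarized above can be sketched as follows. All names here (`FakeTranscriptions`, `TrackedTranscriptions`) are hypothetical stand-ins, not the SDK's classes; the sketch only shows the general pattern of injecting a mock PostHog client and asserting on the captured event.

```python
from unittest.mock import MagicMock


class FakeTranscriptions:
    """Hypothetical stand-in for the OpenAI transcriptions resource."""

    def create(self, **kwargs):
        response = MagicMock()
        response.text = "transcribed text"
        return response


class TrackedTranscriptions:
    """Hypothetical stand-in for the PR's WrappedTranscriptions class."""

    def __init__(self, inner, posthog_client):
        self._inner = inner
        self._ph = posthog_client

    def create(self, **kwargs):
        response = self._inner.create(**kwargs)
        self._ph.capture(
            event="$ai_transcription",
            properties={"$ai_model": kwargs.get("model")},
        )
        return response


# The test pattern: inject a MagicMock PostHog client, then assert on it.
mock_ph = MagicMock()
wrapped = TrackedTranscriptions(FakeTranscriptions(), mock_ph)
wrapped.create(model="whisper-1")
mock_ph.capture.assert_called_once()
assert mock_ph.capture.call_args.kwargs["event"] == "$ai_transcription"
```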
cc @andrewm4894
This adds support for tracking transcriptions from OpenAI. It does this via a new event, `$ai_transcription`, which follows the pattern of embeddings. I figure that audio-to-text is different enough from text-to-text to deserve its own event.

Confirmed it worked in my own testing. Feel free to impersonate and view https://us.posthog.com/project/254263/events/2e8ded5c-acd2-45b4-b10f-7a85a438ffaa/2026-01-02T15%3A02%3A00.007000-05%3A00