Learn how to add auto-generated captions to your on-demand Mux Video assets to increase accessibility and to create transcripts for further processing.
Mux uses OpenAI's Whisper model to automatically generate captions for on-demand assets. This guide shows you how to enable this feature, what you can do with it, and what some of the limitations you might encounter are.
Generally, you should expect auto-generated captions to work well for content with reasonably clear audio. They may work less well for assets that contain a lot of non-speech audio (music, background noise, or extended periods of silence).
We recommend that you try it out on some of your typical content, and see if the results meet your expectations.
This feature is designed to generate captions in the same language that your content's audio is produced in. It should not be used to programmatically generate translated captions in other languages.
When you create a Mux asset with the Create Asset API, you can add a generated_subtitles array to the request body, as follows:
```json
// POST /video/v1/assets
{
  "input": [
    {
      "url": "...",
      "generated_subtitles": [
        {
          "language_code": "en",
          "name": "English CC"
        }
      ]
    }
  ],
  "playback_policy": "public",
  "video_quality": "basic"
}
```
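As a non-authoritative sketch, you could assemble this request body in Python before POSTing it to /video/v1/assets with your API credentials (the input URL below is a placeholder, and the HTTP client is omitted):

```python
import json

def asset_payload(input_url, language_code="en", track_name="English CC"):
    """Build a create-asset request body with auto-generated captions enabled."""
    return {
        "input": [
            {
                "url": input_url,
                "generated_subtitles": [
                    {"language_code": language_code, "name": track_name}
                ],
            }
        ],
        "playback_policy": "public",
        "video_quality": "basic",
    }

# Serialize the body for a POST to /video/v1/assets.
body = json.dumps(asset_payload("https://example.com/video.mp4"))
```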
Mux supports the following languages and corresponding language codes for VOD generated captions. Languages labeled as "beta" may have lower accuracy.
| Language | Language Code | Status |
|---|---|---|
| English | en | Stable |
| Spanish | es | Stable |
| Italian | it | Stable |
| Portuguese | pt | Stable |
| German | de | Stable |
| French | fr | Stable |
| Polish | pl | Beta |
| Russian | ru | Beta |
| Dutch | nl | Beta |
| Catalan | ca | Beta |
| Turkish | tr | Beta |
| Swedish | sv | Beta |
| Ukrainian | uk | Beta |
| Norwegian | no | Beta |
| Finnish | fi | Beta |
| Slovak | sk | Beta |
| Greek | el | Beta |
| Czech | cs | Beta |
| Croatian | hr | Beta |
| Danish | da | Beta |
| Romanian | ro | Beta |
| Bulgarian | bg | Beta |
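For reference, here is a small helper (a sketch, not part of any Mux SDK) that checks a language code against the table above before you send a request:

```python
# Language codes from the table above; beta entries may have lower accuracy.
STABLE_CODES = {"en", "es", "it", "pt", "de", "fr"}
BETA_CODES = {
    "pl", "ru", "nl", "ca", "tr", "sv", "uk", "no",
    "fi", "sk", "el", "cs", "hr", "da", "ro", "bg",
}

def caption_language_status(code):
    """Return "stable" or "beta" for a supported code, or raise ValueError."""
    if code in STABLE_CODES:
        return "stable"
    if code in BETA_CODES:
        return "beta"
    raise ValueError(f"{code!r} is not a supported caption language code")
```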
You can also enable auto-generated captions if you're using the Direct Uploads API by specifying the generated_subtitles configuration in the first entry of the input list of the new_asset_settings object, like this:
```json
// POST /video/v1/uploads
{
  "new_asset_settings": {
    "playback_policy": [
      "public"
    ],
    "video_quality": "basic",
    "input": [
      {
        "generated_subtitles": [
          {
            "language_code": "en",
            "name": "English CC"
          }
        ]
      }
    ]
  },
  "cors_origin": "*"
}
```
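The same body can be assembled programmatically; this sketch mirrors the JSON above (the CORS origin and track name are placeholders you would set for your application):

```python
def upload_payload(language_code="en", track_name="English CC", cors_origin="*"):
    """Build a create-upload request body with auto-generated captions enabled."""
    return {
        "new_asset_settings": {
            "playback_policy": ["public"],
            "video_quality": "basic",
            "input": [
                {
                    # Captions config goes in the first input entry; the video
                    # itself is supplied later via the direct upload URL.
                    "generated_subtitles": [
                        {"language_code": language_code, "name": track_name}
                    ]
                }
            ],
        },
        "cors_origin": cors_origin,
    }
```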
Auto-captioning happens separately from the initial asset ingest, so that this doesn't delay the asset being available for playback. If you want to know when the text track for the captions is ready, listen for the video.asset.track.ready webhook for a track with "text_source": "generated_vod".
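As a sketch of webhook handling, and assuming the track attributes arrive in the event's data object (as with other Mux track webhooks), you might filter for ready generated-caption tracks like this:

```python
def is_generated_captions_ready(event):
    """True if a parsed webhook body signals a ready auto-generated captions track.

    `event` is the JSON-decoded body of an incoming Mux webhook request.
    """
    return (
        event.get("type") == "video.asset.track.ready"
        and event.get("data", {}).get("text_source") == "generated_vod"
    )
```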
You can retroactively add captions to any asset created in the last 7 days by POSTing to the generate-subtitles endpoint on the asset audio track that you want to generate captions for, as shown below:
```json
// POST /video/v1/assets/${ASSET_ID}/tracks/${AUDIO_TRACK_ID}/generate-subtitles
{
  "generated_subtitles": [
    {
      "language_code": "en",
      "name": "English (generated)"
    }
  ]
}
```
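A sketch of assembling this request, where the asset and audio track IDs are placeholders you would read from the asset's details:

```python
def generate_subtitles_request(asset_id, audio_track_id,
                               language_code="en", name="English (generated)"):
    """Return the endpoint path and body for retroactive caption generation."""
    path = f"/video/v1/assets/{asset_id}/tracks/{audio_track_id}/generate-subtitles"
    body = {"generated_subtitles": [{"language_code": language_code, "name": name}]}
    return path, body
```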
If you need to use this API to backfill captions to assets created longer than 7 days ago, please reach out and we'd be happy to help. Please note that there may be a charge for backfilling captions onto large libraries.
For assets that have a ready auto-generated captions track, you can also request a transcript (a plain text file) of the speech recognized in your asset. To get this, use a playback ID for your asset and the track ID for the generated_vod text track:
https://stream.mux.com/{PLAYBACK_ID}/text/{TRACK_ID}.txt
Signed assets require a token parameter specifying a JWT with the same aud claim used for video playback:
https://stream.mux.com/{PLAYBACK_ID}/text/{TRACK_ID}.txt?token={JWT}
You can also retrieve a WebVTT version of the text track by replacing .txt with .vtt in the URL.
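Putting the URL patterns above together, a small helper might look like this (a sketch; the playback ID, track ID, and token values are placeholders):

```python
def text_track_url(playback_id, track_id, fmt="txt", token=None):
    """Build the transcript (.txt) or WebVTT (.vtt) URL for a generated text track."""
    if fmt not in ("txt", "vtt"):
        raise ValueError("fmt must be 'txt' or 'vtt'")
    url = f"https://stream.mux.com/{playback_id}/text/{track_id}.{fmt}"
    if token is not None:
        # Required when the asset uses a signed playback policy.
        url += f"?token={token}"
    return url
```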
You might find this transcript useful for further processing in other systems: for example, content moderation, sentiment analysis, summarization, or extracting insights from your content.
There is no additional charge for this feature. It's included as part of the standard encoding and storage charges for Mux Video assets.
Generation time depends on the length of the asset, but it generally takes about 0.1x the content duration; for example, a 1-hour asset takes about 6 minutes to caption.
Though automatic speech recognition has improved enormously in recent years, it can still get things wrong. One option is to edit and replace the mis-recognized speech in the captions track: download the WebVTT file, correct it, and add the corrected version back as a new text track with the create track API.
https://stream.mux.com/{PLAYBACK_ID}/text/{TRACK_ID}.vtt
We currently do not recommend using this feature on mixed-language content.
We currently do not support automatic translation in generated captions - you should only generate captions in the language that matches your audio track.
We'd love to hear more about the languages that you'd like to see us support; please reach out with details.