An asset has been created
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
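The fields above form the envelope that wraps every asset webhook; the asset-specific fields described next arrive nested inside it. A minimal TypeScript sketch of that shape, assuming the conventional Mux layout in which the asset payload sits under a data key (the envelope field names come from this reference; the data typing is illustrative and should be checked against a real payload):

```typescript
// Illustrative envelope for a Mux asset webhook event.
// Nesting the asset under `data` is the conventional Mux layout;
// verify against a real delivery before relying on it.
interface MuxAssetWebhookEvent {
  type: string;             // e.g. "video.asset.created"
  id: string;               // unique identifier for the event
  created_at: string;       // time the event was created
  environment: {
    name: string;           // name for the environment
    id: string;             // unique identifier for the environment
  };
  data: {
    id: string;             // unique identifier for the Asset
    status: string;         // the status of the asset
    duration?: number;      // duration of the asset in seconds
    playback_ids?: { id: string; policy: "public" | "signed" }[];
    [key: string]: unknown; // remaining asset fields described below
  };
}
```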
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at; however, the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}. signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
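As a concrete illustration of the two policies, here is a hedged TypeScript sketch that builds a public HLS URL and signs a token for a signed playback ID. It assumes the jsonwebtoken package and a signing key created through the Mux signing-keys API; the RS256 algorithm, kid header, and aud: "v" claim follow Mux's documented signed-playback scheme, but confirm the details against the Secure video playback guide:

```typescript
import jwt from "jsonwebtoken";

// Public playback IDs need no token. Mux HLS manifests use the
// .m3u8 extension on top of the URL pattern shown above.
function publicHlsUrl(playbackId: string): string {
  return `https://stream.mux.com/${playbackId}.m3u8`;
}

// Signed playback IDs require a JWT created with a Mux signing key.
// signingKeyId and privateKeyPem come from the signing-keys API.
function signedHlsUrl(playbackId: string, signingKeyId: string, privateKeyPem: string): string {
  const token = jwt.sign(
    {
      sub: playbackId,                           // playback ID being authorized
      aud: "v",                                  // "v" = video playback
      exp: Math.floor(Date.now() / 1000) + 3600, // valid for one hour
    },
    privateKeyPem,
    { algorithm: "RS256", keyid: signingKeyId }
  );
  return `https://stream.mux.com/${playbackId}.m3u8?token=${token}`;
}
```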
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top-level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value represents a BCP 47 specification-compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none or when the temporary URL expires.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter is hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also known as Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
An asset is ready for playback. You can now use the asset's playback_id to successfully start streaming this asset.
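For example, a server can listen for this event and persist the playback ID once streaming becomes possible. A minimal Express sketch, assuming JSON webhook bodies and a hypothetical saveAsset storage helper (verification of the webhook signature header is omitted for brevity; see Mux's webhook security documentation):

```typescript
import express from "express";

// Hypothetical persistence helper; replace with your storage layer.
async function saveAsset(assetId: string, playbackId: string): Promise<void> {
  /* ... */
}

const app = express();

app.post("/webhooks/mux", express.json(), async (req, res) => {
  const event = req.body;

  if (event.type === "video.asset.ready") {
    const asset = event.data;
    // An asset may carry several playback IDs; take the first here.
    const playbackId = asset.playback_ids?.[0]?.id;
    if (playbackId) {
      await saveAsset(asset.id, playbackId);
    }
  }

  // Acknowledge quickly so the delivery is not retried.
  res.sendStatus(200);
});

app.listen(3000);
```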
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at; however, the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}. signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top-level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value represents a BCP 47 specification-compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none or when the temporary URL expires.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter is hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also known as Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
An asset has encountered an error. Use this to notify your server about assets with errors. Asset errors can happen for a number of reasons, most commonly an input URL that Mux is unable to download or a file that is not a valid video file.
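When this event arrives, the failure details travel in the errors object described below. A small hedged sketch of surfacing them, with a hypothetical notifyOps alerting helper (the errors.type and errors.messages fields follow this reference):

```typescript
// Standalone handler for the "video.asset.errored" event type.
async function handleAssetErrored(event: {
  type: string;
  data: { id: string; errors?: { type?: string; messages?: string[] } };
}): Promise<void> {
  if (event.type !== "video.asset.errored") return;
  const { id, errors } = event.data;
  // errors.type categorizes the failure; errors.messages adds detail.
  console.error(`Asset ${id} errored: ${errors?.type ?? "unknown"}`, errors?.messages ?? []);
  await notifyOps(id, errors);
}

// Hypothetical alerting stub; wire this to your paging system.
async function notifyOps(assetId: string, errors: unknown): Promise<void> {
  /* ... */
}
```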
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at; however, the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}. signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top-level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value represents a BCP 47 specification-compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none or when the temporary URL expires.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter is hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also known as Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
An asset has been updated. Use this to make sure your server is notified about changes to assets.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at; however, the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}. signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top-level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value represents a BCP 47 specification-compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none or when the temporary URL expires.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter is hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also known as Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
An asset has been deleted. Use this so that your server knows when an asset has been deleted, at which point it will no longer be playable.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at; however, the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}. signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top-level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value represents a BCP 47 specification-compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none or when the temporary URL expires.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter is hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also known as Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
The live stream for this asset has completed. Every time a live stream starts and ends, a new asset gets created and this event fires.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at; however, the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}. signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top-level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value represents a BCP 47 specification-compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none or when the temporary URL expires.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
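Taken together, the static rendition fields suggest a shape like the following TypeScript sketch; the property names and their optionality are inferred from the descriptions above, not confirmed by this reference:

type StaticRenditionFile = {
  ext?: string;      // extension of the static rendition file
  height?: number;   // pixels
  width?: number;    // pixels
  bitrate?: number;  // bits per second
  filesize?: number; // bytes
};

type StaticRenditions = {
  status: string; // status of the downloadable MP4 versions
  files?: StaticRenditionFile[];
};

// Example use: pick the highest-resolution MP4 once renditions are ready.
function bestRendition(sr: StaticRenditions): StaticRenditionFile | undefined {
  if (sr.status !== 'ready' || !sr.files?.length) return undefined;
  return [...sr.files].sort((a, b) => (b.height ?? 0) - (a.height ?? 0))[0];
}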
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content
for normal stream content or slate
for slate media inserted during stream interruptions.
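Since slate sessions are interleaved with content sessions, summing only the content entries gives the real program duration. A small sketch using the field meanings described above:

type RecordingSession = {
  duration: number;          // seconds
  type: 'content' | 'slate';
};

// Total seconds of actual stream content, excluding inserted slate media.
function contentSeconds(sessions: RecordingSession[]): number {
  return sessions
    .filter((s) => s.type === 'content')
    .reduce((sum, s) => sum + s.duration, 0);
}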
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter will be hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also called the Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with an average frame rate (fps) of less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason for when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
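Because this object is sparse (only the offending parameters appear, and the object itself is absent for standard inputs), a generic walk over its entries is enough to surface why an upload will take the slower processing path. A sketch that treats the reasons object as a plain string map:

// reasons is the non-standard-input object from the asset payload, if present.
function logNonStandardReasons(reasons?: Record<string, string>): void {
  if (!reasons) {
    console.log('input is standard; fastest processing path');
    return;
  }
  for (const [field, value] of Object.entries(reasons)) {
    console.warn(`non-standard input: ${field} = ${value}`);
  }
}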
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts made to send out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts that will be made for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL the webhook attempt was sent to
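The attempt fields above describe Mux's delivery retries; the practical consequence for a receiver is to acknowledge quickly with a 2xx so retries stop. A minimal Express sketch (the endpoint path and the event type string are illustrative assumptions; production code should also verify the webhook signature header, which is covered elsewhere in the docs):

import express from 'express';

const app = express();
app.use(express.json());

app.post('/webhooks/mux', (req, res) => {
  const event = req.body; // { type, id, created_at, data, ... }
  switch (event.type) {
    case 'video.asset.static_renditions.ready': // type name assumed for illustration
      // MP4s are now downloadable; enqueue follow-up work here rather than
      // doing it inline, so the 200 below is returned promptly.
      break;
    default:
      break;
  }
  res.sendStatus(200); // a 2xx stops further delivery attempts
});

app.listen(3000);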
Static renditions for this asset are ready. Static renditions are streamable mp4 files that are most commonly used for allowing users to download files for offline viewing.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
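When sizing a player container from this field, the string form can be converted to a numeric ratio. A trivial sketch:

// Parse the width:height form, e.g. "16:9" -> 16 / 9 ≈ 1.778.
function aspectRatioToNumber(aspectRatio: string): number | undefined {
  const [w, h] = aspectRatio.split(':').map(Number);
  return Number.isFinite(w) && Number.isFinite(h) && h !== 0 ? w / h : undefined;
}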
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
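A sketch of both URL forms in TypeScript. The public form follows the template above; for the signed form, the JWT claims (sub set to the playback ID, aud set to 'v') and the RS256 signing-key mechanism are assumptions about Mux's token scheme, so treat Secure video playback as the authoritative recipe. Uses the jsonwebtoken npm package:

import jwt from 'jsonwebtoken';

function publicHlsUrl(playbackId: string): string {
  // Mux HLS URLs conventionally carry an .m3u8 extension in practice.
  return `https://stream.mux.com/${playbackId}.m3u8`;
}

function signedHlsUrl(playbackId: string, signingKeyId: string, privateKeyPem: string): string {
  const token = jwt.sign(
    { sub: playbackId, aud: 'v' }, // claim names are assumptions
    privateKeyPem,
    { algorithm: 'RS256', keyid: signingKeyId, expiresIn: '1h' },
  );
  return `https://stream.mux.com/${playbackId}.m3u8?token=${token}`;
}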
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text
type tracks. This field is optional and may not be set. The top level duration
field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video
type track.
The maximum height in pixels available for the track. Only set for the video
type track.
The maximum frame rate available for the track. Only set for the video
type track. This field may return -1
if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio
type track.
Only set for the audio
type track.
This parameter is only set for text
type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value is a BCP 47 compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track, containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track whose language_code is en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text
type tracks. Max 255 characters.
The status of the track. This parameter is only set for text
type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active
and not in idle
state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL has expired.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content
for normal stream content or slate
for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter will be hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also called the Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with an average frame rate (fps) of less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason for when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts made to send out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts that will be made for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL the webhook attempt was sent to
Static renditions for this asset are being prepared. After requesting static renditions you will get this webhook when they are being prepared.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text
type tracks. This field is optional and may not be set. The top level duration
field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video
type track.
The maximum height in pixels available for the track. Only set for the video
type track.
The maximum frame rate available for the track. Only set for the video
type track. This field may return -1
if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio
type track.
Only set for the audio
type track.
This parameter is only set for text
type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value is a BCP 47 compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track, containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track whose language_code is en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text
type tracks. Max 255 characters.
The status of the track. This parameter is only set for text
type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active
and not in idle
state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL has expired.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content
for normal stream content or slate
for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter will be hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also called the Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with an average frame rate (fps) of less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason for when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts made to send out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts that will be made for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL the webhook attempt was sent to
Static renditions for this asset have been deleted. The static renditions (mp4 files) for this asset will no longer be available.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text
type tracks. This field is optional and may not be set. The top level duration
field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video
type track.
The maximum height in pixels available for the track. Only set for the video
type track.
The maximum frame rate available for the track. Only set for the video
type track. This field may return -1
if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio
type track.
Only set for the audio
type track.
This parameter is only set for text
type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value is a BCP 47 compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track, containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track whose language_code is en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text
type tracks. Max 255 characters.
The status of the track. This parameter is only set for text
type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active
and not in idle
state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL has expired.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content
for normal stream content or slate
for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter will be hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also called the Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with an average frame rate (fps) of less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason for when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts made to send out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts that will be made for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL the webhook attempt was sent to
Preparing static renditions for this asset has encountered an error. This indicates that there was some error when creating static renditions (mp4s) of your asset. This should be rare and if you see it unexpectedly please open a support ticket.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text
type tracks. This field is optional and may not be set. The top level duration
field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video
type track.
The maximum height in pixels available for the track. Only set for the video
type track.
The maximum frame rate available for the track. Only set for the video
type track. This field may return -1
if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio
type track.
Only set for the audio
type track.
This parameter is only set for text
type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value is a BCP 47 compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track, containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track whose language_code is en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text
type tracks. Max 255 characters.
The status of the track. This parameter is only set for text
type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active
and not in idle
state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL has expired.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content
for normal stream content or slate
for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter will be hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also called the Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with an average frame rate (fps) of less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason for when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts made to send out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts that will be made for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL the webhook attempt was sent to
Master access for this asset is ready. Master access is used when downloading an asset for purposes of editing or post-production work. The master access file is not intended to be streamed or downloaded by end-users.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text
type tracks. This field is optional and may not be set. The top level duration
field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video
type track.
The maximum height in pixels available for the track. Only set for the video
type track.
The maximum frame rate available for the track. Only set for the video
type track. This field may return -1
if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio
type track.
Only set for the audio
type track.
This parameter is only set for text
type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value is a BCP 47 compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track, containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track whose language_code is en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text
type tracks. Max 255 characters.
The status of the track. This parameter is only set for text
type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active
and not in idle
state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL has expired.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content
for normal stream content or slate
for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter will be hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also called the Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with an average frame rate (fps) of less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason for when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts made to send out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts that will be made for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL the webhook attempt was sent to
Master access for this asset is being prepared. After requesting master access you will get this webhook while it is being prepared.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL expires.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
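To obtain this temporary URL you first enable master access on the asset. A sketch against the Mux REST API; the master-access route below follows the published API, but treat the exact path as an assumption to confirm in the API reference:

```ts
// Request temporary master access, then wait for the
// video.asset.master.ready webhook, whose payload includes the
// temporary MP4 URL (valid for 24 hours).
async function requestMasterAccess(assetId: string): Promise<void> {
  const auth = Buffer.from(
    `${process.env.MUX_TOKEN_ID}:${process.env.MUX_TOKEN_SECRET}`
  ).toString("base64");

  const res = await fetch(
    `https://api.mux.com/video/v1/assets/${assetId}/master-access`,
    {
      method: "PUT",
      headers: {
        Authorization: `Basic ${auth}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ master_access: "temporary" }),
    }
  );
  if (!res.ok) throw new Error(`master-access request failed: ${res.status}`);
}
```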
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
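Once the static renditions are ready, each file can be fetched from the stream domain. A sketch assuming each file object also carries a name such as "low.mp4" (a field not shown in the list above; confirm in the API reference):

```ts
// Build a download link for a static rendition file once
// static_renditions.status is "ready".
function mp4DownloadUrl(playbackId: string, fileName: string): string {
  return `https://stream.mux.com/${playbackId}/${fileName}`;
}

// e.g. mp4DownloadUrl("abc123", "low.mp4")
```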
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or at times when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter is hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also known as Group of Pictures, or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason for when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
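A sketch for surfacing these reasons when handling a webhook payload; the object's absence means the input qualifies as standard, and since the key names vary by reason, the sketch simply iterates whatever is present:

```ts
// Log why an asset's input was flagged non-standard.
function logNonStandardReasons(asset: {
  non_standard_input_reasons?: Record<string, string>;
}): void {
  const reasons = asset.non_standard_input_reasons;
  if (!reasons) {
    console.log("input qualifies as standard");
    return;
  }
  for (const [reason, value] of Object.entries(reasons)) {
    console.warn(`non-standard input: ${reason} = ${value}`);
  }
}
```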
True means this live stream is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Max attempts number for the webhook attempt
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
Master access for this asset has been deleted, and you will no longer be able to download the master file. If you need it again, re-request master access.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL expires.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or at times when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter is hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also known as Group of Pictures, or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason for when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this live stream is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Max attempts number for the webhook attempt
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
Master access for this asset has encountered an error. This indicates that there was some error when creating master access for this asset. This should be rare; if you see it unexpectedly, please open a support ticket.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL expires.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or at times when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter is hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also known as Group of Pictures, or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason for when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this live stream is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Max attempts number for the webhook attempt
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
A new track for this asset has been created, for example a subtitle text track.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Unique identifier for the Asset. Max 255 characters.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Max attempts number for the webhook attempt
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
A track for this asset is ready. For example, a ready subtitle text track will now be delivered with your HLS stream.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Unique identifier for the Asset. Max 255 characters.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Max attempts number for the webhook attempt
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
A track for this asset has encountered an error. There was some error preparing this track; most commonly, this is a text track file that Mux was unable to download for processing.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Unique identifier for the Asset. Max 255 characters.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Max attempts number for the webhook attempt
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
A track for this asset has been deleted.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Unique identifier for the Asset. Max 255 characters.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Max attempts number for the webhook attempt
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
This event fires when Mux has encountered a non-fatal issue with the recorded asset of the live stream. At this time, the event is only fired when Mux is unable to download a slate image from the URL set as the reconnect_slate_url parameter value.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
Asset Identifier of the video used as the source for creating the clip.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
The status of the asset.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Max attempts number for the webhook attempt
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
An asset has been created from this upload. This is useful for knowing when a user of your application has finished uploading a file using the URL created by a Direct Upload.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Direct Upload.
Max time in seconds for the signed upload URL to be valid. If a successful upload has not occurred before the timeout limit, the direct upload is marked timed_out.
Unique identifier for the Asset. Max 255 characters.
Time the Asset was created, defined as a Unix timestamp (seconds since epoch).
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height, for example 16:9.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed playback IDs should be used with tokens: https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video type track.
The maximum height in pixels available for the track. Only set for the video type track.
The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio type track.
Only set for the audio type track.
This parameter is only set for text type tracks.
The source of the text contained in a Track of type text. Valid text_source values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track with a language_code value of en-US. This parameter is only set for text and audio track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.
The status of the track. This parameter is only set for text type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL expires.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or at times when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The time at which the recording for the live stream started. The time value is Unix epoch time represented in ISO 8601 format.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter is hevc.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also known as Group of Pictures, or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason for when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this live stream is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.
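A sketch of creating an asset from a hosted file with a public playback policy, using Mux's standard create-asset route; the input URL is a placeholder:

```ts
// Create an asset; Mux downloads the input and emits asset webhooks
// (video.asset.created, then video.asset.ready) as it processes.
async function createAsset(): Promise<unknown> {
  const auth = Buffer.from(
    `${process.env.MUX_TOKEN_ID}:${process.env.MUX_TOKEN_SECRET}`
  ).toString("base64");

  const res = await fetch("https://api.mux.com/video/v1/assets", {
    method: "POST",
    headers: {
      Authorization: `Basic ${auth}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      input: "https://example.com/video.mp4", // string shortcut for one input
      playback_policy: ["public"],
    }),
  });
  return res.json();
}
```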
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
For clips, the URL uses the mux://assets/{asset_id} template, where asset_id is the Asset Identifier of the asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
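As a sketch of how the overlay fields above fit together, the following input array pairs a main video input with an image input carrying overlay_settings; both URLs and all values are placeholders.

const inputs = [
  { url: "https://example.com/video.mp4" }, // main video input
  {
    url: "https://example.com/logo.png",    // the image file to overlay
    overlay_settings: {
      vertical_align: "bottom",
      vertical_margin: "5%",    // distance from the bottom edge
      horizontal_align: "right",
      horizontal_margin: "5%",  // distance from the right edge
      width: "20%",             // height omitted, so it scales proportionally
      opacity: "80%",
    },
  },
];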
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
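A sketch of the generated_subtitles configuration on the main input, combining the three fields above; the URL is a placeholder and would be omitted when configuring a direct upload.

const mainInput = {
  url: "https://example.com/video.mp4", // omit for direct uploads
  generated_subtitles: [
    {
      name: "English (generated)",
      passthrough: "subtitle-track-1", // arbitrary metadata, max 255 characters
      language_code: "en",             // the language to generate subtitles in
    },
  ],
};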
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
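Putting the clipping parameters together, a clip input might look like the following sketch; the source asset ID is a placeholder.

const sourceAssetId = "YOUR_ASSET_ID"; // placeholder Asset Identifier

const clipInput = {
  url: `mux://assets/${sourceAssetId}`,
  start_time: 60, // seconds from the beginning of the source video
  end_time: 90,   // defaults to the source video's duration when omitted
};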
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type of text and text_type set to subtitles.
Only set once the upload is in the asset_created
state.
If the upload URL will be used in a browser, you must specify the origin in order for the signed URL to have the correct CORS headers.
The URL to upload the associated source media to.
Indicates if this is a test Direct Upload, in which case the Asset that gets created will be a test
Asset.
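As a usage sketch, once the upload object's url has been issued, the source file is sent to it with a plain HTTP PUT. When uploading from a browser, the page's origin must match the cors_origin supplied when the upload was created.

// uploadUrl is the `url` field of the direct upload object;
// `file` is a browser File/Blob (or a Blob in Node 18+).
async function putToDirectUpload(uploadUrl: string, file: Blob): Promise<void> {
  const res = await fetch(uploadUrl, { method: "PUT", body: file });
  if (!res.ok) throw new Error(`Upload failed with status ${res.status}`);
}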
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts for the webhook event
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
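A minimal receiver sketch for these webhook events, assuming an Express server; the route path is arbitrary, and responding with a non-2xx status would count against the retry attempts described above.

import express from "express";

const app = express();
app.use(express.json());

app.post("/mux/webhooks", (req, res) => {
  const { type, data } = req.body; // event shape as described above
  if (type === "video.upload.cancelled") {
    console.log(`Direct upload ${data.id} was cancelled`);
  }
  res.sendStatus(200); // acknowledge so the event is not retried
});

app.listen(3000);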
Upload has been canceled. This event fires after hitting the cancel direct upload API.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Direct Upload.
Max time in seconds for the signed upload URL to be valid. If a successful upload has not occurred before the timeout limit, the direct upload is marked timed_out
Unique identifier for the Asset. Max 255 characters.
Time the Asset was created, defined as a Unix timestamp (seconds since epoch).
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier
instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier
your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p
.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height
, for example 16:9
.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public
playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed
playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}
. See Secure video playback for details about creating tokens.
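A small helper sketch for turning a playback ID object into a streamable HLS URL. The .m3u8 extension and token query parameter follow the URL forms above; creating the token itself is covered in Secure video playback.

type PlaybackId = { id: string; policy: "public" | "signed" };

function hlsUrl(playbackId: PlaybackId, token?: string): string {
  const base = `https://stream.mux.com/${playbackId.id}.m3u8`;
  if (playbackId.policy === "signed") {
    if (!token) throw new Error("signed playback IDs require a token");
    return `${base}?token=${token}`;
  }
  return base;
}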
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text
type tracks. This field is optional and may not be set. The top level duration
field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video
type track.
The maximum height in pixels available for the track. Only set for the video
type track.
The maximum frame rate available for the track. Only set for the video
type track. This field may return -1
if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio
type track.
Only set for the audio
type track.
This parameter is only set for text
type tracks.
The source of the text contained in a Track of type text
. Valid text_source
values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text
or audio
track with this value. For example, the value should be "English" for a subtitle text track for the language_code
value of en-US
. This parameter is only set for text
and audio
track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text
type tracks. Max 255 characters.
The status of the track. This parameter is only set for text
type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active
and not in idle
state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL expires.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
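A sketch of requesting temporary master access so that this object appears on the asset. The endpoint path follows the update-master-access operation; the asset ID and credentials are placeholders.

import { Buffer } from "node:buffer";

const auth = Buffer.from(
  `${process.env.MUX_TOKEN_ID}:${process.env.MUX_TOKEN_SECRET}`
).toString("base64");
const assetId = "YOUR_ASSET_ID"; // placeholder

await fetch(`https://api.mux.com/video/v1/assets/${assetId}/master-access`, {
  method: "PUT",
  headers: { Authorization: `Basic ${auth}`, "Content-Type": "application/json" },
  body: JSON.stringify({ master_access: "temporary" }),
});
// Once prepared, asset.master.url holds the MP4 link, valid for 24 hours.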
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The time at which the recording for the live stream started. The time value is Unix epoch time represented in ISO 8601 format.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content
for normal stream content or slate
for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, the input file encoded with hevc
video codec is non-standard and the value of this parameter is hevc
.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also known as Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1
frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason when the input file in created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
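For reference, the reasons above map onto an object roughly like the following sketch. The field names here are inferred from the descriptions and should be checked against the API schema before relying on them.

type NonStandardInputReasons = {
  video_codec?: string;      // e.g. "hevc"
  audio_codec?: string;      // a non-AAC codec name
  video_gop_size?: string;   // "high" when the GOP exceeds 20 seconds
  video_frame_rate?: string; // fps below 5, above 120, or "-1" if unknown
  video_resolution?: string; // "width x height" above 2048 px on a side
  video_bitrate?: string;    // "high" when a GOP averages above ~16 Mbps
  pixel_aspect_ratio?: string;
  video_edit_list?: string;  // complex video Edit Decision List
  audio_edit_list?: string;  // complex audio Edit Decision List
  unexpected_media_file_parameters?: string; // catch-all
};

function logNonStandardReasons(reasons?: NonStandardInputReasons): void {
  if (!reasons) {
    console.log("input qualifies as standard"); // object absent on standard inputs
    return;
  }
  for (const [reason, value] of Object.entries(reasons)) {
    console.log(`${reason}: ${value}`);
  }
}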
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
For clips, the URL uses the mux://assets/{asset_id} template, where asset_id is the Asset Identifier of the asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"
The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type of text and text_type set to subtitles.
Only set once the upload is in the asset_created
state.
If the upload URL will be used in a browser, you must specify the origin in order for the signed URL to have the correct CORS headers.
The URL to upload the associated source media to.
Indicates if this is a test Direct Upload, in which case the Asset that gets created will be a test
Asset.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts for the webhook event
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
Upload has been created. This event fires after creating a direct upload.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Direct Upload.
Max time in seconds for the signed upload URL to be valid. If a successful upload has not occurred before the timeout limit, the direct upload is marked timed_out
Unique identifier for the Asset. Max 255 characters.
Time the Asset was created, defined as a Unix timestamp (seconds since epoch).
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier
instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier
your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p
.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height
, for example 16:9
.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public
playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed
playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}
. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text
type tracks. This field is optional and may not be set. The top level duration
field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video
type track.
The maximum height in pixels available for the track. Only set for the video
type track.
The maximum frame rate available for the track. Only set for the video
type track. This field may return -1
if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio
type track.
Only set for the audio
type track.
This parameter is only set for text
type tracks.
The source of the text contained in a Track of type text
. Valid text_source
values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text
or audio
track with this value. For example, the value should be "English" for a subtitle text track for the language_code
value of en-US
. This parameter is only set for text
and audio
track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text
type tracks. Max 255 characters.
The status of the track. This parameter is only set for text
type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active
and not in idle
state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL expires.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The time at which the recording for the live stream started. The time value is Unix epoch time represented in ISO 8601 format.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content
for normal stream content or slate
for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, the input file encoded with hevc
video codec is non-standard and the value of this parameter is hevc
.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also known as Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1
frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
For clips, the URL uses the mux://assets/{asset_id} template, where asset_id is the Asset Identifier of the asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"
The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type of text and text_type set to subtitles.
Only set once the upload is in the asset_created
state.
If the upload URL will be used in a browser, you must specify the origin in order for the signed URL to have the correct CORS headers.
The URL to upload the associated source media to.
Indicates if this is a test Direct Upload, in which case the Asset that gets created will be a test
Asset.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts for the webhook event
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
Upload has encountered an error. This event fires when the asset created by the direct upload fails. Most commonly this happens when an end-user uploads a non-video file.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Direct Upload.
Max time in seconds for the signed upload URL to be valid. If a successful upload has not occurred before the timeout limit, the direct upload is marked timed_out
Unique identifier for the Asset. Max 255 characters.
Time the Asset was created, defined as a Unix timestamp (seconds since epoch).
The status of the asset.
The duration of the asset in seconds (max duration for a single asset is 12 hours).
This field is deprecated. Please use resolution_tier
instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.
Max resolution tier can be used to control the maximum resolution_tier
your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p
.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.
The aspect ratio of the asset in the form of width:height
, for example 16:9
.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public
playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed
playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}
. See Secure video playback for details about creating tokens.
The individual media tracks that make up an asset.
Unique identifier for the Track
The type of track
The duration in seconds of the track media. This parameter is not set for text
type tracks. This field is optional and may not be set. The top level duration
field of an asset will always be set.
The maximum width in pixels available for the track. Only set for the video
type track.
The maximum height in pixels available for the track. Only set for the video
type track.
The maximum frame rate available for the track. Only set for the video
type track. This field may return -1
if the frame rate of the input cannot be reliably determined.
The maximum number of audio channels the track supports. Only set for the audio
type track.
Only set for the audio
type track.
This parameter is only set for text
type tracks.
The source of the text contained in a Track of type text
. Valid text_source
values are listed below.
uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
The language code value is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.
The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text
or audio
track with this value. For example, the value should be "English" for a subtitle text track for the language_code
value of en-US
. This parameter is only set for text
and audio
track types.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.
Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text
type tracks. Max 255 characters.
The status of the track. This parameter is only set for text
type tracks.
For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.
Object that describes any errors that happened when processing this asset.
The type of error that occurred for this asset.
Error messages with more details.
Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.
Indicates whether the live stream that created this asset is currently active
and not in idle
state. This is an optional parameter added when the asset is created from a live stream.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.
An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL expires.
The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.
Asset Identifier of the video used as the source for creating the clip.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.
Indicates the status of downloadable MP4 versions of this asset.
Array of file objects.
Extension of the static rendition file
The height of the static rendition's file in pixels
The width of the static rendition's file in pixels
The bitrate in bits per second
The file size in bytes
An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.
The time at which the recording for the live stream started. The time value is Unix epoch time represented in ISO 8601 format.
The duration of the live stream recorded. The time value is in seconds.
The type of media represented by the recording session, either content
for normal stream content or slate
for slate media inserted during stream interruptions.
An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
The video codec used on the input file. For example, the input file encoded with hevc
video codec is non-standard and the value of this parameter is hevc
.
The audio codec used on the input file. Non-AAC audio codecs are non-standard.
The video key frame interval (also known as Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.
The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1
frame rate value indicates Mux could not determine the frame rate of the video track.
The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.
The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.
The video pixel aspect ratio of the input file.
Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.
Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.
A catch-all reason when the input file is created with non-standard encoding parameters.
The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
The type of ingest used to create the asset.
An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
For clips, the URL uses the mux://assets/{asset_id} template, where asset_id is the Asset Identifier of the asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"
The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type of text and text_type set to subtitles.
Only set once the upload is in the asset_created
state.
If the upload URL will be used in a browser, you must specify the origin in order for the signed URL to have the correct CORS headers.
The URL to upload the associated source media to.
Indicates if this is a test Direct Upload, in which case the Asset that gets created will be a test
Asset.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of attempts for the webhook event
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
A new live stream has been created. Broadcasters with a stream_key
can start sending encoder feed to this live stream.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Live Stream. Max 255 characters.
Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials, anyone with this stream key can begin streaming.
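As a sketch, an encoder's ingest URL is built by appending the stream key to Mux's RTMP endpoint; the hostname below is the commonly documented global ingest endpoint and should be confirmed against the live streaming guide.

// Treat the stream key like a password: build this URL server-side only.
function rtmpIngestUrl(streamKey: string): string {
  return `rtmps://global-live.mux.com:443/app/${streamKey}`;
}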
The Asset that is currently being created if there is an active broadcast.
An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.
idle indicates that there is no active broadcast. active indicates that there is an active broadcast. disabled indicates that no future RTMP streams can be published.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public
playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed
playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}
. See Secure video playback for details about creating tokens.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
For clips, the URL uses the mux://assets/{asset_id} template, where asset_id is the Asset Identifier of the asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"
The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
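To illustrate how these overlay fields fit together, here is a minimal sketch of creating a watermarked asset, assuming the standard Mux asset-creation endpoint and Basic auth with an API token. All URLs, credentials, and values are placeholders:

```typescript
// Placeholder credentials; a real integration would keep these server-side.
const MUX_TOKEN_ID = process.env.MUX_TOKEN_ID!;
const MUX_TOKEN_SECRET = process.env.MUX_TOKEN_SECRET!;

const createAssetBody = {
  input: [
    { url: "https://example.com/video.mp4" }, // main input
    {
      // watermark image; must stay reachable for the video's lifespan
      url: "https://example.com/logo.png",
      overlay_settings: {
        vertical_align: "bottom",
        vertical_margin: "5%",
        horizontal_align: "right",
        horizontal_margin: "5%",
        width: "10%", // height scales proportionally when omitted
        opacity: "80%",
      },
    },
  ],
  playback_policy: ["public"],
};

await fetch("https://api.mux.com/video/v1/assets", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization:
      "Basic " +
      Buffer.from(`${MUX_TOKEN_ID}:${MUX_TOKEN_SECRET}`).toString("base64"),
  },
  body: JSON.stringify(createAssetBody),
});
```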
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
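A minimal sketch of a request body that asks Mux to auto-generate subtitles for the main input, using the parameters just described; the URL and passthrough value are placeholders:

```typescript
// Request body for asset creation with ASR-generated subtitles.
const createAssetBody = {
  input: [
    {
      url: "https://example.com/video.mp4",
      generated_subtitles: [
        {
          name: "English (generated)",
          language_code: "en",
          passthrough: "subtitle-track-1", // your own metadata, max 255 chars
        },
      ],
    },
  ],
  playback_policy: ["public"],
};
```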
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
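For example, a sketch of creating a 30-second clip from an existing asset using the mux://assets/{asset_id} input template; the asset ID is a placeholder:

```typescript
const existingAssetId = "EXISTING_ASSET_ID"; // placeholder
const createClipBody = {
  input: [
    {
      url: `mux://assets/${existingAssetId}`,
      start_time: 10, // seconds from the beginning of the source asset
      end_time: 40,   // defaults to the source duration when omitted
    },
  ],
  playback_policy: ["public"],
};
```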
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.
Specify the level of support (if any) for mp4 playback. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
Specify the level of support (if any) for master access. Master access can be enabled temporarily so that your asset can be downloaded. See the Download your videos guide for more information.
Marks the asset as a test asset when the value is set to true. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets you can create. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
Max resolution tier can be used to control the maximum resolution_tier
your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p
.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
An array of playback policy names that you want applied to this asset and available through playback_ids. Options include: "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array when only one playback policy applies.
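Pulling the asset-level options above together, here is an illustrative settings object; the values shown are examples only, and the accepted enum values should be checked against the current API reference:

```typescript
// Sketch: asset-level options described in the fields above.
const assetSettings = {
  input: "https://example.com/video.mp4", // string shortcut for a single input
  playback_policy: "public",              // single string instead of an array
  passthrough: "my-internal-video-id",
  mp4_support: "standard",                // enables mp4 downloads
  normalize_audio: true,                  // on-demand assets only
  max_resolution_tier: "1080p",
  test: true,                             // watermarked, 10s max, deleted after 24h
};
```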
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting subtitles and closed captions.
When creating clips from existing Mux assets, the URL uses the mux://assets/{asset_id} template, where asset_id is the identifier of the Asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in url should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"
The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
The live stream only processes the audio track if the value is set to true. Mux drops the video track if one is broadcast.
Describes the embedded closed caption configuration of the incoming live stream.
A name for this live stream closed caption track.
Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.
The language of the closed caption stream. Value must be BCP 47 compliant.
CEA-608 caption channel to read data from.
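A minimal sketch of live stream settings that pass through embedded CEA-608 captions, using the fields above; the channel, names, and metadata are illustrative placeholders:

```typescript
// Live stream creation body with embedded closed captions configured.
const createLiveStreamBody = {
  playback_policy: ["public"],
  new_asset_settings: { playback_policy: ["public"] },
  embedded_subtitles: [
    {
      name: "English CC",
      language_code: "en",     // must be BCP 47 compliant
      language_channel: "cc1", // CEA-608 caption channel to read from
      passthrough: "embedded-cc",
    },
  ],
};
```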
Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles
configured will automatically receive two text tracks. The first of these will have a text_source
value of generated_live
, and will be available with ready
status as soon as the stream is live. The second text track will have a text_source
value of generated_live_final
and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final
tracks will not be available in ready
status until the live stream ends. If an Asset has both generated_live
and generated_live_final
tracks that are ready
, then only the generated_live_final
track will be included during playback.
A name for this live stream subtitle track.
Arbitrary metadata set for the live stream subtitle track. Max 255 characters.
The language to generate subtitles in.
Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
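As a sketch, generated subtitles on a live stream might be configured like this; the Transcription Vocabulary IDs are placeholders for vocabularies you have already created:

```typescript
// Live stream settings with ASR-generated subtitles.
const liveStreamSettings = {
  playback_policy: ["public"],
  new_asset_settings: { playback_policy: ["public"] },
  generated_subtitles: [
    {
      name: "English (generated)",
      language_code: "en",
      transcription_vocabulary_ids: ["vocab-id-1", "vocab-id-2"], // placeholders
    },
  ],
};
```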
When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).
If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.
Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency
option.
By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.
The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.
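Following the recommendation above, a Standard Latency stream with a long reconnect window would enable slate; a minimal sketch with placeholder values:

```typescript
// Live stream settings combining the reconnect window and slate options.
const reconnectSettings = {
  playback_policy: ["public"],
  new_asset_settings: { playback_policy: ["public"] },
  reconnect_window: 300,                // seconds; max 1800 (30 minutes)
  use_slate_for_standard_latency: true, // insert slate while disconnected
  reconnect_slate_url: "https://example.com/slate.png", // placeholder image
};
```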
This field is deprecated. Please use latency_mode
instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.
Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third-party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third-party live streaming service, or mid-broadcast. When a simulcast target has this status, it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third-party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
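For illustration, a sketch of attaching a simulcast target to an existing live stream; the endpoint path and Basic auth follow the usual Mux Video API conventions, and the live stream ID, RTMP URL, and stream key are placeholders:

```typescript
// Placeholder Basic auth; keep real credentials server-side.
const auth =
  "Basic " +
  Buffer.from(
    `${process.env.MUX_TOKEN_ID}:${process.env.MUX_TOKEN_SECRET}`
  ).toString("base64");

await fetch(
  "https://api.mux.com/video/v1/live-streams/LIVE_STREAM_ID/simulcast-targets",
  {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: auth },
    body: JSON.stringify({
      url: "rtmp://live.example.com/app", // RTMP hostname + application name
      stream_key: "third-party-stream-key",
      passthrough: "restream-target-1",
    }),
  }
);
```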
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third-party live streaming service, or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to broadcast to this simulcast target will be made for the current live stream asset.
Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting the low latency or reduced latency flags.
True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled after it has been active for 5 minutes, and the recorded asset is deleted after 24 hours.
The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.
Unique key used for encrypting a stream to a Mux SRT endpoint.
The protocol used for the active ingest stream. This is only set when the live stream is active.
Attempts made to deliver the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL for the webhook attempt
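A minimal sketch of a receiver for the live stream lifecycle events documented in this reference; in production you would also verify the webhook signature (see Mux's webhook security docs), and the route path here is a placeholder:

```typescript
import express from "express";

const app = express();
app.use(express.json());

app.post("/mux-webhooks", (req, res) => {
  const event = req.body;
  switch (event.type) {
    case "video.live_stream.connected":
      console.log(`Encoder connected to live stream ${event.data.id}`);
      break;
    case "video.live_stream.recording":
      console.log("First frames processed; show the red dot");
      break;
    case "video.live_stream.active":
      console.log("HLS playback URL is now live");
      break;
  }
  res.sendStatus(200); // non-2xx responses trigger retry attempts
});

app.listen(3000);
```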
An encoder has successfully connected to this live stream.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Live Stream. Max 255 characters.
Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials, anyone with this stream key can begin streaming.
The Asset that is currently being created if there is an active broadcast.
An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.
The status of the live stream. idle indicates that there is no active broadcast, active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public
playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed
playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}
. See Secure video playback for details about creating tokens.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting subtitles and closed captions.
When creating clips from existing Mux assets, the URL uses the mux://assets/{asset_id} template, where asset_id is the identifier of the Asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in url should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"
The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.
Specify the level of support (if any) for mp4 playback. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
Specify the level of support (if any) for master access. Master access can be enabled temporarily so that your asset can be downloaded. See the Download your videos guide for more information.
Marks the asset as a test asset when the value is set to true. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets you can create. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
Max resolution tier can be used to control the maximum resolution_tier
your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p
.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
An array of playback policy names that you want applied to this asset and available through playback_ids. Options include: "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array when only one playback policy applies.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting subtitles and closed captions.
When creating clips from existing Mux assets, the URL uses the mux://assets/{asset_id} template, where asset_id is the identifier of the Asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in url should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"
The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
The live stream only processes the audio track if the value is set to true. Mux drops the video track if one is broadcast.
Describes the embedded closed caption configuration of the incoming live stream.
A name for this live stream closed caption track.
Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.
The language of the closed caption stream. Value must be BCP 47 compliant.
CEA-608 caption channel to read data from.
Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles
configured will automatically receive two text tracks. The first of these will have a text_source
value of generated_live
, and will be available with ready
status as soon as the stream is live. The second text track will have a text_source
value of generated_live_final
and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final
tracks will not be available in ready
status until the live stream ends. If an Asset has both generated_live
and generated_live_final
tracks that are ready
, then only the generated_live_final
track will be included during playback.
A name for this live stream subtitle track.
Arbitrary metadata set for the live stream subtitle track. Max 255 characters.
The language to generate subtitles in.
Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).
If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.
Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency
option.
By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.
The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.
This field is deprecated. Please use latency_mode
instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.
Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third-party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third-party live streaming service, or mid-broadcast. When a simulcast target has this status, it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third-party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third-party live streaming service, or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to broadcast to this simulcast target will be made for the current live stream asset.
Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting the low latency or reduced latency flags.
True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled after it has been active for 5 minutes, and the recorded asset is deleted after 24 hours.
The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.
Unique key used for encrypting a stream to a Mux SRT endpoint.
The protocol used for the active ingest stream. This is only set when the live stream is active.
Attempts made to deliver the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL for the webhook attempt
Recording on this live stream has started. Mux has successfully processed the first frames from the encoder. If you show a red dot icon in your UI, this would be a good time to show it.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Live Stream. Max 255 characters.
Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials, anyone with this stream key can begin streaming.
The Asset that is currently being created if there is an active broadcast.
An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.
The status of the live stream. idle indicates that there is no active broadcast, active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public
playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed
playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}
. See Secure video playback for details about creating tokens.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting subtitles and closed captions.
When creating clips from existing Mux assets, the URL uses the mux://assets/{asset_id} template, where asset_id is the identifier of the Asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in url should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"
The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.
Specify the level of support (if any) for mp4 playback. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
Specify the level of support (if any) for master access. Master access can be enabled temporarily so that your asset can be downloaded. See the Download your videos guide for more information.
Marks the asset as a test asset when the value is set to true. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets you can create. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
Max resolution tier can be used to control the maximum resolution_tier
your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p
.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
An array of playback policy names that you want applied to this asset and available through playback_ids. Options include: "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array when only one playback policy applies.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting subtitles and closed captions.
When creating clips from existing Mux assets, the URL uses the mux://assets/{asset_id} template, where asset_id is the identifier of the Asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in url should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"
The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
The live stream only processes the audio track if the value is set to true. Mux drops the video track if one is broadcast.
Describes the embedded closed caption configuration of the incoming live stream.
A name for this live stream closed caption track.
Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.
The language of the closed caption stream. Value must be BCP 47 compliant.
CEA-608 caption channel to read data from.
Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles
configured will automatically receive two text tracks. The first of these will have a text_source
value of generated_live
, and will be available with ready
status as soon as the stream is live. The second text track will have a text_source
value of generated_live_final
and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final
tracks will not be available in ready
status until the live stream ends. If an Asset has both generated_live
and generated_live_final
tracks that are ready
, then only the generated_live_final
track will be included during playback.
A name for this live stream subtitle track.
Arbitrary metadata set for the live stream subtitle track. Max 255 characters.
The language to generate subtitles in.
Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).
If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.
Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency
option.
By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.
The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.
This field is deprecated. Please use latency_mode
instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.
Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third-party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third-party live streaming service, or mid-broadcast. When a simulcast target has this status, it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third-party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third-party live streaming service, or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to broadcast to this simulcast target will be made for the current live stream asset.
Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting the low latency or reduced latency flags.
True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled after it has been active for 5 minutes, and the recorded asset is deleted after 24 hours.
The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.
Unique key used for encrypting a stream to a Mux SRT endpoint.
The protocol used for the active ingest stream. This is only set when the live stream is active.
Attempts made to deliver the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
This live stream is now 'active'. The live stream's playback_id, or the playback_id associated with this live stream's asset, can be used right away to create HLS URLs (https://stream.mux.com/{PLAYBACK_ID}.m3u8) and start streaming in your player. Note that before the live stream is 'active', trying to stream the HLS URL will result in HTTP 412 errors.
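A webhook consumer will typically wait for this event before handing the HLS URL to a player. A minimal Flask-style sketch (the route and handling are illustrative, not a prescribed integration):

```python
from flask import Flask, request

app = Flask(__name__)

@app.post("/mux/webhooks")
def mux_webhook():
    event = request.get_json()
    if event["type"] == "video.live_stream.active":
        playback_ids = event["data"].get("playback_ids", [])
        if playback_ids:
            hls_url = f"https://stream.mux.com/{playback_ids[0]['id']}.m3u8"
            print("stream is live:", hls_url)  # safe to hand to players now; earlier it would 412
    return "", 200  # a 2xx response marks the delivery attempt as successful
```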
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Live Stream. Max 255 characters.
Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.
The Asset that is currently being created if there is an active broadcast.
An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.
idle: indicates that there is no active broadcast.
active: indicates that there is an active broadcast.
disabled: indicates that no future RTMP streams can be published.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public
playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed
playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}
. See Secure video playback for details about creating tokens.
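For signed playback IDs, the token is a JWT signed with one of your URL signing keys. A minimal PyJWT sketch, assuming a signing key already exists; the key ID, key file, and TTL are placeholders, and the claim layout follows the Secure video playback guide:

```python
import time
import jwt  # PyJWT

SIGNING_KEY_ID = "your-signing-key-id"        # placeholder
PRIVATE_KEY = open("signing-key.pem").read()  # RSA private key for the signing key

def signed_playback_url(playback_id: str, ttl: int = 3600) -> str:
    token = jwt.encode(
        {
            "sub": playback_id,             # the signed playback ID
            "aud": "v",                     # 'v' scopes the token to video playback
            "exp": int(time.time()) + ttl,  # expiry, as a Unix timestamp
        },
        PRIVATE_KEY,
        algorithm="RS256",
        headers={"kid": SIGNING_KEY_ID},
    )
    return f"https://stream.mux.com/{playback_id}.m3u8?token={token}"
```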
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
For clips, the URL follows the mux://assets/{asset_id} template, where asset_id is the Asset Identifier of the asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
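Putting the overlay parameters together, a hypothetical input object that watermarks the bottom-right corner of the frame might look like this (the image URL and values are illustrative):

```python
# Illustrative overlay_settings: watermark anchored bottom-right, 10% in from
# each edge, scaled to 15% of the frame width, at 80% opacity.
watermark_input = {
    "url": "https://example.com/watermark.png",  # must stay reachable for the video's lifetime
    "overlay_settings": {
        "vertical_align": "bottom",
        "vertical_margin": "10%",
        "horizontal_align": "right",
        "horizontal_margin": "10%",
        "width": "15%",   # height omitted, so it scales proportionally
        "opacity": "80%",
    },
}
```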
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
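For example, a hypothetical first input object for a direct upload that requests auto-generated English subtitles would omit the url and carry only the generated_subtitles configuration:

```python
# Illustrative first input for a direct upload: the url is omitted, and
# ASR-generated English subtitles are requested for the uploaded file's audio.
first_input = {
    "generated_subtitles": [
        {
            "name": "English (generated)",
            "passthrough": "english-auto",  # arbitrary metadata, max 255 characters
            "language_code": "en",
        }
    ],
}
```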
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
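As a concrete example, the sketch below clips a 30-second segment out of an existing asset, assuming the v1 assets endpoint and token credentials; the source asset ID is a placeholder:

```python
import os
import requests

SOURCE_ASSET_ID = "existing-asset-id"  # placeholder

resp = requests.post(
    "https://api.mux.com/video/v1/assets",
    auth=(os.environ["MUX_TOKEN_ID"], os.environ["MUX_TOKEN_SECRET"]),
    json={
        "input": [
            {
                "url": f"mux://assets/{SOURCE_ASSET_ID}",  # clip template form
                "start_time": 60,  # seconds from the beginning of the source asset
                "end_time": 90,    # omit to run to the end of the source asset
            }
        ],
        "playback_policy": ["public"],
    },
)
resp.raise_for_status()
print(resp.json()["data"]["id"])  # the new clip's asset ID
```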
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.
Specify the level of mp4 playback support (if any). In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
Specify the level of master access support (if any). Master access can be enabled temporarily for your asset to be downloaded. See the Download your videos guide for more information.
Marks the asset as a test asset when the value is set to true. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
Max resolution tier can be used to control the maximum resolution_tier
your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p
.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
An array of playback policy names that you want applied to this asset and available through playback_ids
. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.
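Tying several of these settings together, here is a sketch of an asset-creation request that uses the single-URL input shortcut and the single-string playback policy shortcut (all values are placeholders; option names follow the descriptions above):

```python
import os
import requests

resp = requests.post(
    "https://api.mux.com/video/v1/assets",
    auth=(os.environ["MUX_TOKEN_ID"], os.environ["MUX_TOKEN_SECRET"]),
    json={
        "input": "https://example.com/video.mp4",  # string shortcut for a single input
        "playback_policy": "public",               # string shortcut for a single policy
        "passthrough": "my-internal-id-123",       # your own ID, echoed back in webhooks
        "mp4_support": "standard",                 # enable mp4 renditions for download
        "normalize_audio": True,
        "max_resolution_tier": "1080p",
    },
)
resp.raise_for_status()
```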
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
For clips, the URL follows the mux://assets/{asset_id} template, where asset_id is the Asset Identifier of the asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
The live stream only processes the audio track when the value is set to true. Mux drops the video track if one is broadcast.
Describes the embedded closed caption configuration of the incoming live stream.
A name for this live stream closed caption track.
Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.
The language of the closed caption stream. Value must be BCP 47 compliant.
CEA-608 caption channel to read data from.
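A hypothetical embedded closed-caption configuration, using the fields described above, might look like this:

```python
# Illustrative embedded closed-caption configuration for a live stream,
# reading CEA-608 data from channel cc1 (names and values are placeholders).
embedded_subtitles = [
    {
        "name": "English CC",
        "passthrough": "english-cc",
        "language_code": "en",      # must be BCP 47 compliant
        "language_channel": "cc1",  # CEA-608 caption channel to read data from
    }
]
```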
Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles
configured will automatically receive two text tracks. The first of these will have a text_source
value of generated_live
, and will be available with ready
status as soon as the stream is live. The second text track will have a text_source
value of generated_live_final
and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final
tracks will not be available in ready
status until the live stream ends. If an Asset has both generated_live
and generated_live_final
tracks that are ready
, then only the generated_live_final
track will be included during playback.
A name for this live stream subtitle track.
Arbitrary metadata set for the live stream subtitle track. Max 255 characters.
The language to generate subtitles in.
Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
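For instance, a hypothetical generated_subtitles configuration for a live stream, with one Transcription Vocabulary attached (IDs are placeholders):

```python
# Illustrative generated_subtitles configuration for a live stream, with one
# Transcription Vocabulary attached (the vocabulary ID is a placeholder).
generated_subtitles = [
    {
        "name": "English (generated)",
        "passthrough": "english-auto",
        "language_code": "en",
        "transcription_vocabulary_ids": ["vocabulary-id-1"],
    }
]
```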
When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).
If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.
Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency
option.
By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.
The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.
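Combining the reconnect and slate settings, the sketch below creates a Standard Latency live stream with a two-minute Reconnect Window and a custom slate image, assuming the v1 live-streams endpoint; the slate URL is a placeholder, and reconnect_slate_url is the field name assumed for the slate image option described above:

```python
import os
import requests

resp = requests.post(
    "https://api.mux.com/video/v1/live-streams",
    auth=(os.environ["MUX_TOKEN_ID"], os.environ["MUX_TOKEN_SECRET"]),
    json={
        "playback_policy": ["public"],
        "new_asset_settings": {"playback_policy": ["public"]},
        "reconnect_window": 120,                 # wait up to 2 minutes for a reconnect
        "use_slate_for_standard_latency": True,  # insert slate while waiting, even on Standard Latency
        "reconnect_slate_url": "https://example.com/slate.png",  # assumed field name; placeholder URL
    },
)
resp.raise_for_status()
stream = resp.json()["data"]
print(stream["stream_key"])  # treat this like a credential
```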
This field is deprecated. Please use latency_mode
instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.
Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcast. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting the low latency or reduced latency flags.
True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled after it has been active for 5 minutes, and its recorded asset is deleted after 24 hours.
The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.
Unique key used for encrypting a stream to a Mux SRT endpoint.
The protocol used for the active ingest stream. This is only set when the live stream is active.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
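Because failed deliveries are retried up to the max attempts, webhook handlers should respond quickly with a 2xx and verify the request signature before trusting the payload. A sketch of the HMAC check, assuming the documented t=...,v1=... format of the Mux-Signature header and your webhook signing secret:

```python
import hashlib
import hmac

def verify_mux_signature(raw_body: bytes, signature_header: str, secret: str) -> bool:
    # Header format assumed: "t=<unix timestamp>,v1=<hex digest>".
    parts = dict(p.split("=", 1) for p in signature_header.split(","))
    signed_payload = parts["t"].encode() + b"." + raw_body
    expected = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, parts["v1"])
```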
An encoder has disconnected from this live stream. Note that while disconnected, the live stream still has status: 'active'.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Live Stream. Max 255 characters.
Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.
The Asset that is currently being created if there is an active broadcast.
An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.
idle: indicates that there is no active broadcast.
active: indicates that there is an active broadcast.
disabled: indicates that no future RTMP streams can be published.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public
playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed
playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}
. See Secure video playback for details about creating tokens.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
For clips, the URL follows the mux://assets/{asset_id} template, where asset_id is the Asset Identifier of the asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.
Specify the level of mp4 playback support (if any). In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
Specify the level of master access support (if any). Master access can be enabled temporarily for your asset to be downloaded. See the Download your videos guide for more information.
Marks the asset as a test asset when the value is set to true. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
Max resolution tier can be used to control the maximum resolution_tier
your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p
.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
An array of playback policy names that you want applied to this asset and available through playback_ids
. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
For clips, the URL follows the mux://assets/{asset_id} template, where asset_id is the Asset Identifier of the asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
The live stream only processes the audio track when the value is set to true. Mux drops the video track if one is broadcast.
Describes the embedded closed caption configuration of the incoming live stream.
A name for this live stream closed caption track.
Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.
The language of the closed caption stream. Value must be BCP 47 compliant.
CEA-608 caption channel to read data from.
Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles
configured will automatically receive two text tracks. The first of these will have a text_source
value of generated_live
, and will be available with ready
status as soon as the stream is live. The second text track will have a text_source
value of generated_live_final
and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final
tracks will not be available in ready
status until the live stream ends. If an Asset has both generated_live
and generated_live_final
tracks that are ready
, then only the generated_live_final
track will be included during playback.
A name for this live stream subtitle track.
Arbitrary metadata set for the live stream subtitle track. Max 255 characters.
The language to generate subtitles in.
Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).
If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.
Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency
option.
By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.
The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.
This field is deprecated. Please use latency_mode
instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.
Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcast. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting the low latency or reduced latency flags.
True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled after it has been active for 5 minutes, and its recorded asset is deleted after 24 hours.
The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.
Unique key used for encrypting a stream to a Mux SRT endpoint.
The protocol used for the active ingest stream. This is only set when the live stream is active.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
The reconnect_window
for this live stream has elapsed. The live stream status
will now transition to 'idle'
.
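Taken together, the active, disconnected, and idle events describe a live stream's lifecycle, and a consumer might dispatch on the event type. A small illustrative router (handler names are hypothetical):

```python
def on_active(data):
    print("live:", data["id"])

def on_disconnected(data):
    print("encoder dropped; reconnect window open:", data["id"])

def on_idle(data):
    print("stream finished:", data["id"])

HANDLERS = {
    "video.live_stream.active": on_active,
    "video.live_stream.disconnected": on_disconnected,
    "video.live_stream.idle": on_idle,
}

def handle_event(event: dict) -> None:
    handler = HANDLERS.get(event["type"])
    if handler is not None:
        handler(event["data"])
```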
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Live Stream. Max 255 characters.
Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.
The Asset that is currently being created if there is an active broadcast.
An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.
idle: indicates that there is no active broadcast.
active: indicates that there is an active broadcast.
disabled: indicates that no future RTMP streams can be published.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public
playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed
playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}
. See Secure video playback for details about creating tokens.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
For clips, the URL follows the mux://assets/{asset_id} template, where asset_id is the Asset Identifier of the asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.
Specify the level of mp4 playback support (if any). In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
Specify the level of master access support (if any). Master access can be enabled temporarily for your asset to be downloaded. See the Download your videos guide for more information.
Marks the asset as a test asset when the value is set to true. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
Max resolution tier can be used to control the maximum resolution_tier
your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p
.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
An array of playback policy names that you want applied to this asset and available through playback_ids
. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
For clips, the URL follows the mux://assets/{asset_id} template, where asset_id is the Asset Identifier of the asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
The live stream only processes the audio track when the value is set to true. Mux drops the video track if one is broadcast.
Describes the embedded closed caption configuration of the incoming live stream.
A name for this live stream closed caption track.
Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.
The language of the closed caption stream. Value must be BCP 47 compliant.
CEA-608 caption channel to read data from.
Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles
configured will automatically receive two text tracks. The first of these will have a text_source
value of generated_live
, and will be available with ready
status as soon as the stream is live. The second text track will have a text_source
value of generated_live_final
and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final
tracks will not be available in ready
status until the live stream ends. If an Asset has both generated_live
and generated_live_final
tracks that are ready
, then only the generated_live_final
track will be included during playback.
A name for this live stream subtitle track.
Arbitrary metadata set for the live stream subtitle track. Max 255 characters.
The language to generate subtitles in.
Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).
If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.
Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency
option.
By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.
The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.
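Combining the reconnect and slate options above, a hedged sketch of a Standard Latency stream with a long Reconnect Window; the slate URL field name (reconnect_slate_url) and image URL are assumptions/placeholders.

```ts
// Sketch: Standard Latency stream with a 5-minute reconnect window and slate.
const liveStreamBody = {
  playback_policy: ["public"],
  new_asset_settings: { playback_policy: ["public"] },
  reconnect_window: 300,                // seconds; max 1800 (30 minutes)
  use_slate_for_standard_latency: true, // recommended when the window exceeds 60s
  reconnect_slate_url: "https://example.com/slate.png", // placeholder image URL
};
// POST to https://api.mux.com/video/v1/live-streams as in the earlier sketch.
```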
This field is deprecated. Please use latency_mode
instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.
Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in disconnected status, simulcast targets will be in idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
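A sketch of creating a simulcast target with the url and stream key fields described above; the live stream ID, RTMP URL, and stream key are placeholders.

```ts
// Sketch: restream an existing live stream to a third-party RTMP service.
// LIVE_STREAM_ID, the RTMP URL, and the stream key are placeholders.
const auth = Buffer.from(
  `${process.env.MUX_TOKEN_ID}:${process.env.MUX_TOKEN_SECRET}`,
).toString("base64");

const res = await fetch(
  "https://api.mux.com/video/v1/live-streams/LIVE_STREAM_ID/simulcast-targets",
  {
    method: "POST",
    headers: { Authorization: `Basic ${auth}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      url: "rtmp://live.example.com/app", // RTMP hostname including the application name
      stream_key: "THIRD_PARTY_STREAM_KEY",
      passthrough: "restream-target-1",
    }),
  },
);
console.log((await res.json()).data.status); // "idle" until the parent stream connects
```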
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.
True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled after it has been active for 5 minutes, and its recorded asset is deleted after 24 hours.
The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.
Unique key used for encrypting a stream to a Mux SRT endpoint.
The protocol used for the active ingest stream. This is only set when the live stream is active.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
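For reference, the attempt fields above can be summarized as an illustrative TypeScript shape; the property names are inferred from the descriptions here and should be checked against a real webhook payload.

```ts
// Illustrative shape for one entry in a webhook event's attempts array.
// Property names are inferred from the field descriptions above.
interface WebhookAttempt {
  id: string;                               // unique identifier for the attempt
  webhook_id: number;                       // unique identifier for the webhook
  response_status_code: number;             // HTTP status of the delivery attempt
  response_headers: Record<string, string>; // HTTP response headers
  response_body: string | null;             // HTTP response body
  max_attempts: number;                     // maximum number of delivery attempts
  created_at: string;                       // time the request was attempted
  address: string;                          // URL the event was delivered to
}
```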
This live stream has been updated. For example, after resetting the live stream's stream key.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Live Stream. Max 255 characters.
Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.
The Asset that is currently being created if there is an active broadcast.
An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.
idle indicates that there is no active broadcast. active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public
playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed
playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}
. See Secure video playback for details about creating tokens.
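To make the signed policy concrete, here is a sketch of minting a playback token with the jsonwebtoken package. It assumes a Mux URL signing key pair (key ID plus base64-encoded RSA private key in environment variables); the claims (sub, aud set to "v", exp) follow Mux's signed playback scheme, but treat the snippet as illustrative rather than normative.

```ts
import jwt from "jsonwebtoken";

// Assumed: a Mux URL signing key ID and its base64-encoded RSA private key.
const keyId = process.env.MUX_SIGNING_KEY_ID!;
const privateKey = Buffer.from(
  process.env.MUX_PRIVATE_KEY_BASE64!,
  "base64",
).toString("ascii");

function signedPlaybackUrl(playbackId: string): string {
  const token = jwt.sign(
    {
      sub: playbackId,                           // the signed playback ID
      aud: "v",                                  // "v" = video playback
      exp: Math.floor(Date.now() / 1000) + 3600, // token valid for one hour
    },
    privateKey,
    { algorithm: "RS256", keyid: keyId },
  );
  return `https://stream.mux.com/${playbackId}.m3u8?token=${token}`;
}
```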
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
For clipping from an existing Mux asset, the URL follows the mux://assets/{asset_id} template, where asset_id is the Asset Identifier for creating the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
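A hedged sketch of the overlay settings above: the watermark image is supplied as its own input object alongside the video, and the URLs shown are placeholders.

```ts
// Sketch: watermark the bottom-right corner at 20% width and 80% opacity.
const body = {
  input: [
    { url: "https://example.com/video.mp4" },
    {
      url: "https://example.com/watermark.png", // the overlay image input
      overlay_settings: {
        vertical_align: "bottom",
        vertical_margin: "5%",
        horizontal_align: "right",
        horizontal_margin: "5%",
        width: "20%",   // height scales proportionally when omitted
        opacity: "80%",
      },
    },
  ],
  playback_policy: ["public"],
};
// POST this body to https://api.mux.com/video/v1/assets as in the earlier sketch.
```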
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.
Specify what level of mp4 playback support (if any) should be provided. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
Specify what level of master access support (if any) should be provided. Master access can be enabled temporarily so that your asset can be downloaded. See the Download your videos guide for more information.
Marks the asset as a test asset when the value is set to true. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
Max resolution tier can be used to control the maximum resolution_tier
your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p
.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
An array of playback policy names that you want applied to this asset and available through playback_ids
. Options include: "public"
(anyone with the playback URL can stream the asset). And "signed"
(an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.
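Pulling the asset-level options above into one place, a hedged sketch using the single-string playback_policy shortcut; the values shown are illustrative examples, not defaults.

```ts
// Sketch: asset-level options, using the single-string playback_policy shortcut.
const body = {
  input: "https://example.com/video.mp4", // string shortcut for a single input
  playback_policy: "public",              // shortcut for ["public"]
  passthrough: "my-video-id-123",         // your own ID, echoed back in webhooks
  mp4_support: "standard",
  normalize_audio: true,                  // on-demand assets only
  max_resolution_tier: "2160p",
};
// POST this body to https://api.mux.com/video/v1/assets as in the earlier sketch.
```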
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
For clipping from an existing Mux asset, the URL follows the mux://assets/{asset_id} template, where asset_id is the Asset Identifier for creating the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
The live stream only processes the audio track if the value is set to true. Mux drops the video track if one is broadcast.
Describes the embedded closed caption configuration of the incoming live stream.
A name for this live stream closed caption track.
Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.
The language of the closed caption stream. Value must be BCP 47 compliant.
CEA-608 caption channel to read data from.
Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles
configured will automatically receive two text tracks. The first of these will have a text_source
value of generated_live
, and will be available with ready
status as soon as the stream is live. The second text track will have a text_source
value of generated_live_final
and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final
tracks will not be available in ready
status until the live stream ends. If an Asset has both generated_live
and generated_live_final
tracks that are ready
, then only the generated_live_final
track will be included during playback.
A name for this live stream subtitle track.
Arbitrary metadata set for the live stream subtitle track. Max 255 characters.
The language to generate subtitles in.
Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).
If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.
Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency
option.
By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.
The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.
This field is deprecated. Please use latency_mode
instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.
Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in disconnected status, simulcast targets will be in idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.
True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled after it has been active for 5 minutes, and its recorded asset is deleted after 24 hours.
The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.
Unique key used for encrypting a stream to a Mux SRT endpoint.
The protocol used for the active ingest stream. This is only set when the live stream is active.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
This live stream has been enabled. This event fires after the Enable Live Stream API is called.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Live Stream. Max 255 characters.
Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.
The Asset that is currently being created if there is an active broadcast.
An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.
idle indicates that there is no active broadcast. active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public
playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed
playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}
. See Secure video playback for details about creating tokens.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
For clipping from an existing Mux asset, the URL follows the mux://assets/{asset_id} template, where asset_id is the Asset Identifier for creating the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.
Specify what level of mp4 playback support (if any) should be provided. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
Specify what level of master access support (if any) should be provided. Master access can be enabled temporarily so that your asset can be downloaded. See the Download your videos guide for more information.
Marks the asset as a test asset when the value is set to true. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
Max resolution tier can be used to control the maximum resolution_tier
your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p
.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
An array of playback policy names that you want applied to this asset and available through playback_ids
. Options include: "public"
(anyone with the playback URL can stream the asset). And "signed"
(an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
For clipping from an existing Mux asset, the URL follows the mux://assets/{asset_id} template, where asset_id is the Asset Identifier for creating the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
The live stream only processes the audio track if the value is set to true. Mux drops the video track if one is broadcast.
Describes the embedded closed caption configuration of the incoming live stream.
A name for this live stream closed caption track.
Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.
The language of the closed caption stream. Value must be BCP 47 compliant.
CEA-608 caption channel to read data from.
Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles
configured will automatically receive two text tracks. The first of these will have a text_source
value of generated_live
, and will be available with ready
status as soon as the stream is live. The second text track will have a text_source
value of generated_live_final
and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final
tracks will not be available in ready
status until the live stream ends. If an Asset has both generated_live
and generated_live_final
tracks that are ready
, then only the generated_live_final
track will be included during playback.
A name for this live stream subtitle track.
Arbitrary metadata set for the live stream subtitle track. Max 255 characters.
The language to generate subtitles in.
Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).
If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.
Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency
option.
By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.
The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.
This field is deprecated. Please use latency_mode
instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.
Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in disconnected status, simulcast targets will be in idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.
True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled after it has been active for 5 minutes, and its recorded asset is deleted after 24 hours.
The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.
Unique key used for encrypting a stream to a Mux SRT endpoint.
The protocol used for the active ingest stream. This is only set when the live stream is active.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
This live stream has been disabled. This event fires after the Disable Live Stream API is called. Disabled live streams will no longer accept new RTMP connections.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Live Stream. Max 255 characters.
Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.
The Asset that is currently being created if there is an active broadcast.
An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.
idle indicates that there is no active broadcast. active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public
playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed
playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}
. See Secure video playback for details about creating tokens.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
For clipping from an existing Mux asset, the URL follows the mux://assets/{asset_id} template, where asset_id is the Asset Identifier for creating the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.
Specify what level of mp4 playback support (if any) should be provided. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
Specify what level of master access support (if any) should be provided. Master access can be enabled temporarily so that your asset can be downloaded. See the Download your videos guide for more information.
Marks the asset as a test asset when the value is set to true. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
Max resolution tier can be used to control the maximum resolution_tier
your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p
.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
An array of playback policy names that you want applied to this asset and available through playback_ids
. Options include: "public"
(anyone with the playback URL can stream the asset). And "signed"
(an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url
for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
For clipping from an existing Mux asset, the URL follows the mux://assets/{asset_id} template, where asset_id is the Asset Identifier for creating the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
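For example, a watermark is specified by adding a second input for the image and attaching overlay_settings to it. The sketch below places a logo in the bottom-right corner; the URLs are placeholders.

```typescript
// Illustrative request body: watermark the video with a logo image.
const body = {
  input: [
    { url: "https://example.com/video.mp4" },
    {
      url: "https://example.com/logo.png", // image referenced by overlay_settings
      overlay_settings: {
        vertical_align: "bottom",
        vertical_margin: "5%",
        horizontal_align: "right",
        horizontal_margin: "5%",
        width: "15%",   // height scales proportionally when omitted
        opacity: "80%",
      },
    },
  ],
  playback_policy: ["public"],
};
```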
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
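A sketch of requesting auto-generated English subtitles for an on-demand asset follows; the name and passthrough values are arbitrary examples.

```typescript
// Illustrative request body: generate English subtitles for the main input.
const body = {
  input: [
    {
      url: "https://example.com/video.mp4",
      generated_subtitles: [
        { name: "English (auto)", language_code: "en", passthrough: "subs-en" },
      ],
    },
  ],
  playback_policy: ["public"],
};
```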
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
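For instance, the body below clips seconds 10 through 40 of an existing asset; EXISTING_ASSET_ID is a placeholder.

```typescript
// Illustrative request body: a 30-second clip of an existing asset.
const body = {
  input: [
    {
      url: "mux://assets/EXISTING_ASSET_ID",
      start_time: 10, // clip starts 10s into the source asset
      end_time: 40,   // clip ends 40s into the source asset
    },
  ],
  playback_policy: ["public"],
};
```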
This parameter is required for text
type tracks.
Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
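Putting the track parameters above together, here is a sketch of ingesting a video along with an English SRT subtitle file; the URLs are placeholders.

```typescript
// Illustrative request body: video plus a sidecar English subtitle track.
const body = {
  input: [
    { url: "https://example.com/video.mp4" },
    {
      url: "https://example.com/subtitles-en.srt",
      type: "text",
      text_type: "subtitles",
      language_code: "en",    // BCP 47 compliant
      name: "English",        // associated with the track in the HLS manifest
      closed_captions: false, // true for closed captions, false for subtitles
    },
  ],
  playback_policy: ["public"],
};
```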
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
The live stream only processes the audio track when this value is set to true. Mux drops the video track if one is broadcast.
Describes the embedded closed caption configuration of the incoming live stream.
A name for this live stream closed caption track.
Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.
The language of the closed caption stream. Value must be BCP 47 compliant.
CEA-608 caption channel to read data from.
Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles
configured will automatically receive two text tracks. The first of these will have a text_source
value of generated_live
, and will be available with ready
status as soon as the stream is live. The second text track will have a text_source
value of generated_live_final
and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final
tracks will not be available in ready
status until the live stream ends. If an Asset has both generated_live
and generated_live_final
tracks that are ready
, then only the generated_live_final
track will be included during playback.
A name for this live stream subtitle track.
Arbitrary metadata set for the live stream subtitle track. Max 255 characters.
The language to generate subtitles in.
Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
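As a sketch, a live stream with auto-generated English subtitles can be created as below; VOCABULARY_ID is a placeholder, and `auth` is the Basic credential shown in the asset-creation example above.

```typescript
// Minimal sketch: create a live stream with auto-generated English subtitles.
const res = await fetch("https://api.mux.com/video/v1/live-streams", {
  method: "POST",
  headers: { Authorization: `Basic ${auth}`, "Content-Type": "application/json" },
  body: JSON.stringify({
    playback_policy: ["public"],
    new_asset_settings: { playback_policy: ["public"] },
    generated_subtitles: [
      {
        name: "English (auto)",
        language_code: "en",
        transcription_vocabulary_ids: ["VOCABULARY_ID"],
      },
    ],
  }),
});
```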
When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).
If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.
Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency
option.
By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.
The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.
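The fragment below sketches how these settings combine: a standard-latency stream with a 5-minute reconnect window, slate insertion enabled, and a custom slate image (placeholder URL).

```typescript
// Illustrative request body for creating a live stream with reconnect slate.
const body = {
  playback_policy: ["public"],
  new_asset_settings: { playback_policy: ["public"] },
  latency_mode: "standard",
  reconnect_window: 300,                // seconds; max 1800
  use_slate_for_standard_latency: true, // recommended when the window exceeds 60s
  reconnect_slate_url: "https://example.com/be-right-back.png",
};
```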
This field is deprecated. Please use latency_mode
instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.
Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
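As a sketch, a simulcast target is created on an existing live stream by posting the target's ingest URL and stream key; LIVE_STREAM_ID and the RTMP details are placeholders, and `auth` is the Basic credential from the earlier example.

```typescript
// Minimal sketch: add a simulcast target to an existing live stream.
const res = await fetch(
  "https://api.mux.com/video/v1/live-streams/LIVE_STREAM_ID/simulcast-targets",
  {
    method: "POST",
    headers: { Authorization: `Basic ${auth}`, "Content-Type": "application/json" },
    body: JSON.stringify({
      url: "rtmp://live.example.com/app",   // third-party RTMP hostname + application
      stream_key: "THIRD_PARTY_STREAM_KEY", // issued by the third-party service
      passthrough: "restream-target-1",
    }),
  }
);
```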
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.
True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled once it has been active for 5 minutes, and its recorded asset is deleted after 24 hours.
The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.
Unique key used for encrypting a stream to a Mux SRT endpoint.
The protocol used for the active ingest stream. This is only set when the live stream is active.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
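For reference, the attempt fields above can be modeled roughly as follows; this is an illustrative shape derived from the descriptions here, not a normative schema.

```typescript
// Illustrative shape for a webhook delivery attempt.
interface WebhookAttempt {
  webhook_id: number;                       // identifier of the webhook configuration
  response_status_code: number;             // HTTP status your endpoint returned
  response_headers: Record<string, string>; // HTTP response headers
  response_body: string | null;             // HTTP response body
  max_attempts: number;                     // delivery attempts before giving up
  id: string;                               // identifier of this attempt
  created_at: string;                       // time the request was attempted
  address: string;                          // URL the event was sent to
}
```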
This event fires after deleting a live stream
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Live Stream. Max 255 characters.
Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.
The Asset that is currently being created if there is an active broadcast.
An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.
idle
indicates that there is no active broadcast. active
indicates that there is an active broadcast and disabled
status indicates that no future RTMP streams can be published.
An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.
Unique identifier for the PlaybackID
public
playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}
signed
playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}
. See Secure video playback for details about creating tokens.
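As a sketch of the signed flow, a playback token is a short-lived JWT signed with a Mux signing key. The example below uses the jsonwebtoken package; the environment variable names are assumptions for this example.

```typescript
import jwt from "jsonwebtoken";

// Sketch: build a tokenized HLS URL for a signed playback ID.
// Assumes MUX_SIGNING_KEY_ID and a base64-encoded RSA private key in
// MUX_SIGNING_PRIVATE_KEY (both obtained when creating a Mux signing key).
function signedHlsUrl(playbackId: string): string {
  const privateKey = Buffer.from(
    process.env.MUX_SIGNING_PRIVATE_KEY!,
    "base64"
  ).toString("ascii");

  const token = jwt.sign(
    {
      sub: playbackId,                          // the signed playback ID
      aud: "v",                                 // scopes the token to video playback
      exp: Math.floor(Date.now() / 1000) + 600, // valid for 10 minutes
    },
    privateKey,
    { algorithm: "RS256", keyid: process.env.MUX_SIGNING_KEY_ID! }
  );

  return `https://stream.mux.com/${playbackId}.m3u8?token=${token}`;
}
```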
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting subtitles and closed captions.
For creating clips from an existing Mux asset, the URL follows the mux://assets/{asset_id} template, where asset_id is the identifier of the Asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.
Specify what level of support (if any) to enable for mp4 playback. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.
Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.
Specify what level of support (if any) to enable for master access. Master access can be enabled temporarily for your asset to be downloaded. See the Download your videos guide for more information.
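For example, both download-related settings can be supplied at asset creation; the body below is an illustrative fragment.

```typescript
// Illustrative request body: enable mp4 renditions and temporary master access.
const body = {
  input: "https://example.com/video.mp4",
  playback_policy: ["public"],
  mp4_support: "standard",    // serves MP4 renditions alongside HLS
  master_access: "temporary", // master file temporarily available for download
  normalize_audio: true,      // applies to on-demand assets only
};
```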
Marks the asset as a test asset when the value is set to true. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.
Max resolution tier can be used to control the maximum resolution_tier
your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p
.
The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart
encoding tier is used. See the guide for more details.
An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.
An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.
The URL of the file that Mux should download and use.
For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting subtitles and closed captions.
For creating clips from an existing Mux asset, the URL follows the mux://assets/{asset_id} template, where asset_id is the identifier of the Asset to create the clip from.
The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.
Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"
The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.
Where the horizontal positioning of the overlay/watermark should begin from.
The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.
How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.
How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.
How opaque the overlay should appear, expressed as a percent. (Default 100%)
Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing
state when the asset transitions to ready
.
A name for this subtitle track.
Arbitrary metadata set for the subtitle track. Max 255 characters.
The language to generate subtitles in.
The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url
has mux://assets/{asset_id}
format.
This parameter is required for text
type tracks.
Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text
type tracks.
The language code value must be a valid BCP 47 specification compliant value. For example, en
for English or en-US
for the US version of English. This parameter is required for text
and audio
track types.
The name of the track containing a human-readable description. This value must be unique within each group of text
or audio
track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code
set to en
. This optional parameter should be used only for text
and audio
type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code
value.
Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
This optional parameter should be used for tracks with type
of text
and text_type
set to subtitles
.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
The live stream only processes the audio track when this value is set to true. Mux drops the video track if one is broadcast.
Describes the embedded closed caption configuration of the incoming live stream.
A name for this live stream closed caption track.
Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.
The language of the closed caption stream. Value must be BCP 47 compliant.
CEA-608 caption channel to read data from.
Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles
configured will automatically receive two text tracks. The first of these will have a text_source
value of generated_live
, and will be available with ready
status as soon as the stream is live. The second text track will have a text_source
value of generated_live_final
and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final
tracks will not be available in ready
status until the live stream ends. If an Asset has both generated_live
and generated_live_final
tracks that are ready
, then only the generated_live_final
track will be included during playback.
A name for this live stream subtitle track.
Arbitrary metadata set for the live stream subtitle track. Max 255 characters.
The language to generate subtitles in.
Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).
If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.
Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency
option.
By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.
The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.
This field is deprecated. Please use latency_mode
instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.
Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.
True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled once it has been active for 5 minutes, and its recorded asset is deleted after 24 hours.
The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.
Unique key used for encrypting a stream to a Mux SRT endpoint.
The protocol used for the active ingest stream. This is only set when the live stream is active.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
This live stream event fires when Mux has encountered a non-fatal issue. There is no disruption to live stream ingest or playback. At this time, the event is only fired when Mux is unable to download an image from the URL set as the reconnect_slate_url parameter value.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the Live Stream. Max 255 characters.
Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.
The Asset that is currently being created if there is an active broadcast.
idle
indicates that there is no active broadcast. active
indicates that there is an active broadcast and disabled
status indicates that no future RTMP streams can be published.
Arbitrary user-supplied metadata set for the asset. Max 255 characters.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
A new simulcast target has been created for this live stream.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Unique identifier for the Live Stream. Max 255 characters.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
When the parent live stream is 'disconnected', all simulcast targets will be 'idle'.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Unique identifier for the Live Stream. Max 255 characters.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
When the parent live stream fires 'connected', the simulcast targets transition to 'starting'.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Unique identifier for the Live Stream. Max 255 characters.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
This fires when Mux has successfully connected to the simulcast target and has begun pushing content to that third party.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Unique identifier for the Live Stream. Max 255 characters.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
This fires when Mux has encountered an error either while attempting to connect to the third party streaming service or while broadcasting. Mux will try to re-establish the connection, and if it succeeds, the simulcast target will transition back to 'broadcasting'.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Unique identifier for the Live Stream. Max 255 characters.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
This simulcast target has been deleted.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Unique identifier for the Live Stream. Max 255 characters.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
This simulcast target has been updated.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
ID of the Simulcast Target
Arbitrary user-supplied metadata set when creating a simulcast target.
The current status of the simulcast target. See Statuses below for detailed description.
idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.
RTMP hostname including the application name for the third party live streaming service.
The severity of the error encountered by the simulcast target.
This field is only set when the simulcast target is in the errored
status.
See the values of severities below and their descriptions.
normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
Unique identifier for the Live Stream. Max 255 characters.
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt
Alert for high traffic video delivery.
Type for the webhook event
Unique identifier for the event
Time the event was created
Name for the environment
Unique identifier for the environment
Unique identifier for the live stream that created the asset.
Unique identifier for the asset.
The passthrough
value for the asset.
Time at which the asset was created. Measured in seconds since the Unix epoch.
If exists, time at which the asset was deleted. Measured in seconds since the Unix epoch.
The state of the asset.
The duration of the asset in seconds.
The resolution tier that the asset was ingested at, affecting billing for ingest & storage.
Total number of delivered seconds during this time window.
Seconds delivered broken into resolution tiers. Each tier will only be displayed if there was content delivered in the tier.
Total number of delivered seconds during this time window that had a resolution larger than the 1440p tier (over 4,194,304 pixels total).
Total number of delivered seconds during this time window that had a resolution larger than the 1080p tier but less than or equal to the 1440p tier (over 2,073,600 and <= 4,194,304 pixels total).
Total number of delivered seconds during this time window that had a resolution larger than the 720p tier but less than or equal to the 1080p tier (over 921,600 and <= 2,073,600 pixels total).
Total number of delivered seconds during this time window that had a resolution within the 720p tier (up to 921,600 pixels total, based on typical 1280x720).
Total number of delivered seconds during this time window that had a resolution of audio only.
Current threshold set for alerting
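To summarize the payload described above, a rough TypeScript shape follows; the field names are illustrative reconstructions from these descriptions, not a normative schema.

```typescript
// Illustrative shape for the high-traffic delivery alert data.
interface DeliveryAlertData {
  live_stream_id?: string;   // set when the asset came from a live stream
  asset_id: string;
  passthrough?: string;
  created_at: number;        // Unix seconds
  deleted_at?: number;       // Unix seconds, present if the asset was deleted
  asset_state: string;
  asset_duration: number;    // seconds
  asset_resolution_tier: string;
  delivered_seconds: number; // total delivered in this window
  delivered_seconds_by_resolution: {
    tier_2160p?: number;     // above the 1440p tier
    tier_1440p?: number;
    tier_1080p?: number;
    tier_720p?: number;
    audio_only?: number;
  };
  threshold: number;         // current alerting threshold
}
```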
Attempts for sending out the webhook event
Unique identifier for the webhook
HTTP response status code for the webhook attempt
HTTP response headers for the webhook attempt
HTTP response body for the webhook attempt
Maximum number of delivery attempts for the webhook
Unique identifier for the webhook attempt
Time the webhook request was attempted
URL address for the webhook attempt