
Webhook Reference

An asset has been created

Name | Type | Description
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing", "ready", "errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolution (Deprecated)
string
Possible values: "Audio only", "SD", "HD", "FHD", "UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only", "720p", "1080p", "1440p", "2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video", "audio", "text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layout (Deprecated)
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded", "embedded", "generated_live", "generated_live_final", "generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.tracks[].language_code
string

The language code is a BCP 47 specification-compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing", "ready", "errored", "deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or after the temporary URL expires.

data.master.status
string
Possible values: "ready", "preparing", "errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary", "none"
data.mp4_support
string (default: none)
Possible values: "standard", "none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready", "preparing", "disabled", "errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4", "medium.mp4", "high.mp4", "audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4", "m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes
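
Using the fields above, a ready static_renditions object can be turned into download links. This is a sketch under an assumption: the `https://stream.mux.com/{PLAYBACK_ID}/{name}` URL shape follows the Download your videos guide and should be verified there.

```python
# List download URLs for ready static renditions (MP4/M4A files).
# Assumes the https://stream.mux.com/{PLAYBACK_ID}/{name} URL shape
# from the Download your videos guide.
def rendition_urls(playback_id: str, static_renditions: dict) -> list[str]:
    if static_renditions.get("status") != "ready":
        return []  # files are only downloadable once status is "ready"
    return [
        f"https://stream.mux.com/{playback_id}/{f['name']}"
        for f in static_renditions.get("files", [])
    ]
```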

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or at times when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].type (Beta)
string
Possible values: "content", "slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
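
Per the fields above, total content duration can be computed by summing session durations while skipping slate sessions. A minimal sketch; the `content_duration` helper name is illustrative, and sessions without a type are assumed to be content:

```python
# Sum the recorded content duration from data.recording_times, skipping
# "slate" sessions inserted during stream interruptions. Field names
# follow the definitions above.
def content_duration(recording_times: list[dict]) -> float:
    return sum(
        rt.get("duration", 0.0)
        for rt in recording_times
        if rt.get("type", "content") == "content"
    )
```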

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called the Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
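
As described above, the non_standard_input_reasons object only exists when the input is non-standard, so its absence means the input qualifies as standard. A minimal sketch (the helper name is illustrative):

```python
# Return (is_standard, reasons) for an asset payload, relying on the
# documented behavior that non_standard_input_reasons is absent for
# standard inputs.
def standard_input_check(asset: dict) -> tuple[bool, list[str]]:
    reasons = asset.get("non_standard_input_reasons")
    if reasons is None:
        return True, []
    return False, sorted(reasons.keys())
```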
data.test
boolean

True means this live stream is a test asset. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url", "on_demand_direct_upload", "on_demand_clip", "live_rtmp", "live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null
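
The top-level fields above (type, id, environment, data) can be read straight out of the delivered JSON. A minimal sketch with an abbreviated, made-up payload; the literal values are illustrative only:

```python
import json

# Parse an abbreviated, illustrative video.asset.created payload using
# the top-level fields documented above. The values here are made up.
raw = json.dumps({
    "type": "video.asset.created",
    "id": "event-id-123",
    "environment": {"name": "Production", "id": "env-abc"},
    "data": {"id": "asset-xyz", "status": "preparing"},
})

event = json.loads(raw)
asset_id = event["data"]["id"]
asset_status = event["data"]["status"]
```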

An asset is ready for playback. You can now use the asset's playback_id to successfully start streaming this asset.

Name | Type | Description
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing", "ready", "errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolution (Deprecated)
string
Possible values: "Audio only", "SD", "HD", "FHD", "UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only", "720p", "1080p", "1440p", "2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video", "audio", "text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layout (Deprecated)
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded", "embedded", "generated_live", "generated_live_final", "generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.tracks[].language_code
string

The language code is a BCP 47 specification-compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing", "ready", "errored", "deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or after the temporary URL expires.

data.master.status
string
Possible values: "ready", "preparing", "errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary", "none"
data.mp4_support
string (default: none)
Possible values: "standard", "none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready", "preparing", "disabled", "errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4", "medium.mp4", "high.mp4", "audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4", "m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or at times when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].type (Beta)
string
Possible values: "content", "slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called the Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
data.test
boolean

True means this live stream is a test asset. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url", "on_demand_direct_upload", "on_demand_clip", "live_rtmp", "live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null
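
The created, ready, and errored asset events on this page can be dispatched from a single handler keyed on the event's type field. A minimal sketch; the return strings and handling strategy are illustrative, not a prescribed API:

```python
# Dispatch the asset webhook events documented on this page by type.
# Field access follows the field definitions above.
def handle_event(event: dict) -> str:
    event_type = event.get("type", "")
    data = event.get("data", {})
    if event_type == "video.asset.created":
        return f"created:{data.get('id')}"
    if event_type == "video.asset.ready":
        ids = data.get("playback_ids", [])
        return f"ready:{ids[0]['id']}" if ids else "ready:no-playback-id"
    if event_type == "video.asset.errored":
        msgs = data.get("errors", {}).get("messages", [])
        return "errored:" + "; ".join(msgs)
    return f"ignored:{event_type}"
```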

An asset has encountered an error. Use this to notify your server about assets with errors. Asset errors can happen for a number of reasons, most commonly an input URL that Mux is unable to download or a file that is not a valid video file.

Name | Type | Description
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing", "ready", "errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolution (Deprecated)
string
Possible values: "Audio only", "SD", "HD", "FHD", "UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only", "720p", "1080p", "1440p", "2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video", "audio", "text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layout (Deprecated)
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded", "embedded", "generated_live", "generated_live_final", "generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
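The playback rule above (a ready generated_live_final track supersedes its generated_live counterpart) can be mirrored when selecting subtitle tracks from the payload; a sketch:

```python
# Pick the ready subtitle tracks from data.tracks, dropping interim
# generated_live tracks whenever a generated_live_final track is ready,
# mirroring the playback behavior described above.

def select_subtitle_tracks(tracks: list[dict]) -> list[dict]:
    ready = [
        t for t in tracks
        if t.get("type") == "text"
        and t.get("text_type") == "subtitles"
        and t.get("status") == "ready"
    ]
    if any(t.get("text_source") == "generated_live_final" for t in ready):
        ready = [t for t in ready if t.get("text_source") != "generated_live"]
    return ready
```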
data.tracks[].language_code
string

The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing", "ready", "errored", "deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or after the temporary URL expires.

data.master.status
string
Possible values: "ready", "preparing", "errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary", "none"
data.mp4_support
string (default: none)
Possible values: "standard", "none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready", "preparing", "disabled", "errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4", "medium.mp4", "high.mp4", "audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4", "m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].type (Beta)
string
Possible values: "content", "slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
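The started_at object splits the timestamp into seconds and nanos; converting it to a datetime and totalling the recorded content (excluding slate sessions) can be sketched as:

```python
from datetime import datetime, timezone

# Convert a recording session's started_at {seconds, nanos} pair into a
# UTC datetime, and total recorded content time excluding slate media.

def recording_start(entry: dict) -> datetime:
    ts = entry["started_at"]
    return datetime.fromtimestamp(ts["seconds"] + ts["nanos"] / 1e9, tz=timezone.utc)

def content_seconds(recording_times: list[dict]) -> float:
    return sum(
        r["duration"]
        for r in recording_times
        if r.get("type", "content") == "content"
    )
```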

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
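Since the non_standard_input_reasons object is absent for standard inputs, a handler can simply flatten whatever reasons are present into log-friendly strings; a small sketch:

```python
# Flatten non_standard_input_reasons into log-friendly strings. An absent
# or empty object means the input qualified as standard.

def nonstandard_reasons(data: dict) -> list[str]:
    reasons = data.get("non_standard_input_reasons") or {}
    return [f"{field}: {value}" for field, value in sorted(reasons.items())]
```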
data.test
boolean

True means this is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url", "on_demand_direct_upload", "on_demand_clip", "live_rtmp", "live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null
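Before trusting any of the fields above, the webhook request itself should be authenticated. The sketch below assumes a Stripe-style Mux-Signature header of the form `t=<timestamp>,v1=<hex HMAC-SHA256>` computed over `<timestamp>.<raw body>` with your signing secret; confirm the exact scheme against the webhook security guide:

```python
import hashlib
import hmac

# Sketch: verify a webhook's Mux-Signature header before processing the
# event. The "t=<ts>,v1=<hex>" header shape and the "<ts>.<body>" signed
# payload are assumptions -- confirm them in the webhook security guide.

def verify_signature(raw_body: bytes, header: str, secret: str) -> bool:
    parts = dict(p.split("=", 1) for p in header.split(","))
    signed_payload = parts["t"].encode() + b"." + raw_body
    expected = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid leaking the digest via timing.
    return hmac.compare_digest(expected, parts["v1"])
```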

An asset has been updated. Use this to make sure your server is notified about changes to assets.
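A server typically dispatches on the event's top-level type field; a minimal sketch, with placeholder handler bodies (the "video.asset.*" type strings are assumed from Mux's event naming):

```python
# Sketch of dispatching webhook events on their `type` field. The handler
# bodies are placeholders for application logic (e.g. database updates).

def handle_event(event: dict) -> str:
    asset_id = event["data"]["id"]
    kind = event["type"]
    if kind == "video.asset.created":
        return f"created {asset_id}"   # e.g. insert an asset record
    if kind == "video.asset.updated":
        return f"updated {asset_id}"   # e.g. refresh stored metadata
    if kind == "video.asset.deleted":
        return f"deleted {asset_id}"   # e.g. mark the asset unplayable
    return f"ignored {kind}"
```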

Name | Type | Description
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing", "ready", "errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolution (Deprecated)
string
Possible values: "Audio only", "SD", "HD", "FHD", "UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only", "720p", "1080p", "1440p", "2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video", "audio", "text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layout (Deprecated)
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded", "embedded", "generated_live", "generated_live_final", "generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.tracks[].language_code
string

The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing", "ready", "errored", "deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or after the temporary URL expires.

data.master.status
string
Possible values: "ready", "preparing", "errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary", "none"
data.mp4_support
string (default: none)
Possible values: "standard", "none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready", "preparing", "disabled", "errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4", "medium.mp4", "high.mp4", "audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4", "m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].type (Beta)
string
Possible values: "content", "slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
data.test
boolean

True means this is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url", "on_demand_direct_upload", "on_demand_clip", "live_rtmp", "live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

An asset has been deleted. Use this so that your server knows when an asset has been deleted, at which point it will no longer be playable.

Name | Type | Description
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing", "ready", "errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolution (Deprecated)
string
Possible values: "Audio only", "SD", "HD", "FHD", "UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only", "720p", "1080p", "1440p", "2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video", "audio", "text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layout (Deprecated)
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded", "embedded", "generated_live", "generated_live_final", "generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.tracks[].language_code
string

The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

A human-readable name for the track. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track whose language_code is en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing", "ready", "errored", "deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.
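Since data.errors only appears when processing failed, a webhook consumer can branch on data.status and surface the messages. A minimal sketch, assuming only the field names documented above:

```python
def summarize_asset_errors(event):
    """Return a human-readable summary of asset errors from a webhook
    event dict, or None if the asset did not error."""
    data = event.get("data", {})
    if data.get("status") != "errored":
        return None
    err = data.get("errors") or {}
    messages = err.get("messages") or ["(no details provided)"]
    return f"{err.get('type', 'unknown')}: " + "; ".join(messages)
```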

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or after the temporary URL expires.

data.master.status
string
Possible values: "ready", "preparing", "errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary", "none"
data.mp4_support
string (default: none)
Possible values: "standard", "none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready", "preparing", "disabled", "errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4", "medium.mp4", "high.mp4", "audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4", "m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes
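Given the files array above, picking the largest available MP4 for download is a matter of filtering on ext and comparing heights. A sketch assuming the documented field names:

```python
def best_mp4(static_renditions):
    """Pick the highest-resolution MP4 rendition, if renditions are ready."""
    if static_renditions.get("status") != "ready":
        return None
    # Keep only MP4 files, skipping audio-only renditions (ext "m4a").
    mp4s = [f for f in static_renditions.get("files", [])
            if f.get("ext") == "mp4"]
    return max(mp4s, key=lambda f: f.get("height", 0), default=None)
```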

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media is added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].typeBeta
string
Possible values: "content", "slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
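Because started_at is split into seconds and nanos, combining them into a timestamp, and totalling only the content sessions (skipping slate), can be sketched as follows. The field names follow the documentation above:

```python
from datetime import datetime, timezone

def content_seconds(recording_times):
    """Sum the duration of 'content' recording sessions, ignoring slate."""
    return sum(rt.get("duration", 0)
               for rt in recording_times
               if rt.get("type", "content") == "content")

def started_at_datetime(rt):
    """Convert a started_at {seconds, nanos} pair to an aware UTC datetime."""
    ts = rt["started_at"]["seconds"] + rt["started_at"]["nanos"] / 1e9
    return datetime.fromtimestamp(ts, tz=timezone.utc)
```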

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
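Because the object is simply absent for standard inputs, a consumer can treat its presence as the signal and report which reasons fired. A small sketch:

```python
def nonstandard_reasons(data):
    """Return a sorted list of (reason, value) pairs from the webhook
    data object, or [] when the input qualified as standard."""
    reasons = data.get("non_standard_input_reasons")
    if not reasons:
        return []  # a missing object means the input was standard
    return sorted(reasons.items())
```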

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called the Group of Pictures, or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than the standard threshold, which is typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
data.test
boolean

True means this live stream is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url", "on_demand_direct_upload", "on_demand_clip", "live_rtmp", "live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

The maximum number of delivery attempts for this webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt
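The attempts array makes delivery failures visible; for example, a consumer can collect the attempts whose HTTP response was not a 2xx success. A sketch using the documented fields:

```python
def failed_attempts(attempts):
    """Return attempts whose HTTP response status was not a 2xx success.
    A missing or null response_status_code counts as a failure."""
    return [a for a in attempts
            if not (200 <= (a.get("response_status_code") or 0) < 300)]
```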

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null

The live stream for this asset has completed. Every time a live stream starts and ends, a new asset is created and this event fires.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing", "ready", "errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolutionDeprecated
string
Possible values: "Audio only", "SD", "HD", "FHD", "UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only", "720p", "1080p", "1440p", "2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.
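The width:height string can be turned into a numeric ratio for layout or player-sizing calculations. A small sketch:

```python
def aspect_ratio_value(aspect_ratio):
    """Parse a 'width:height' string such as '16:9' into a float ratio."""
    width, height = aspect_ratio.split(":")
    return int(width) / int(height)
```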

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
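The two policies above map directly onto URL shapes. The sketch below builds an HLS URL from a Playback ID object; note that Mux HLS URLs conventionally carry a .m3u8 extension, and token creation for signed playback is covered in the Secure video playback guide, so the token is passed in here as an opaque string:

```python
def hls_url(playback_id, token=None):
    """Build an HLS playback URL for a Playback ID object.
    Signed playback IDs require a token; public ones take none."""
    base = f"https://stream.mux.com/{playback_id['id']}.m3u8"
    if playback_id.get("policy") == "signed":
        if token is None:
            raise ValueError("signed playback IDs require a token")
        return f"{base}?token={token}"
    return base
```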

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video", "audio", "text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layoutDeprecated
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded", "embedded", "generated_live", "generated_live_final", "generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.tracks[].language_code
string

The language code, compliant with the BCP 47 specification; for example, en for English or en-US for the US variant of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

A human-readable name for the track. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track whose language_code is en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing", "ready", "errored", "deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or after the temporary URL expires.

data.master.status
string
Possible values: "ready", "preparing", "errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary", "none"
data.mp4_support
string (default: none)
Possible values: "standard", "none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready", "preparing", "disabled", "errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4", "medium.mp4", "high.mp4", "audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4", "m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media is added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].typeBeta
string
Possible values: "content", "slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called the Group of Pictures, or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than the standard threshold, which is typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
data.test
boolean

True means this live stream is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url", "on_demand_direct_upload", "on_demand_clip", "live_rtmp", "live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

The maximum number of delivery attempts for this webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null

Static renditions for this asset are ready. Static renditions are streamable MP4 files, most commonly used to let users download files for offline viewing.
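Once this event fires, each file in data.static_renditions.files can be fetched via one of the asset's playback IDs; the URL shape below (stream.mux.com, playback ID, then the rendition's file name) is an assumption drawn from the Download your videos guide rather than from this event payload:

```python
def rendition_url(playback_id, file_name, token=None):
    """Build a download URL for a static rendition file such as 'high.mp4'.
    The token query parameter applies only to signed playback IDs.
    URL shape assumed from the Download your videos guide."""
    url = f"https://stream.mux.com/{playback_id}/{file_name}"
    if token:
        url += f"?token={token}"
    return url
```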

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing", "ready", "errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolutionDeprecated
string
Possible values: "Audio only", "SD", "HD", "FHD", "UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only", "720p", "1080p", "1440p", "2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video", "audio", "text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layoutDeprecated
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded", "embedded", "generated_live", "generated_live_final", "generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.tracks[].language_code
string

The language code, compliant with the BCP 47 specification; for example, en for English or en-US for the US variant of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

A human-readable name for the track. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track whose language_code is en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing", "ready", "errored", "deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or after the temporary URL expires.

data.master.status
string
Possible values: "ready", "preparing", "errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary", "none"
data.mp4_support
string (default: none)
Possible values: "standard", "none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready""preparing""disabled""errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4""medium.mp4""high.mp4""audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4""m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally any time slate media is inserted during brief interruptions in the live stream media or times when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].typeBeta
string
Possible values: "content""slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
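Since started_at is split into integer seconds and nanos while duration is a plain number of seconds, converting a recording session into concrete timestamps takes one small step. A sketch, assuming the entry shape documented above:

```python
from datetime import datetime, timedelta, timezone

def recording_window(rec: dict) -> tuple[datetime, datetime]:
    """Convert one recording_times entry to (start, end) UTC datetimes.

    started_at carries integer seconds plus nanos; duration is in
    seconds, so the end time is simply start plus duration.
    """
    started = rec["started_at"]
    start = datetime.fromtimestamp(
        started["seconds"] + started.get("nanos", 0) / 1e9, tz=timezone.utc
    )
    return start, start + timedelta(seconds=rec["duration"])
```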

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what is defined as a standard input. This object only exists on on-demand assets that have non-standard inputs, so if it is missing you can assume the input qualifies as standard.

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called the Group of Pictures, or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as the Group of Pictures, or GOP) is higher than what is considered standard, which is typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason for when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
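Since the non_standard_input_reasons object only exists when the input is non-standard, its absence is itself the signal. A sketch of the presence check:

```python
def input_is_standard(data: dict) -> bool:
    """A missing non_standard_input_reasons object means the input is standard."""
    return "non_standard_input_reasons" not in data

def nonstandard_reasons(data: dict) -> dict:
    """Return the reasons object, or an empty dict for a standard input."""
    return data.get("non_standard_input_reasons") or {}
```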
data.test
boolean

True means this is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url""on_demand_direct_upload""on_demand_clip""live_rtmp""live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt
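The attempts array records every delivery try along with its HTTP response, which makes it easy to spot failed deliveries when auditing events. A sketch over the fields above; the sample event in the test is hypothetical:

```python
def failed_attempts(event: dict) -> list[dict]:
    """Return delivery attempts that did not receive a 2xx response.

    A None response_status_code (no response received at all) also
    counts as a failure.
    """
    failures = []
    for attempt in event.get("attempts", []):
        code = attempt.get("response_status_code")
        if code is None or not 200 <= code < 300:
            failures.append(attempt)
    return failures
```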

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null

Static renditions for this asset are being prepared. After requesting static renditions, you will receive this webhook while they are being prepared.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing""ready""errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolutionDeprecated
string
Possible values: "Audio only""SD""HD""FHD""UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only""720p""1080p""1440p""2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at; however, the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video""audio""text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layoutDeprecated
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded""embedded""generated_live""generated_live_final""generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.tracks[].language_code
string

The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing""ready""errored""deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL has expired.

data.master.status
string
Possible values: "ready""preparing""errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary""none"
data.mp4_support
string (default: none)
Possible values: "standard""none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready""preparing""disabled""errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4""medium.mp4""high.mp4""audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4""m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally any time slate media is inserted during brief interruptions in the live stream media or times when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].typeBeta
string
Possible values: "content""slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what is defined as a standard input. This object only exists on on-demand assets that have non-standard inputs, so if it is missing you can assume the input qualifies as standard.

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called the Group of Pictures, or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as the Group of Pictures, or GOP) is higher than what is considered standard, which is typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason for when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
data.test
boolean

True means this is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url""on_demand_direct_upload""on_demand_clip""live_rtmp""live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null

Static renditions for this asset have been deleted. The static renditions (mp4 files) for this asset will no longer be available.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing""ready""errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolutionDeprecated
string
Possible values: "Audio only""SD""HD""FHD""UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only""720p""1080p""1440p""2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at; however, the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth; however, it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video""audio""text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layoutDeprecated
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded""embedded""generated_live""generated_live_final""generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.tracks[].language_code
string

The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing""ready""errored""deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL has expired.

data.master.status
string
Possible values: "ready""preparing""errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary""none"
data.mp4_support
string (default: none)
Possible values: "standard""none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready""preparing""disabled""errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4""medium.mp4""high.mp4""audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4""m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally any time slate media is inserted during brief interruptions in the live stream media or times when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].typeBeta
string
Possible values: "content""slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what is defined as a standard input. This object only exists on on-demand assets that have non-standard inputs, so if it is missing you can assume the input qualifies as standard.

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called the Group of Pictures, or GOP) of the input file is high. This parameter is present when the GOP duration is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures, or GOP) is higher than what's considered standard, which is typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason applied when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
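Because the non_standard_input_reasons object is omitted entirely for standard inputs, a handler can treat its absence as the success case. A minimal sketch (the sample payload is hypothetical):

```python
def input_is_standard(asset_data: dict) -> bool:
    # The object is absent entirely when the input qualifies as standard
    return "non_standard_input_reasons" not in asset_data


def non_standard_summary(asset_data: dict) -> list[str]:
    # Human-readable "reason: value" lines, sorted for stable output
    reasons = asset_data.get("non_standard_input_reasons", {})
    return [f"{key}: {value}" for key, value in sorted(reasons.items())]


# Hypothetical event data fragment for illustration
event_data = {
    "id": "some-asset-id",
    "non_standard_input_reasons": {"video_codec": "hevc", "video_gop_size": "high"},
}
```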
data.test
boolean

True means this asset is a test asset. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets you can create. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url", "on_demand_direct_upload", "on_demand_clip", "live_rtmp", "live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

The maximum number of delivery attempts for this webhook

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt
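The attempts array can be inspected to see how delivery of an event went. The sketch below assumes the last element of the array is the most recent attempt; that ordering is not stated by this reference, so treat it as an assumption.

```python
def last_delivery_failed(attempts: list[dict]) -> bool:
    """Return True if the most recent webhook delivery attempt failed.

    Assumes attempts[-1] is the latest attempt (not guaranteed by the
    reference) and that any 2xx response_status_code means success.
    """
    if not attempts:
        return False
    code = attempts[-1].get("response_status_code")
    return code is None or not (200 <= code < 300)


# Hypothetical delivery history for illustration
attempts = [
    {"id": "a1", "response_status_code": 500, "address": "https://example.com/hooks"},
    {"id": "a2", "response_status_code": 200, "address": "https://example.com/hooks"},
]
```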

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

Preparing static renditions for this asset has encountered an error. This indicates that there was some error when creating static renditions (mp4s) of your asset. This should be rare and if you see it unexpectedly please open a support ticket.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing", "ready", "errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolution (Deprecated)
string
Possible values: "Audio only", "SD", "HD", "FHD", "UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only", "720p", "1080p", "1440p", "2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.
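Since aspect_ratio is a width:height string, turning it into a numeric ratio is a one-line parse:

```python
def aspect_ratio_value(aspect_ratio: str) -> float:
    # "width:height", e.g. "16:9" -> ~1.78
    width, height = (int(part) for part in aspect_ratio.split(":"))
    return width / height
```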

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video", "audio", "text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layout (Deprecated)
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded", "embedded", "generated_live", "generated_live_final", "generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
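The playback rule for generated_live versus generated_live_final tracks can be expressed as a filter. This sketch simplifies by ignoring per-language matching; the sample tracks are hypothetical.

```python
def playback_text_tracks(tracks: list[dict]) -> list[dict]:
    """Return the text tracks included during playback.

    Implements the documented rule: when a ready generated_live_final
    track exists, ready generated_live tracks are excluded.
    """
    text = [t for t in tracks if t.get("type") == "text" and t.get("status") == "ready"]
    has_final = any(t.get("text_source") == "generated_live_final" for t in text)
    if has_final:
        text = [t for t in text if t.get("text_source") != "generated_live"]
    return text


# Hypothetical tracks array for illustration
tracks = [
    {"id": "t1", "type": "text", "status": "ready", "text_source": "generated_live"},
    {"id": "t2", "type": "text", "status": "ready", "text_source": "generated_live_final"},
    {"id": "t3", "type": "audio", "status": "ready"},
]
```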
data.tracks[].language_code
string

The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing", "ready", "errored", "deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL expires.

data.master.status
string
Possible values: "ready", "preparing", "errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary", "none"
data.mp4_support
string (default: none)
Possible values: "standard", "none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready", "preparing", "disabled", "errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4", "medium.mp4", "high.mp4", "audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4", "m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream, or when the live streaming software disconnects, a recording session representing the slate media is added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].type (Beta)
string
Possible values: "content", "slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on how a standard input is defined. This object only exists on on-demand assets that have non-standard inputs, so if it is missing you can assume the input qualifies as standard.

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called the Group of Pictures, or GOP) of the input file is high. This parameter is present when the GOP duration is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures, or GOP) is higher than what's considered standard, which is typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason applied when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
data.test
boolean

True means this asset is a test asset. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets you can create. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url", "on_demand_direct_upload", "on_demand_clip", "live_rtmp", "live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

The maximum number of delivery attempts for this webhook

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

Master access for this asset is ready. Master access is used when downloading an asset for purposes of editing or post-production work. The master access file is not intended to be streamed or downloaded by end-users.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing", "ready", "errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolution (Deprecated)
string
Possible values: "Audio only", "SD", "HD", "FHD", "UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only", "720p", "1080p", "1440p", "2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video", "audio", "text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layout (Deprecated)
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded", "embedded", "generated_live", "generated_live_final", "generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.tracks[].language_code
string

The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing", "ready", "errored", "deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or once the temporary URL expires.

data.master.status
string
Possible values: "ready", "preparing", "errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary", "none"
data.mp4_support
string (default: none)
Possible values: "standard", "none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready", "preparing", "disabled", "errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4", "medium.mp4", "high.mp4", "audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4", "m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream, or when the live streaming software disconnects, a recording session representing the slate media is added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].type (Beta)
string
Possible values: "content", "slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on how a standard input is defined. This object only exists on on-demand assets that have non-standard inputs, so if it is missing you can assume the input qualifies as standard.

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, an input file encoded with the hevc video codec is non-standard, and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called the Group of Pictures, or GOP) of the input file is high. This parameter is present when the GOP duration is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures, or GOP) is higher than what's considered standard, which is typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason applied when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
data.test
boolean

True means this asset is a test asset. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets you can create. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url", "on_demand_direct_upload", "on_demand_clip", "live_rtmp", "live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

The maximum number of delivery attempts for the webhook.

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

Master access for this asset is being prepared. After you request master access, you will receive this webhook while the master file is being prepared.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing""ready""errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolution (Deprecated)
string
Possible values: "Audio only""SD""HD""FHD""UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only""720p""1080p""1440p""2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.
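Since `aspect_ratio` is delivered as a `width:height` string rather than a number, consumers usually parse it before doing layout math. A small sketch (the helper name is ours, not part of the API):

```python
def aspect_ratio_value(aspect_ratio: str) -> float:
    """Parse the width:height string (e.g. "16:9") into a numeric ratio."""
    width, height = aspect_ratio.split(":")
    return int(width) / int(height)
```

For example, `aspect_ratio_value("16:9")` yields roughly 1.778, which you might compare against a player container's ratio to choose letterboxing.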

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
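The two policies above differ only in whether a token query parameter is required. A minimal URL builder, assuming the HLS manifest is addressed with an `.m3u8` extension and that `token` is a signed JWT you have already generated per the Secure video playback guide:

```python
from typing import Optional

def playback_url(playback_id: str, token: Optional[str] = None) -> str:
    """Build an HLS playback URL for a playback ID.

    Pass a signed token for "signed" playback IDs; omit it for "public" ones.
    """
    url = f"https://stream.mux.com/{playback_id}.m3u8"
    return f"{url}?token={token}" if token else url
```

Token generation itself (signing keys, claims, expiry) is covered in the Secure video playback guide and is out of scope for this sketch.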

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video""audio""text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layout (Deprecated)
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded""embedded""generated_live""generated_live_final""generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.tracks[].language_code
string

The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing""ready""errored""deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in an idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none and when the temporary URL expires.

data.master.status
string
Possible values: "ready""preparing""errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary""none"
data.mp4_support
string (default: none)
Possible values: "standard""none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready""preparing""disabled""errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4""medium.mp4""high.mp4""audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4""m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes
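The `files` array above lends itself to simple selection logic, e.g. picking the largest MP4 rendition that fits a download budget. The helper below is a sketch with hypothetical names; the sample data mirrors the documented fields:

```python
def best_rendition_under(files: list, max_bytes: int):
    """Pick the name of the largest MP4 rendition within a size budget."""
    candidates = [
        f for f in files
        if f["ext"] == "mp4" and f["filesize"] <= max_bytes
    ]
    if not candidates:
        return None
    return max(candidates, key=lambda f: f["filesize"])["name"]

files = [
    {"name": "low.mp4", "ext": "mp4", "filesize": 1_000_000},
    {"name": "medium.mp4", "ext": "mp4", "filesize": 5_000_000},
    {"name": "audio.m4a", "ext": "m4a", "filesize": 800_000},
]
```

The chosen `name` can then be combined with a playback ID to form a download URL as described in the Download your videos guide.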

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].type (Beta)
string
Possible values: "content""slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
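Since `started_at` arrives as a `{seconds, nanos}` pair rather than a formatted timestamp, converting it to a native datetime is a common first step. A sketch (the function name is ours):

```python
from datetime import datetime, timezone

def recording_started_at(started_at: dict) -> datetime:
    """Convert a started_at {seconds, nanos} pair to a UTC datetime."""
    ts = started_at["seconds"] + started_at["nanos"] / 1e9
    return datetime.fromtimestamp(ts, tz=timezone.utc)

dt = recording_started_at({"seconds": 1_700_000_000, "nanos": 500_000_000})
```

Summing each session's `duration` then gives total recorded time, excluding or including "slate" sessions depending on the `type` field.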

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also known as Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason for when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
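Because the `non_standard_input_reasons` object only exists when the input is non-standard, its mere absence is the "standard input" signal, and its keys enumerate the specific reasons. A sketch with hypothetical helper names:

```python
def input_is_standard(data: dict) -> bool:
    """Per the docs, the reasons object is absent when the input is standard."""
    return "non_standard_input_reasons" not in data

def summarize_reasons(data: dict) -> list:
    """Flatten each reason key/value into a readable line."""
    reasons = data.get("non_standard_input_reasons", {})
    return [f"{key}: {value}" for key, value in reasons.items()]

data = {"non_standard_input_reasons": {"video_codec": "hevc",
                                       "video_gop_size": "high"}}
```

Logging these summaries alongside `data.id` makes it easy to spot which upstream encoders produce slow-to-process inputs.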
data.test
boolean

True means this live stream is a test asset. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets you can create. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url""on_demand_direct_upload""on_demand_clip""live_rtmp""live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

The maximum number of delivery attempts for the webhook.

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

Master access for this asset has been deleted, and you will no longer be able to download the master file. If you need it again, re-request master access.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing""ready""errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolution (Deprecated)
string
Possible values: "Audio only""SD""HD""FHD""UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only""720p""1080p""1440p""2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video""audio""text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layout (Deprecated)
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded""embedded""generated_live""generated_live_final""generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.tracks[].language_code
string

The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing""ready""errored""deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in an idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none and when the temporary URL expires.

data.master.status
string
Possible values: "ready""preparing""errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary""none"
data.mp4_support
string (default: none)
Possible values: "standard""none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready""preparing""disabled""errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4""medium.mp4""high.mp4""audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4""m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].type (Beta)
string
Possible values: "content""slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also known as Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard, which is typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason for when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
data.test
boolean

True means this live stream is a test asset. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets you can create. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url""on_demand_direct_upload""on_demand_clip""live_rtmp""live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

The maximum number of delivery attempts for the webhook.

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

Master access for this asset has encountered an error. This indicates that something went wrong while creating master access for this asset. This should be rare; if you see it unexpectedly, please open a support ticket.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.created_at
integer
data.status
string
Possible values: "preparing""ready""errored"

The status of the asset.

data.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.max_stored_resolution (Deprecated)
string
Possible values: "Audio only""SD""HD""FHD""UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.resolution_tier
string
Possible values: "audio-only""720p""1080p""1440p""2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.
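Since aspect_ratio arrives as a width:height string, a small helper (illustrative, not part of the payload) can convert it to a numeric ratio:

```python
def parse_aspect_ratio(aspect_ratio):
    # "16:9" -> 16 / 9; assumes the documented width:height form.
    width, height = (int(part) for part in aspect_ratio.split(":"))
    return width / height
```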

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
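The two URL patterns above can be sketched as a helper. The URL shapes (https://stream.mux.com/${PLAYBACK_ID} and the ?token= query parameter) come from this reference; playback_url itself is an illustrative name, and token generation is out of scope (see Secure video playback).

```python
def playback_url(playback_id, token=None):
    # playback_id is one entry from data.playback_ids.
    base = f"https://stream.mux.com/{playback_id['id']}"
    if playback_id["policy"] == "signed":
        if token is None:
            raise ValueError("signed playback IDs require a token")
        return f"{base}?token={token}"
    return base
```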

data.tracks
array

The individual media tracks that make up an asset.

data.tracks[].id
string

Unique identifier for the Track

data.tracks[].type
string
Possible values: "video""audio""text"

The type of track

data.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.tracks[].max_channel_layoutDeprecated
string

Only set for the audio type track.

data.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.tracks[].text_source
string
Possible values: "uploaded""embedded""generated_live""generated_live_final""generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
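The playback rule above — when both generated_live and generated_live_final tracks are ready, only generated_live_final is included — can be sketched as a filter. preferred_text_tracks is an illustrative helper, not a Mux API.

```python
def preferred_text_tracks(tracks):
    # Keep only ready text tracks from data.tracks.
    ready = [
        t for t in tracks
        if t.get("type") == "text" and t.get("status") == "ready"
    ]
    # If a generated_live_final track is ready, drop generated_live tracks.
    has_final = any(
        t.get("text_source") == "generated_live_final" for t in ready
    )
    if has_final:
        return [t for t in ready if t.get("text_source") != "generated_live"]
    return ready
```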
data.tracks[].language_code
string

The language code value is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.tracks[].status
string
Possible values: "preparing""ready""errored""deleted"

The status of the track. This parameter is only set for text type tracks.

data.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.tracks[].error
object

Object that describes any errors that happened when processing this asset.

data.tracks[].error.type
string

The type of error that occurred for this asset.

data.tracks[].error.messages
array

Error messages with more details.

data.errors
object

Object that describes any errors that happened when processing this asset.

data.errors.type
string

The type of error that occurred for this asset.

data.errors.messages
array

Error messages with more details.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none, or after the temporary URL expires.

data.master.status
string
Possible values: "ready""preparing""errored"
data.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.master_access
string (default: none)
Possible values: "temporary""none"
data.mp4_support
string (default: none)
Possible values: "standard""none"
data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.static_renditions.status
string (default: disabled)
Possible values: "ready""preparing""disabled""errored"

Indicates the status of downloadable MP4 versions of this asset.

data.static_renditions.files
array

Array of file objects.

data.static_renditions.files[].name
string
Possible values: "low.mp4""medium.mp4""high.mp4""audio.m4a"
data.static_renditions.files[].ext
string
Possible values: "mp4""m4a"

Extension of the static rendition file

data.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.static_renditions.files[].filesize
integer

The file size in bytes
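Using the documented ext and height fields, selecting the highest-resolution downloadable MP4 can be sketched as below. best_mp4 is an illustrative helper; it returns None unless the renditions are ready.

```python
def best_mp4(static_renditions):
    # static_renditions is the data.static_renditions object.
    if static_renditions.get("status") != "ready":
        return None
    # Keep only MP4 files (audio.m4a has ext "m4a").
    mp4s = [
        f for f in static_renditions.get("files", [])
        if f.get("ext") == "mp4"
    ]
    # Highest pixel height wins; None if no MP4s exist.
    return max(mp4s, key=lambda f: f.get("height", 0), default=None)
```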

data.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.recording_times[].started_at
object
data.recording_times[].started_at.nanos
integer
data.recording_times[].started_at.seconds
integer
data.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.recording_times[].typeBeta
string
Possible values: "content""slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
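Using the duration and type fields above, total recorded content time can be sketched by summing sessions while skipping "slate" ones. content_duration_seconds is an illustrative helper; treating sessions without a type field as content is an assumption.

```python
def content_duration_seconds(recording_times):
    # recording_times is the data.recording_times array; durations are
    # already in seconds per the field documentation.
    return sum(
        rt.get("duration", 0.0)
        for rt in recording_times
        if rt.get("type", "content") == "content"
    )
```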

data.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.
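Because the object is absent for standard inputs, its presence alone flags a non-standard input. A sketch with an illustrative helper name:

```python
def non_standard_reasons(asset_data):
    # asset_data is the webhook's data object; returns the reason keys,
    # or an empty list when the input qualifies as standard.
    return sorted(asset_data.get("non_standard_input_reasons", {}))
```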

data.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called the group of pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as a group of pictures or GOP) is higher than what is considered standard, typically 16 Mbps.

data.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason for when the input file is created with non-standard encoding parameters.

data.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.non_standard_input_reasons.unexpected_video_parameters
string
data.test
boolean

True means this asset is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.ingest_type
string
Possible values: "on_demand_url""on_demand_direct_upload""on_demand_clip""live_rtmp""live_srt"

The type of ingest used to create the asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Max attempts number for the webhook attempt

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null

A new track for this asset has been created, for example a subtitle text track.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Track

data.type
string
Possible values: "video""audio""text"

The type of track

data.duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.max_channel_layoutDeprecated
string

Only set for the audio type track.

data.text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.text_source
string
Possible values: "uploaded""embedded""generated_live""generated_live_final""generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.language_code
string

The language code value is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.status
string
Possible values: "preparing""ready""errored""deleted"

The status of the track. This parameter is only set for text type tracks.

data.primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.asset_id
string

Unique identifier for the Asset. Max 255 characters.

data.error
object

Object that describes any errors that happened when processing this asset.

data.error.type
string

The type of error that occurred for this asset.

data.error.messages
array

Error messages with more details.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Max attempts number for the webhook attempt

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null

A track for this asset is ready. For example, a subtitle text track will now be delivered with your HLS stream.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Track

data.type
string
Possible values: "video""audio""text"

The type of track

data.duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.max_channel_layoutDeprecated
string

Only set for the audio type track.

data.text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.text_source
string
Possible values: "uploaded""embedded""generated_live""generated_live_final""generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.language_code
string

The language code value is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.status
string
Possible values: "preparing""ready""errored""deleted"

The status of the track. This parameter is only set for text type tracks.

data.primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.asset_id
string

Unique identifier for the Asset. Max 255 characters.

data.error
object

Object that describes any errors that happened when processing this asset.

data.error.type
string

The type of error that occurred for this asset.

data.error.messages
array

Error messages with more details.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Max attempts number for the webhook attempt

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null

A track for this asset has encountered an error while being prepared. Most commonly, this is a text track file that Mux was unable to download for processing.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Track

data.type
string
Possible values: "video""audio""text"

The type of track

data.duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.max_channel_layoutDeprecated
string

Only set for the audio type track.

data.text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.text_source
string
Possible values: "uploaded""embedded""generated_live""generated_live_final""generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.language_code
string

The language code value is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.status
string
Possible values: "preparing""ready""errored""deleted"

The status of the track. This parameter is only set for text type tracks.

data.primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.asset_id
string

Unique identifier for the Asset. Max 255 characters.

data.error
object

Object that describes any errors that happened when processing this asset.

data.error.type
string

The type of error that occurred for this asset.

data.error.messages
array

Error messages with more details.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Max attempts number for the webhook attempt

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null

A track for this asset has been deleted.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Track

data.type
string
Possible values: "video""audio""text"

The type of track

data.duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.max_channel_layoutDeprecated
string

Only set for the audio type track.

data.text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.text_source
string
Possible values: "uploaded""embedded""generated_live""generated_live_final""generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.language_code
string

The language code value is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.status
string
Possible values: "preparing""ready""errored""deleted"

The status of the track. This parameter is only set for text type tracks.

data.primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.asset_id
string

Unique identifier for the Asset. Max 255 characters.

data.error
object

Object that describes any errors that happened when processing this asset.

data.error.type
string

The type of error that occurred for this asset.

data.error.messages
array

Error messages with more details.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Max attempts number for the webhook attempt

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null
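The attempts array above describes deliveries of an event to your endpoint. Before trusting any delivered payload, you can verify its signature: Mux sends a Mux-Signature header of the form t=&lt;timestamp&gt;,v1=&lt;hex digest&gt;, where the digest is an HMAC-SHA256 of "&lt;timestamp&gt;.&lt;raw body&gt;" keyed with your webhook signing secret. A minimal verification sketch, assuming exactly that header format:

```python
import hashlib
import hmac

def verify_mux_signature(raw_body: bytes, signature_header: str, secret: str) -> bool:
    """Check a Mux-Signature header of the form 't=<unix_ts>,v1=<hex_hmac>'."""
    parts = dict(p.split("=", 1) for p in signature_header.split(","))
    timestamp, expected = parts["t"], parts["v1"]
    # Signed payload is "<timestamp>.<raw request body>"
    payload = timestamp.encode() + b"." + raw_body
    computed = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(computed, expected)
```

In a real handler, also reject timestamps that are too old to guard against replayed deliveries.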

This event fires when Mux has encountered a non-fatal issue with the recorded asset of the live stream. At this time, the event is only fired when Mux is unable to download a slate image from the URL set as reconnect_slate_url parameter value.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Asset. Max 255 characters.

data.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.status
string
Possible values: "preparing""ready""errored"

The status of the asset.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.warning
object
data.warning.type
string
data.warning.message
string
attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Max attempts number for the webhook attempt

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null

An asset has been created from this upload. This is useful for knowing when a user of your application has finished uploading a file using the URL created by a Direct Upload.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Direct Upload.

data.timeout
integer (default: 3600, minimum: 60, maximum: 604800)

Max time in seconds for the signed upload URL to be valid. If a successful upload has not occurred before the timeout limit, the direct upload is marked timed_out.

data.status
string
Possible values: "waiting""asset_created""errored""cancelled""timed_out"
data.new_asset_settings
object
data.new_asset_settings.id
string

Unique identifier for the Asset. Max 255 characters.

data.new_asset_settings.created_at
string

Time the Asset was created, defined as a Unix timestamp (seconds since epoch).

data.new_asset_settings.status
string
Possible values: "preparing""ready""errored"

The status of the asset.

data.new_asset_settings.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.new_asset_settings.max_stored_resolutionDeprecated
string
Possible values: "Audio only""SD""HD""FHD""UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.new_asset_settings.resolution_tier
string
Possible values: "audio-only""720p""1080p""1440p""2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.new_asset_settings.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.new_asset_settings.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.new_asset_settings.playback_ids[].id
string

Unique identifier for the PlaybackID

data.new_asset_settings.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.new_asset_settings.tracks
array

The individual media tracks that make up an asset.

data.new_asset_settings.tracks[].id
string

Unique identifier for the Track

data.new_asset_settings.tracks[].type
string
Possible values: "video""audio""text"

The type of track

data.new_asset_settings.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.new_asset_settings.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.new_asset_settings.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.new_asset_settings.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.new_asset_settings.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.new_asset_settings.tracks[].max_channel_layoutDeprecated
string

Only set for the audio type track.

data.new_asset_settings.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.new_asset_settings.tracks[].text_source
string
Possible values: "uploaded""embedded""generated_live""generated_live_final""generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.new_asset_settings.tracks[].language_code
string

The language code is a BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.new_asset_settings.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.new_asset_settings.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.new_asset_settings.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.new_asset_settings.tracks[].status
string
Possible values: "preparing""ready""errored""deleted"

The status of the track. This parameter is only set for text type tracks.

data.new_asset_settings.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.new_asset_settings.errors
object

Object that describes any errors that happened when processing this asset.

data.new_asset_settings.errors.type
string

The type of error that occurred for this asset.

data.new_asset_settings.errors.messages
array

Error messages with more details.

data.new_asset_settings.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.new_asset_settings.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.new_asset_settings.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.new_asset_settings.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none and when the temporary URL expires.

data.new_asset_settings.master.status
string
Possible values: "ready""preparing""errored"
data.new_asset_settings.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.new_asset_settings.master_access
string (default: none)
Possible values: "temporary""none"
data.new_asset_settings.mp4_support
string (default: none)
Possible values: "standard""none"
data.new_asset_settings.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.new_asset_settings.static_renditions.status
string (default: disabled)
Possible values: "ready""preparing""disabled""errored"

Indicates the status of downloadable MP4 versions of this asset.

data.new_asset_settings.static_renditions.files
array

Array of file objects.

data.new_asset_settings.static_renditions.files[].name
string
Possible values: "low.mp4""medium.mp4""high.mp4""audio.m4a"
data.new_asset_settings.static_renditions.files[].ext
string
Possible values: "mp4""m4a"

Extension of the static rendition file

data.new_asset_settings.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.new_asset_settings.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.new_asset_settings.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.new_asset_settings.static_renditions.files[].filesize
string

The file size in bytes

data.new_asset_settings.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.new_asset_settings.recording_times[].started_at
string

The time at which the recording for the live stream started. The time value is Unix epoch time represented in ISO 8601 format.

data.new_asset_settings.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.new_asset_settings.recording_times[].typeBeta
string
Possible values: "content""slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.
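Since slate segments count toward the recording but not toward real content, a common use of recording_times is totaling genuine content duration. A sketch, assuming a missing type means content:

```python
def content_duration(recording_times: list[dict]) -> float:
    """Total seconds of real content across recording sessions, skipping slate."""
    return sum(r["duration"] for r in recording_times
               if r.get("type", "content") == "content")
```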

data.new_asset_settings.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.

data.new_asset_settings.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.new_asset_settings.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.new_asset_settings.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.new_asset_settings.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.new_asset_settings.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.new_asset_settings.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what's considered standard which typically is 16 Mbps.

data.new_asset_settings.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.new_asset_settings.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.new_asset_settings.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.new_asset_settings.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason for when the input file is created with non-standard encoding parameters.

data.new_asset_settings.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.new_asset_settings.test
boolean

True means this live stream is a test asset. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.ingest_type
string
Possible values: "on_demand_url""on_demand_direct_upload""on_demand_clip""live_rtmp""live_srt"

The type of ingest used to create the asset.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array when only one playback policy is needed.
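Because of the single-string shorthand, code that reads this field back should handle both shapes. A small normalization sketch:

```python
def normalize_playback_policies(value) -> list:
    """Accept either the single-string shorthand or the full array form."""
    return [value] if isinstance(value, str) else list(value)
```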

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top""middle""bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left""center""right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)
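Putting the overlay fields together, a watermark input might look like the following sketch (the image URL is a hypothetical placeholder; only width is set, so height scales proportionally per the rules above):

```python
watermark_input = {
    "url": "https://example.com/watermark.png",  # hypothetical watermark image
    "overlay_settings": {
        "vertical_align": "bottom",
        "vertical_margin": "5%",
        "horizontal_align": "right",
        "horizontal_margin": "5%",
        "width": "10%",     # height omitted: scales proportionally
        "opacity": "80%",
    },
}
```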

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en""es""it""pt""de""fr""pl""ru""nl""ca""tr""sv""uk""no""fi""sk""el""cs""hr""da""ro""bg"

The language to generate subtitles in.
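A main input that requests auto-generated English subtitles could look like this sketch (the source URL and passthrough value are hypothetical; for direct uploads the url field would be omitted):

```python
main_input = {
    "url": "https://example.com/video.mp4",  # hypothetical source file
    "generated_subtitles": [
        {
            "name": "English (auto)",
            "language_code": "en",
            "passthrough": "auto-caption",  # arbitrary app metadata
        }
    ],
}
```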

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].type
string
Possible values: "video""audio""text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.asset_id
string

Only set once the upload is in the asset_created state.

data.cors_origin
string

If the upload URL will be used in a browser, you must specify the origin in order for the signed URL to have the correct CORS headers.
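Tying these fields together, a request body for creating a Direct Upload might look like this sketch (the origin and passthrough values are hypothetical examples):

```python
create_upload_body = {
    "cors_origin": "https://example.com",  # origin of the page doing the upload
    "timeout": 3600,                       # seconds the signed URL stays valid
    "new_asset_settings": {
        "playback_policies": ["public"],
        "passthrough": "user-42",          # arbitrary app metadata
    },
}
```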

data.url
string

The URL to upload the associated source media to.

data.test
boolean

Indicates if this is a test Direct Upload, in which case the Asset that gets created will be a test Asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Max attempts number for the webhook attempt

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null

The upload has been cancelled. This event fires after calling the cancel direct upload API.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Direct Upload.

data.timeout
integer (default: 3600, minimum: 60, maximum: 604800)

Max time in seconds for the signed upload URL to be valid. If a successful upload has not occurred before the timeout limit, the direct upload is marked timed_out.

data.status
string
Possible values: "waiting""asset_created""errored""cancelled""timed_out"
data.new_asset_settings
object
data.new_asset_settings.id
string

Unique identifier for the Asset. Max 255 characters.

data.new_asset_settings.created_at
string

Time the Asset was created, defined as a Unix timestamp (seconds since epoch).

data.new_asset_settings.status
string
Possible values: "preparing""ready""errored"

The status of the asset.

data.new_asset_settings.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.new_asset_settings.max_stored_resolutionDeprecated
string
Possible values: "Audio only""SD""HD""FHD""UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.new_asset_settings.resolution_tier
string
Possible values: "audio-only""720p""1080p""1440p""2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.new_asset_settings.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.new_asset_settings.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.new_asset_settings.playback_ids[].id
string

Unique identifier for the PlaybackID

data.new_asset_settings.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.new_asset_settings.tracks
array

The individual media tracks that make up an asset.

data.new_asset_settings.tracks[].id
string

Unique identifier for the Track

data.new_asset_settings.tracks[].type
string
Possible values: "video""audio""text"

The type of track

data.new_asset_settings.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.new_asset_settings.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.new_asset_settings.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.new_asset_settings.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.new_asset_settings.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.new_asset_settings.tracks[].max_channel_layout (Deprecated)
string

Only set for the audio type track.

data.new_asset_settings.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.new_asset_settings.tracks[].text_source
string
Possible values: "uploaded""embedded""generated_live""generated_live_final""generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.new_asset_settings.tracks[].language_code
string

The language code must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.new_asset_settings.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.new_asset_settings.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.new_asset_settings.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.new_asset_settings.tracks[].status
string
Possible values: "preparing""ready""errored""deleted"

The status of the track. This parameter is only set for text type tracks.

data.new_asset_settings.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.new_asset_settings.errors
object

Object that describes any errors that happened when processing this asset.

data.new_asset_settings.errors.type
string

The type of error that occurred for this asset.

data.new_asset_settings.errors.messages
array

Error messages with more details.

data.new_asset_settings.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.new_asset_settings.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.new_asset_settings.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.new_asset_settings.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none and when the temporary URL expires.

data.new_asset_settings.master.status
string
Possible values: "ready""preparing""errored"
data.new_asset_settings.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.new_asset_settings.master_access
string (default: none)
Possible values: "temporary""none"
data.new_asset_settings.mp4_support
string (default: none)
Possible values: "standard""none"
data.new_asset_settings.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.new_asset_settings.static_renditions.status
string (default: disabled)
Possible values: "ready""preparing""disabled""errored"

Indicates the status of downloadable MP4 versions of this asset.

data.new_asset_settings.static_renditions.files
array

Array of file objects.

data.new_asset_settings.static_renditions.files[].name
string
Possible values: "low.mp4""medium.mp4""high.mp4""audio.m4a"
data.new_asset_settings.static_renditions.files[].ext
string
Possible values: "mp4""m4a"

Extension of the static rendition file

data.new_asset_settings.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.new_asset_settings.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.new_asset_settings.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.new_asset_settings.static_renditions.files[].filesize
string

The file size in bytes
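Once a rendition's status is ready, its file can be fetched by name. A minimal sketch, assuming the stream.mux.com/{PLAYBACK_ID}/{name} URL form described in the Download your videos guide:

```python
# Sketch: build a download URL for a static rendition file.
# Assumes the URL form from the "Download your videos" guide.
RENDITION_NAMES = {"low.mp4", "medium.mp4", "high.mp4", "audio.m4a"}

def rendition_url(playback_id, name):
    if name not in RENDITION_NAMES:
        raise ValueError(f"unknown rendition name: {name}")
    return f"https://stream.mux.com/{playback_id}/{name}"
```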

data.new_asset_settings.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.new_asset_settings.recording_times[].started_at
string

The time at which the recording for the live stream started. The time value is Unix epoch time represented in ISO 8601 format.

data.new_asset_settings.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.new_asset_settings.recording_times[].type (Beta)
string
Possible values: "content""slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.

data.new_asset_settings.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.

data.new_asset_settings.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.new_asset_settings.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.new_asset_settings.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called Group of Pictures or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.new_asset_settings.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.new_asset_settings.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.new_asset_settings.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures or GOP) is higher than what is considered standard, typically 16 Mbps.

data.new_asset_settings.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.new_asset_settings.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.new_asset_settings.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.new_asset_settings.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason for when the input file is created with non-standard encoding parameters.

data.new_asset_settings.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.new_asset_settings.test
boolean

True means this live stream is a test asset. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.ingest_type
string
Possible values: "on_demand_url""on_demand_direct_upload""on_demand_clip""live_rtmp""live_srt"

The type of ingest used to create the asset.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array when only one playback policy is needed.
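For illustration, the array form and the single-string shorthand request the same thing for one policy; a small normalizer (a sketch, not part of any Mux SDK) makes the equivalence concrete:

```python
# Sketch: the single-string shorthand and the one-element array request
# the same playback policy when creating an asset.
settings_array = {"playback_policies": ["public"]}
settings_shorthand = {"playback_policies": "public"}

def normalize_policies(settings):
    # Normalize the shorthand string form to the array form.
    p = settings["playback_policies"]
    return [p] if isinstance(p, str) else list(p)

assert normalize_policies(settings_array) == normalize_policies(settings_shorthand)
```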

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting subtitles and closed captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
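Putting the bullet points above together, an inputs array for a main video file plus an uploaded English subtitle file could look like this sketch (the URLs are placeholders):

```python
# Sketch: inputs array with a main muxed file and an SRT subtitle track.
inputs = [
    {"url": "https://example.com/video.mp4"},       # main input file
    {
        "url": "https://example.com/captions.srt",  # text track input
        "type": "text",
        "text_type": "subtitles",
        "language_code": "en",
        "name": "English",
        "closed_captions": False,
    },
]
```
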
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top""middle""bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left""center""right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)
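Combining the overlay fields above, a watermark input pinned to the bottom-right corner might be configured like this sketch (the image URL is a placeholder):

```python
# Sketch: watermark overlay pinned to the bottom-right at half opacity.
overlay_input = {
    "url": "https://example.com/watermark.png",
    "overlay_settings": {
        "vertical_align": "bottom",
        "vertical_margin": "5%",
        "horizontal_align": "right",
        "horizontal_margin": "5%",
        "width": "10%",   # height omitted, so it scales proportionally
        "opacity": "50%",
    },
}
```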

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en""es""it""pt""de""fr""pl""ru""nl""ca""tr""sv""uk""no""fi""sk""el""cs""hr""da""ro""bg"

The language to generate subtitles in.
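As a sketch, requesting auto-generated English subtitles on the main input could look like the following (the URL and passthrough value are placeholders; for direct uploads the url would be omitted):

```python
# Sketch: generated_subtitles configuration on the first (main) input.
main_input = {
    "url": "https://example.com/video.mp4",
    "generated_subtitles": [
        {
            "name": "English (auto)",
            "language_code": "en",
            "passthrough": "asr-track-1",
        }
    ],
}
```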

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.
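For example, a 30-second clip of an existing asset combines the mux:// URL form with start_time and end_time (the asset ID is a placeholder):

```python
# Sketch: clip seconds 10 through 40 of an existing Mux asset.
clip_input = {
    "url": "mux://assets/EXISTING_ASSET_ID",
    "start_time": 10.0,
    "end_time": 40.0,
}
clip_duration = clip_input["end_time"] - clip_input["start_time"]  # 30.0 seconds
```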

data.new_asset_settings.inputs[].type
string
Possible values: "video""audio""text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.asset_id
string

Only set once the upload is in the asset_created state.

data.cors_origin
string

If the upload URL will be used in a browser, you must specify the origin in order for the signed URL to have the correct CORS headers.

data.url
string

The URL to upload the associated source media to.

data.test
boolean

Indicates if this is a test Direct Upload, in which case the Asset that gets created will be a test Asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

The maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

Upload has been created. This event fires after creating a direct upload.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Direct Upload.

data.timeout
integer (default: 3600, minimum: 60, maximum: 604800)

Max time in seconds for the signed upload URL to be valid. If a successful upload has not occurred before the timeout limit, the direct upload is marked timed_out.
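Tying the upload fields together, a direct upload request body might look like this sketch (values are placeholders; field names follow this reference):

```python
# Sketch: request body for creating a direct upload.
upload_request = {
    "cors_origin": "https://example.com",  # required for browser uploads
    "timeout": 3600,                       # seconds; allowed range 60-604800
    "new_asset_settings": {
        "playback_policies": ["public"],
        "passthrough": "user-1234",
    },
}
```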

data.status
string
Possible values: "waiting""asset_created""errored""cancelled""timed_out"
data.new_asset_settings
object
data.new_asset_settings.id
string

Unique identifier for the Asset. Max 255 characters.

data.new_asset_settings.created_at
string

Time the Asset was created, defined as a Unix timestamp (seconds since epoch).

data.new_asset_settings.status
string
Possible values: "preparing""ready""errored"

The status of the asset.

data.new_asset_settings.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.new_asset_settings.max_stored_resolution (Deprecated)
string
Possible values: "Audio only""SD""HD""FHD""UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.new_asset_settings.resolution_tier
string
Possible values: "audio-only""720p""1080p""1440p""2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.new_asset_settings.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.new_asset_settings.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.new_asset_settings.playback_ids[].id
string

Unique identifier for the PlaybackID

data.new_asset_settings.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.new_asset_settings.tracks
array

The individual media tracks that make up an asset.

data.new_asset_settings.tracks[].id
string

Unique identifier for the Track

data.new_asset_settings.tracks[].type
string
Possible values: "video""audio""text"

The type of track

data.new_asset_settings.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.new_asset_settings.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.new_asset_settings.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.new_asset_settings.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.new_asset_settings.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.new_asset_settings.tracks[].max_channel_layout (Deprecated)
string

Only set for the audio type track.

data.new_asset_settings.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.new_asset_settings.tracks[].text_source
string
Possible values: "uploaded""embedded""generated_live""generated_live_final""generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
data.new_asset_settings.tracks[].language_code
string

The language code must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.new_asset_settings.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.new_asset_settings.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.new_asset_settings.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.new_asset_settings.tracks[].status
string
Possible values: "preparing""ready""errored""deleted"

The status of the track. This parameter is only set for text type tracks.

data.new_asset_settings.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.new_asset_settings.errors
object

Object that describes any errors that happened when processing this asset.

data.new_asset_settings.errors.type
string

The type of error that occurred for this asset.

data.new_asset_settings.errors.messages
array

Error messages with more details.

data.new_asset_settings.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.new_asset_settings.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.new_asset_settings.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.new_asset_settings.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none and when the temporary URL expires.

data.new_asset_settings.master.status
string
Possible values: "ready""preparing""errored"
data.new_asset_settings.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.new_asset_settings.master_access
string (default: none)
Possible values: "temporary""none"
data.new_asset_settings.mp4_support
string (default: none)
Possible values: "standard""none"
data.new_asset_settings.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.new_asset_settings.static_renditions.status
string (default: disabled)
Possible values: "ready""preparing""disabled""errored"

Indicates the status of downloadable MP4 versions of this asset.

data.new_asset_settings.static_renditions.files
array

Array of file objects.

data.new_asset_settings.static_renditions.files[].name
string
Possible values: "low.mp4""medium.mp4""high.mp4""audio.m4a"
data.new_asset_settings.static_renditions.files[].ext
string
Possible values: "mp4""m4a"

Extension of the static rendition file

data.new_asset_settings.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.new_asset_settings.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.new_asset_settings.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.new_asset_settings.static_renditions.files[].filesize
string

The file size in bytes

data.new_asset_settings.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.new_asset_settings.recording_times[].started_at
string

The time at which the recording for the live stream started. The time value is Unix epoch time represented in ISO 8601 format.

data.new_asset_settings.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.new_asset_settings.recording_times[].type (Beta)
string
Possible values: "content""slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.

data.new_asset_settings.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.

data.new_asset_settings.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.new_asset_settings.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.new_asset_settings.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called the Group of Pictures, or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.new_asset_settings.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.new_asset_settings.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.new_asset_settings.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures, or GOP) is higher than what's considered standard, which is typically 16 Mbps.

data.new_asset_settings.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.new_asset_settings.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.new_asset_settings.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.new_asset_settings.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason set when the input file is created with non-standard encoding parameters.

data.new_asset_settings.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.
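Because non_standard_input_reasons is omitted entirely for standard inputs, a simple presence check is enough to classify an asset. A sketch using the fields above (function names are illustrative):

```python
def is_standard_input(asset_data: dict) -> bool:
    """Per the reference above, non_standard_input_reasons only exists
    when the input is non-standard, so its absence means the input
    qualified as a standard input."""
    return "non_standard_input_reasons" not in asset_data

def describe_reasons(asset_data: dict) -> list[str]:
    """Flatten the reasons object into human-readable strings,
    e.g. for logging which properties slowed down processing."""
    reasons = asset_data.get("non_standard_input_reasons", {})
    return [f"{field}: {value}" for field, value in reasons.items()]
```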

data.new_asset_settings.test
boolean

True means this is a test asset. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets you can create. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.ingest_type
string
Possible values: "on_demand_url""on_demand_direct_upload""on_demand_clip""live_rtmp""live_srt"

The type of ingest used to create the asset.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string can be used in place of the array when only one playback policy is needed.

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
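Putting the cases above together, a new_asset_settings payload with a main input plus an English subtitle file might look like the following sketch (URLs are placeholders):

```python
new_asset_settings = {
    "inputs": [
        # Main input: the muxed file for Mux to download (placeholder URL).
        {"url": "https://example.com/movie.mp4"},
        # Subtitle file: type and text_type are required for text tracks.
        {
            "url": "https://example.com/movie.srt",
            "type": "text",
            "text_type": "subtitles",
            "language_code": "en",
            "name": "English",
            "closed_captions": False,
        },
    ],
    "playback_policies": ["public"],
}
```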
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in url should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top""middle""bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left""center""right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)
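A sketch of an overlay configuration using the fields above, pinning a watermark to the bottom-right corner (the logo URL is a placeholder and must stay reachable for the life of the asset):

```python
# Watermark pinned to the bottom-right corner, 10% of frame width,
# nudged 5% in from each edge, at half opacity.
overlay_settings = {
    "vertical_align": "bottom",
    "vertical_margin": "5%",
    "horizontal_align": "right",
    "horizontal_margin": "5%",
    "width": "10%",   # height omitted: scales proportionally to the width
    "opacity": "50%",
}
watermark_input = {
    "url": "https://example.com/logo.png",  # placeholder watermark image
    "overlay_settings": overlay_settings,
}
```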

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en""es""it""pt""de""fr""pl""ru""nl""ca""tr""sv""uk""no""fi""sk""el""cs""hr""da""ro""bg"

The language to generate subtitles in.

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.
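For clipping, the start_time and end_time fields combine with the mux:// URL form described under inputs[].url. A sketch (SOURCE_ASSET_ID is a placeholder for a real Asset ID):

```python
# Clip 30 seconds (t=60s to t=90s) out of an existing Mux asset.
SOURCE_ASSET_ID = "SOURCE_ASSET_ID"  # placeholder
clip_input = {
    "url": f"mux://assets/{SOURCE_ASSET_ID}",
    "start_time": 60,  # defaults to 0 when omitted
    "end_time": 90,    # defaults to the source duration when omitted
}
```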

data.new_asset_settings.inputs[].type
string
Possible values: "video""audio""text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

Arbitrary user-supplied metadata set for the track. Max 255 characters. This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.asset_id
string

Only set once the upload is in the asset_created state.

data.cors_origin
string

If the upload URL will be used in a browser, you must specify the origin in order for the signed URL to have the correct CORS headers.

data.url
string

The URL to upload the associated source media to.

data.test
boolean

Indicates if this is a test Direct Upload, in which case the Asset that gets created will be a test Asset.

attempts
array

Delivery attempts for the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt
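The attempts array can be scanned to spot deliveries your endpoint did not acknowledge. A sketch treating anything other than a 2xx response status code (or a missing one, e.g. a connection failure) as failed:

```python
def failed_attempts(attempts: list[dict]) -> list[dict]:
    """Return webhook delivery attempts that did not get a 2xx response.
    An attempt with no response_status_code also counts as failed."""
    def ok(attempt: dict) -> bool:
        code = attempt.get("response_status_code")
        return code is not None and 200 <= code < 300
    return [a for a in attempts if not ok(a)]
```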

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null

Upload has encountered an error. This event fires when the asset created by the direct upload fails. Most commonly this happens when an end-user uploads a non-video file.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Direct Upload.

data.timeout
integer (default: 3600, minimum: 60, maximum: 604800)

Max time in seconds for the signed upload URL to be valid. If a successful upload has not occurred before the timeout limit, the direct upload is marked timed_out.

data.status
string
Possible values: "waiting""asset_created""errored""cancelled""timed_out"
data.new_asset_settings
object
data.new_asset_settings.id
string

Unique identifier for the Asset. Max 255 characters.

data.new_asset_settings.created_at
string

Time the Asset was created, defined as a Unix timestamp (seconds since epoch).

data.new_asset_settings.status
string
Possible values: "preparing""ready""errored"

The status of the asset.

data.new_asset_settings.duration
number

The duration of the asset in seconds (max duration for a single asset is 12 hours).

data.new_asset_settings.max_stored_resolutionDeprecated
string
Possible values: "Audio only""SD""HD""FHD""UHD"

This field is deprecated. Please use resolution_tier instead. The maximum resolution that has been stored for the asset. The asset may be delivered at lower resolutions depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored.

data.new_asset_settings.resolution_tier
string
Possible values: "audio-only""720p""1080p""1440p""2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage. This field also represents the highest resolution tier that the content can be delivered at, however the actual resolution may be lower depending on the device, bandwidth, and exact resolution of the uploaded asset.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.max_stored_frame_rate
number

The maximum frame rate that has been stored for the asset. The asset may be delivered at lower frame rates depending on the device and bandwidth, however it cannot be delivered at a higher value than is stored. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.new_asset_settings.aspect_ratio
string

The aspect ratio of the asset in the form of width:height, for example 16:9.

data.new_asset_settings.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.new_asset_settings.playback_ids[].id
string

Unique identifier for the PlaybackID

data.new_asset_settings.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.new_asset_settings.tracks
array

The individual media tracks that make up an asset.

data.new_asset_settings.tracks[].id
string

Unique identifier for the Track

data.new_asset_settings.tracks[].type
string
Possible values: "video""audio""text"

The type of track

data.new_asset_settings.tracks[].duration
number

The duration in seconds of the track media. This parameter is not set for text type tracks. This field is optional and may not be set. The top level duration field of an asset will always be set.

data.new_asset_settings.tracks[].max_width
integer

The maximum width in pixels available for the track. Only set for the video type track.

data.new_asset_settings.tracks[].max_height
integer

The maximum height in pixels available for the track. Only set for the video type track.

data.new_asset_settings.tracks[].max_frame_rate
number

The maximum frame rate available for the track. Only set for the video type track. This field may return -1 if the frame rate of the input cannot be reliably determined.

data.new_asset_settings.tracks[].max_channels
integer

The maximum number of audio channels the track supports. Only set for the audio type track.

data.new_asset_settings.tracks[].max_channel_layoutDeprecated
string

Only set for the audio type track.

data.new_asset_settings.tracks[].text_type
string
Possible values: "subtitles"

This parameter is only set for text type tracks.

data.new_asset_settings.tracks[].text_source
string
Possible values: "uploaded""embedded""generated_live""generated_live_final""generated_vod"

The source of the text contained in a Track of type text. Valid text_source values are listed below.

  • uploaded: Tracks uploaded to Mux as caption or subtitle files using the Create Asset Track API.
  • embedded: Tracks extracted from an embedded stream of CEA-608 closed captions.
  • generated_vod: Tracks generated by automatic speech recognition on an on-demand asset.
  • generated_live: Tracks generated by automatic speech recognition on a live stream configured with generated_subtitles. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
  • generated_live_final: Tracks generated by automatic speech recognition on a live stream using generated_subtitles. The accuracy, timing, and formatting of these subtitles is improved compared to the corresponding generated_live tracks. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
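The playback rule above (only the generated_live_final track is used when both generated_live and generated_live_final tracks are ready) can be applied when filtering tracks. A sketch; playable_subtitles is an illustrative helper:

```python
def playable_subtitles(tracks: list[dict]) -> list[dict]:
    """Return ready subtitle text tracks, dropping generated_live
    tracks whenever a generated_live_final track is also ready."""
    subs = [
        t for t in tracks
        if t.get("type") == "text"
        and t.get("text_type") == "subtitles"
        and t.get("status") == "ready"
    ]
    if any(t.get("text_source") == "generated_live_final" for t in subs):
        subs = [t for t in subs if t.get("text_source") != "generated_live"]
    return subs
```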
data.new_asset_settings.tracks[].language_code
string

The language code value represents BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is only set for text and audio track types.

data.new_asset_settings.tracks[].name
string

The name of the track containing a human-readable description. The HLS manifest will associate a subtitle text or audio track with this value. For example, the value should be "English" for a subtitle text track for the language_code value of en-US. This parameter is only set for text and audio track types.

data.new_asset_settings.tracks[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This parameter is only set for tracks where type is text and text_type is subtitles.

data.new_asset_settings.tracks[].passthrough
string

Arbitrary user-supplied metadata set for the track either when creating the asset or track. This parameter is only set for text type tracks. Max 255 characters.

data.new_asset_settings.tracks[].status
string
Possible values: "preparing""ready""errored""deleted"

The status of the track. This parameter is only set for text type tracks.

data.new_asset_settings.tracks[].primary
boolean

For an audio track, indicates that this is the primary audio track, ingested from the main input for this asset. The primary audio track cannot be deleted.

data.new_asset_settings.errors
object

Object that describes any errors that happened when processing this asset.

data.new_asset_settings.errors.type
string

The type of error that occurred for this asset.

data.new_asset_settings.errors.messages
array

Error messages with more details.

data.new_asset_settings.upload_id
string

Unique identifier for the Direct Upload. This is an optional parameter added when the asset is created from a direct upload.

data.new_asset_settings.is_live
boolean

Indicates whether the live stream that created this asset is currently active and not in idle state. This is an optional parameter added when the asset is created from a live stream.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.new_asset_settings.live_stream_id
string

Unique identifier for the live stream. This is an optional parameter added when the asset is created from a live stream.

data.new_asset_settings.master
object

An object containing the current status of Master Access and the link to the Master MP4 file when ready. This object does not exist if master_access is set to none and when the temporary URL expires.

data.new_asset_settings.master.status
string
Possible values: "ready""preparing""errored"
data.new_asset_settings.master.url
string

The temporary URL to the master version of the video, as an MP4 file. This URL will expire after 24 hours.

data.new_asset_settings.master_access
string (default: none)
Possible values: "temporary""none"
data.new_asset_settings.mp4_support
string (default: none)
Possible values: "standard""none"
data.new_asset_settings.source_asset_id
string

Asset Identifier of the video used as the source for creating the clip.

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.static_renditions
object

An object containing the current status of any static renditions (mp4s). The object does not exist if no static renditions have been requested. See Download your videos for more information.

data.new_asset_settings.static_renditions.status
string (default: disabled)
Possible values: "ready""preparing""disabled""errored"

Indicates the status of downloadable MP4 versions of this asset.

data.new_asset_settings.static_renditions.files
array

Array of file objects.

data.new_asset_settings.static_renditions.files[].name
string
Possible values: "low.mp4""medium.mp4""high.mp4""audio.m4a"
data.new_asset_settings.static_renditions.files[].ext
string
Possible values: "mp4""m4a"

Extension of the static rendition file

data.new_asset_settings.static_renditions.files[].height
integer

The height of the static rendition's file in pixels

data.new_asset_settings.static_renditions.files[].width
integer

The width of the static rendition's file in pixels

data.new_asset_settings.static_renditions.files[].bitrate
integer

The bitrate in bits per second

data.new_asset_settings.static_renditions.files[].filesize
string

The file size in bytes

data.new_asset_settings.recording_times
array

An array of individual live stream recording sessions. A recording session is created on each encoder connection during the live stream. Additionally, any time slate media is inserted during brief interruptions in the live stream media, or when the live streaming software disconnects, a recording session representing the slate media will be added with a "slate" type.

data.new_asset_settings.recording_times[].started_at
string

The time at which the recording for the live stream started. The time value is Unix epoch time represented in ISO 8601 format.

data.new_asset_settings.recording_times[].duration
number

The duration of the live stream recorded. The time value is in seconds.

data.new_asset_settings.recording_times[].typeBeta
string
Possible values: "content""slate"

The type of media represented by the recording session, either content for normal stream content or slate for slate media inserted during stream interruptions.

data.new_asset_settings.non_standard_input_reasons
object

An object containing one or more reasons the input file is non-standard. See the guide on minimizing processing time for more information on what a standard input is defined as. This object only exists on on-demand assets that have non-standard inputs, so if missing you can assume the input qualifies as standard.

data.new_asset_settings.non_standard_input_reasons.video_codec
string

The video codec used on the input file. For example, the input file encoded with hevc video codec is non-standard and the value of this parameter is hevc.

data.new_asset_settings.non_standard_input_reasons.audio_codec
string

The audio codec used on the input file. Non-AAC audio codecs are non-standard.

data.new_asset_settings.non_standard_input_reasons.video_gop_size
string
Possible values: "high"

The video key frame interval (also called the Group of Pictures, or GOP) of the input file is high. This parameter is present when the GOP is greater than 20 seconds.

data.new_asset_settings.non_standard_input_reasons.video_frame_rate
string

The video frame rate of the input file. Video with average frames per second (fps) less than 5 or greater than 120 is non-standard. A -1 frame rate value indicates Mux could not determine the frame rate of the video track.

data.new_asset_settings.non_standard_input_reasons.video_resolution
string

The video resolution of the input file. Video resolution higher than 2048 pixels on any one dimension (height or width) is considered non-standard. The resolution value is presented as width x height in pixels.

data.new_asset_settings.non_standard_input_reasons.video_bitrate
string
Possible values: "high"

The video bitrate of the input file is high. This parameter is present when the average bitrate of any key frame interval (also known as Group of Pictures, or GOP) is higher than what's considered standard, which is typically 16 Mbps.

data.new_asset_settings.non_standard_input_reasons.pixel_aspect_ratio
string

The video pixel aspect ratio of the input file.

data.new_asset_settings.non_standard_input_reasons.video_edit_list
string
Possible values: "non-standard"

Video Edit List reason indicates that the input file's video track contains a complex Edit Decision List.

data.new_asset_settings.non_standard_input_reasons.audio_edit_list
string
Possible values: "non-standard"

Audio Edit List reason indicates that the input file's audio track contains a complex Edit Decision List.

data.new_asset_settings.non_standard_input_reasons.unexpected_media_file_parameters
string
Possible values: "non-standard"

A catch-all reason set when the input file is created with non-standard encoding parameters.

data.new_asset_settings.non_standard_input_reasons.unsupported_pixel_format
string

The video pixel format, as a string, returned by libav. Considered non-standard if not one of yuv420p or yuvj420p.

data.new_asset_settings.test
boolean

True means this is a test asset. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets you can create. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.ingest_type
string
Possible values: "on_demand_url""on_demand_direct_upload""on_demand_clip""live_rtmp""live_srt"

The type of ingest used to create the asset.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string can be used in place of the array when only one playback policy is needed.

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in url should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top""middle""bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left""center""right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)
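Taken together, the overlay_settings fields above describe where and how a watermark image is composited over the video. A minimal sketch, assuming illustrative values (the helper function and the watermark URL are hypothetical, not part of the Mux API):

```python
def overlay_settings(
    vertical_align="top",
    vertical_margin="5%",
    horizontal_align="right",
    horizontal_margin="5%",
    width="10%",
    opacity="80%",
):
    """Assemble an overlay_settings object from the fields documented above."""
    settings = {
        "vertical_align": vertical_align,
        "vertical_margin": vertical_margin,
        "horizontal_align": horizontal_align,
        "horizontal_margin": horizontal_margin,
        "width": width,
        "opacity": opacity,
    }
    # Omit unset keys so Mux applies its documented defaults.
    return {k: v for k, v in settings.items() if v is not None}

# An input object carrying the watermark image (URL is a placeholder).
watermark_input = {
    "url": "https://example.com/watermark.png",
    "overlay_settings": overlay_settings(),
}
```

Leaving width and height unset keeps the image's true pixel size, applied as if the video were scaled to a 1920x1080 frame.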

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en""es""it""pt""de""fr""pl""ru""nl""ca""tr""sv""uk""no""fi""sk""el""cs""hr""da""ro""bg"

The language to generate subtitles in.

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.
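The start_time and end_time fields pair with the mux://assets/{asset_id} URL form to clip an existing asset. A sketch of such a request body, assuming a placeholder asset ID (sending it, via an authenticated POST to the asset-creation endpoint, is omitted here):

```python
# Placeholder identifier of the source asset to clip from.
ASSET_ID = "existing-asset-id"

clip_request = {
    "input": [
        {
            # The mux:// template tells Mux to clip from an existing asset.
            "url": f"mux://assets/{ASSET_ID}",
            "start_time": 10.0,  # seconds from the start of the source asset
            "end_time": 25.0,    # defaults to the source duration if omitted
        }
    ],
    # Field name follows the playback_policies field documented in this schema.
    "playback_policies": ["public"],
}
```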

data.new_asset_settings.inputs[].type
string
Possible values: "video""audio""text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.asset_id
string

Only set once the upload is in the asset_created state.

data.cors_origin
string

If the upload URL will be used in a browser, you must specify the origin in order for the signed URL to have the correct CORS headers.
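Since cors_origin determines the CORS headers on the signed upload URL, it must match the page origin that will perform the browser upload. A sketch of a direct-upload creation body, with an assumed origin (the request itself, an authenticated POST to the uploads endpoint, is not shown):

```python
upload_request = {
    # Must match the origin of the page doing the browser-side upload.
    "cors_origin": "https://example.com",
    "new_asset_settings": {
        "playback_policies": ["public"],
    },
}
```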

data.url
string

The URL to upload the associated source media to.
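The source media is then sent to data.url with an HTTP PUT. This sketch only constructs the request object; the URL and bytes are placeholders and the network call is left commented out:

```python
import urllib.request

upload_url = "https://storage.example.com/signed-upload-url"  # stands in for data.url
video_bytes = b"..."  # contents of the source media file

req = urllib.request.Request(
    upload_url,
    data=video_bytes,
    method="PUT",
    headers={"Content-Type": "video/mp4"},
)
# urllib.request.urlopen(req)  # performs the actual upload
```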

data.test
boolean

Indicates if this is a test Direct Upload, in which case the Asset that gets created will be a test Asset.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Max attempts number for the webhook attempt

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt
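Before trusting any of the event payloads described here, the receiving endpoint should verify the request signature. The sketch below assumes the common timestamped-HMAC scheme (a header of the form t=<unix_ts>,v1=<hex_hmac>, with the HMAC-SHA256 computed over "{timestamp}.{raw_body}" using your signing secret); check the header name and format against Mux's webhook verification guide.

```python
import hashlib
import hmac

def verify_mux_signature(raw_body: bytes, signature_header: str, secret: str) -> bool:
    """Check a signature header of the form 't=<unix_ts>,v1=<hex_hmac>'."""
    parts = dict(p.split("=", 1) for p in signature_header.split(","))
    # The signed payload is the timestamp, a dot, then the raw request body.
    payload = parts["t"].encode() + b"." + raw_body
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(expected, parts["v1"])
```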

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

A new live stream has been created. Broadcasters with a stream_key can start sending encoder feed to this live stream.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Live Stream. Max 255 characters.

data.created_at
integer
data.stream_key
string

Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.

data.active_asset_id
string

The Asset that is currently being created if there is an active broadcast.

data.recent_asset_ids
array

An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.

data.status
string
Possible values: "active""idle""disabled"

idle indicates that there is no active broadcast. active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
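The two policies above map directly onto how playback URLs are built. A small helper, assuming the standard .m3u8 HLS extension on stream.mux.com URLs (token creation itself requires signing keys and is not shown):

```python
def playback_url(playback_id: str, token=None) -> str:
    """Build an HLS playback URL; signed playback IDs need a ?token=... query."""
    url = f"https://stream.mux.com/{playback_id}.m3u8"
    if token:
        # Required for playback IDs created with the "signed" policy.
        url += f"?token={token}"
    return url
```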

data.new_asset_settings
object
data.new_asset_settings.input
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.input[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.input[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.input[].overlay_settings.vertical_align
string
Possible values: "top""middle""bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.input[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.input[].overlay_settings.horizontal_align
string
Possible values: "left""center""right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.input[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.input[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.input[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.input[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)

data.new_asset_settings.input[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.input[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.input[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.input[].generated_subtitles[].language_code
string (default: en)
Possible values: "en""es""it""pt""de""fr""pl""ru""nl""ca""tr""sv""uk""no""fi""sk""el""cs""hr""da""ro""bg"

The language to generate subtitles in.

data.new_asset_settings.input[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].type
string
Possible values: "video""audio""text"

This parameter is required for text type tracks.

data.new_asset_settings.input[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.input[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.input[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.input[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.input[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.

data.new_asset_settings.mp4_support
string
Possible values: "none""standard"

Specify what level (if any) of support for mp4 playback. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8) which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or downloading for offline playback. See the Download your videos guide for more information.

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.master_access
string
Possible values: "none""temporary"

Specify what level (if any) of support for master access. Master access can be enabled temporarily for your asset to be downloaded. See the Download your videos guide for more information.

data.new_asset_settings.test
boolean

Marks the asset as a test asset when the value is set to true. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.
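The asset-level settings above can be combined into one new_asset_settings object. An illustrative sketch using fields from this schema; the values are examples, not recommendations, and the passthrough string is a placeholder:

```python
new_asset_settings = {
    "playback_policies": ["public"],
    "mp4_support": "standard",       # enables static mp4 renditions for download
    "normalize_audio": False,
    "max_resolution_tier": "1080p",  # caps encode/storage/streaming resolution
    "encoding_tier": "smart",
    "passthrough": "my-internal-video-id",  # your own ID, echoed in webhooks
}
```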

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top""middle""bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left""center""right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en""es""it""pt""de""fr""pl""ru""nl""ca""tr""sv""uk""no""fi""sk""el""cs""hr""da""ro""bg"

The language to generate subtitles in.

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].type
string
Possible values: "video""audio""text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.audio_only
boolean

The live stream only processes the audio track if the value is set to true. Mux drops the video track if one is broadcast.

data.embedded_subtitles
array

Describes the embedded closed caption configuration of the incoming live stream.

data.embedded_subtitles[].name
string

A name for this live stream closed caption track.

data.embedded_subtitles[].passthrough
string

Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.

data.embedded_subtitles[].language_code
string (default: en)

The language of the closed caption stream. Value must be BCP 47 compliant.

data.embedded_subtitles[].language_channel
string (default: cc1)
Possible values: "cc1""cc2""cc3""cc4"

CEA-608 caption channel to read data from.

data.generated_subtitles
array

Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles configured will automatically receive two text tracks. The first of these will have a text_source value of generated_live, and will be available with ready status as soon as the stream is live. The second text track will have a text_source value of generated_live_final and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.

data.generated_subtitles[].name
string

A name for this live stream subtitle track.

data.generated_subtitles[].passthrough
string

Arbitrary metadata set for the live stream subtitle track. Max 255 characters.

data.generated_subtitles[].language_code
string (default: en)
Possible values: "en""en-US"

The language to generate subtitles in.

data.generated_subtitles[].transcription_vocabulary_ids
array

Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
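Putting the generated_subtitles fields together for a live stream yields a configuration like the following sketch. The track name, passthrough string, and vocabulary ID are placeholders:

```python
generated_subtitles = [
    {
        "name": "English CC (auto)",
        "language_code": "en",          # "en" or "en-US" for live streams
        "passthrough": "auto-captions",
        # Placeholder Transcription Vocabulary IDs; at most 1000 phrases
        # across all vocabularies are used.
        "transcription_vocabulary_ids": ["vocab-id-1"],
    }
]
```

Each recorded asset will then receive a generated_live track during the broadcast and a more accurate generated_live_final track once the stream ends.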

data.reconnect_window
number (default: 60, minimum: 0, maximum: 1800)

When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).

If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.

Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency option.
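Since the default Reconnect Window differs by latency mode, a hypothetical helper can make the documented behavior explicit when building a live stream request (the function and its policy are illustrative, not part of the Mux API):

```python
def reconnect_settings(latency_mode: str, reconnect_window=None) -> dict:
    """Fill in reconnect defaults per the documented latency-mode behavior."""
    if reconnect_window is None:
        # Standard latency defaults to 60s; reduced/low default to 0 (disabled).
        reconnect_window = 60 if latency_mode == "standard" else 0
    settings = {"latency_mode": latency_mode, "reconnect_window": reconnect_window}
    # Standard latency streams need slate explicitly enabled; it is highly
    # recommended for windows above 60 seconds.
    if latency_mode == "standard" and reconnect_window > 60:
        settings["use_slate_for_standard_latency"] = True
    return settings
```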

data.use_slate_for_standard_latency
boolean (default: false)

By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.

data.reconnect_slate_url
string

The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.

data.reduced_latency (Deprecated)
boolean

This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.

data.simulcast_targets
array

Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.

data.simulcast_targets[].id
string

ID of the Simulcast Target

data.simulcast_targets[].passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.simulcast_targets[].status
string
Possible values: "idle""starting""broadcasting""errored"

The current status of the simulcast target. See Statuses below for detailed description.

  • idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcast. When a simulcast target has this status it will have an error_severity field with more details about the error.
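A webhook consumer can act on these statuses directly. The sketch below shows one example policy, alerting only on fatal errors, since those will not be retried for the current stream; the function is hypothetical:

```python
def should_alert(target: dict) -> bool:
    """Alert only on simulcast errors Mux will not retry for this stream."""
    if target.get("status") != "errored":
        return False
    # "normal" errors may recover on their own; "fatal" ones will not.
    return target.get("error_severity") == "fatal"
```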
data.simulcast_targets[].stream_key
string

Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.

data.simulcast_targets[].url
string

RTMP hostname including the application name for the third party live streaming service.

data.simulcast_targets[].error_severity
string
Possible values: "normal""fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the values of severities below and their descriptions.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
data.latency_mode
string
Possible values: "low""reduced""standard"

Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.

data.test
boolean

True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled after it has been active for 5 minutes, and its recorded asset is deleted after 24 hours.

data.max_continuous_duration
integer (default: 43200, minimum: 60, maximum: 43200)

The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.

data.srt_passphrase
string

Unique key used for encrypting a stream to a Mux SRT endpoint.

data.active_ingest_protocol
string
Possible values: "rtmp""srt"

The protocol used for the active ingest stream. This is only set when the live stream is active.

data.connected
boolean
data.recording
boolean
attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Max attempts number for the webhook attempt

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null
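The envelope fields above (type, id, data, attempts) can be handled with a small dispatcher. This is a minimal sketch: the payload shape is trimmed for illustration, and Mux records each delivery try in the attempts array.

```python
import json

def handle_mux_event(raw_body: bytes) -> str:
    """Parse a Mux webhook envelope and return a short handling summary."""
    event = json.loads(raw_body)
    event_type = event["type"]       # e.g. "video.live_stream.connected"
    object_id = event["data"]["id"]  # ID of the asset or live stream
    if event_type.startswith("video.live_stream."):
        # Use the last dotted segment as the state name
        return f"live stream {object_id}: {event_type.rsplit('.', 1)[-1]}"
    return f"unhandled event {event_type}"

# Simulated delivery; the payload is a trimmed illustration of the envelope.
body = json.dumps({
    "type": "video.live_stream.connected",
    "id": "evt-123",
    "data": {"id": "ls-abc"},
}).encode()
print(handle_mux_event(body))  # prints: live stream ls-abc: connected
```

Respond to deliveries with a 2xx status so the attempt is recorded as successful.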

An encoder has successfully connected to this live stream.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Live Stream. Max 255 characters.

data.created_at
integer
data.stream_key
string

Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.

data.active_asset_id
string

The Asset that is currently being created if there is an active broadcast.

data.recent_asset_ids
array

An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.

data.status
string
Possible values: "active", "idle", "disabled"

idle indicates that there is no active broadcast. active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
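The two URL patterns above can be wrapped in a small helper. The .m3u8 extension is the HLS playlist form referenced elsewhere in this reference; the token here is just a placeholder string, not a real signed JWT (see Secure video playback for generating those).

```python
def playback_url(playback_id: str, token: str = "") -> str:
    """Build an HLS playback URL for a public or signed playback ID."""
    base = f"https://stream.mux.com/{playback_id}.m3u8"
    # Signed playback IDs require a ?token=... query parameter
    return f"{base}?token={token}" if token else base

print(playback_url("abc123"))                    # public playback ID
print(playback_url("abc123", token="eyJ..."))    # signed playback ID (placeholder token)
```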

data.new_asset_settings
object
data.new_asset_settings.input
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.input[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.input[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.input[].overlay_settings.vertical_align
string
Possible values: "top", "middle", "bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.input[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.input[].overlay_settings.horizontal_align
string
Possible values: "left", "center", "right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.input[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.input[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.input[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.input[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)
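Taken together, the overlay fields above can be combined into a single settings object. The values here are illustrative: a bottom-right watermark at 25% width and 80% opacity.

```python
# Sketch of an overlay_settings object using the fields documented above.
overlay_settings = {
    "vertical_align": "bottom",
    "vertical_margin": "5%",
    "horizontal_align": "right",
    "horizontal_margin": "5%",
    "width": "25%",      # height omitted: it scales proportionally to width
    "opacity": "80%",
}
```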

data.new_asset_settings.input[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.input[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.input[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.input[].generated_subtitles[].language_code
string (default: en)
Possible values: "en", "es", "it", "pt", "de", "fr", "pl", "ru", "nl", "ca", "tr", "sv", "uk", "no", "fi", "sk", "el", "cs", "hr", "da", "ro", "bg"

The language to generate subtitles in.

data.new_asset_settings.input[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.
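The clipping parameters above can be sketched as a single input object using the mux://assets/{asset_id} URL template; the asset ID is a placeholder.

```python
# Sketch of an input object that clips seconds 30-90 out of an existing asset.
clip_input = {
    "url": "mux://assets/YOUR_ASSET_ID",  # placeholder asset ID
    "start_time": 30,  # clip starting marker, seconds from the beginning
    "end_time": 90,    # clip ending marker; defaults to the full duration if omitted
}
```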

data.new_asset_settings.input[].type
string
Possible values: "video", "audio", "text"

This parameter is required for text type tracks.

data.new_asset_settings.input[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.input[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.input[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.input[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.input[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.

data.new_asset_settings.mp4_support
string
Possible values: "none", "standard"

Specify what level of mp4 playback support (if any) to enable. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.master_access
string
Possible values: "none", "temporary"

Specify what level (if any) of support for master access. Master access can be enabled temporarily for your asset to be downloaded. See the Download your videos guide for more information.

data.new_asset_settings.test
boolean

Marks the asset as a test asset when the value is set to true. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array when only one playback policy applies.
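A minimal new_asset_settings sketch combining the fields above; the passthrough value is illustrative.

```python
# Sketch of new_asset_settings for recorded assets from a live stream.
new_asset_settings = {
    "playback_policies": ["public"],        # or ["signed"], or both
    "passthrough": "my-internal-video-id",  # your own ID, max 255 characters
    "mp4_support": "none",                  # HLS-only playback by default
}
```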

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top", "middle", "bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left", "center", "right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en", "es", "it", "pt", "de", "fr", "pl", "ru", "nl", "ca", "tr", "sv", "uk", "no", "fi", "sk", "el", "cs", "hr", "da", "ro", "bg"

The language to generate subtitles in.

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].type
string
Possible values: "video", "audio", "text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.audio_only
boolean

The live stream only processes the audio track if the value is set to true. Mux drops the video track if one is broadcast.

data.embedded_subtitles
array

Describes the embedded closed caption configuration of the incoming live stream.

data.embedded_subtitles[].name
string

A name for this live stream closed caption track.

data.embedded_subtitles[].passthrough
string

Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.

data.embedded_subtitles[].language_code
string (default: en)

The language of the closed caption stream. Value must be BCP 47 compliant.

data.embedded_subtitles[].language_channel
string (default: cc1)
Possible values: "cc1", "cc2", "cc3", "cc4"

CEA-608 caption channel to read data from.

data.generated_subtitles
array

Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles configured will automatically receive two text tracks. The first of these will have a text_source value of generated_live, and will be available with ready status as soon as the stream is live. The second text track will have a text_source value of generated_live_final and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.

data.generated_subtitles[].name
string

A name for this live stream subtitle track.

data.generated_subtitles[].passthrough
string

Arbitrary metadata set for the live stream subtitle track. Max 255 characters.

data.generated_subtitles[].language_code
string (default: en)
Possible values: "en", "en-US"

The language to generate subtitles in.

data.generated_subtitles[].transcription_vocabulary_ids
array

Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
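A sketch of a live stream generated_subtitles configuration using the fields above; the Transcription Vocabulary ID is hypothetical and would come from a vocabulary you created earlier.

```python
# Sketch of a generated_subtitles configuration for a live stream.
generated_subtitles = [
    {
        "name": "English (generated)",
        "language_code": "en",
        # Hypothetical Transcription Vocabulary ID; up to 1000 phrases
        # are used across all vocabularies provided.
        "transcription_vocabulary_ids": ["vocab-id-1"],
    }
]
```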

data.reconnect_window
number (default: 60, minimum: 0, maximum: 1800)

When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).

If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.

Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency option.
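The reconnect guidance above can be expressed as live stream settings. This is a hedged sketch of a request body for creating a live stream; the field values are illustrative, following the suggestion to set a non-zero Reconnect Window for Low Latency streams.

```python
# Sketch of live stream settings reflecting the reconnect window guidance.
live_stream_settings = {
    "latency_mode": "low",
    "reconnect_window": 30,  # seconds; Low/Reduced Latency streams default to 0
    "new_asset_settings": {"playback_policies": ["public"]},
}
```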

data.use_slate_for_standard_latency
boolean (default: false)

By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.

data.reconnect_slate_url
string

The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.

data.reduced_latencyDeprecated
boolean

This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.

data.simulcast_targets
array

Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.

data.simulcast_targets[].id
string

ID of the Simulcast Target

data.simulcast_targets[].passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.simulcast_targets[].status
string
Possible values: "idle", "starting", "broadcasting", "errored"

The current status of the simulcast target. See the statuses below for detailed descriptions.

  • idle: Default status. When the parent live stream is in disconnected status, simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
data.simulcast_targets[].stream_key
string

The stream key for the third party live streaming service that the parent live stream is simulcast to.

data.simulcast_targets[].url
string

RTMP hostname including the application name for the third party live streaming service.

data.simulcast_targets[].error_severity
string
Possible values: "normal", "fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the severities below and their descriptions.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to connect to this simulcast target will be made for the current live stream asset.
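A sketch of a simulcast target configuration using the fields above; the RTMP URL and stream key are placeholders for a third-party service's ingest details.

```python
# Sketch of a simulcast target: the third-party service's ingest URL and key.
simulcast_target = {
    "url": "rtmp://live.example.com/app",       # placeholder RTMP hostname + app name
    "stream_key": "THIRD_PARTY_STREAM_KEY",     # placeholder key from that service
    "passthrough": "restream-to-third-party",   # your own metadata
}
```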
data.latency_mode
string
Possible values: "low", "reduced", "standard"

Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.

data.test
boolean

True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. The test live stream is disabled after it has been active for 5 minutes, and the recorded asset is deleted after 24 hours.

data.max_continuous_duration
integer (default: 43200, minimum: 60, maximum: 43200)

The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.

data.srt_passphrase
string

Unique key used for encrypting a stream to a Mux SRT endpoint.

data.active_ingest_protocol
string
Possible values: "rtmp", "srt"

The protocol used for the active ingest stream. This is only set when the live stream is active.

data.connected
boolean
data.recording
boolean
attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Max attempts number for the webhook attempt

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null

Recording on this live stream has started. Mux has successfully processed the first frames from the encoder. If you show a red dot icon in your UI, this would be a good time to show it.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Live Stream. Max 255 characters.

data.created_at
integer
data.stream_key
string

Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.

data.active_asset_id
string

The Asset that is currently being created if there is an active broadcast.

data.recent_asset_ids
array

An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.

data.status
string
Possible values: "active", "idle", "disabled"

idle indicates that there is no active broadcast. active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.new_asset_settings
object
data.new_asset_settings.input
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.input[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.input[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.input[].overlay_settings.vertical_align
string
Possible values: "top", "middle", "bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.input[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.input[].overlay_settings.horizontal_align
string
Possible values: "left", "center", "right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.input[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.input[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.input[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.input[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)
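
Taken together, the overlay fields above form a single object on the watermark input. A minimal sketch placing a logo in the bottom-right corner (all values and the image URL are illustrative):

```python
# Watermark in the bottom-right corner, inset 5% from each edge.
# height is omitted, so it scales proportionally to the 15% width.
overlay_settings = {
    "vertical_align": "bottom",
    "vertical_margin": "5%",
    "horizontal_align": "right",
    "horizontal_margin": "5%",
    "width": "15%",
    "opacity": "80%",
}

# The overlay rides on its own input object alongside the watermark URL:
watermark_input = {
    "url": "https://example.com/logo.png",  # placeholder image URL
    "overlay_settings": overlay_settings,
}
```
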

data.new_asset_settings.input[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.input[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.input[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.input[].generated_subtitles[].language_code
string (default: en)
Possible values: "en", "es", "it", "pt", "de", "fr", "pl", "ru", "nl", "ca", "tr", "sv", "uk", "no", "fi", "sk", "el", "cs", "hr", "da", "ro", "bg"

The language to generate subtitles in.
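
A sketch of a main input that also requests auto-generated subtitles (the file URL is a placeholder; for direct uploads the url key would be omitted, as described above):

```python
# Main input plus an ASR-generated Spanish subtitle track.
main_input = {
    "url": "https://example.com/video.mp4",  # omit for direct uploads
    "generated_subtitles": [{
        "name": "Español (auto)",
        "language_code": "es",
        "passthrough": "internal-id-123",  # optional, max 255 chars
    }],
}
```
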

data.new_asset_settings.input[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.
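
The start_time/end_time parameters combine with the mux://assets/{asset_id} URL form to define a clip. A sketch, with a placeholder asset ID:

```python
SOURCE_ASSET_ID = "YOUR_ASSET_ID"  # placeholder for a real Asset ID

# A 30-second clip starting 10 seconds into the source asset.
clip_request = {
    "input": [{
        "url": f"mux://assets/{SOURCE_ASSET_ID}",
        "start_time": 10,  # defaults to 0 when omitted
        "end_time": 40,    # defaults to the source duration when omitted
    }],
    "playback_policies": ["public"],
}
```
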

data.new_asset_settings.input[].type
string
Possible values: "video", "audio", "text"

This parameter is required for text type tracks.

data.new_asset_settings.input[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.input[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.input[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.input[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.input[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.

data.new_asset_settings.mp4_support
string
Possible values: "none", "standard"

Specify what level (if any) of support for mp4 playback to enable. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.master_access
string
Possible values: "none", "temporary"

Specify what level (if any) of master access to enable. Master access can be enabled temporarily so that your asset can be downloaded. See the Download your videos guide for more information.

data.new_asset_settings.test
boolean

Marks the asset as a test asset when the value is set to true. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array when only one playback policy is needed.
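
The array form and the single-string shortcut described above can be sketched as:

```python
# Equivalent ways to request a single "public" playback policy.
settings_array = {"playback_policies": ["public"]}
settings_shortcut = {"playback_policies": "public"}

# An asset can also carry both policies; each produces its own playback ID.
settings_both = {"playback_policies": ["public", "signed"]}
```
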

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL uses the mux://assets/{asset_id} template, where asset_id is the identifier of the Asset to create the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top", "middle", "bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left", "center", "right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en", "es", "it", "pt", "de", "fr", "pl", "ru", "nl", "ca", "tr", "sv", "uk", "no", "fi", "sk", "el", "cs", "hr", "da", "ro", "bg"

The language to generate subtitles in.

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].type
string
Possible values: "video", "audio", "text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.audio_only
boolean

The live stream only processes the audio track when the value is set to true. Mux drops the video track if one is broadcast.

data.embedded_subtitles
array

Describes the embedded closed caption configuration of the incoming live stream.

data.embedded_subtitles[].name
string

A name for this live stream closed caption track.

data.embedded_subtitles[].passthrough
string

Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.

data.embedded_subtitles[].language_code
string (default: en)

The language of the closed caption stream. Value must be BCP 47 compliant.

data.embedded_subtitles[].language_channel
string (default: cc1)
Possible values: "cc1", "cc2", "cc3", "cc4"

CEA-608 caption channel to read data from.

data.generated_subtitles
array

Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles configured will automatically receive two text tracks. The first of these will have a text_source value of generated_live, and will be available with ready status as soon as the stream is live. The second text track will have a text_source value of generated_live_final and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.

data.generated_subtitles[].name
string

A name for this live stream subtitle track.

data.generated_subtitles[].passthrough
string

Arbitrary metadata set for the live stream subtitle track. Max 255 characters.

data.generated_subtitles[].language_code
string (default: en)
Possible values: "en", "en-US"

The language to generate subtitles in.

data.generated_subtitles[].transcription_vocabulary_ids
array

Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
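
A sketch of live stream settings requesting generated English subtitles, optionally biased by existing Transcription Vocabularies (the vocabulary ID is a placeholder):

```python
# Live stream settings with ASR subtitles; at most the first 1000
# phrases across the listed vocabularies are used.
live_stream_settings = {
    "generated_subtitles": [{
        "name": "English (generated)",
        "language_code": "en",
        "transcription_vocabulary_ids": ["VOCABULARY_ID"],  # placeholder
    }],
}
```
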

data.reconnect_window
number (default: 60, minimum: 0, maximum: 1800)

When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).

If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.

Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency option.
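
Putting the reconnect guidance above together, a sketch of standard-latency stream settings with a 5-minute reconnect window and slate explicitly enabled (field values are illustrative):

```python
# Standard latency with a reconnect window above 60s, so slate insertion
# is explicitly enabled, as recommended above.
live_stream_request = {
    "latency_mode": "standard",
    "reconnect_window": 300,  # seconds; 0-1800 allowed
    "use_slate_for_standard_latency": True,
    "new_asset_settings": {"playback_policies": ["public"]},
}

assert 0 <= live_stream_request["reconnect_window"] <= 1800
```
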

data.use_slate_for_standard_latency
boolean (default: false)

By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.

data.reconnect_slate_url
string

The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.

data.reduced_latency (Deprecated)
boolean

This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.

data.simulcast_targets
array

Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.

data.simulcast_targets[].id
string

ID of the Simulcast Target

data.simulcast_targets[].passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.simulcast_targets[].status
string
Possible values: "idle", "starting", "broadcasting", "errored"

The current status of the simulcast target. See Statuses below for detailed description.

  • idle: Default status. When the parent live stream is in the disconnected status, its simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
data.simulcast_targets[].stream_key
string

The stream key is the identifier used to simulcast the parent live stream to the third-party live streaming service.

data.simulcast_targets[].url
string

RTMP hostname including the application name for the third party live streaming service.

data.simulcast_targets[].error_severity
string
Possible values: "normal", "fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the severity values and their descriptions below.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
data.latency_mode
string
Possible values: "low", "reduced", "standard"

Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.

data.test
boolean

True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled after it has been active for 5 minutes, and its recorded asset is deleted after 24 hours.

data.max_continuous_duration
integer (default: 43200, minimum: 60, maximum: 43200)

The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.

data.srt_passphrase
string

Unique key used for encrypting a stream to a Mux SRT endpoint.

data.active_ingest_protocol
string
Possible values: "rtmp", "srt"

The protocol used for the active ingest stream. This is only set when the live stream is active.

data.connected
boolean
data.recording
boolean
attempts
array

Delivery attempts for the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt
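
Because delivery can be retried (each try shows up in the attempts array), webhook handlers should be idempotent. A minimal sketch that dedupes on the event's unique id; the event type checked here is illustrative:

```python
# In-memory dedupe store; production code would use durable storage.
processed_event_ids = set()

def handle_event(event: dict) -> bool:
    """Process a webhook event once; return False for duplicates."""
    if event["id"] in processed_event_ids:
        return False
    processed_event_ids.add(event["id"])
    if event["type"] == "video.asset.created":
        pass  # e.g. record data.id against your own passthrough value
    return True
```
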

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

This live stream is now 'active'. The live stream's playback_id, or the playback_id associated with this live stream's asset, can be used right away to create HLS URLs (https://stream.mux.com/{PLAYBACK_ID}.m3u8) and start streaming in your player. Note that before the live stream is 'active', trying to stream the HLS URL will result in HTTP 412 errors.
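
Constructing the HLS URL from a playback ID in this event's payload can be sketched as:

```python
def hls_url(playback_id: str) -> str:
    # HLS endpoint format from the description above.
    return f"https://stream.mux.com/{playback_id}.m3u8"

# Example with a placeholder playback ID taken from data.playback_ids:
# hls_url("abc123") -> "https://stream.mux.com/abc123.m3u8"
```
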

Name | Type | Description
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Live Stream. Max 255 characters.

data.created_at
integer
data.stream_key
string

Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials, anyone with this stream key can begin streaming.

data.active_asset_id
string

The Asset that is currently being created if there is an active broadcast.

data.recent_asset_ids
array

An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.

data.status
string
Possible values: "active", "idle", "disabled"

idle indicates that there is no active broadcast. active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
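
For signed playback IDs, a token is typically a short-lived RS256 JWT signed with a Mux signing key. The claim names below (sub, aud, exp) and the kid header follow Mux's secure playback guide, but treat this as a hedged sketch, not the canonical implementation; it assumes PyJWT is installed:

```python
import base64
import time

import jwt  # PyJWT (pip install pyjwt), assumed available


def playback_token(playback_id: str, signing_key_id: str,
                   base64_private_key: str) -> str:
    """Build a signed playback token (sketch per the secure playback guide)."""
    private_key_pem = base64.b64decode(base64_private_key)
    claims = {
        "sub": playback_id,             # the signed playback ID
        "aud": "v",                     # "v" selects video playback
        "exp": int(time.time()) + 600,  # keep tokens short-lived (10 min)
    }
    return jwt.encode(claims, private_key_pem, algorithm="RS256",
                      headers={"kid": signing_key_id})
```

The resulting token is appended to the stream URL as ?token={TOKEN}, as shown above.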

data.new_asset_settings
object
data.new_asset_settings.input
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.input[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL uses the mux://assets/{asset_id} template, where asset_id is the identifier of the Asset to create the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.input[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.input[].overlay_settings.vertical_align
string
Possible values: "top", "middle", "bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".

data.new_asset_settings.input[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.input[].overlay_settings.horizontal_align
string
Possible values: "left", "center", "right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.input[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.input[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.input[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.input[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)

data.new_asset_settings.input[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.input[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.input[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.input[].generated_subtitles[].language_code
string (default: en)
Possible values: "en", "es", "it", "pt", "de", "fr", "pl", "ru", "nl", "ca", "tr", "sv", "uk", "no", "fi", "sk", "el", "cs", "hr", "da", "ro", "bg"

The language to generate subtitles in.

data.new_asset_settings.input[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].type
string
Possible values: "video", "audio", "text"

This parameter is required for text type tracks.

data.new_asset_settings.input[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.input[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.input[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.input[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.
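The text-track parameters above combine into a single input object. A sketch of an SRT subtitle track input (the URL is a placeholder):

```python
# Sketch of an input object adding an SRT subtitle track.
# The URL is a placeholder for a publicly reachable captions file.
subtitle_input = {
    "url": "https://example.com/captions/english.srt",
    "type": "text",            # required for text type tracks
    "text_type": "subtitles",  # the only supported value
    "language_code": "en",     # must be BCP 47 compliant
    "name": "English",         # unique among text tracks; surfaced in the HLS manifest
    "closed_captions": True,   # mark the track as SDH
}
```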

data.new_asset_settings.input[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.

data.new_asset_settings.mp4_support
string
Possible values: "none""standard"

Specify what level (if any) of support for mp4 playback. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8) which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or downloading for offline playback. See the Download your videos guide for more information.

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.master_access
string
Possible values: "none""temporary"

Specify what level (if any) of support for master access. Master access can be enabled temporarily for your asset to be downloaded. See the Download your videos guide for more information.

data.new_asset_settings.test
boolean

Marks the asset as a test asset when the value is set to true. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets you can create. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array when only one playback policy is needed.
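The array-or-single-string shortcut for this field means consumers of the webhook payload may see either shape. A small normalizer, sketched under the assumption the payload has already been JSON-decoded:

```python
def normalize_playback_policies(value):
    """Return playback policies as a list, accepting either the
    single-string shortcut or the full array form."""
    if isinstance(value, str):
        return [value]
    return list(value)
```

For example, `normalize_playback_policies("public")` and `normalize_playback_policies(["public"])` both yield `["public"]`.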

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top""middle""bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left""center""right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)
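The overlay_settings fields above compose into one object. A sketch of a semi-transparent watermark anchored to the bottom-right corner (all values are illustrative, not defaults):

```python
# Sketch of overlay_settings for a bottom-right watermark with 5% margins.
overlay_settings = {
    "vertical_align": "bottom",
    "vertical_margin": "5%",
    "horizontal_align": "right",
    "horizontal_margin": "5%",
    "width": "20%",    # height omitted, so it scales proportionally to the width
    "opacity": "80%",  # default is 100%
}
```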

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en""es""it""pt""de""fr""pl""ru""nl""ca""tr""sv""uk""no""fi""sk""el""cs""hr""da""ro""bg"

The language to generate subtitles in.
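A generated_subtitles configuration attaches to the first input object only. A sketch, where the URL, name, and passthrough values are placeholders:

```python
# Sketch of the first input (the main input file) with auto-generated
# subtitles. For direct uploads, the "url" key would be omitted.
first_input = {
    "url": "https://example.com/video.mp4",  # placeholder
    "generated_subtitles": [
        {
            "name": "English (auto-generated)",  # placeholder track name
            "language_code": "en",               # defaults to en
            "passthrough": "auto-subs-en",       # max 255 characters
        }
    ],
}
```

Recall that the generated tracks will still be in the preparing state when the asset transitions to ready.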

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].type
string
Possible values: "video""audio""text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.audio_only
boolean

The live stream only processes the audio track if the value is set to true. Mux drops the video track if one is broadcast.

data.embedded_subtitles
array

Describes the embedded closed caption configuration of the incoming live stream.

data.embedded_subtitles[].name
string

A name for this live stream closed caption track.

data.embedded_subtitles[].passthrough
string

Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.

data.embedded_subtitles[].language_code
string (default: en)

The language of the closed caption stream. Value must be BCP 47 compliant.

data.embedded_subtitles[].language_channel
string (default: cc1)
Possible values: "cc1""cc2""cc3""cc4"

CEA-608 caption channel to read data from.

data.generated_subtitles
array

Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles configured will automatically receive two text tracks. The first of these will have a text_source value of generated_live, and will be available with ready status as soon as the stream is live. The second text track will have a text_source value of generated_live_final and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.

data.generated_subtitles[].name
string

A name for this live stream subtitle track.

data.generated_subtitles[].passthrough
string

Arbitrary metadata set for the live stream subtitle track. Max 255 characters.

data.generated_subtitles[].language_code
string (default: en)
Possible values: "en""en-US"

The language to generate subtitles in.

data.generated_subtitles[].transcription_vocabulary_ids
array

Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
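A sketch of a live stream generated_subtitles entry; the name and vocabulary IDs are placeholders for real Transcription Vocabulary identifiers:

```python
# Sketch of a generated_subtitles entry for a live stream.
# The vocabulary IDs are placeholders.
live_subtitles = [
    {
        "name": "English CC (auto)",   # placeholder track name
        "language_code": "en",         # "en" or "en-US"
        "transcription_vocabulary_ids": ["vocab_id_1", "vocab_id_2"],
    }
]
```

Note the documented cap: across all referenced vocabularies, only the first 1000 phrases are used.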

data.reconnect_window
number (default: 60, minimum: 0, maximum: 1800)

When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).

If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.

Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency option.
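The defaults described above (60 seconds for Standard Latency, 0 for Reduced and Low Latency, clamped to 0-1800) can be sketched as a small helper. The function name is ours, not part of the Mux API:

```python
def effective_reconnect_window(latency_mode, reconnect_window=None):
    """Return the effective reconnect window in seconds.

    Defaults follow the documentation: 60 s for "standard" latency,
    0 s for "reduced" and "low". Explicit values are clamped to the
    documented 0..1800 range.
    """
    if reconnect_window is None:
        return 60 if latency_mode == "standard" else 0
    return max(0, min(1800, reconnect_window))
```

For example, `effective_reconnect_window("low")` is 0, which is why the text suggests setting a non-zero value explicitly for Reduced and Low Latency streams.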

data.use_slate_for_standard_latency
boolean (default: false)

By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.

data.reconnect_slate_url
string

The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.

data.reduced_latencyDeprecated
boolean

This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.

data.simulcast_targets
array

Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.

data.simulcast_targets[].id
string

ID of the Simulcast Target

data.simulcast_targets[].passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.simulcast_targets[].status
string
Possible values: "idle""starting""broadcasting""errored"

The current status of the simulcast target. See the statuses below for a detailed description.

  • idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
data.simulcast_targets[].stream_key
string

Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.

data.simulcast_targets[].url
string

RTMP hostname including the application name for the third party live streaming service.

data.simulcast_targets[].error_severity
string
Possible values: "normal""fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the severity values below and their descriptions.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
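The simulcast target fields above form the configuration sent when creating a target. A sketch of such a request body, where the URL and stream key are placeholders for values issued by the third-party service:

```python
# Sketch of a simulcast target configuration. URL and stream key
# are placeholders for the third-party service's values.
simulcast_target = {
    "url": "rtmp://live.example.com/app",    # RTMP hostname + application name
    "stream_key": "THIRD_PARTY_STREAM_KEY",  # issued by the third-party service
    "passthrough": "restream-to-example",    # arbitrary user-supplied metadata
}
```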
data.latency_mode
string
Possible values: "low""reduced""standard"

Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.

data.test
boolean

True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. The test live stream is disabled after it has been active for 5 minutes, and the recorded asset is deleted after 24 hours.

data.max_continuous_duration
integer (default: 43200, minimum: 60, maximum: 43200)

The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.

data.srt_passphrase
string

Unique key used for encrypting a stream to a Mux SRT endpoint.

data.active_ingest_protocol
string
Possible values: "rtmp""srt"

The protocol used for the active ingest stream. This is only set when the live stream is active.

data.connected
boolean
data.recording
boolean
attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Max attempts number for the webhook attempt

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null
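The event envelope documented above (type, id, created_at, data, attempts) is the shape every webhook delivery shares. A minimal sketch of a receiver that dispatches on the event type; real handlers should also verify the webhook signature before trusting the payload:

```python
import json

def handle_mux_webhook(raw_body):
    """Parse a webhook delivery and dispatch on its event type.
    Field names match the reference above; signature verification
    is intentionally omitted from this sketch."""
    event = json.loads(raw_body)
    event_type = event["type"]
    object_id = event["data"]["id"]
    if event_type == "video.asset.created":
        return f"asset {object_id} created"
    if event_type == "video.live_stream.disconnected":
        return f"live stream {object_id} disconnected"
    return f"ignored {event_type}"
```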

An encoder has disconnected from this live stream. Note that while disconnected, the live stream still has status: 'active'.

Name | Type | Description
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Live Stream. Max 255 characters.

data.created_at
integer
data.stream_key
string

Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.

data.active_asset_id
string

The Asset that is currently being created if there is an active broadcast.

data.recent_asset_ids
array

An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.

data.status
string
Possible values: "active""idle""disabled"

idle indicates that there is no active broadcast. active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
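The two URL shapes above differ only in the token query parameter. A sketch of a URL builder using the {playback_id}.m3u8 HLS form; the function name is ours:

```python
from typing import Optional

def hls_playback_url(playback_id: str, token: Optional[str] = None) -> str:
    """Build an HLS playback URL. Public playback IDs need no token;
    signed playback IDs require one (see Secure video playback)."""
    url = f"https://stream.mux.com/{playback_id}.m3u8"
    if token is not None:
        url += f"?token={token}"
    return url
```

For a signed playback ID, the token itself is minted separately with a signing key; this helper only assembles the URL.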

data.new_asset_settings
object
data.new_asset_settings.input
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.input[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.input[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.input[].overlay_settings.vertical_align
string
Possible values: "top""middle""bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".

data.new_asset_settings.input[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.input[].overlay_settings.horizontal_align
string
Possible values: "left""center""right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.input[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.input[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.input[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.input[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)

data.new_asset_settings.input[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.input[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.input[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.input[].generated_subtitles[].language_code
string (default: en)
Possible values: "en""es""it""pt""de""fr""pl""ru""nl""ca""tr""sv""uk""no""fi""sk""el""cs""hr""da""ro""bg"

The language to generate subtitles in.

data.new_asset_settings.input[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].type
string
Possible values: "video""audio""text"

This parameter is required for text type tracks.

data.new_asset_settings.input[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.input[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.input[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.input[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.input[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.

data.new_asset_settings.mp4_support
string
Possible values: "none""standard"

Specify what level (if any) of support for mp4 playback. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8) which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or downloading for offline playback. See the Download your videos guide for more information.

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.master_access
string
Possible values: "none""temporary"

Specify what level (if any) of support for master access. Master access can be enabled temporarily for your asset to be downloaded. See the Download your videos guide for more information.

data.new_asset_settings.test
boolean

Marks the asset as a test asset when the value is set to true. A test asset can help you evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets you can create. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array when only one playback policy is needed.

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top", "middle", "bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left", "center", "right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)
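Putting the overlay fields above together, a bottom-right watermark might look like the following sketch (all values are illustrative, chosen per the field descriptions, not recommended defaults):

```python
# Sketch of an overlay_settings object for a bottom-right watermark.
overlay_settings = {
    "vertical_align": "bottom",
    "vertical_margin": "5%",      # distance from the bottom edge
    "horizontal_align": "right",
    "horizontal_margin": "5%",    # distance from the right edge
    "width": "10%",               # height omitted: scales proportionally
    "opacity": "80%",             # default would be 100%
}
```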

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en", "es", "it", "pt", "de", "fr", "pl", "ru", "nl", "ca", "tr", "sv", "uk", "no", "fi", "sk", "el", "cs", "hr", "da", "ro", "bg"

The language to generate subtitles in.
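As described above, generated_subtitles goes on the first input object, which omits url for direct uploads. A minimal sketch (the name and passthrough values are arbitrary examples):

```python
# Sketch of new_asset_settings with auto-generated subtitles on the
# first (main) input; url is omitted because the file arrives via a
# direct upload.
new_asset_settings = {
    "inputs": [
        {
            "generated_subtitles": [
                {
                    "name": "English (auto)",
                    "language_code": "en",
                    "passthrough": "auto-captions",  # arbitrary metadata, max 255 chars
                }
            ]
        }
    ],
    "playback_policies": ["public"],
}
```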

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].type
string
Possible values: "video", "audio", "text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.audio_only
boolean

The live stream only processes the audio track if the value is set to true. Mux drops the video track if one is broadcast.

data.embedded_subtitles
array

Describes the embedded closed caption configuration of the incoming live stream.

data.embedded_subtitles[].name
string

A name for this live stream closed caption track.

data.embedded_subtitles[].passthrough
string

Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.

data.embedded_subtitles[].language_code
string (default: en)

The language of the closed caption stream. Value must be BCP 47 compliant.

data.embedded_subtitles[].language_channel
string (default: cc1)
Possible values: "cc1", "cc2", "cc3", "cc4"

CEA-608 caption channel to read data from.

data.generated_subtitles
array

Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles configured will automatically receive two text tracks. The first of these will have a text_source value of generated_live, and will be available with ready status as soon as the stream is live. The second text track will have a text_source value of generated_live_final and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.

data.generated_subtitles[].name
string

A name for this live stream subtitle track.

data.generated_subtitles[].passthrough
string

Arbitrary metadata set for the live stream subtitle track. Max 255 characters.

data.generated_subtitles[].language_code
string (default: en)
Possible values: "en", "en-US"

The language to generate subtitles in.

data.generated_subtitles[].transcription_vocabulary_ids
array

Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.

data.reconnect_window
number (default: 60, minimum: 0, maximum: 1800)

When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).

If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.

Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency option.
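The per-latency-mode defaults described above can be captured in a small helper. This is a hypothetical convenience function for readers, not part of the Mux API:

```python
def default_reconnect_window(latency_mode: str) -> int:
    """Return the default reconnect_window (seconds) when not set
    explicitly: 60 for standard latency, 0 for reduced/low latency.
    Hypothetical helper mirroring the documented defaults."""
    return 60 if latency_mode == "standard" else 0
```

Because reduced and low latency streams default to 0, any nonzero reconnect window for them must be set explicitly at stream creation.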

data.use_slate_for_standard_latency
boolean (default: false)

By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.

data.reconnect_slate_url
string

The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.

data.reduced_latency (Deprecated)
boolean

This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.

data.simulcast_targets
array

Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.

data.simulcast_targets[].id
string

ID of the Simulcast Target

data.simulcast_targets[].passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.simulcast_targets[].status
string
Possible values: "idle", "starting", "broadcasting", "errored"

The current status of the simulcast target. See Statuses below for detailed description.

  • idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
data.simulcast_targets[].stream_key
string

Stream Key represents a stream identifier for the third-party live streaming service to simulcast the parent live stream to.

data.simulcast_targets[].url
string

RTMP hostname including the application name for the third party live streaming service.

data.simulcast_targets[].error_severity
string
Possible values: "normal", "fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the severity values and their descriptions below.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
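The status and error_severity semantics above can be applied in a webhook handler roughly like this. The function name and return strings are illustrative, not part of any Mux SDK:

```python
def simulcast_target_summary(target: dict) -> str:
    """Hypothetical helper that summarizes a simulcast target object
    from webhook data, per the documented status semantics."""
    status = target.get("status")
    if status == "errored":
        severity = target.get("error_severity", "normal")
        if severity == "fatal":
            return "errored (fatal): no further attempts for this live stream asset"
        return "errored (normal): may reconnect and resume broadcasting"
    summaries = {
        "idle": "idle: parent live stream is disconnected",
        "starting": "starting: parent live stream just connected",
        "broadcasting": "broadcasting: pushing video to the third-party service",
    }
    return summaries.get(status, f"unknown status: {status}")
```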
data.latency_mode
string
Possible values: "low", "reduced", "standard"

Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.

data.test
boolean

True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. The test live stream is disabled after it has been active for 5 minutes, and the recorded asset is also deleted after 24 hours.

data.max_continuous_duration
integer (default: 43200, minimum: 60, maximum: 43200)

The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.

data.srt_passphrase
string

Unique key used for encrypting a stream to a Mux SRT endpoint.

data.active_ingest_protocol
string
Possible values: "rtmp", "srt"

The protocol used for the active ingest stream. This is only set when the live stream is active.

data.connected
boolean
data.recording
boolean
attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

The reconnect_window for this live stream has elapsed. The live stream status will now transition to 'idle'.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Live Stream. Max 255 characters.

data.created_at
integer
data.stream_key
string

Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.

data.active_asset_id
string

The Asset that is currently being created if there is an active broadcast.

data.recent_asset_ids
array

An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.

data.status
string
Possible values: "active", "idle", "disabled"

idle indicates that there is no active broadcast. active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.
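The status, active_asset_id, and recent_asset_ids fields relate as sketched below. This is a hypothetical handler helper; the summary strings are illustrative:

```python
def live_stream_summary(data: dict) -> str:
    """Hypothetical helper relating status, active_asset_id, and
    recent_asset_ids (most recent last) from webhook data."""
    status = data["status"]
    if status == "active":
        return f"active: recording into asset {data.get('active_asset_id')}"
    if status == "disabled":
        return "disabled: no future RTMP streams can be published"
    recent = data.get("recent_asset_ids") or []
    latest = recent[-1] if recent else "none"
    return f"idle: no active broadcast (latest asset: {latest})"
```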

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
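Combining the two URL patterns above with the .m3u8 HLS extension mentioned elsewhere in these docs, a small URL builder might look like this (sketch only; generating the signed token itself is covered in the Secure video playback guide, not here):

```python
def hls_url(playback_id: str, token: str = "") -> str:
    """Build a stream.mux.com HLS playback URL from a Playback ID.
    Pass a token (JWT) for signed playback IDs; public IDs need none."""
    url = f"https://stream.mux.com/{playback_id}.m3u8"
    return f"{url}?token={token}" if token else url
```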

data.new_asset_settings
object
data.new_asset_settings.input
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.input[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with the mux://assets/{asset_id} template, where asset_id is the identifier of the Asset to create the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.input[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.

data.new_asset_settings.input[].overlay_settings.vertical_align
string
Possible values: "top", "middle", "bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.input[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.input[].overlay_settings.horizontal_align
string
Possible values: "left", "center", "right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.input[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.input[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.input[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.input[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)

data.new_asset_settings.input[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.input[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.input[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.input[].generated_subtitles[].language_code
string (default: en)
Possible values: "en", "es", "it", "pt", "de", "fr", "pl", "ru", "nl", "ca", "tr", "sv", "uk", "no", "fi", "sk", "el", "cs", "hr", "da", "ro", "bg"

The language to generate subtitles in.

data.new_asset_settings.input[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].type
string
Possible values: "video", "audio", "text"

This parameter is required for text type tracks.

data.new_asset_settings.input[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.input[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.input[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.input[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.input[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.

data.new_asset_settings.mp4_support
string
Possible values: "none", "standard"

Specify what level of support (if any) there should be for mp4 playback. In most cases you should use the default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.master_access
string
Possible values: "none", "temporary"

Specify what level of support (if any) there should be for master access. Master access can be enabled temporarily so that your asset can be downloaded. See the Download your videos guide for more information.

data.new_asset_settings.test
boolean

Marks the asset as a test asset when the value is set to true. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array when only one playback policy is needed.

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with the mux://assets/{asset_id} template, where asset_id is the identifier of the Asset to create the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top", "middle", "bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left", "center", "right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en", "es", "it", "pt", "de", "fr", "pl", "ru", "nl", "ca", "tr", "sv", "uk", "no", "fi", "sk", "el", "cs", "hr", "da", "ro", "bg"

The language to generate subtitles in.

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].type
string
Possible values: "video", "audio", "text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.audio_only
boolean

The live stream only processes the audio track when this is set to true. Mux drops the video track if one is broadcast.

data.embedded_subtitles
array

Describes the embedded closed caption configuration of the incoming live stream.

data.embedded_subtitles[].name
string

A name for this live stream closed caption track.

data.embedded_subtitles[].passthrough
string

Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.

data.embedded_subtitles[].language_code
string (default: en)

The language of the closed caption stream. Value must be BCP 47 compliant.

data.embedded_subtitles[].language_channel
string (default: cc1)
Possible values: "cc1""cc2""cc3""cc4"

CEA-608 caption channel to read data from.

data.generated_subtitles
array

Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles configured will automatically receive two text tracks. The first of these will have a text_source value of generated_live, and will be available with ready status as soon as the stream is live. The second text track will have a text_source value of generated_live_final and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.

data.generated_subtitles[].name
string

A name for this live stream subtitle track.

data.generated_subtitles[].passthrough
string

Arbitrary metadata set for the live stream subtitle track. Max 255 characters.

data.generated_subtitles[].language_code
string (default: en)
Possible values: "en""en-US"

The language to generate subtitles in.

data.generated_subtitles[].transcription_vocabulary_ids
array

Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.
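The generated_subtitles fields above can be combined into a create-live-stream request body. A sketch, with a hypothetical Transcription Vocabulary ID and track name:

```python
import json

# Sketch of a live stream configured for auto-generated English subtitles.
live_stream_request = {
    "playback_policy": ["public"],
    "new_asset_settings": {"playback_policy": ["public"]},
    "generated_subtitles": [
        {
            "name": "English (generated)",
            "language_code": "en",
            # Hypothetical vocabulary ID; only the first 1000 phrases across
            # all provided vocabularies are used.
            "transcription_vocabulary_ids": ["VOCABULARY_ID"],
        }
    ],
}

print(json.dumps(live_stream_request, indent=2))
```

Each asset recorded from this stream would then receive both a generated_live and a generated_live_final text track, as described above.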

data.reconnect_window
number (default: 60, minimum: 0, maximum: 1800)

When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).

If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.

Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency option.
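The recommendations above can be expressed as two example configurations; a sketch with hypothetical values and slate image URL:

```python
# Reduced Latency streams default to a 0-second reconnect window,
# so set one explicitly and provide slate media for interruptions.
reduced_latency_settings = {
    "latency_mode": "reduced",
    "reconnect_window": 120,  # seconds; allowed range is 0-1800
    "reconnect_slate_url": "https://example.com/slate.png",  # hypothetical
}

# A Standard Latency stream with a window above 60 seconds should
# also opt in to slate insertion.
standard_latency_settings = {
    "latency_mode": "standard",
    "reconnect_window": 300,
    "use_slate_for_standard_latency": True,
}

print(reduced_latency_settings, standard_latency_settings)
```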

data.use_slate_for_standard_latency
boolean (default: false)

By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.

data.reconnect_slate_url
string

The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.

data.reduced_latencyDeprecated
boolean

This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.

data.simulcast_targets
array

Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.

data.simulcast_targets[].id
string

ID of the Simulcast Target

data.simulcast_targets[].passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.simulcast_targets[].status
string
Possible values: "idle""starting""broadcasting""errored"

The current status of the simulcast target. See Statuses below for a detailed description.

  • idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third-party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third-party live streaming service, or mid-broadcast. When a simulcast target has this status it will have an error_severity field with more details about the error.
data.simulcast_targets[].stream_key
string

Stream Key is the stream identifier for the third-party live streaming service to simulcast the parent live stream to.

data.simulcast_targets[].url
string

RTMP hostname including the application name for the third party live streaming service.

data.simulcast_targets[].error_severity
string
Possible values: "normal""fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the severity values and their descriptions below.

  • normal: The simulcast target encountered an error either while attempting to connect to the third-party live streaming service, or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
data.latency_mode
string
Possible values: "low""reduced""standard"

Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.

data.test
boolean

True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled after it has been active for 5 minutes, and the recorded asset is deleted after 24 hours.

data.max_continuous_duration
integer (default: 43200, minimum: 60, maximum: 43200)

The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.

data.srt_passphrase
string

Unique key used for encrypting a stream to a Mux SRT endpoint.

data.active_ingest_protocol
string
Possible values: "rtmp""srt"

The protocol used for the active ingest stream. This is only set when the live stream is active.

data.connected
boolean
data.recording
boolean
attempts
array

Delivery attempts made for the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt
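A receiving endpoint sees the top-level fields documented above in each delivery. A minimal parsing sketch; the payload values are hypothetical, and unknown event types are ignored rather than treated as errors so that new Mux event types don't break the handler:

```python
import json

# Trimmed sample payload following the fields documented above
# (IDs, status, and attempt values are hypothetical).
raw_event = json.dumps({
    "type": "video.live_stream.updated",
    "id": "event-id-123",
    "object": {"type": "live-stream", "id": "stream-id-456"},
    "data": {"id": "stream-id-456", "status": "idle"},
    "attempts": [
        {"webhook_id": 1, "response_status_code": 200, "max_attempts": 30},
    ],
})

def handle_event(event: dict) -> str:
    # Dispatch on the event type; ignore types we don't handle.
    if event.get("type") == "video.live_stream.updated":
        return f"live stream {event['data']['id']} is now {event['data']['status']}"
    return "ignored"

print(handle_event(json.loads(raw_event)))
```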

request_idDeprecated
string | null
accessorDeprecated
string | null
accessor_sourceDeprecated
string | null

This live stream has been updated. For example, after resetting the live stream's stream key.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Live Stream. Max 255 characters.

data.created_at
integer
data.stream_key
string

Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.

data.active_asset_id
string

The Asset that is currently being created if there is an active broadcast.

data.recent_asset_ids
array

An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.

data.status
string
Possible values: "active""idle""disabled"

idle indicates that there is no active broadcast, active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
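The two policies map onto playback URLs as described above. A sketch of the URL construction; the playback ID and token are placeholders, and creating signed tokens is covered in the Secure video playback guide:

```python
from typing import Optional

def hls_url(playback_id: str, token: Optional[str] = None) -> str:
    """Build an HLS playback URL for a public or signed playback ID."""
    url = f"https://stream.mux.com/{playback_id}.m3u8"
    if token is not None:
        # Signed playback IDs require a token query parameter.
        url += f"?token={token}"
    return url

print(hls_url("abc123"))
print(hls_url("abc123", token="SIGNED_JWT"))
```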

data.new_asset_settings
object
data.new_asset_settings.input
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.input[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.input[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.input[].overlay_settings.vertical_align
string
Possible values: "top""middle""bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.input[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.input[].overlay_settings.horizontal_align
string
Possible values: "left""center""right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.input[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.input[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.input[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.input[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)
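The overlay fields above compose into a single input object for watermarking. A sketch placing a slightly transparent watermark in the bottom-right corner; the image URL and values are hypothetical:

```python
# Watermark input: bottom-right corner, inset 5% from each edge,
# 20% of frame width (height scales proportionally when omitted).
watermark_input = {
    "url": "https://example.com/watermark.png",  # hypothetical image URL
    "overlay_settings": {
        "vertical_align": "bottom",
        "vertical_margin": "5%",
        "horizontal_align": "right",
        "horizontal_margin": "5%",
        "width": "20%",
        "opacity": "80%",
    },
}

print(watermark_input)
```

Remember that the image URL must stay reachable for the lifespan of the video object, as noted above.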

data.new_asset_settings.input[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.input[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.input[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.input[].generated_subtitles[].language_code
string (default: en)
Possible values: "en""es""it""pt""de""fr""pl""ru""nl""ca""tr""sv""uk""no""fi""sk""el""cs""hr""da""ro""bg"

The language to generate subtitles in.

data.new_asset_settings.input[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].type
string
Possible values: "video""audio""text"

This parameter is required for text type tracks.

data.new_asset_settings.input[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.input[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.input[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.input[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.input[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.

data.new_asset_settings.mp4_support
string
Possible values: "none""standard"

Specify what level of mp4 playback support (if any) to enable. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.
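With mp4_support enabled, each playback ID also serves static MP4 renditions. A sketch of the URL construction, assuming the low/medium/high rendition names from the Download your videos guide; the playback ID is a placeholder:

```python
def mp4_url(playback_id: str, rendition: str = "high") -> str:
    """Build a static MP4 rendition URL for a playback ID (assumed
    rendition names per the Download your videos guide)."""
    if rendition not in ("low", "medium", "high"):
        raise ValueError("unknown rendition")
    return f"https://stream.mux.com/{playback_id}/{rendition}.mp4"

print(mp4_url("abc123"))
print(mp4_url("abc123", "low"))
```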

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.master_access
string
Possible values: "none""temporary"

Specify what level of master access support (if any) to enable. Master access can be enabled temporarily so that your asset can be downloaded. See the Download your videos guide for more information.

data.new_asset_settings.test
boolean

Marks the asset as a test asset when the value is set to true. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array when only one playback policy applies.

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top""middle""bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left""center""right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en""es""it""pt""de""fr""pl""ru""nl""ca""tr""sv""uk""no""fi""sk""el""cs""hr""da""ro""bg"

The language to generate subtitles in.

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].type
string
Possible values: "video""audio""text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.audio_only
boolean

The live stream only processes the audio track when this is set to true. Mux drops the video track if one is broadcast.

data.embedded_subtitles
array

Describes the embedded closed caption configuration of the incoming live stream.

data.embedded_subtitles[].name
string

A name for this live stream closed caption track.

data.embedded_subtitles[].passthrough
string

Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.

data.embedded_subtitles[].language_code
string (default: en)

The language of the closed caption stream. Value must be BCP 47 compliant.

data.embedded_subtitles[].language_channel
string (default: cc1)
Possible values: "cc1""cc2""cc3""cc4"

CEA-608 caption channel to read data from.

data.generated_subtitles
array

Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles configured will automatically receive two text tracks. The first of these will have a text_source value of generated_live, and will be available with ready status as soon as the stream is live. The second text track will have a text_source value of generated_live_final and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.

data.generated_subtitles[].name
string

A name for this live stream subtitle track.

data.generated_subtitles[].passthrough
string

Arbitrary metadata set for the live stream subtitle track. Max 255 characters.

data.generated_subtitles[].language_code
string (default: en)
Possible values: "en""en-US"

The language to generate subtitles in.

data.generated_subtitles[].transcription_vocabulary_ids
array

Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.

data.reconnect_window
number (default: 60, minimum: 0, maximum: 1800)

When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).

If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.

Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency option.

data.use_slate_for_standard_latency
boolean (default: false)

By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.

data.reconnect_slate_url
string

The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.

data.reduced_latency (Deprecated)
boolean

This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.

data.simulcast_targets
array

Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.

data.simulcast_targets[].id
string

ID of the Simulcast Target

data.simulcast_targets[].passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.simulcast_targets[].status
string
Possible values: "idle", "starting", "broadcasting", "errored"

The current status of the simulcast target. See Statuses below for detailed description.

  • idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
data.simulcast_targets[].stream_key
string

Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.

data.simulcast_targets[].url
string

RTMP hostname including the application name for the third party live streaming service.

data.simulcast_targets[].error_severity
string
Possible values: "normal", "fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the values of severities below and their descriptions.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
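The status and severity semantics above suggest a simple retry decision; this is a sketch with a hypothetical helper name, applied to the simulcast target object from the webhook payload:

```python
def may_reconnect(target):
    # A "normal" error may recover on its own (the target can return to
    # the broadcasting state); a "fatal" error means no further attempts
    # are made for the current live stream asset.
    return (
        target.get("status") == "errored"
        and target.get("error_severity") == "normal"
    )
```
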
data.latency_mode
string
Possible values: "low", "reduced", "standard"

Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.

data.test
boolean

True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. The test live stream is disabled after the stream has been active for 5 minutes, and the recorded asset is deleted after 24 hours.

data.max_continuous_duration
integer (default: 43200, minimum: 60, maximum: 43200)

The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.

data.srt_passphrase
string

Unique key used for encrypting a stream to a Mux SRT endpoint.

data.active_ingest_protocol
string
Possible values: "rtmp", "srt"

The protocol used for the active ingest stream. This is only set when the live stream is active.

data.connected
boolean
data.recording
boolean
attempts
array

Attempts made to deliver the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt
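Given the attempts[] schema above, a receiver could inspect which deliveries failed. This is a sketch with a hypothetical helper; only the field names come from this reference:

```python
def failed_attempts(attempts):
    # An attempt is treated as failed when the receiving endpoint did
    # not return a 2xx status (a missing status counts as a failure).
    return [
        a for a in attempts
        if not 200 <= (a.get("response_status_code") or 0) < 300
    ]
```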

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

This live stream has been enabled. This event fires after the enable live stream API is called.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Live Stream. Max 255 characters.

data.created_at
integer
data.stream_key
string

Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.

data.active_asset_id
string

The Asset that is currently being created if there is an active broadcast.

data.recent_asset_ids
array

An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.

data.status
string
Possible values: "active", "idle", "disabled"

idle indicates that there is no active broadcast, active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public", "signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.
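The two URL patterns above can be combined into one small helper (the function name is hypothetical; token creation is not shown here, and the `.m3u8` extension follows the HLS playback URL format used elsewhere in these docs):

```python
def hls_url(playback_id, policy="public", token=None):
    # Build an HLS playback URL from a Playback ID. Signed playback
    # IDs require a JWT token appended as a query parameter.
    url = "https://stream.mux.com/{}.m3u8".format(playback_id)
    if policy == "signed":
        if token is None:
            raise ValueError("signed playback IDs require a token")
        url += "?token=" + token
    return url
```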

data.new_asset_settings
object
data.new_asset_settings.input
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.input[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
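The string-URL shortcut mentioned above can be normalized to the array-of-objects form with a small sketch (the helper name is hypothetical):

```python
def normalize_inputs(value):
    # `input` accepts either a single URL string (shortcut for one
    # input file) or an array of input objects; return the array form.
    if isinstance(value, str):
        return [{"url": value}]
    return list(value)
```
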
data.new_asset_settings.input[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.input[].overlay_settings.vertical_align
string
Possible values: "top", "middle", "bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.input[].overlay_settings.vertical_margin
string

The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.input[].overlay_settings.horizontal_align
string
Possible values: "left", "center", "right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.input[].overlay_settings.horizontal_margin
string

The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.input[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.input[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.input[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)
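Putting the overlay fields together, a hypothetical watermark placement (bottom-right corner, 10% margins, 25% of frame width, slightly transparent) might look like this; the values are illustrative, only the field names and value formats come from this reference:

```python
# Example overlay_settings object for a watermark anchored to the
# bottom-right corner of the video frame.
overlay_settings = {
    "vertical_align": "bottom",
    "vertical_margin": "10%",
    "horizontal_align": "right",
    "horizontal_margin": "10%",
    "width": "25%",     # height omitted: scales proportionally
    "opacity": "80%",   # default is 100% (fully opaque)
}
```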

data.new_asset_settings.input[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.input[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.input[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.input[].generated_subtitles[].language_code
string (default: en)
Possible values: "en", "es", "it", "pt", "de", "fr", "pl", "ru", "nl", "ca", "tr", "sv", "uk", "no", "fi", "sk", "el", "cs", "hr", "da", "ro", "bg"

The language to generate subtitles in.

data.new_asset_settings.input[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.
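The clipping fields above combine with the mux://assets/ URL template; a sketch of a clip input cutting seconds 30 through 90 of an existing asset (EXISTING_ASSET_ID is a placeholder):

```python
# Example input object that creates a 60-second clip from an
# existing Mux asset.
clip_input = {
    "url": "mux://assets/EXISTING_ASSET_ID",
    "start_time": 30.0,  # seconds from the beginning of the source
    "end_time": 90.0,    # defaults to the source duration when omitted
}
```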

data.new_asset_settings.input[].type
string
Possible values: "video", "audio", "text"

This parameter is required for text type tracks.

data.new_asset_settings.input[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.input[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.input[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.input[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.input[].passthrough
string

This optional parameter should be used for tracks with type set to text and text_type set to subtitles.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.

data.new_asset_settings.mp4_support
string
Possible values: "none", "standard"

Specify what level of mp4 playback support (if any) to enable. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8) which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or downloading for offline playback. See the Download your videos guide for more information.

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.master_access
string
Possible values: "none", "temporary"

Specify what level of master access support (if any) to enable. Master access can be enabled temporarily for your asset to be downloaded. See the Download your videos guide for more information.

data.new_asset_settings.test
boolean

Marks the asset as a test asset when the value is set to true. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array when only one playback policy is needed.

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top", "middle", "bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top"

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left", "center", "right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en", "es", "it", "pt", "de", "fr", "pl", "ru", "nl", "ca", "tr", "sv", "uk", "no", "fi", "sk", "el", "cs", "hr", "da", "ro", "bg"

The language to generate subtitles in.

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].type
string
Possible values: "video", "audio", "text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

This optional parameter should be used for tracks with type set to text and text_type set to subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.audio_only
boolean

The live stream only processes the audio track if the value is set to true. Mux drops the video track if one is broadcast.

data.embedded_subtitles
array

Describes the embedded closed caption configuration of the incoming live stream.

data.embedded_subtitles[].name
string

A name for this live stream closed caption track.

data.embedded_subtitles[].passthrough
string

Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.

data.embedded_subtitles[].language_code
string (default: en)

The language of the closed caption stream. Value must be BCP 47 compliant.

data.embedded_subtitles[].language_channel
string (default: cc1)
Possible values: "cc1", "cc2", "cc3", "cc4"

CEA-608 caption channel to read data from.

data.generated_subtitles
array

Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles configured will automatically receive two text tracks. The first of these will have a text_source value of generated_live, and will be available with ready status as soon as the stream is live. The second text track will have a text_source value of generated_live_final and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.

data.generated_subtitles[].name
string

A name for this live stream subtitle track.

data.generated_subtitles[].passthrough
string

Arbitrary metadata set for the live stream subtitle track. Max 255 characters.

data.generated_subtitles[].language_code
string (default: en)
Possible values: "en", "en-US"

The language to generate subtitles in.

data.generated_subtitles[].transcription_vocabulary_ids
array

Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.

data.reconnect_window
number (default: 60, minimum: 0, maximum: 1800)

When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).

If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.

Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency option.

data.use_slate_for_standard_latency
boolean (default: false)

By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.

data.reconnect_slate_url
string

The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.

data.reduced_latency (Deprecated)
boolean

This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.

data.simulcast_targets
array

Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.

data.simulcast_targets[].id
string

ID of the Simulcast Target

data.simulcast_targets[].passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.simulcast_targets[].status
string
Possible values: "idle", "starting", "broadcasting", "errored"

The current status of the simulcast target. See Statuses below for detailed description.

  • idle: Default status. When the parent live stream is in the disconnected status, simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
data.simulcast_targets[].stream_key
string

Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.

data.simulcast_targets[].url
string

RTMP hostname including the application name for the third party live streaming service.

data.simulcast_targets[].error_severity
string
Possible values: "normal", "fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the values of severities below and their descriptions.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
data.latency_mode
string
Possible values: "low", "reduced", "standard"

Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.

data.test
boolean

True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. The test live stream is disabled after the stream has been active for 5 minutes, and the recorded asset is deleted after 24 hours.

data.max_continuous_duration
integer (default: 43200, minimum: 60, maximum: 43200)

The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.

data.srt_passphrase
string

Unique key used for encrypting a stream to a Mux SRT endpoint.

data.active_ingest_protocol
string
Possible values: "rtmp", "srt"

The protocol used for the active ingest stream. This is only set when the live stream is active.

data.connected
boolean
data.recording
boolean
attempts
array

Attempts made to deliver the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id Deprecated
string | null
accessor Deprecated
string | null
accessor_source Deprecated
string | null
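Putting the event fields above together, a webhook receiver typically parses the JSON body and branches on the type field. A minimal sketch, assuming a live stream event (the payload values here are illustrative placeholders, not real Mux identifiers):

```python
import json

# Hypothetical payload shaped like the events documented above.
raw = json.dumps({
    "type": "video.live_stream.disabled",
    "id": "event-id-123",
    "created_at": "2024-01-01T00:00:00Z",
    "data": {"id": "live-stream-id", "status": "disabled"},
})

def handle_event(body: str) -> str:
    """Parse a webhook body and dispatch on its type field."""
    event = json.loads(body)
    event_type = event["type"]
    stream_id = event["data"]["id"]
    if event_type == "video.live_stream.disabled":
        # e.g. stop surfacing the stream key / ingest URL in your UI.
        return f"stream {stream_id} disabled"
    # Acknowledge unrecognized types with a 2xx so delivery is not
    # retried unnecessarily (see the attempts[] fields above).
    return f"ignored {event_type}"

print(handle_event(raw))
```

Responding quickly with a 2xx and doing heavy work asynchronously keeps the attempts[] history clean.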

This live stream has been disabled. This event fires after the disable live stream API is called. Disabled live streams no longer accept new RTMP connections.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Live Stream. Max 255 characters.

data.created_at
integer
data.stream_key
string

Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.

data.active_asset_id
string

The Asset that is currently being created if there is an active broadcast.

data.recent_asset_ids
array

An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.

data.status
string
Possible values: "active""idle""disabled"

idle indicates that there is no active broadcast. active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.new_asset_settings
object
data.new_asset_settings.input
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.input[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.input[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.input[].overlay_settings.vertical_align
string
Possible values: "top""middle""bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".

data.new_asset_settings.input[].overlay_settings.vertical_margin
string

The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.input[].overlay_settings.horizontal_align
string
Possible values: "left""center""right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.input[].overlay_settings.horizontal_margin
string

The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.input[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.input[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.input[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)

data.new_asset_settings.input[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.input[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.input[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.input[].generated_subtitles[].language_code
string (default: en)
Possible values: "en""es""it""pt""de""fr""pl""ru""nl""ca""tr""sv""uk""no""fi""sk""el""cs""hr""da""ro""bg"

The language to generate subtitles in.

data.new_asset_settings.input[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.
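As a concrete illustration of the two clip markers above, here is a hypothetical create-asset request body (the source asset ID and times are placeholders):

```python
# Hypothetical body for creating a 15-second clip from an existing asset.
source_asset_id = "SOURCE_ASSET_ID"  # placeholder

create_asset_body = {
    "input": [
        {
            "url": f"mux://assets/{source_asset_id}",
            "start_time": 10.0,  # clip begins 10s into the source asset
            "end_time": 25.0,    # and ends 25s in
        }
    ],
    "playback_policy": ["public"],
}

print(create_asset_body["input"][0]["url"])
```

Omitting start_time defaults the clip start to 0; omitting end_time defaults the end to the source asset's duration.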

data.new_asset_settings.input[].type
string
Possible values: "video""audio""text"

This parameter is required for text type tracks.

data.new_asset_settings.input[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.input[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.input[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.input[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.input[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.

data.new_asset_settings.mp4_support
string
Possible values: "none""standard"

Specify what level (if any) of support for mp4 playback. In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8) which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or downloading for offline playback. See the Download your videos guide for more information.

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.master_access
string
Possible values: "none""temporary"

Specify what level (if any) of support for master access. Master access can be enabled temporarily for your asset to be downloaded. See the Download your videos guide for more information.

data.new_asset_settings.test
boolean

Marks the asset as a test asset when the value is set to true. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p""1440p""2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart""baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top""middle""bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance from the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left""center""right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance from the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en""es""it""pt""de""fr""pl""ru""nl""ca""tr""sv""uk""no""fi""sk""el""cs""hr""da""ro""bg"

The language to generate subtitles in.

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].type
string
Possible values: "video""audio""text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.audio_only
boolean

The live stream only processes the audio track if the value is set to true. Mux drops the video track if broadcast.

data.embedded_subtitles
array

Describes the embedded closed caption configuration of the incoming live stream.

data.embedded_subtitles[].name
string

A name for this live stream closed caption track.

data.embedded_subtitles[].passthrough
string

Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.

data.embedded_subtitles[].language_code
string (default: en)

The language of the closed caption stream. Value must be BCP 47 compliant.

data.embedded_subtitles[].language_channel
string (default: cc1)
Possible values: "cc1""cc2""cc3""cc4"

CEA-608 caption channel to read data from.

data.generated_subtitles
array

Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles configured will automatically receive two text tracks. The first of these will have a text_source value of generated_live, and will be available with ready status as soon as the stream is live. The second text track will have a text_source value of generated_live_final and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
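For example, a live stream created with a single generated_subtitles entry like the following (hypothetical placeholder values) would yield both a generated_live and a generated_live_final text track on each recorded asset:

```python
# Hypothetical live stream settings enabling auto-generated subtitles.
live_stream_body = {
    "playback_policy": ["public"],
    "new_asset_settings": {"playback_policy": ["public"]},
    "generated_subtitles": [
        {
            "name": "English (generated)",
            "language_code": "en",
            "passthrough": "auto-captions",  # arbitrary metadata, max 255 chars
        }
    ],
}

print(live_stream_body["generated_subtitles"][0]["language_code"])
```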

data.generated_subtitles[].name
string

A name for this live stream subtitle track.

data.generated_subtitles[].passthrough
string

Arbitrary metadata set for the live stream subtitle track. Max 255 characters.

data.generated_subtitles[].language_code
string (default: en)
Possible values: "en""en-US"

The language to generate subtitles in.

data.generated_subtitles[].transcription_vocabulary_ids
array

Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.

data.reconnect_window
number (default: 60, minimum: 0, maximum: 1800)

When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).

If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.

Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency option.
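The recommendation above can be expressed as a request body. A sketch with placeholder values:

```python
# Hypothetical settings: a Standard Latency stream that waits two minutes
# for the encoder to reconnect, with slate enabled as recommended for
# Reconnect Windows above 60 seconds.
live_stream_body = {
    "latency_mode": "standard",
    "reconnect_window": 120,                 # seconds, documented max 1800
    "use_slate_for_standard_latency": True,
    "new_asset_settings": {"playback_policy": ["public"]},
}

print(live_stream_body["reconnect_window"])
```

For reduced or low latency_mode, remember the default Reconnect Window is 0, so set it explicitly if you want reconnect handling.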

data.use_slate_for_standard_latency
boolean (default: false)

By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.

data.reconnect_slate_url
string

The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.

data.reduced_latencyDeprecated
boolean

This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.

data.simulcast_targets
array

Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.

data.simulcast_targets[].id
string

ID of the Simulcast Target

data.simulcast_targets[].passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.simulcast_targets[].status
string
Possible values: "idle""starting""broadcasting""errored"

The current status of the simulcast target. See Statuses below for detailed description.

  • idle: Default status. When the parent live stream is in disconnected status, simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. When a simulcast target has this status it will have an error_severity field with more details about the error.
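The four statuses above, combined with the error_severity field documented below, suggest a small state handler. A sketch with hypothetical helper and message names:

```python
def describe_simulcast(target: dict) -> str:
    """Summarize a simulcast target dict using the statuses above."""
    status = target["status"]
    if status == "errored":
        # error_severity is only present in the errored status.
        if target.get("error_severity") == "fatal":
            return "fatal error: no further attempts this broadcast"
        return "errored: may reconnect and resume broadcasting"
    return status  # idle, starting, or broadcasting

print(describe_simulcast({"status": "errored", "error_severity": "fatal"}))
```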
data.simulcast_targets[].stream_key
string

Stream Key is the stream identifier for the third-party live streaming service to which the parent live stream is simulcast.

data.simulcast_targets[].url
string

RTMP hostname including the application name for the third party live streaming service.

data.simulcast_targets[].error_severity
string
Possible values: "normal""fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the values of severities below and their descriptions.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcasting. A simulcast may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to connect to this simulcast target will be made for the current live stream asset.
data.latency_mode
string
Possible values: "low""reduced""standard"

Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.

data.test
boolean

True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. A test live stream is disabled after it has been active for 5 minutes, and the recorded asset is deleted after 24 hours.

data.max_continuous_duration
integer (default: 43200, minimum: 60, maximum: 43200)

The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.

data.srt_passphrase
string

Unique key used for encrypting a stream to a Mux SRT endpoint.

data.active_ingest_protocol
string
Possible values: "rtmp""srt"

The protocol used for the active ingest stream. This is only set when the live stream is active.

data.connected
boolean
data.recording
boolean
attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id Deprecated
string | null
accessor Deprecated
string | null
accessor_source Deprecated
string | null

This event fires after a live stream is deleted.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Live Stream. Max 255 characters.

data.created_at
integer
data.stream_key
string

Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.

data.active_asset_id
string

The Asset that is currently being created if there is an active broadcast.

data.recent_asset_ids
array

An array of strings with the most recent Asset IDs that were created from this Live Stream. The most recently generated Asset ID is the last entry in the list.

data.status
string
Possible values: "active""idle""disabled"

idle indicates that there is no active broadcast. active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.

data.playback_ids
array

An array of Playback ID objects. Use these to create HLS playback URLs. See Play your videos for more details.

data.playback_ids[].id
string

Unique identifier for the PlaybackID

data.playback_ids[].policy
string
Possible values: "public""signed"
  • public playback IDs are accessible by constructing an HLS URL like https://stream.mux.com/${PLAYBACK_ID}

  • signed playback IDs should be used with tokens https://stream.mux.com/${PLAYBACK_ID}?token={TOKEN}. See Secure video playback for details about creating tokens.

data.new_asset_settings
object
data.new_asset_settings.input
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.input[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL is defined with mux://assets/{asset_id} template where asset_id is the Asset Identifier for creating the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.input[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists the entire lifespan of the video object.

data.new_asset_settings.input[].overlay_settings.vertical_align
string
Possible values: "top""middle""bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".

data.new_asset_settings.input[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.input[].overlay_settings.horizontal_align
string
Possible values: "left", "center", "right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.input[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.input[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.input[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.input[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)
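The positioning fields above combine like this: a sketch of a semi-transparent watermark pinned to the bottom-right corner. The image URL is a placeholder:

```python
# Sketch: a semi-transparent watermark anchored to the bottom-right corner.
overlay_settings = {
    "vertical_align": "bottom",
    "vertical_margin": "5%",    # distance from the bottom edge
    "horizontal_align": "right",
    "horizontal_margin": "5%",  # distance from the right edge
    "width": "25%",             # height omitted, so it scales proportionally
    "opacity": "80%",
}

# The overlay settings attach to the input object that references the image.
watermark_input = {
    "url": "https://example.com/watermark.png",  # placeholder image URL
    "overlay_settings": overlay_settings,
}
```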

data.new_asset_settings.input[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.input[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.input[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.input[].generated_subtitles[].language_code
string (default: en)
Possible values: "en", "es", "it", "pt", "de", "fr", "pl", "ru", "nl", "ca", "tr", "sv", "uk", "no", "fi", "sk", "el", "cs", "hr", "da", "ro", "bg"

The language to generate subtitles in.

data.new_asset_settings.input[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.input[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.
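start_time and end_time pair with the mux://assets/{asset_id} URL form when clipping. A sketch, with a hypothetical helper and a placeholder asset ID:

```python
def clip_input(asset_id, start_time=0, end_time=None):
    """Build an input object that clips an existing Mux asset.

    start_time defaults to 0 and end_time defaults to the asset's
    duration (expressed here by omitting the key), as documented above.
    """
    entry = {"url": f"mux://assets/{asset_id}", "start_time": start_time}
    if end_time is not None:
        entry["end_time"] = end_time
    return entry

# Clip seconds 30-90 of an existing asset (placeholder ID).
clip = clip_input("abc123", start_time=30, end_time=90)
```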

data.new_asset_settings.input[].type
string
Possible values: "video", "audio", "text"

This parameter is required for text type tracks.

data.new_asset_settings.input[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.input[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.input[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.input[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.input[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.passthrough
string

Arbitrary user-supplied metadata that will be included in the asset details and related webhooks. Can be used to store your own ID for a video along with the asset. Max: 255 characters.

data.new_asset_settings.mp4_support
string
Possible values: "none", "standard"

Specify the level of mp4 playback support (if any). In most cases you should use our default HLS-based streaming playback ({playback_id}.m3u8), which can automatically adjust to viewers' connection speeds, but an mp4 can be useful for some legacy devices or for downloading for offline playback. See the Download your videos guide for more information.

data.new_asset_settings.normalize_audio
boolean (default: false)

Normalize the audio track loudness level. This parameter is only applicable to on-demand (not live) assets.

data.new_asset_settings.master_access
string
Possible values: "none", "temporary"

Specify the level of master access support (if any). Master access can be enabled temporarily for your asset to be downloaded. See the Download your videos guide for more information.

data.new_asset_settings.test
boolean

Marks the asset as a test asset when the value is set to true. A test asset can help evaluate the Mux Video APIs without incurring any cost. There is no limit on the number of test assets created. Test assets are watermarked with the Mux logo, limited to 10 seconds, and deleted after 24 hours.

data.new_asset_settings.max_resolution_tier
string
Possible values: "1080p", "1440p", "2160p"

Max resolution tier can be used to control the maximum resolution_tier your asset is encoded, stored, and streamed at. If not set, this defaults to 1080p.

data.new_asset_settings.encoding_tier
string
Possible values: "smart", "baseline"

The encoding tier informs the cost, quality, and available platform features for the asset. By default the smart encoding tier is used. See the guide for more details.

data.new_asset_settings.playback_policies
array

An array of playback policy names that you want applied to this asset and available through playback_ids. Options include "public" (anyone with the playback URL can stream the asset) and "signed" (an additional access token is required to play the asset). If no playback_policy is set, the asset will have no playback IDs and will therefore not be playable. For simplicity, a single string name can be used in place of the array in the case of only one playback policy.
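Because playback_policies accepts either a single string or an array, a webhook consumer may want to normalize it before use. A sketch; the helper name is hypothetical:

```python
def playback_policies_list(value):
    """Normalize the single-string shortcut to a list of policy names."""
    if value is None:
        return []  # no policies set: the asset has no playback IDs
    if isinstance(value, str):
        return [value]
    return list(value)

assert playback_policies_list("public") == ["public"]
assert playback_policies_list(["public", "signed"]) == ["public", "signed"]
assert playback_policies_list(None) == []
```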

data.new_asset_settings.inputs
array

An array of objects that each describe an input file to be used to create the asset. As a shortcut, input can also be a string URL for a file when only one input file is used. See input[].url for requirements.

data.new_asset_settings.inputs[].url
string

The URL of the file that Mux should download and use.

  • For the main input file, this should be the URL to the muxed file for Mux to download, for example an MP4, MOV, MKV, or TS file. Mux supports most audio/video file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For audio tracks, the URL is the location of the audio file for Mux to download, for example an M4A, WAV, or MP3 file. Mux supports most audio file formats and codecs, but for fastest processing, you should use standard inputs wherever possible.
  • For text tracks, the URL is the location of the subtitle/captions file. Mux supports SubRip Text (SRT) and Web Video Text Tracks (WebVTT) formats for ingesting Subtitles and Closed Captions.
  • For Watermarking or Overlay, the URL is the location of the watermark image. The maximum size is 4096x4096.
  • When creating clips from existing Mux assets, the URL uses the mux://assets/{asset_id} template, where asset_id is the identifier of the Asset to create the clip from. The url property may be omitted on the first input object when providing asset settings for LiveStream and Upload objects, in order to configure settings related to the primary (live stream or direct upload) input.
data.new_asset_settings.inputs[].overlay_settings
object

An object that describes how the image file referenced in URL should be placed over the video (i.e. watermarking). Ensure that the URL is active and persists for the entire lifespan of the video object.

data.new_asset_settings.inputs[].overlay_settings.vertical_align
string
Possible values: "top", "middle", "bottom"

Where the vertical positioning of the overlay/watermark should begin from. Defaults to "top".

data.new_asset_settings.inputs[].overlay_settings.vertical_margin
string

The distance between the vertical_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'middle', a positive value will shift the overlay towards the bottom and a negative value will shift it towards the top.

data.new_asset_settings.inputs[].overlay_settings.horizontal_align
string
Possible values: "left", "center", "right"

Where the horizontal positioning of the overlay/watermark should begin from.

data.new_asset_settings.inputs[].overlay_settings.horizontal_margin
string

The distance between the horizontal_align starting point and the image's closest edge. Can be expressed as a percent ("10%") or as a pixel value ("100px"). Negative values will move the overlay offscreen. In the case of 'center', a positive value will shift the image towards the right and a negative value will shift it towards the left.

data.new_asset_settings.inputs[].overlay_settings.width
string

How wide the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the width will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If height is supplied with no width, the width will scale proportionally to the height.

data.new_asset_settings.inputs[].overlay_settings.height
string

How tall the overlay should appear. Can be expressed as a percent ("10%") or as a pixel value ("100px"). If both width and height are left blank the height will be the true pixels of the image, applied as if the video has been scaled to fit a 1920x1080 frame. If width is supplied with no height, the height will scale proportionally to the width.

data.new_asset_settings.inputs[].overlay_settings.opacity
string

How opaque the overlay should appear, expressed as a percent. (Default 100%)

data.new_asset_settings.inputs[].generated_subtitles
array

Generate subtitle tracks using automatic speech recognition with this configuration. This may only be provided for the first input object (the main input file). For direct uploads, this first input should omit the url parameter, as the main input file is provided via the direct upload. This will create subtitles based on the audio track ingested from that main input file. Note that subtitle generation happens after initial ingest, so the generated tracks will be in the preparing state when the asset transitions to ready.

data.new_asset_settings.inputs[].generated_subtitles[].name
string

A name for this subtitle track.

data.new_asset_settings.inputs[].generated_subtitles[].passthrough
string

Arbitrary metadata set for the subtitle track. Max 255 characters.

data.new_asset_settings.inputs[].generated_subtitles[].language_code
string (default: en)
Possible values: "en", "es", "it", "pt", "de", "fr", "pl", "ru", "nl", "ca", "tr", "sv", "uk", "no", "fi", "sk", "el", "cs", "hr", "da", "ro", "bg"

The language to generate subtitles in.

data.new_asset_settings.inputs[].start_time
number

The time offset in seconds from the beginning of the video indicating the clip's starting marker. The default value is 0 when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].end_time
number

The time offset in seconds from the beginning of the video, indicating the clip's ending marker. The default value is the duration of the video when not included. This parameter is only applicable for creating clips when input.url has mux://assets/{asset_id} format.

data.new_asset_settings.inputs[].type
string
Possible values: "video", "audio", "text"

This parameter is required for text type tracks.

data.new_asset_settings.inputs[].text_type
string
Possible values: "subtitles"

Type of text track. This parameter only supports the subtitles value. For more information on Subtitles / Closed Captions, see this blog post. This parameter is required for text type tracks.

data.new_asset_settings.inputs[].language_code
string

The language code value must be a valid BCP 47 specification compliant value. For example, en for English or en-US for the US version of English. This parameter is required for text and audio track types.

data.new_asset_settings.inputs[].name
string

The name of the track containing a human-readable description. This value must be unique within each group of text or audio track types. The HLS manifest will associate a subtitle text track with this value. For example, the value should be "English" for a subtitle text track with language_code set to en. This optional parameter should be used only for text and audio type tracks. This parameter can be optionally provided for the first video input to denote the name of the muxed audio track if present. If this parameter is not included, Mux will auto-populate based on the input[].language_code value.

data.new_asset_settings.inputs[].closed_captions
boolean

Indicates the track provides Subtitles for the Deaf or Hard-of-hearing (SDH). This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.new_asset_settings.inputs[].passthrough
string

This optional parameter should be used for tracks with type of text and text_type set to subtitles.

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.audio_only
boolean

The live stream only processes the audio track if the value is set to true. Mux drops the video track if one is broadcast.

data.embedded_subtitles
array

Describes the embedded closed caption configuration of the incoming live stream.

data.embedded_subtitles[].name
string

A name for this live stream closed caption track.

data.embedded_subtitles[].passthrough
string

Arbitrary user-supplied metadata set for the live stream closed caption track. Max 255 characters.

data.embedded_subtitles[].language_code
string (default: en)

The language of the closed caption stream. Value must be BCP 47 compliant.

data.embedded_subtitles[].language_channel
string (default: cc1)
Possible values: "cc1", "cc2", "cc3", "cc4"

CEA-608 caption channel to read data from.

data.generated_subtitles
array

Configure the incoming live stream to include subtitles created with automatic speech recognition. Each Asset created from a live stream with generated_subtitles configured will automatically receive two text tracks. The first of these will have a text_source value of generated_live, and will be available with ready status as soon as the stream is live. The second text track will have a text_source value of generated_live_final and will contain subtitles with improved accuracy, timing, and formatting. However, generated_live_final tracks will not be available in ready status until the live stream ends. If an Asset has both generated_live and generated_live_final tracks that are ready, then only the generated_live_final track will be included during playback.
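The playback preference described above (a ready generated_live_final track wins over its generated_live counterpart) can be mirrored when inspecting an asset's tracks yourself. A sketch over simplified track dicts:

```python
def preferred_generated_track(tracks):
    """Pick the subtitle track Mux would include during playback:
    a ready generated_live_final track wins over generated_live."""
    ready = [t for t in tracks if t.get("status") == "ready"]
    for source in ("generated_live_final", "generated_live"):
        for track in ready:
            if track.get("text_source") == source:
                return track
    return None

tracks = [
    {"text_source": "generated_live", "status": "ready"},
    {"text_source": "generated_live_final", "status": "preparing"},
]
# The final track is not ready until the stream ends, so the live one is used.
assert preferred_generated_track(tracks)["text_source"] == "generated_live"
```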

data.generated_subtitles[].name
string

A name for this live stream subtitle track.

data.generated_subtitles[].passthrough
string

Arbitrary metadata set for the live stream subtitle track. Max 255 characters.

data.generated_subtitles[].language_code
string (default: en)
Possible values: "en", "en-US"

The language to generate subtitles in.

data.generated_subtitles[].transcription_vocabulary_ids
array

Unique identifiers for existing Transcription Vocabularies to use while generating subtitles for the live stream. If the Transcription Vocabularies provided collectively have more than 1000 phrases, only the first 1000 phrases will be included.

data.reconnect_window
number (default: 60, minimum: 0, maximum: 1800)

When live streaming software disconnects from Mux, either intentionally or due to a drop in the network, the Reconnect Window is the time in seconds that Mux should wait for the streaming software to reconnect before considering the live stream finished and completing the recorded asset. Max: 1800s (30 minutes).

If not specified directly, Standard Latency streams have a Reconnect Window of 60 seconds; Reduced and Low Latency streams have a default of 0 seconds, or no Reconnect Window. For that reason, we suggest specifying a value other than zero for Reduced and Low Latency streams.

Reduced and Low Latency streams with a Reconnect Window greater than zero will insert slate media into the recorded asset while waiting for the streaming software to reconnect or when there are brief interruptions in the live stream media. When using a Reconnect Window setting higher than 60 seconds with a Standard Latency stream, we highly recommend enabling slate with the use_slate_for_standard_latency option.
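The defaults described above depend on the stream's latency mode. A sketch of resolving the effective reconnect window under those documented defaults; the helper name is hypothetical:

```python
def effective_reconnect_window(latency_mode, reconnect_window=None):
    """Apply the documented defaults: 60s for standard latency,
    0s (no reconnect window) for reduced and low latency streams."""
    if reconnect_window is not None:
        return reconnect_window
    return 60 if latency_mode == "standard" else 0

assert effective_reconnect_window("standard") == 60
assert effective_reconnect_window("low") == 0
assert effective_reconnect_window("reduced", reconnect_window=120) == 120
```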

data.use_slate_for_standard_latency
boolean (default: false)

By default, Standard Latency live streams do not have slate media inserted while waiting for live streaming software to reconnect to Mux. Setting this to true enables slate insertion on a Standard Latency stream.

data.reconnect_slate_url
string

The URL of the image file that Mux should download and use as slate media during interruptions of the live stream media. This file will be downloaded each time a new recorded asset is created from the live stream. If this is not set, the default slate media will be used.

data.reduced_latency (Deprecated)
boolean

This field is deprecated. Please use latency_mode instead. Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this if you want lower latency for your live stream. See the Reduce live stream latency guide to understand the tradeoffs.

data.simulcast_targets
array

Each Simulcast Target contains configuration details to broadcast (or "restream") a live stream to a third-party streaming service. See the Stream live to 3rd party platforms guide.

data.simulcast_targets[].id
string

ID of the Simulcast Target

data.simulcast_targets[].passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.simulcast_targets[].status
string
Possible values: "idle", "starting", "broadcasting", "errored"

The current status of the simulcast target. See the statuses below for a detailed description.

  • idle: Default status. When the parent live stream is in disconnected status, simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcast. When a simulcast target has this status it will have an error_severity field with more details about the error.
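A webhook consumer reacting to these statuses might branch on status and error_severity like this. A sketch; the summary strings are illustrative, not API values:

```python
def describe_simulcast_target(data):
    """Map a simulcast target's status (and error_severity when errored)
    to a short human-readable summary, per the statuses documented above."""
    status = data["status"]
    if status == "errored":
        severity = data.get("error_severity", "normal")
        if severity == "fatal":
            return "incompatible with current input; no further attempts"
        return "connection error; may recover to broadcasting"
    return {
        "idle": "parent stream disconnected",
        "starting": "parent stream connected; target starting",
        "broadcasting": "pushing video to the third-party service",
    }[status]

assert describe_simulcast_target({"status": "idle"}) == "parent stream disconnected"
```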
data.simulcast_targets[].stream_key
string

Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.

data.simulcast_targets[].url
string

RTMP hostname including the application name for the third party live streaming service.

data.simulcast_targets[].error_severity
string
Possible values: "normal", "fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the severity values below and their descriptions.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
data.latency_mode
string
Possible values: "low", "reduced", "standard"

Latency is the time from when the streamer transmits a frame of video to when you see it in the player. Set this as an alternative to setting low latency or reduced latency flags.

data.test
boolean

True means this live stream is a test live stream. Test live streams can be used to help evaluate the Mux Video APIs for free. There is no limit on the number of test live streams, but they are watermarked with the Mux logo and limited to 5 minutes. The test live stream is disabled after the stream has been active for 5 minutes, and the recorded asset is also deleted after 24 hours.

data.max_continuous_duration
integer (default: 43200, minimum: 60, maximum: 43200)

The time in seconds a live stream may be continuously active before being disconnected. Defaults to 12 hours.

data.srt_passphrase
string

Unique key used for encrypting a stream to a Mux SRT endpoint.

data.active_ingest_protocol
string
Possible values: "rtmp", "srt"

The protocol used for the active ingest stream. This is only set when the live stream is active.

data.connected
boolean
data.recording
boolean
attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

This live stream event fires when Mux has encountered a non-fatal issue. There is no disruption to live stream ingest or playback. At this time, the event is only fired when Mux is unable to download an image from the URL set as the reconnect_slate_url parameter value.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

Unique identifier for the Live Stream. Max 255 characters.

data.stream_key
string

Unique key used for streaming to a Mux RTMP endpoint. This should be considered as sensitive as credentials; anyone with this stream key can begin streaming.

data.active_asset_id
string

The Asset that is currently being created if there is an active broadcast.

data.status
string
Possible values: "active", "idle", "disabled"

idle indicates that there is no active broadcast, active indicates that there is an active broadcast, and disabled indicates that no future RTMP streams can be published.
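The three statuses map naturally onto a simple readiness check in application code. A sketch; the helper name is hypothetical:

```python
def can_accept_broadcast(status):
    """Per the statuses above: an idle stream can accept a new broadcast,
    an active stream already has one, and a disabled stream accepts none."""
    return status == "idle"

assert can_accept_broadcast("idle")
assert not can_accept_broadcast("active")
assert not can_accept_broadcast("disabled")
```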

data.passthrough
string

Arbitrary user-supplied metadata set for the asset. Max 255 characters.

data.warning
object
data.warning.type
string
data.warning.message
string
attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

A new simulcast target has been created for this live stream.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

ID of the Simulcast Target

data.passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.status
string
Possible values: "idle", "starting", "broadcasting", "errored"

The current status of the simulcast target. See the statuses below for a detailed description.

  • idle: Default status. When the parent live stream is in disconnected status, simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcast. When a simulcast target has this status it will have an error_severity field with more details about the error.
data.stream_key
string

Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.

data.url
string

RTMP hostname including the application name for the third party live streaming service.

data.error_severity
string
Possible values: "normal", "fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the severity values below and their descriptions.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
data.live_stream_id
string

Unique identifier for the Live Stream. Max 255 characters.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL address for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

When the parent live stream is 'disconnected', all simulcast targets will be 'idle'.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

ID of the Simulcast Target

data.passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.status
string
Possible values: "idle", "starting", "broadcasting", "errored"

The current status of the simulcast target. See the statuses below for a detailed description.

  • idle: Default status. When the parent live stream is in disconnected status, simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcast. When a simulcast target has this status it will have an error_severity field with more details about the error.
data.stream_key
string

Stream Key represents a stream identifier for the third party live streaming service to simulcast the parent live stream to.

data.url
string

RTMP hostname including the application name for the third party live streaming service.

data.error_severity
string
Possible values: "normal", "fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the severity values below and their descriptions.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service, or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to this simulcast target will be made for the current live stream asset.
data.live_stream_id
string

Unique identifier for the Live Stream. Max 255 characters.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null
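A minimal consumer sketch for these simulcast target events, using only the fields documented above. The function and its return values are illustrative, not part of Mux's API; in practice, acknowledge the delivery quickly with a 2xx response and do heavy work outside the request cycle so Mux does not re-attempt delivery.

```python
import json

# Hypothetical handler: parse a simulcast-target webhook payload and branch
# on data.status. Field names follow the reference above; raw_body is the
# request body your HTTP framework hands you as bytes.
def handle_simulcast_event(raw_body: bytes) -> str:
    event = json.loads(raw_body)
    status = event["data"]["status"]
    if status == "broadcasting":
        return "simulcast live"  # target is pushing video to the third party
    if status == "errored":
        # error_severity is only present when the target is errored
        severity = event["data"].get("error_severity", "normal")
        return f"errored ({severity})"
    return status  # "idle" or "starting": nothing to do yet
```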

When the parent live stream fires 'connected', its simulcast targets transition to 'starting'.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

ID of the Simulcast Target

data.passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.status
string
Possible values: "idle""starting""broadcasting""errored"

The current status of the simulcast target. See the statuses below for detailed descriptions.

  • idle: Default status. When the parent live stream is in the disconnected status, its simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service or mid-broadcast. When a simulcast target has this status, it will have an error_severity field with more details about the error.
data.stream_key
string

The stream key is the stream identifier for the third party live streaming service to which the parent live stream is simulcast.

data.url
string

RTMP hostname including the application name for the third party live streaming service.

data.error_severity
string
Possible values: "normal""fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the severity values and their descriptions below.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to connect to this simulcast target will be made for the current live stream asset.
data.live_stream_id
string

Unique identifier for the Live Stream. Max 255 characters.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null
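Each delivery attempt described above should be authenticated before processing. The sketch below assumes the common `t=<timestamp>,v1=<hex HMAC-SHA256>` signature header format, signed over `<timestamp>.<raw body>` with your signing secret; consult Mux's webhook security guide for the authoritative header name and scheme.

```python
import hashlib
import hmac

# Hypothetical verification of a webhook signature header. The "t=...,v1=..."
# layout and the "<timestamp>.<body>" signed payload are assumptions here --
# check the provider's docs before relying on this exact format.
def verify_signature(raw_body: bytes, signature_header: str, secret: str) -> bool:
    parts = dict(p.split("=", 1) for p in signature_header.split(","))
    timestamp, received = parts["t"], parts["v1"]
    signed_payload = timestamp.encode() + b"." + raw_body
    expected = hmac.new(secret.encode(), signed_payload, hashlib.sha256).hexdigest()
    # constant-time comparison to avoid timing attacks
    return hmac.compare_digest(expected, received)
```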

This fires when Mux has successfully connected to the simulcast target and has begun pushing content to that third party.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

ID of the Simulcast Target

data.passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.status
string
Possible values: "idle""starting""broadcasting""errored"

The current status of the simulcast target. See the statuses below for detailed descriptions.

  • idle: Default status. When the parent live stream is in the disconnected status, its simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service or mid-broadcast. When a simulcast target has this status, it will have an error_severity field with more details about the error.
data.stream_key
string

The stream key is the stream identifier for the third party live streaming service to which the parent live stream is simulcast.

data.url
string

RTMP hostname including the application name for the third party live streaming service.

data.error_severity
string
Possible values: "normal""fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the severity values and their descriptions below.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to connect to this simulcast target will be made for the current live stream asset.
data.live_stream_id
string

Unique identifier for the Live Stream. Max 255 characters.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

This fires when Mux has encountered an error either while attempting to connect to the third party streaming service or while broadcasting. Mux will try to re-establish the connection; if it succeeds, the simulcast target will transition back to 'broadcasting'.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

ID of the Simulcast Target

data.passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.status
string
Possible values: "idle""starting""broadcasting""errored"

The current status of the simulcast target. See the statuses below for detailed descriptions.

  • idle: Default status. When the parent live stream is in the disconnected status, its simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service or mid-broadcast. When a simulcast target has this status, it will have an error_severity field with more details about the error.
data.stream_key
string

The stream key is the stream identifier for the third party live streaming service to which the parent live stream is simulcast.

data.url
string

RTMP hostname including the application name for the third party live streaming service.

data.error_severity
string
Possible values: "normal""fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the severity values and their descriptions below.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to connect to this simulcast target will be made for the current live stream asset.
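Because normal errors can recover on their own while a fatal error ends simulcasting for the current live stream asset, a consumer might branch on error_severity like this (the returned action names are purely illustrative, not Mux API values):

```python
# Hypothetical reaction to the two documented severities: on "normal" errors
# Mux keeps retrying the connection itself, so the consumer just waits; on
# "fatal" errors no further attempts are made for this live stream asset, so
# the target must be fixed or recreated out of band.
def plan_for_error(severity: str) -> str:
    if severity == "fatal":
        # e.g. delete and recreate the simulcast target with compatible settings
        return "recreate-target"
    # "normal": the target may transition back to broadcasting on its own
    return "wait-for-reconnect"
```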
data.live_stream_id
string

Unique identifier for the Live Stream. Max 255 characters.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

This simulcast target has been deleted.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

ID of the Simulcast Target

data.passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.status
string
Possible values: "idle""starting""broadcasting""errored"

The current status of the simulcast target. See the statuses below for detailed descriptions.

  • idle: Default status. When the parent live stream is in the disconnected status, its simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service or mid-broadcast. When a simulcast target has this status, it will have an error_severity field with more details about the error.
data.stream_key
string

The stream key is the stream identifier for the third party live streaming service to which the parent live stream is simulcast.

data.url
string

RTMP hostname including the application name for the third party live streaming service.

data.error_severity
string
Possible values: "normal""fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the severity values and their descriptions below.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to connect to this simulcast target will be made for the current live stream asset.
data.live_stream_id
string

Unique identifier for the Live Stream. Max 255 characters.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

This simulcast target has been updated.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.id
string

ID of the Simulcast Target

data.passthrough
string

Arbitrary user-supplied metadata set when creating a simulcast target.

data.status
string
Possible values: "idle""starting""broadcasting""errored"

The current status of the simulcast target. See the statuses below for detailed descriptions.

  • idle: Default status. When the parent live stream is in the disconnected status, its simulcast targets will be in the idle state.
  • starting: The simulcast target transitions into this state when the parent live stream transitions into the connected state.
  • broadcasting: The simulcast target has successfully connected to the third party live streaming service and is pushing video to that service.
  • errored: The simulcast target encountered an error either while attempting to connect to the third party live streaming service or mid-broadcast. When a simulcast target has this status, it will have an error_severity field with more details about the error.
data.stream_key
string

The stream key is the stream identifier for the third party live streaming service to which the parent live stream is simulcast.

data.url
string

RTMP hostname including the application name for the third party live streaming service.

data.error_severity
string
Possible values: "normal""fatal"

The severity of the error encountered by the simulcast target. This field is only set when the simulcast target is in the errored status. See the severity values and their descriptions below.

  • normal: The simulcast target encountered an error either while attempting to connect to the third party live streaming service or mid-broadcast. A simulcast target may transition back into the broadcasting state if a connection with the service can be re-established.
  • fatal: The simulcast target is incompatible with the current input to the parent live stream. No further attempts to connect to this simulcast target will be made for the current live stream asset.
data.live_stream_id
string

Unique identifier for the Live Stream. Max 255 characters.

attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null

Alert for high-traffic video delivery.

NameTypeDescription
type
string

Type for the webhook event

id
string

Unique identifier for the event

created_at
string

Time the event was created

object
object
object.type
string
object.id
string
environment
object
environment.name
string

Name for the environment

environment.id
string

Unique identifier for the environment

data
object
data.data
array
data.data[].live_stream_id
string

Unique identifier for the live stream that created the asset.

data.data[].asset_id
string

Unique identifier for the asset.

data.data[].passthrough
string

The passthrough value for the asset.

data.data[].created_at
integer

Time at which the asset was created. Measured in seconds since the Unix epoch.

data.data[].deleted_at
integer

If present, the time at which the asset was deleted. Measured in seconds since the Unix epoch.

data.data[].asset_state
string
Possible values: "ready""errored""deleted"

The state of the asset.

data.data[].asset_duration
number

The duration of the asset in seconds.

data.data[].asset_resolution_tier
string
Possible values: "audio-only""720p""1080p""1440p""2160p"

The resolution tier that the asset was ingested at, affecting billing for ingest & storage.

data.data[].delivered_seconds
number

Total number of delivered seconds during this time window.

data.data[].delivered_seconds_by_resolution
object

Seconds delivered broken into resolution tiers. Each tier will only be displayed if there was content delivered in the tier.

data.data[].delivered_seconds_by_resolution.tier_2160p
number

Total number of delivered seconds during this time window that had a resolution larger than the 1440p tier (over 4,194,304 pixels total).

data.data[].delivered_seconds_by_resolution.tier_1440p
number

Total number of delivered seconds during this time window that had a resolution larger than the 1080p tier but less than or equal to the 2160p tier (over 2,073,600 and <= 4,194,304 pixels total).

data.data[].delivered_seconds_by_resolution.tier_1080p
number

Total number of delivered seconds during this time window that had a resolution larger than the 720p tier but less than or equal to the 1440p tier (over 921,600 and <= 2,073,600 pixels total).

data.data[].delivered_seconds_by_resolution.tier_720p
number

Total number of delivered seconds during this time window that had a resolution within the 720p tier (up to 921,600 pixels total, based on typical 1280x720).

data.data[].delivered_seconds_by_resolution.tier_audio_only
number

Total number of delivered seconds during this time window that had a resolution of audio only.
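The tier boundaries above are defined by total pixel count: up to 921,600 pixels is the 720p tier, up to 2,073,600 is 1080p, up to 4,194,304 is 1440p, and anything larger lands in the 2160p tier. A small sketch of that classification (the function name is illustrative; Mux applies this server-side):

```python
# Map a frame's dimensions to the delivery tier keys used in
# delivered_seconds_by_resolution, using the pixel thresholds documented above.
def resolution_tier(width: int, height: int) -> str:
    pixels = width * height
    if pixels <= 921_600:    # up to typical 1280x720
        return "tier_720p"
    if pixels <= 2_073_600:  # up to typical 1920x1080
        return "tier_1080p"
    if pixels <= 4_194_304:  # up to the 1440p boundary
        return "tier_1440p"
    return "tier_2160p"
```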

data.timeframe
array
data.threshold
integer

Current threshold set for alerting

data.id
string
attempts
array

Attempts for sending out the webhook event

attempts[].webhook_id
integer

Unique identifier for the webhook

attempts[].response_status_code
integer

HTTP response status code for the webhook attempt

attempts[].response_headers
object

HTTP response headers for the webhook attempt

attempts[].response_body
string | null

HTTP response body for the webhook attempt

attempts[].max_attempts
integer

Maximum number of delivery attempts for the webhook event

attempts[].id
string

Unique identifier for the webhook attempt

attempts[].created_at
string

Time the webhook request was attempted

attempts[].address
string

URL for the webhook attempt

request_id (Deprecated)
string | null
accessor (Deprecated)
string | null
accessor_source (Deprecated)
string | null
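Putting the alert payload together: each data.data[] entry carries a per-tier breakdown whose present keys should sum to its delivered_seconds, and data.threshold is the alert level currently configured. A sketch of consuming this payload (helper names are illustrative, not part of Mux's API):

```python
# Sum the per-tier delivered seconds for one asset entry. Only tiers that
# actually delivered content appear in the breakdown, so summing whatever
# keys are present should reconcile with delivered_seconds.
def total_delivered_seconds(entry: dict) -> float:
    by_tier = entry.get("delivered_seconds_by_resolution", {})
    return sum(by_tier.values())

# Check the environment-wide total against the configured alert threshold.
def exceeds_threshold(data: dict) -> bool:
    total = sum(total_delivered_seconds(e) for e in data.get("data", []))
    return total >= data["threshold"]
```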