
fuchsia.media

PROTOCOLS

ActivityReporter

Defined in fuchsia.media/activity_reporter.fidl

A protocol for monitoring the usage activity of the AudioRenderers.

WatchRenderActivity

Notifies the client whenever there is a change in the set of active AudioRenderUsages. It returns immediately the first time that it is called.

Request

Name Type

Response

Name Type
active_usages vector<AudioRenderUsage>[5]

Audio

Defined in fuchsia.media/audio.fidl

CreateAudioCapturer

Create an AudioCapturer which either captures from the current default audio input device, or loops back from the current default audio output device, based on the value passed for the loopback flag.

Request

Name Type
audio_capturer_request request<AudioCapturer>
loopback bool

CreateAudioRenderer

Request

Name Type
audio_renderer_request request<AudioRenderer>

SetSystemGain

Request

Name Type
gain_db float32

SetSystemMute

Request

Name Type
muted bool

SystemGainMuteChanged

Response

Name Type
gain_db float32
muted bool

AudioCapturer

Defined in fuchsia.media/audio_capturer.fidl

AudioCapturer

An AudioCapturer is an interface returned by fuchsia.media.Audio's CreateAudioCapturer method, which may be used by clients to capture audio from either the current default audio input device, or the current default audio output device, depending on the flags passed during creation.

** Format support **

See (Get|Set)StreamType below. By default, the captured stream type will be initially determined by the currently configured stream type of the source that the AudioCapturer was bound to at creation time. Users may either fetch this type using GetStreamType, or they may choose to have the media resampled or converted to a type of their choosing by calling SetStreamType. Note: the stream type may only be set while the system is not running, meaning that there are no pending capture regions (specified using CaptureAt) and that the system is not currently running in 'async' capture mode.

** Buffers and memory management **

Audio data is captured into a shared memory buffer (a VMO) supplied by the user to the AudioCapturer during the AddPayloadBuffer call. Please note the following requirements related to the management of the payload buffer.

++ The payload buffer must be supplied before any capture operation may start. Any attempt to start capture (via either CaptureAt or StartAsyncCapture) before a payload buffer has been established is an error.
++ The payload buffer may not be changed while there are any capture operations pending.
++ The stream type may not be changed after the payload buffer has been set.
++ The payload buffer must be an integral number of audio frame sizes (in bytes).
++ When running in 'async' mode (see below), the payload buffer must be at least as large as twice the frames_per_packet size specified during StartAsyncCapture.
++ The handle to the payload buffer supplied by the user must be readable, writable, mappable and transferable.
++ Users should always treat the payload buffer as read-only.
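The buffer-size requirements above are easy to get wrong; here is a minimal Python sketch of the checks. The function and its parameters are illustrative, not part of the FIDL API:

```python
def validate_payload_buffer(buffer_bytes, bytes_per_frame, frames_per_packet=None):
    """Check a capture payload buffer against the rules above.

    bytes_per_frame: size of one audio frame (channels * bytes per sample).
    frames_per_packet: pass the StartAsyncCapture value when using 'async' mode.
    """
    # The buffer must be an integral number of audio frame sizes.
    if buffer_bytes % bytes_per_frame != 0:
        return False
    # In 'async' mode the buffer must hold at least two packets' worth of frames.
    if frames_per_packet is not None:
        if buffer_bytes < 2 * frames_per_packet * bytes_per_frame:
            return False
    return True
```

For example, a 4096-byte buffer of 4-byte frames passes in 'sync' mode but fails for StartAsyncCapture with frames_per_packet of 600, since two packets would need 4800 bytes.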

** Synchronous vs. Asynchronous capture mode **

The AudioCapturer interface can be used in one of two mutually exclusive modes: Synchronous and Asynchronous. A description of each mode and their tradeoffs is given below.

** Synchronous mode **

By default, AudioCapturer instances are running in 'sync' mode. They will only capture data when a user supplies at least one region to capture into using the CaptureAt method. Regions supplied in this way will be filled in the order that they are received and returned to the client as StreamPackets via the return value of the CaptureAt method. If an AudioCapturer instance has data to capture, but no place to put it (because there are no more pending regions to fill), the next payload generated will indicate that there has been an overflow by setting the Discontinuity flag on the next produced StreamPacket. Synchronous mode may not be used in conjunction with Asynchronous mode. It is an error to attempt to call StartAsyncCapture while there are still regions supplied by CaptureAt waiting to be filled.

If a user has supplied regions to be filled by the AudioCapturer instance in the past, but wishes to reclaim those regions, they may do so using the DiscardAllPackets method. Calling the DiscardAllPackets method will cause all pending regions to be returned, but with NO_TIMESTAMP as their StreamPacket's PTS. See "Timing and Overflows", below, for a discussion of timestamps and discontinuity flags. After a DiscardAllPackets operation, an OnEndOfStream event will be produced. While an AudioCapturer will never overwrite any region of the payload buffer after a completed region is returned, it may overwrite the unfilled portions of a partially filled buffer which has been returned as a result of a DiscardAllPackets operation.

** Asynchronous mode **

While running in 'async' mode, clients do not need to explicitly supply shared buffer regions to be filled by the AudioCapturer instance. Instead, a client enters into 'async' mode by calling StartAsyncCapture and supplying a callback interface and the number of frames to capture per-callback. Once running in async mode, the AudioCapturer instance will identify which payload buffer regions to capture into, capture the specified number of frames, then deliver those frames as StreamPackets using the OnPacketCapture FIDL event. Users may stop capturing and return the AudioCapturer instance to 'sync' mode using the StopAsyncCapture method.

It is considered an error to attempt any of the following operations.

++ To attempt to enter 'async' capture mode when no payload buffer has been established.
++ To specify a number of frames to capture per payload which does not permit at least two contiguous capture payloads to exist in the established shared payload buffer simultaneously.
++ To send a region to capture into using the CaptureAt method while the AudioCapturer instance is running in 'async' mode.
++ To attempt to call DiscardAllPackets while the AudioCapturer instance is running in 'async' mode.
++ To attempt to re-start 'async' mode capturing without having first stopped.
++ To attempt any operation except for SetGain while in the process of stopping.

** Synchronizing with a StopAsyncCapture operation **

Stopping asynchronous capture mode and returning to synchronous capture mode is an operation which takes time. Aside from SetGain, users may not call any other methods on the AudioCapturer interface after calling StopAsyncCapture (including calling StopAsyncCapture again) until after the stop operation has completed. Because of this, it is important for users to be able to synchronize with the stop operation. Two mechanisms are provided for doing so.

The first is to use StopAsyncCapture (not the NoReply variant). When the user's callback has been called, they can be certain that the stop operation is complete and that the AudioCapturer instance has returned to synchronous operation mode.

The second way to determine that a stop operation has completed is to use the flags on the packets which get delivered via the user-supplied AudioCapturerCallback interface after calling StopAsyncCapture. When asked to stop, any partially filled packet will be returned to the user, and the final packet returned will always have the end-of-stream flag (kFlagsEos) set on it to indicate that this is the final frame in the sequence. If there is no partially filled packet to return, the AudioCapturer will synthesize an empty packet with no timestamp, and offset/length set to zero, in order to deliver a packet with the end-of-stream flag set on it. Once users have seen the end-of-stream flag after calling stop, the AudioCapturer has finished the stop operation and returned to synchronous operating mode.
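The packet behavior at stop time can be modeled as follows; this is an illustrative Python sketch, not the service implementation, and NO_TIMESTAMP here is a stand-in sentinel rather than the real FIDL constant's value:

```python
# Stand-in sentinel; the real NO_TIMESTAMP constant is defined by fuchsia.media.
NO_TIMESTAMP = object()

def packets_on_stop(partial_packet=None):
    """Model of what an AudioCapturer delivers when StopAsyncCapture is requested."""
    if partial_packet is not None:
        # Any partially filled packet is flushed; the final packet delivered
        # always carries the end-of-stream flag.
        partial_packet['eos'] = True
        return [partial_packet]
    # Nothing pending: synthesize an empty packet (no timestamp, zero
    # offset/length) just to carry the end-of-stream flag.
    return [{'pts': NO_TIMESTAMP, 'offset': 0, 'length': 0, 'eos': True}]
```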

** Timing and Overflows **

All media packets produced by an AudioCapturer instance will have their PTS field filled out with the capture time of the audio expressed as a timestamp given by the reference clock timeline. Note: this timestamp is actually a capture timestamp, not a presentation timestamp (it is more of a CTS than a PTS) and is meant to represent the underlying system's best estimate of the capture time of the first frame of audio, including all outboard and hardware introduced buffering delay. As a result, all timestamps produced by an AudioCapturer should be expected to be in the past relative to 'now' on the stream's reference clock timeline.

The one exception to the "everything has an explicit timestamp" rule is when discarding submitted regions while operating in synchronous mode. Discarded packets have no data in them, but FIDL demands that all pending method-return-value callbacks be executed. Because of this, the regions will be returned to the user, but their timestamps will be set to NO_TIMESTAMP, and their payload sizes will be set to zero. Any partially filled payload will have a valid timestamp, but a payload size smaller than originally requested. The final discarded payload (if there were any to discard) will be followed by an OnEndOfStream event.

Two StreamPackets delivered by an AudioCapturer instance are 'continuous' if the first frame of audio contained in the second packet was captured exactly one nominal frame time after the final frame of audio in the first packet. If this relationship does not hold, the second StreamPacket will have the 'kFlagDiscontinuous' flag set in its flags field.

Even though explicit timestamps are provided on every StreamPacket produced, users who have very precise timing requirements are encouraged to always reason about time by counting frames delivered since the last discontinuity, rather than simply using the raw capture timestamps. This is because the explicit timestamps written on continuous packets may have a small amount of rounding error based on whether or not the units of the capture timeline reference clock are divisible by the chosen audio frame rate.
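The frame-counting approach can be sketched like this (illustrative Python, assuming integer nanosecond timestamps and a fixed frame rate; the function name is hypothetical):

```python
def frame_timestamp(discontinuity_ts_ns, frames_since_discontinuity, frame_rate_hz):
    """Nominal capture time of a frame, counted from the last discontinuity.

    Counting frames avoids accumulating the per-packet rounding error that raw
    capture timestamps can carry when the reference-clock units are not evenly
    divisible by the chosen audio frame rate.
    """
    return discontinuity_ts_ns + (frames_since_discontinuity * 1_000_000_000) // frame_rate_hz
```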

Users should always expect the first StreamPacket produced by an AudioCapturer to have the discontinuous flag set on it (as there is no previous packet to be continuous with). Similarly, the first StreamPacket after a DiscardAllPackets or a Stop/Start cycle will always be discontinuous. After that, there are only two reasons that a StreamPacket will ever be discontinuous:

  1. The user is operating in synchronous mode and does not supply regions to be filled quickly enough. If the next continuous frame of data has not been captured by the time it needs to be purged from the source buffers, an overflow has occurred and the AudioCapturer will flag the next captured region as discontinuous.
  2. The user is operating in asynchronous mode and some internal error prevents the AudioCapturer instance from capturing the next frame of audio in a continuous fashion. This might be high system load or a hardware error, but in general it is something which should never normally happen. In practice, however, if it does, the next produced packet will be flagged as being discontinuous.

** Synchronous vs. Asynchronous Trade-offs **

The choice of operating in synchronous vs. asynchronous mode is up to the user, and depending on the user's requirements, there are some advantages and disadvantages to each choice.

Synchronous mode requires only a single Zircon channel under the hood and can achieve some small savings because of this. In addition, the user has complete control over the buffer management. Users specify exactly where audio will be captured to and in what order. Because of this, if users do not need to always be capturing, it is simple to stop and restart the capture later (just by ceasing to supply packets, then resuming later on). Payloads do not need to be uniform in size either, clients may specify payloads of whatever granularity is appropriate.

The primary downside of operating in synchronous mode is that two messages will need to be sent for every packet to be captured: one to inform the AudioCapturer of the region to capture into, and one to inform the user that the packet has been captured. This may end up increasing overhead and potentially complicating client designs.

Asynchronous mode has the advantage of requiring only half the messages; however, when operating in 'async' mode, AudioCapturer instances have no way of knowing if a user is processing the StreamPackets being sent in a timely fashion, and no way of automatically detecting an overflow condition. Users of 'async' mode should be careful to use a buffer large enough to ensure that they will be able to process their data before an AudioCapturer will be forced to overwrite it.

AddPayloadBuffer

Adds a payload buffer to the current buffer set associated with the connection. A StreamPacket struct references a payload buffer in the current set by ID using the StreamPacket.payload_buffer_id field.

A buffer with ID id must not be in the current set when this method is invoked, otherwise the service will close the connection.

Request

Name Type
id uint32
payload_buffer handle<vmo>

BindGainControl

Binds to the gain control for this AudioCapturer.

Request

Name Type
gain_control_request request<fuchsia.media.audio/GainControl>

CaptureAt

Explicitly specify a region of the shared payload buffer for the audio input to capture into.

Request

Name Type
payload_buffer_id uint32
payload_offset uint32
frames uint32

Response

Name Type
captured_packet StreamPacket

DiscardAllPackets

Request

Name Type

Response

Name Type

DiscardAllPacketsNoReply

Request

Name Type

GetReferenceClock

Retrieves the stream's reference clock. The returned handle will have READ, DUPLICATE and TRANSFER rights, and will refer to a zx::clock that is MONOTONIC and CONTINUOUS.

Request

Name Type

Response

Name Type
reference_clock handle<clock>

GetStreamType

Gets the currently configured stream type. Note: for an AudioCapturer which was just created and has not yet had its stream type explicitly set, this will retrieve the stream type -- at the time the AudioCapturer was created -- of the source (input or looped-back output) to which the AudioCapturer is bound.

Request

Name Type

Response

Name Type
stream_type StreamType

OnEndOfStream

Indicates that the stream has ended.

Response

Name Type

OnPacketProduced

Delivers a packet produced by the service. When the client is done with the payload memory, the client must call ReleasePacket to release the payload memory.

Response

Name Type
packet StreamPacket

ReleasePacket

Releases payload memory associated with a packet previously delivered via OnPacketProduced.

Request

Name Type
packet StreamPacket

RemovePayloadBuffer

Removes a payload buffer from the current buffer set associated with the connection.

A buffer with ID id must exist in the current set when this method is invoked, otherwise the service will close the connection.

Request

Name Type
id uint32

SetPcmStreamType

Sets the stream type of the stream to be delivered. Causes the source material to be reformatted/resampled if needed in order to produce the requested stream type. Note that the stream type may not be changed after the payload buffer has been established.

Request

Name Type
stream_type AudioStreamType

SetReferenceClock

Sets the reference clock that controls this capturer's playback rate. If the input parameter is a valid zx::clock, it must have READ, DUPLICATE, TRANSFER rights and refer to a clock that is both MONOTONIC and CONTINUOUS. If instead an invalid clock is passed (such as the uninitialized zx::clock()), this signals that the capturer wants to use the 'optimal' clock generated by AudioCore for this stream.

SetReferenceClock cannot be called after the capturer payload buffer has been added. It also cannot be called a second time (even before capture). If the client wants a reference clock that is initially CLOCK_MONOTONIC but may diverge at some later time, they should create a clone of the monotonic clock, set this as the stream's reference clock, then rate-adjust it subsequently as needed.

Request

Name Type
reference_clock handle<clock>?

SetUsage

Sets the usage of the capture stream. This may be changed on the fly, but packets in flight may be affected by the new usage. By default the Capturer is created with the FOREGROUND usage.

Request

Name Type
usage AudioCaptureUsage

StartAsyncCapture

Place the AudioCapturer into 'async' capture mode and begin producing packets of exactly 'frames_per_packet' frames each. The OnPacketProduced event (of StreamSink) will be used to inform the client of produced packets.

Request

Name Type
frames_per_packet uint32

StopAsyncCapture

Stop capturing in 'async' capture mode and (optionally) deliver a callback that may be used by the client if explicit synchronization is needed.

Request

Name Type

Response

Name Type

StopAsyncCaptureNoReply

Request

Name Type

AudioConsumer

Defined in fuchsia.media/audio_consumer.fidl

Interface for playing and controlling audio

BindVolumeControl

Binds to this AudioConsumer's volume control for control and notifications.

Request

Name Type
volume_control_request request<fuchsia.media.audio/VolumeControl>

CreateStreamSink

Creates a StreamSink for the consumer with the indicated properties.

Multiple stream sinks may be acquired using this method, but they are intended to be used sequentially rather than concurrently. The first stream sink that's created using this method is used as the sole source of packets incoming to the logical consumer until that stream sink is closed or the EndOfStream method is called on that sink. At that point, the second stream sink is used, and so on.

If an unsupported compression type is supplied, the stream_sink_request request will be closed with an epitaph value of ZX_ERR_INVALID_ARGS.

Request

Name Type
buffers vector<vmo>[16]
stream_type AudioStreamType
compression Compression?
stream_sink_request request<StreamSink>

OnEndOfStream

Indicates that the last packet prior to the end of the stream has been rendered.

Response

Name Type

SetRate

Requests to change the playback rate of the renderer. 1.0 means normal playback. Negative rates are not supported. The new rate will be reflected in the updated status. The default rate of any newly created StreamSink is 1.0.

Request

Name Type
rate float32

Start

Starts rendering as indicated by flags.

media_time indicates the packet timestamp that corresponds to reference_time. Typically, this is the timestamp of the first packet that will be rendered. If packets will be supplied with no timestamps, this value should be NO_TIMESTAMP. Passing a media_time value of NO_TIMESTAMP chooses the default media time, established as follows:

  1. When starting for the first time, the default media time is the timestamp on the first packet sent to the stream sink.
  2. When resuming after stop, the default media time is the media time at which the stream stopped.

reference_time is the monotonic system time at which rendering should be started. For supply-driven sources, this must be the time at which the first packet was (or will be) sent plus a lead time, which must be in the range indicated in the AudioConsumerStatus. For demand-driven sources, the client must ensure that the lead time requirement is met at the start time. Passing the default value of 0 for reference_time causes the consumer to choose a start time based on the availability of packets, the lead time requirements, and whether LOW_LATENCY has been specified.

The actual start time will be reflected in the updated status.

Request

Name Type
flags AudioConsumerStartFlags
reference_time zx/time
media_time int64

Stop

Stops rendering as soon as possible after this method is called. The actual stop time will be reflected in the updated status.

Request

Name Type

WatchStatus

Gets the current status of the consumer using the long get pattern. The consumer responds to this method when the status changes - initially with respect to the initial status value and thereafter with respect to the previously-reported status value.

Request

Name Type

Response

Name Type
status AudioConsumerStatus

AudioCore

Defined in fuchsia.media/audio_core.fidl

BindUsageVolumeControl

Binds to a volume control protocol for the given usage.

Request

Name Type
usage Usage
volume_control request<fuchsia.media.audio/VolumeControl>

CreateAudioCapturer

Create an AudioCapturer which either captures from the current default audio input device, or loops back from the current default audio output device, based on the value passed for the loopback flag.

Request

Name Type
loopback bool
audio_in_request request<AudioCapturer>

CreateAudioCapturerWithConfiguration

Create a new AudioCapturer according to the given requirements.

pcm_stream_type sets the stream type of the stream to be delivered. It causes the source material to be reformatted/resampled if needed in order to produce the requested stream type.

usage is used by Fuchsia to make decisions about user experience. See AudioCaptureUsage for more details.

configuration must be initialized to a variant, or no capturer can be created.

TODO(45240): Implement

Request

Name Type
stream_type AudioStreamType
configuration AudioCapturerConfiguration
audio_capturer_request request<AudioCapturer>

CreateAudioRenderer

Create an AudioRenderer which outputs audio to the default device.

Request

Name Type
audio_out_request request<AudioRenderer>

EnableDeviceSettings

Request

Name Type
enabled bool

LoadDefaults

Re-loads the platform policy configuration. Falls back to a default config if the platform does not provide a config.

Request

Name Type

ResetInteractions

Re-initializes the set of rules that are currently governing the interaction of streams in audio_core. The default behavior is 'NONE'.

Request

Name Type

SetCaptureUsageGain

Set the Usage gain applied to Capturers. By default, the gain for all capture usages is set to unity (0 dB).

Request

Name Type
usage AudioCaptureUsage
gain_db float32

SetInteraction

SetInteraction allows changing how audio_core handles interactions of multiple active streams simultaneously. If streams of Usage active are processing audio, and streams of Usage affected are as well, the Behavior specified will be applied to the streams of Usage affected.

Request

Name Type
active Usage
affected Usage
behavior Behavior

SetRenderUsageGain

Set the Usage gain applied to Renderers. By default, the gain for all render usages is set to unity (0 dB).

Request

Name Type
usage AudioRenderUsage
gain_db float32

SetSystemGain

System Gain and Mute

Fuchsia clients control the volume of individual audio streams via the fuchsia.media.audio.GainControl protocol. System Gain and Mute affect all audio output, and are controlled with methods that use the same concepts as GainControl, namely: independent gain and mute, with change notifications. Setting System Mute to true leads to the same outcome as setting System Gain to MUTED_GAIN_DB: all audio output across the system is silenced.

Sets the systemwide gain in decibels. gain_db values are clamped to the range -160 dB to 0 dB, inclusive. This setting is applied to all audio output devices. Audio input devices are unaffected. Does not affect System Mute.

Request

Name Type
gain_db float32

SetSystemMute

Sets/clears the systemwide 'Mute' state for audio output devices. Audio input devices are unaffected. Changes to the System Mute state do not affect the value of System Gain.

Request

Name Type
muted bool

SystemGainMuteChanged

Provides current values for systemwide Gain and Mute. When a client connects to AudioCore, the system immediately sends that client a SystemGainMuteChanged event with the current system Gain|Mute settings. Subsequent events will be sent when these Gain|Mute values change.

Response

Name Type
gain_db float32
muted bool

AudioDeviceEnumerator

Defined in fuchsia.media/audio_device_enumerator.fidl

AddDeviceByChannel

Request

Name Type
device_channel handle<channel>
device_name string[256]
is_input bool

AddDeviceByChannel2

Request

Name Type
device_name string[256]
is_input bool
channel fuchsia.hardware.audio/StreamConfig

GetDefaultInputDevice

Default Device

Fetch the device ID of the current default input or output device, or ZX_KOID_INVALID if no such device exists.

Request

Name Type

Response

Name Type
device_token uint64

GetDefaultOutputDevice

Request

Name Type

Response

Name Type
device_token uint64

GetDeviceGain

Gain/Mute/AGC control

Note that each of these operations requires a device_token in order to target the proper input/output.

The Get command returns the device_token of the device whose gain is being reported, or ZX_KOID_INVALID in the case that the requested device_token was invalid or the device had been removed from the system before the Get command could be processed.

Set commands which are given an invalid device token are ignored and have no effect on the system. In addition, users do not need to control all of the gain settings for an audio device with each call. Only the settings with a corresponding flag set in the set_flags parameter will be affected. For example, passing SetAudioGainFlag_MuteValid will cause a SetDeviceGain call to care only about the mute setting in the gain_info structure, while passing (SetAudioGainFlag_GainValid | SetAudioGainFlag_MuteValid) will cause both the mute and the gain status to be changed simultaneously.
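The set_flags masking behaves like a field-by-field merge; a Python sketch follows. The flag bit values here are assumed for illustration (the authoritative SetAudioGainFlag constants are defined by fuchsia.media):

```python
# Assumed bit positions, for illustration only.
SET_AUDIO_GAIN_FLAG_GAIN_VALID = 1 << 0
SET_AUDIO_GAIN_FLAG_MUTE_VALID = 1 << 1

def apply_gain_set(current, requested, set_flags):
    """Merge a SetDeviceGain request into the current gain state.

    Only the fields whose flag is present in set_flags are taken from the
    request; all other fields keep their current values.
    """
    merged = dict(current)
    if set_flags & SET_AUDIO_GAIN_FLAG_GAIN_VALID:
        merged['gain_db'] = requested['gain_db']
    if set_flags & SET_AUDIO_GAIN_FLAG_MUTE_VALID:
        merged['muted'] = requested['muted']
    return merged
```

For example, passing only the mute flag changes the mute state while leaving the current gain untouched.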

Request

Name Type
device_token uint64

Response

Name Type
device_token uint64
gain_info AudioGainInfo

GetDevices

Obtain the list of currently active audio devices.

Request

Name Type

Response

Name Type
devices vector<AudioDeviceInfo>

OnDefaultDeviceChanged

Response

Name Type
old_default_token uint64
new_default_token uint64

OnDeviceAdded

Events sent when devices are added or removed, or when properties of a device change.

Response

Name Type
device AudioDeviceInfo

OnDeviceGainChanged

Response

Name Type
device_token uint64
gain_info AudioGainInfo

OnDeviceRemoved

Response

Name Type
device_token uint64

SetDeviceGain

Request

Name Type
device_token uint64
gain_info AudioGainInfo
valid_flags AudioGainValidFlags

AudioRenderer

Defined in fuchsia.media/audio_renderer.fidl

AudioRenderers can be in one of two states at any time: configurable or operational. A renderer is considered operational whenever it has packets queued to be rendered; otherwise it is configurable. Once an AudioRenderer enters the operational state, calls to "configuring" methods are disallowed and will cause the audio service to disconnect the client's connection. The following are considered configuring methods: AddPayloadBuffer, SetPcmStreamType, SetStreamType, SetPtsUnits, SetPtsContinuityThreshold.

If an AudioRenderer must be reconfigured, the client must ensure that no packets are still enqueued when these "configuring" methods are called. Thus it is best practice to call DiscardAllPackets on the AudioRenderer (and ideally Stop before DiscardAllPackets), prior to reconfiguring the renderer.

AddPayloadBuffer

Adds a payload buffer to the current buffer set associated with the connection. A StreamPacket struct references a payload buffer in the current set by ID using the StreamPacket.payload_buffer_id field.

A buffer with ID id must not be in the current set when this method is invoked, otherwise the service will close the connection.

Request

Name Type
id uint32
payload_buffer handle<vmo>

BindGainControl

Binds to the gain control for this AudioRenderer.

Request

Name Type
gain_control_request request<fuchsia.media.audio/GainControl>

DiscardAllPackets

Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released. The response is sent after all packets have been released.

Request

Name Type

Response

Name Type

DiscardAllPacketsNoReply

Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released.

Request

Name Type

EnableMinLeadTimeEvents

Enable or disable notifications about changes to the minimum clock lead time (in nanoseconds) for this AudioRenderer. Calling this method with 'enabled' set to true will trigger an immediate OnMinLeadTimeChanged event with the current minimum lead time for the AudioRenderer. If the value changes, an OnMinLeadTimeChanged event will be raised with the new value. This behavior will continue until the user calls EnableMinLeadTimeEvents(false).

The minimum clock lead time is the amount of time ahead of the reference clock's understanding of "now" that packets need to arrive (relative to the playback clock transformation) in order for the mixer to be able to mix the packet. For example...

++ Let the PTS of packet X be P(X)
++ Let the function which transforms PTS -> RefClock be R(p) (this function is determined by the call to Play(...))
++ Let the minimum lead time be MLT

If R(P(X)) < RefClock.Now() + MLT, then the packet is late, and some (or all) of the packet's payload will need to be skipped in order to present the packet at the scheduled time.
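That lateness rule translates directly into code; an illustrative Python sketch, where pts_to_ref_clock stands in for the R(p) transformation established by Play (the function names are hypothetical):

```python
def packet_is_late(packet_pts, pts_to_ref_clock, now_ns, min_lead_time_ns):
    """Apply the rule R(P(X)) < RefClock.Now() + MLT.

    packet_pts:       presentation timestamp of the packet, in PTS units.
    pts_to_ref_clock: callable mapping PTS -> reference-clock nanoseconds.
    now_ns:           the reference clock's current reading, in nanoseconds.
    min_lead_time_ns: the renderer's minimum clock lead time, in nanoseconds.
    """
    return pts_to_ref_clock(packet_pts) < now_ns + min_lead_time_ns
```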

Request

Name Type
enabled bool

EndOfStream

Indicates the stream has ended. The precise semantics of this method are determined by the inheriting interface.

Request

Name Type

GetMinLeadTime

While it is possible to call GetMinLeadTime before SetPcmStreamType, there's little reason to do so. This is because lead time is a function of format/rate, so lead time will be recalculated after SetPcmStreamType. If min lead time events are enabled before SetPcmStreamType (with EnableMinLeadTimeEvents(true)), then an event will be generated in response to SetPcmStreamType.

Request

Name Type

Response

Name Type
min_lead_time_nsec int64

GetReferenceClock

Retrieves the stream's reference clock. The returned handle will have READ, DUPLICATE and TRANSFER rights, and will refer to a zx::clock that is MONOTONIC and CONTINUOUS.

Request

Name Type

Response

Name Type
reference_clock handle<clock>

OnMinLeadTimeChanged

Response

Name Type
min_lead_time_nsec int64

Pause

Immediately put the AudioRenderer into the paused state and then report the relationship between the media and reference timelines which was established (if requested).

Request

Name Type

Response

Name Type
reference_time int64
media_time int64

PauseNoReply

Request

Name Type

Play

Immediately put the AudioRenderer into a playing state. Start the advance of the media timeline, using specific values provided by the caller (or default values if not specified). In an optional callback, return the timestamp values ultimately used -- these set the ongoing relationship between the media and reference timelines (i.e., how to translate between the domain of presentation timestamps, and the realm of local system time).

Local system time is specified in units of nanoseconds; media_time is specified in the units defined by the user in the SetPtsUnits function, or nanoseconds if SetPtsUnits is not called.

The act of placing an AudioRenderer into the playback state establishes a relationship between 1) the user-defined media (or presentation) timeline for this particular AudioRenderer, and 2) the real-world system reference timeline. To communicate how to translate between timelines, the Play() callback provides an equivalent timestamp in each time domain. The first value ('reference_time') is given in terms of this renderer's reference clock; the second value ('media_time') is what media instant exactly corresponds to that local time. Restated, the frame at 'media_time' in the audio stream should be presented at 'reference_time' according to the reference clock.

Note: on calling this API, media_time immediately starts advancing. It is possible (if uncommon) for a caller to specify a system time that is far in the past, or far into the future. This, along with the specified media time, is simply used to determine what media time corresponds to 'now', and THAT media time is then intersected with presentation timestamps of packets already submitted, to determine which media frames should be presented next.

With the corresponding reference_time and media_time values, a user can translate arbitrary time values from one timeline into the other. After calling SetPtsUnits(pts_per_sec_numerator, pts_per_sec_denominator) and given the 'ref_start' and 'media_start' values from Play, then for any 'ref_time':

media_time = ( (ref_time - ref_start) / 1e9 * (pts_per_sec_numerator / pts_per_sec_denominator) ) + media_start

Conversely, for any presentation timestamp 'media_time':

ref_time = ( (media_time - media_start) * (pts_per_sec_denominator / pts_per_sec_numerator) * 1e9 ) + ref_start
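As a quick sanity check of these two formulas (a hypothetical Python sketch, not part of the API), using mSec PTS units (numerator 1000, denominator 1):

```python
# Hypothetical sketch (not part of the API) checking the two translation
# formulas above, using mSec PTS units: pts_per_sec = 1000/1.
def media_from_ref(ref_time, ref_start, media_start, num, den):
    """Translate a reference-clock time (ns) into a media (PTS) time."""
    return (ref_time - ref_start) / 1e9 * (num / den) + media_start

def ref_from_media(media_time, ref_start, media_start, num, den):
    """Translate a media (PTS) time into a reference-clock time (ns)."""
    return (media_time - media_start) * (den / num) * 1e9 + ref_start

ref_start, media_start = 5_000_000_000, 0
# 250 ms after ref_start corresponds to media_start + 250 mSec ticks:
assert media_from_ref(ref_start + 250_000_000, ref_start, media_start, 1000, 1) == 250
assert ref_from_media(250, ref_start, media_start, 1000, 1) == ref_start + 250_000_000
```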

Users, depending on their use case, may optionally choose not to specify one or both of these timestamps. A timestamp may be omitted by supplying the special value 'NO_TIMESTAMP'. The AudioRenderer automatically deduces any omitted timestamp value using the following rules:

Reference Time If 'reference_time' is omitted, the AudioRenderer will select a "safe" reference time to begin presentation, based on the minimum lead times for the output devices that are currently bound to this AudioRenderer. For example, if an AudioRenderer is bound to an internal audio output requiring at least 3 mSec of lead time, and an HDMI output requiring at least 75 mSec of lead time, the AudioRenderer might (if 'reference_time' is omitted) select a reference time 80 mSec from now.

Media Time If media_time is omitted, the AudioRenderer will select one of two values.

  • If the AudioRenderer is resuming from the paused state, and packets have not been discarded since being paused, then the AudioRenderer will use a media_time corresponding to the instant at which the presentation became paused.
  • If the AudioRenderer is being placed into a playing state for the first time following startup or a 'discard packets' operation, the initial media_time will be set to the PTS of the first payload in the pending packet queue. If the pending queue is empty, initial media_time will be set to zero.

Return Value When requested, the AudioRenderer will return the 'reference_time' and 'media_time' which were selected and used (whether they were explicitly specified or not) in the return value of the play call.

Examples

  1. A user has queued some audio using SendPacket and simply wishes playback to start as soon as possible. The user may call Play without providing explicit timestamps -- Play(NO_TIMESTAMP, NO_TIMESTAMP).

  2. A user has queued some audio using SendPacket, and wishes to start playback at a specified 'reference_time', in sync with some other media stream, either initially or after discarding packets. The user would call Play(reference_time, NO_TIMESTAMP).

  3. A user has queued some audio using SendPacket. The first of these packets has a PTS of zero, and the user wishes playback to begin as soon as possible, but wishes to skip all of the audio content between PTS 0 and PTS 'media_time'. The user would call Play(NO_TIMESTAMP, media_time).

  4. A user has queued some audio using SendPacket and wants to present this media in sync with another player on a different device. The coordinator of the group of distributed players sends an explicit message to each player telling them to begin presentation of audio at PTS 'media_time', at the time (based on the group's shared reference clock) 'reference_time'. Here the user would call Play(reference_time, media_time).
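The deduction rules above can be modeled with a small sketch. This is a hypothetical illustration of the rules, not the AudioRenderer's actual implementation; Python's None stands in for the NO_TIMESTAMP sentinel:

```python
# Hypothetical model of how Play() deduces omitted timestamps; None stands in
# for the NO_TIMESTAMP sentinel. Illustration only, not the real implementation.
def resolve_play_args(reference_time, media_time, now_ns, max_lead_time_ns,
                      paused_media_time=None, pending_queue_pts=()):
    if reference_time is None:
        # Pick a "safe" start based on the largest device lead time.
        reference_time = now_ns + max_lead_time_ns
    if media_time is None:
        if paused_media_time is not None:
            media_time = paused_media_time     # resuming from pause
        elif pending_queue_pts:
            media_time = pending_queue_pts[0]  # PTS of first queued payload
        else:
            media_time = 0                     # empty pending queue
    return reference_time, media_time

# Example 1 above: Play(NO_TIMESTAMP, NO_TIMESTAMP) with an 80 mSec lead time
# and a pending queue whose first packet has PTS 0.
assert resolve_play_args(None, None, 1_000_000, 80_000_000,
                         pending_queue_pts=[0]) == (81_000_000, 0)
```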

Request

NameType
reference_time int64
media_time int64

Response

NameType
reference_time int64
media_time int64

PlayNoReply

Request

NameType
reference_time int64
media_time int64

RemovePayloadBuffer

Removes a payload buffer from the current buffer set associated with the connection.

A buffer with ID id must exist in the current set when this method is invoked, otherwise the service will close the connection.

Request

NameType
id uint32

SendPacket

Sends a packet to the service. The response is sent when the service is done with the associated payload memory.

packet must be valid for the current buffer set, otherwise the service will close the connection.

Request

NameType
packet StreamPacket

Response

NameType

SendPacketNoReply

Sends a packet to the service. This interface doesn't define how the client knows when the sink is done with the associated payload memory. The inheriting interface must define that.

packet must be valid for the current buffer set, otherwise the service will close the connection.

Request

NameType
packet StreamPacket

SetPcmStreamType

Sets the type of the stream to be delivered by the client. Using this method implies that the stream encoding is AUDIO_ENCODING_LPCM.

This must be called before Play or PlayNoReply. After a call to SetPcmStreamType, the client must then send an AddPayloadBuffer request, then the various StreamSink methods such as SendPacket/SendPacketNoReply.

Request

NameType
type AudioStreamType

SetPtsContinuityThreshold

Sets the maximum threshold (in seconds) between explicit user-provided PTS and expected PTS (determined using interpolation). Beyond this threshold, a stream is no longer considered 'continuous' by the renderer.

Defaults to an interval of half a PTS 'tick', using the currently-defined PTS units. Most users should not need to change this value from its default.

Example: A user is playing back 48 kHz audio from a container, which also contains video that must be synchronized with the audio. The timestamps are provided explicitly per packet by the container, and expressed in mSec units. This means that a single tick of the media timeline (1 mSec) represents exactly 48 frames of audio. The application in this scenario delivers packets of audio to the AudioRenderer, each with exactly 470 frames of audio, and each with an explicit timestamp set to the best possible representation of the presentation time (given this media clock's resolution). So, starting from zero, the timestamps would be:

[ 0, 10, 20, 29, 39, 49, 59, 69, 78, 88, ... ]

In this example, attempting to use the presentation time to compute the starting frame number of the audio in the packet would be wrong the majority of the time. The first timestamp is correct (by definition), but it will be 24 packets before the timestamps and frame numbers come back into alignment (the 24th packet would start with the 11280th audio frame and have a PTS of exactly 235).

One way to fix this situation is to set the PTS continuity threshold (henceforth, CT) for the stream to be equal to 1/2 of the time taken by the number of frames contained within a single tick of the media clock, rounded up. In this scenario, that would be 24.0 frames of audio, or 500 uSec. Any packets whose expected PTS was within +/-CT frames of the explicitly provided PTS would be considered to be a continuation of the previous frame of audio. For this example, calling 'SetPtsContinuityThreshold(0.0005)' would work well.
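The timestamp sequence in this example can be reproduced with a few lines of arithmetic (an illustrative sketch; the constants are those from the example above):

```python
# Illustrative sketch reproducing the example's timestamps: 48 kHz audio,
# mSec PTS units (48 frames per tick), 470-frame packets.
FRAME_RATE = 48_000
PTS_PER_SEC = 1_000
FRAMES_PER_PACKET = 470

def packet_pts(n):
    """Best mSec representation of packet n's presentation time."""
    return round(n * FRAMES_PER_PACKET * PTS_PER_SEC / FRAME_RATE)

print([packet_pts(n) for n in range(10)])
# -> [0, 10, 20, 29, 39, 49, 59, 69, 78, 88]

# Frames and PTS re-align at the 24th packet (24 * 470 = 11280 frames,
# exactly 235 mSec), and half a tick's worth of frames (24 frames at
# 48 kHz) is the 500 uSec threshold suggested above.
assert packet_pts(24) == 235
assert 24 / FRAME_RATE == 0.0005
```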

Other possible uses: Users who are scheduling audio explicitly, relative to a clock which has not been configured as the reference clock, can use this value to control the maximum acceptable synchronization error before a discontinuity is introduced. E.g., if a user is scheduling audio based on a recovered common media clock, and has not published that clock as the reference clock, and they set the CT to 20mSec, then up to 20mSec of drift error can accumulate before the AudioRenderer deliberately inserts a presentation discontinuity to account for the error.

Users who need to deal with a container whose timestamps may be even less correct than +/- 1/2 of a PTS tick may set this value to something larger. This should be the maximum level of inaccuracy present in the container timestamps, if known. Failing that, it could be set to the maximum tolerable level of drift error before absolute timestamps are explicitly obeyed. Finally, a user could set this number to a very large value (86400.0 seconds, for example) to effectively cause all timestamps to be ignored after the first, thus treating all audio as continuous with previously delivered packets. Conversely, users who wish to always explicitly schedule their audio packets exactly may specify a CT of 0.

Note: explicitly specifying high-frequency PTS units reduces the default continuity threshold accordingly. Internally, this threshold is stored as an integer count of 1/8192 subframes. The default threshold is computed as follows:

RoundUp((AudioFPS/PTSTicksPerSec) * 4096) / (AudioFPS * 8192)

For this reason, specifying PTS units with a frequency greater than 8192x the frame rate (or NOT calling SetPtsUnits, which leaves the default PTS unit of 1 nanosecond) will result in a default continuity threshold of zero.
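Under this storage scheme, the default threshold for a 48 kHz stream with mSec PTS units works out to half a PTS tick. A hypothetical sketch of the computation:

```python
import math

# Hypothetical sketch of the default-threshold formula above. The threshold
# is stored as an integer count of 1/8192 subframes.
def default_threshold_seconds(audio_fps, pts_ticks_per_sec):
    # RoundUp((AudioFPS/PTSTicksPerSec) * 4096) / (AudioFPS * 8192)
    subframes = math.ceil(audio_fps / pts_ticks_per_sec * 4096)
    return subframes / (audio_fps * 8192)

# 48 kHz audio with mSec PTS units: half a tick = 24 frames = 500 uSec.
assert default_threshold_seconds(48_000, 1_000) == 0.0005
```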

Request

NameType
threshold_seconds float32

SetPtsUnits

Sets the units used by the presentation (media) timeline. By default, PTS units are nanoseconds (as if this were called with numerator of 1e9 and denominator of 1).

Request

NameType
tick_per_second_numerator uint32
tick_per_second_denominator uint32

SetReferenceClock

Sets the reference clock that controls this renderer's playback rate. If the input parameter is a valid zx::clock, it must have READ, DUPLICATE, TRANSFER rights and refer to a clock that is both MONOTONIC and CONTINUOUS. If instead an invalid clock is passed (such as the uninitialized zx::clock()), this signals that the renderer wants to use the 'optimal' clock generated by AudioCore for this stream.

SetReferenceClock cannot be called once SetPcmStreamType is called. It also cannot be called a second time (even if the renderer format has not yet been set). If a client wants a reference clock that is initially CLOCK_MONOTONIC but may diverge at some later time, they should create a clone of the monotonic clock, set this as the stream's reference clock, then rate-adjust it subsequently as needed.

Request

NameType
reference_clock handle<clock>?

SetUsage

Sets the usage of the render stream. This method may not be called after SetPcmStreamType is called. The default usage is MEDIA.

Request

NameType
usage AudioRenderUsage

ProfileProvider

Defined in fuchsia.media/profile_provider.fidl

RegisterHandler

Register a thread as a media thread. This notifies the media subsystem that this thread should have an elevated scheduling profile applied to it in order to meet audio or video deadlines.

name is the name of the component requesting the profile. period is the suggested interval to be scheduled at.

Returns the period and capacity that were applied, if a deadline profile was selected. Returns 0 if no deadline profile was selected.

Request

NameType
thread_handle handle<thread>
name string[64]
period zx/duration

Response

NameType
period zx/duration
capacity zx/duration

RegisterHandlerWithCapacity

Register a thread as a media thread. This notifies the media subsystem that this thread should have an elevated scheduling profile applied to it in order to meet audio or video deadlines.

name is the name of the component requesting the profile. period is the suggested interval to be scheduled at. If period is 0, then the server will select an appropriate interval in an implementation-dependent way. capacity is the suggested CPU capacity to reserve for thread_handle. For example, with a 0.1 capacity, 10% of a core will be reserved for the thread. For heterogeneous systems, this capacity should be planned against the highest-performance processor(s) in the system.
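The capacity arithmetic can be illustrated with a hypothetical helper (not part of this protocol):

```python
# Hypothetical helper (not part of this protocol) illustrating the capacity
# arithmetic: capacity is a fraction of one core reserved each period.
def reserved_ns(period_ns, capacity):
    return int(period_ns * capacity)

# A 10 mSec period with 0.1 capacity reserves 1 mSec of CPU per period,
# i.e. 10% of a core.
assert reserved_ns(10_000_000, 0.1) == 1_000_000
```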

Returns the period and capacity that were applied, if a deadline profile was selected. Returns 0 if no deadline profile was selected.

Request

NameType
thread_handle handle<thread>
name string[64]
period zx/duration
capacity float32

Response

NameType
period zx/duration
capacity zx/duration

SessionAudioConsumerFactory

Defined in fuchsia.media/audio_consumer.fidl

Interface for creating audio consumers bound to a session.

CreateAudioConsumer

Creates an AudioConsumer which is an interface for playing audio bound to a particular session. session_id is the identifier of the media session for which audio is to be rendered.

Request

NameType
session_id uint64
audio_consumer_request request<AudioConsumer>

SimpleStreamSink

Defined in fuchsia.media/stream.fidl

A StreamSink that uses StreamBufferSet for buffer management.

AddPayloadBuffer

Adds a payload buffer to the current buffer set associated with the connection. A StreamPacket struct references a payload buffer in the current set by ID using the StreamPacket.payload_buffer_id field.

A buffer with ID id must not be in the current set when this method is invoked, otherwise the service will close the connection.

Request

NameType
id uint32
payload_buffer handle<vmo>

DiscardAllPackets

Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released. The response is sent after all packets have been released.

Request

NameType

Response

NameType

DiscardAllPacketsNoReply

Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released.

Request

NameType

EndOfStream

Indicates the stream has ended. The precise semantics of this method are determined by the inheriting interface.

Request

NameType

RemovePayloadBuffer

Removes a payload buffer from the current buffer set associated with the connection.

A buffer with ID id must exist in the current set when this method is invoked, otherwise the service will close the connection.

Request

NameType
id uint32

SendPacket

Sends a packet to the service. The response is sent when the service is done with the associated payload memory.

packet must be valid for the current buffer set, otherwise the service will close the connection.

Request

NameType
packet StreamPacket

Response

NameType

SendPacketNoReply

Sends a packet to the service. This interface doesn't define how the client knows when the sink is done with the associated payload memory. The inheriting interface must define that.

packet must be valid for the current buffer set, otherwise the service will close the connection.

Request

NameType
packet StreamPacket

StreamBufferSet

Defined in fuchsia.media/stream.fidl

Manages a set of payload buffers for a stream. This interface is typically inherited along with StreamSink or StreamSource to enable the transport of elementary streams between clients and services.

AddPayloadBuffer

Adds a payload buffer to the current buffer set associated with the connection. A StreamPacket struct references a payload buffer in the current set by ID using the StreamPacket.payload_buffer_id field.

A buffer with ID id must not be in the current set when this method is invoked, otherwise the service will close the connection.

Request

NameType
id uint32
payload_buffer handle<vmo>

RemovePayloadBuffer

Removes a payload buffer from the current buffer set associated with the connection.

A buffer with ID id must exist in the current set when this method is invoked, otherwise the service will close the connection.

Request

NameType
id uint32

StreamProcessor

Defined in fuchsia.media/stream_processor.fidl

Overview of operation:

  1. Create
  • create via CodecFactory - see CodecFactory
  • create via LicenseSession - see LicenseSession
  2. Get input constraints
  • OnInputConstraints() - sent unsolicited by stream processor shortly after stream processor creation.
  3. Provide input buffers
  • SetInputBufferPartialSettings() (current way)
  • SetInputBufferSettings() / AddInputBuffer() (old deprecated way)
  4. Deliver input data
  • QueueInputPacket() + OnFreeInputPacket(), for as long as it takes, possibly working through all input packets repeatedly before...
  5. Get output constraints and format
  • OnOutputConstraints()
  • This is not sent until after at least one QueueInput* message is sent by the client, even if the underlying processor behind the StreamProcessor doesn't fundamentally need any input data to determine its output constraints. This server behavior prevents clients taking an incorrect dependency on the output constraints showing up before input is delivered.
  • A client must tolerate this arriving as late as after substantial input data has been delivered, including lots of input packet recycling via OnFreeInputPacket().
  • This message can arrive more than once before the first output data.
  6. Provide output buffers
  • SetOutputBufferPartialSettings() / CompleteOutputBufferPartialSettings() (current way)
  • SetOutputBufferSettings() / AddOutputBuffer() (old deprecated way)
  7. Data flows, with optional EndOfStream
  • OnOutputPacket() / RecycleOutputPacket() / QueueInputPacket() / OnFreeInputPacket() / QueueInputEndOfStream() / OnOutputEndOfStream()
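The ordering constraints in the overview above can be summarized with a hypothetical message trace (a model for illustration, not real FIDL bindings):

```python
# Hypothetical message trace following the overview above; a model for
# illustration, not real FIDL bindings.
trace = [
    "OnInputConstraints",                  # unsolicited, shortly after creation
    "SetInputBufferPartialSettings",       # provide input buffers
    "QueueInputPacket",                    # deliver input data
    "OnFreeInputPacket",
    "OnOutputConstraints",                 # only after some QueueInput* message
    "SetOutputBufferPartialSettings",      # provide output buffers
    "CompleteOutputBufferPartialSettings",
    "OnOutputFormat",                      # precedes the first output packet
    "OnOutputPacket",                      # data flows
    "QueueInputEndOfStream",
    "OnOutputEndOfStream",
]

# The server holds OnOutputConstraints() until the client queues input:
assert trace.index("OnOutputConstraints") > trace.index("QueueInputPacket")
# OnOutputFormat() always precedes the first output packet:
assert trace.index("OnOutputFormat") < trace.index("OnOutputPacket")
```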

Semi-trusted StreamProcessor server - SW decoders run in an isolate (with very few capabilities) just in case the decoding SW has a vulnerability which could be used to take over the StreamProcessor server. Clients that use decoders to process streams from separate security contexts need, to a greater extent than with some other interfaces, to protect themselves against invalid server behavior, such as a double-free of a packet_index. Having fed in compressed data from one security context, don't place too much trust in a single StreamProcessor instance not to mix data among any of the buffers that the StreamProcessor server has ever been told about. Instead, create separate StreamProcessor instances for use by security-separate client-side contexts. While the picture for HW-based decoders looks somewhat different and is out of scope for this paragraph, the client should always use separate StreamProcessor instances for security-separate client-side contexts.

Descriptions of actions taken by methods of this protocol and the states of things are given as if the methods are synchronously executed by the stream processor server, but in reality, as is typical of FIDL interfaces, the message processing is async. The states described are to be read as the state from the client's point of view unless otherwise stated. Events coming back from the server are of course delivered async, and a client that processes more than one stream per StreamProcessor instance needs to care whether a given event is from the current stream vs. some older soon-to-be-gone stream.

The Sync() method's main purpose is to enable the client to robustly prevent having both old and new buffers allocated in the system at the same time, since media buffers can be quite large. The Sync() method achieves this by only delivering its response when all previous calls to the StreamProcessor protocol have actually taken effect in the StreamControl ordering domain. Sync() can also be used to wait for the stream processor server to catch up if there's a possibility that a client might otherwise get too far ahead of the StreamProcessor server, for example by requesting creation of a large number of streams in a row. It can also be used during debugging to ensure that a stream processor server hasn't gotten stuck. Calling Sync() is entirely optional and never required for correctness - only potentially required to de-overlap resource usage.

It's possible to re-use a StreamProcessor instance for another stream, and doing so can sometimes skip over re-allocation of buffers. This can be a useful thing to do for cases like seeking to a new location - at the StreamProcessor interface that can look like switching to a new stream.

AddInputBuffer

The client is required to add all the input buffers before sending any message that starts a new stream, or the stream processor will close the StreamProcessor channel.

When the last buffer is added with this message, all the input packets effectively jump from non-existent to free with the client. The StreamProcessor will not generate an OnFreeInputPacket() for each new input packet. The client can immediately start sending QueueInputPacket() after sending the last AddInputBuffer().

Request

NameType
buffer StreamBuffer

AddOutputBuffer

Request

NameType
buffer StreamBuffer

CloseCurrentStream

This "closes" the current stream, leaving no current stream. In addition, this message can optionally release input buffers or output buffers.

If there has never been any active stream, the stream_lifetime_ordinal must be zero or the server will close the channel. If there has been an active stream, the stream_lifetime_ordinal must be the most recent active stream whether that stream is still active or not. Else the server will close the channel.

Sending this message multiple times without any new active stream in between is not an error. This allows a client to use this message to close the current stream and stop wasting processing power on a stream the user no longer cares about, then later decide that buffers should be released and send this message again with release_input_buffers and/or release_output_buffers true to get the buffers released. That matters to clients that want to avoid overlap in resource usage between old buffers and new buffers (not all clients do).

See also Sync().

Request

NameType
stream_lifetime_ordinal uint64
release_input_buffers bool
release_output_buffers bool

CompleteOutputBufferPartialSettings

After SetOutputBufferPartialSettings(), the server won't send OnOutputConstraints(), OnOutputFormat(), OnOutputPacket(), or OnOutputEndOfStream() until after the client sends CompleteOutputBufferPartialSettings().

Some clients may be able to send CompleteOutputBufferPartialSettings() immediately after SetOutputBufferPartialSettings() - in that case the client needs to be prepared to receive output without knowing the buffer count or packet count yet - such clients may internally delay processing the received output until the client has heard from sysmem (which is when the client will learn the buffer count and packet count).

Other clients may first wait for sysmem to allocate, prepare to receive output, and then send CompleteOutputBufferPartialSettings().

Request

NameType
buffer_lifetime_ordinal uint64

EnableOnStreamFailed

Permit the server to use OnStreamFailed() instead of the server just closing the whole StreamProcessor channel on stream failure.

If the server hasn't seen this message by the time a stream fails, the server will close the StreamProcessor channel instead of sending OnStreamFailed().

Request

NameType

FlushEndOfStreamAndCloseStream

This message is optional.

This message is only valid after QueueInputEndOfStream() for this stream. The stream_lifetime_ordinal input parameter must match the stream_lifetime_ordinal of the QueueInputEndOfStream(), else the server will close the channel.

A client can use this message to flush through (not discard) the last input data of a stream so that the stream processor server generates corresponding output data for all the input data before the server moves on to the next stream, without forcing the client to wait for OnOutputEndOfStream() before queueing data of another stream.

The difference between QueueInputEndOfStream() and FlushEndOfStreamAndCloseStream(): QueueInputEndOfStream() is a promise from the client that there will not be any more input data for the stream (and this info is needed by some stream processors for the stream processor to ever emit the very last output data). The QueueInputEndOfStream() having been sent doesn't prevent the client from later completely discarding the rest of the current stream by closing the current stream (with or without a stream switch). In contrast, FlushEndOfStreamAndCloseStream() is a request from the client that all the previously-queued input data be processed including the logical "EndOfStream" showing up as OnOutputEndOfStream() (in success case) before moving on to any newer stream - this essentially changes the close-stream handling from discard to flush-through for this stream only.

A client using this message can start providing input data for a new stream without that causing discard of old stream data. That's the purpose of this message - to allow a client to flush through (not discard) the old stream's last data (instead of the default when closing or switching streams which is discard).

Because the old stream is not done processing yet and the old stream's data is not being discarded, the client must be prepared to continue to process OnOutputConstraints() messages until the stream_lifetime_ordinal is done. The client will know the stream_lifetime_ordinal is done when OnOutputEndOfStream(), OnStreamFailed(), or the StreamProcessor channel closes.

Request

NameType
stream_lifetime_ordinal uint64

OnFreeInputPacket

The server sends this message when the stream processor is done consuming this packet and the packet can be re-filled by the client.

This is not sent for all packets when a new buffer_lifetime_ordinal starts as in that case all the packets are initially free with the client.

After receiving the available input buffer via this event, the stream processor client can later call QueueInputPacket() with appropriate offset and length set.

Response

NameType
free_input_packet PacketHeader

OnInputConstraints

The server sends this shortly after StreamProcessor creation to indicate input buffer constraints. The "min" and "max" input constraints don't change for the life of the StreamProcessor.

The "max" values for buffer size and count are large enough to support the most demanding format the server supports on input. The "recommended" values should be workable for use with the input FormatDetails conveyed during StreamProcessor creation. The "recommended" values are not necessarily suitable if the client uses QueueInputFormatDetails() to change the input format. In that case it's up to the client to determine suitable values, either by creating a new StreamProcessor instance instead, or knowing suitable values outside the scope of this protocol.

See comments on StreamBufferConstraints.

This message is guaranteed to be sent unsolicited to the StreamProcessor client during or shortly after StreamProcessor creation. Clients should not depend on this being the very first message to arrive at the client.

The "min" and "max" input constraints are guaranteed not to change for a given StreamProcessor instance. The "recommended" values may effectively change when the server processes QueueInputFormatDetails(). There is not any way in the protocol short of creating a new StreamProcessor instance for the client to get those new "recommended" values.

Response

NameType
input_constraints StreamBufferConstraints

OnOutputConstraints

This event informs the client of new output constraints.

This message is ordered with respect to other output (such as output packets, output format, output end-of-stream).

Before the first OnOutputPacket() of a stream, the server guarantees that at least one OnOutputConstraints() and exactly one OnOutputFormat() will be sent. The server might not set buffer_constraints_action_required to true in OnOutputConstraints() if the buffer config is already suitable for the stream (buffer_constraints_action_required false means the buffer config is already fine). The client must tolerate multiple OnOutputConstraints() messages (and 1 OnOutputFormat() message) before the first output packet. As long as the client hasn't moved to a new stream, the server won't send another OnOutputConstraints() until after the client has configured output buffers.

This message can be sent mid-stream by a server. If buffer_constraints_action_required false, the message is safe to ignore, but a client may choose to stash the new constraints for later use the next time the client wants to unilaterally re-configure buffers (when allowed). If later the server needs the output config to change, the server may send a new OnOutputConstraints() with buffer_constraints_action_required true.

On buffer_constraints_action_required true, a client that does not wish to fully handle mid-stream output buffer config changes should either give up completely on the processing, or at least re-config the output as specified before starting a new stream (and possibly re-delivering input data, if the client wants). This avoids useless retry with a new stream starting from just before the output buffer config change which would hit the same mid-stream output config change again.

Similarly, some servers may only partly support mid-stream format changes, or only support a mid-stream format change if the buffers are already large enough to handle both before and after the format change. Such servers should still indicate buffer_constraints_action_required true, but then send OnStreamFailed() after the client has re-configured output buffers (seamlessly dealing with the mid-stream output config change is even better of course, but is not always feasible depending on format). When the client retries with a new stream starting from a nearby location in the client's logical overall media timeline, the output buffers will already be suitable for the larger size output, so the new stream will not need any mid-stream output buffer re-config, only a mid-stream OnOutputFormat(). This strategy avoids the problem that would otherwise occur if a client were to retry with a new stream starting just before the mid-stream output buffer config change (the retry wouldn't be effective since the same need for an output buffer config change would be hit again). Servers are discouraged from sending OnStreamFailed() solely due to a mid-stream need for different output buffer config without first sending OnOutputConstraints() with buffer_constraints_action_required true and waiting for the client to re-configure output buffers (to avoid the useless client retry with a new stream from a logical location before the config change).

When buffer_constraints_action_required true, the server will not send any OnOutputPacket() for this stream until after the client has configured/re-configured output buffers.

A client that gives up on processing a stream on any mid-stream OnOutputConstraints() or mid-stream OnOutputFormat() should completely ignore any OnOutputConstraints() with buffer_constraints_action_required false. Otherwise the client may needlessly fail processing, or server implementations might not be able to use buffer_constraints_action_required false for fear of simpler clients just disconnecting.

All clients, even those which don't want to support any mid-stream output buffer re-config or mid-stream OnOutputFormat(), are required to deal with 1..multiple OnOutputConstraints() messages before the first output packet, and 1 OnOutputFormat() message before the first output packet.

This message is ordered with respect to output packets, and with respect to OnOutputFormat().

Response

NameType
output_config StreamOutputConstraints

OnOutputEndOfStream

After QueueInputEndOfStream() is sent by the StreamProcessor client, within a reasonable duration the corresponding OnOutputEndOfStream() will be sent by the StreamProcessor server. Similar to QueueInputEndOfStream(), OnOutputEndOfStream() is sent a maximum of once per stream.

No more stream data for this stream will be sent after this message. All input data for this stream was processed.

While a StreamProcessor client is not required to QueueInputEndOfStream() (unless the client wants to use FlushEndOfStreamAndCloseStream()), if a StreamProcessor server receives QueueInputEndOfStream(), and the client hasn't closed the stream, the StreamProcessor server must generate a corresponding OnOutputEndOfStream() if nothing went wrong, or must send OnStreamFailed(), or must close the server end of the StreamProcessor channel. An ideal StreamProcessor server would handle and report stream errors via the error_detected_* flags and complete stream processing without sending OnStreamFailed(), but in any case, the above-listed options are the only ways that an OnOutputEndOfStream() won't happen after QueueInputEndOfStream().

There will be no more OnOutputPacket() or OnOutputConstraints() messages for this stream_lifetime_ordinal after this message - if a server doesn't follow this rule, a client should close the StreamProcessor channel.

The error_detected_before bool has the same semantics as the error_detected_before bool in OnOutputPacket().

Response

NameType
stream_lifetime_ordinal uint64
error_detected_before bool

OnOutputFormat

This message is sent by the server before the first output packet of any stream, and potentially mid-stream between output packets of the stream, ordered with respect to output packets, and ordered with respect to OnOutputConstraints().

The server guarantees that the first packet of every stream will be preceded by an OnOutputFormat().

The server guarantees that there will be an OnOutputFormat() between an OnOutputConstraints() with buffer_constraints_action_required true and an OnOutputPacket(). In other words, the client is essentially allowed to forget what the output format is on any OnOutputConstraints() with buffer_constraints_action_required true, because the server promises a subsequent OnOutputFormat() before any OnOutputPacket().

If the server sets buffer_constraints_action_required true in OnOutputConstraints(), the server won't send OnOutputFormat() (and therefore also won't send OnOutputPacket()) until the client has re-configured output buffers.

The server is allowed to send an OnOutputFormat() mid-stream between two output packets.

A server won't send two adjacent OnOutputFormat() messages without any output packet in between. However an OnOutputFormat() message doesn't guarantee a subsequent packet, because for example the server could send OnOutputEndOfStream() or OnStreamFailed() instead.

A client that does not wish to seamlessly handle mid-stream output format changes should either ensure that no stream it processes ever has a mid-stream format change, or ensure that any retry of processing starts the new attempt at a point logically at or after the point where the old format ends and the new format starts; otherwise the client could just hit the same mid-stream format change again.

An example of this message being sent mid-stream is mid-stream change of dimensions of video frames output from a video decoder.

Not all servers will support seamless handling of format change. Those that do support seamless handling of format change may require that the format change not also require output buffer re-config, in order for the handling to be seamless. See the comment block for OnOutputConstraints() for more discussion of how servers and clients should behave - in particular when they don't seamlessly handle output constraint change and/or output format change.

If this message isn't being sent by the server when expected at the start of a stream, the most common reason is that an OnOutputConstraints() with buffer_constraints_action_required true hasn't been processed by the client (by configuring output buffers using SetOutputBufferPartialSettings() etc).
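
The ordering guarantees described above (OnOutputConstraints() with buffer_constraints_action_required true, then output buffer re-config, then OnOutputFormat(), then OnOutputPacket()) can be sketched as a small client-side checker. This is an illustrative Python model of the protocol rules, not generated FIDL bindings; the class and method names are hypothetical.

```python
class OutputOrderingModel:
    """Models the server's output-ordering guarantees as seen by a client.

    After an OnOutputConstraints() with buffer_constraints_action_required
    true, the client must re-configure output buffers before the server
    will send OnOutputFormat(), and an OnOutputFormat() must precede the
    next OnOutputPacket().
    """

    def __init__(self):
        self.needs_buffer_config = True  # initial constraints pending
        self.have_format = False

    def on_output_constraints(self, action_required: bool):
        if action_required:
            self.needs_buffer_config = True
            # The client may "forget" the output format here; the server
            # promises a fresh OnOutputFormat() before the next packet.
            self.have_format = False

    def configure_output_buffers(self):
        # Corresponds to the client calling SetOutputBufferPartialSettings().
        self.needs_buffer_config = False

    def on_output_format(self):
        # The server won't send this while buffer re-config is required.
        assert not self.needs_buffer_config, "protocol violation"
        self.have_format = True

    def on_output_packet(self):
        # Every packet is preceded by an OnOutputFormat().
        assert self.have_format, "protocol violation"
```

A client that tracks only these two booleans can detect a misbehaving server and close the channel, as the text above recommends.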

Response

NameType
output_format StreamOutputFormat

OnOutputPacket

This is how the stream processor emits an output packet to the stream processor client.

Order is significant.

The client should eventually call RecycleOutputPacket() (possibly after switching streams multiple times), unless the buffer_lifetime_ordinal has moved on. A stream change doesn't change which packets are busy with the client vs. free with the server.

The relevant buffer is always the one specified in the packet's buffer_index field.

For low-level buffer types that support it, a StreamProcessor is free to emit an output packet before the low-level buffer actually has any usable data in the buffer, with the mechanism for signalling the presence of data separate from the OnOutputPacket() message. For such low-level buffer types, downstream consumers of data from the emitted packet must participate in the low-level buffer signalling mechanism to know when it's safe to consume the data. This is most likely to be relevant when using a video decoder and gralloc-style buffers.

The error_ bool(s) allow (but do not require) a StreamProcessor server to report errors that happen during an AU or between AUs.

The scope of error_detected_before starts at the end of the last delivered output packet on this stream, or the start of stream if there were no previous output packets on this stream. The scope ends at the start of the output_packet.

The error_detected_before bool is separate so that discontinuities can be indicated separately from whether the current packet is damaged.

The scope of error_detected_during is from the start to the end of this output_packet.

Response

NameType
output_packet Packet
error_detected_before bool
error_detected_during bool

OnStreamFailed

The stream has failed, but the StreamProcessor instance is still usable for a new stream.

This message is only ever sent by the server if the client previously sent EnableOnStreamFailed(). If the client didn't send EnableOnStreamFailed() then the server closes the StreamProcessor channel instead.

StreamProcessor server implementations are encouraged to handle stream errors (and ideally to also report them via error_ bools of OnOutputPacket() and OnOutputEndOfStream()) without failing the whole stream, but if a stream processor server is unable to do that, but still can cleanly contain the failure to the stream, the stream processor server can (assuming EnableOnStreamFailed() was called) use OnStreamFailed() to indicate the stream failure to the client without closing the StreamProcessor channel.

An ideal StreamProcessor server handles problems with input data without sending this message, but sending this message is preferred vs. closing the server end of the StreamProcessor channel if the StreamProcessor server can 100% reliably contain the stream failure to the stream, without any adverse impact to any later stream.

No further messages will arrive from the server regarding the failed stream. This includes any OnOutputEndOfStream() that the client would have otherwise expected.

Response

NameType
stream_lifetime_ordinal uint64
error StreamError

QueueInputEndOfStream

Inform the server that all QueueInputPacket() messages for this stream have been sent.

If the stream isn't closed first (by the client, or by OnStreamFailed(), or StreamProcessor channel closing), there will later be a corresponding OnOutputEndOfStream().

The corresponding OnOutputEndOfStream() message will be generated only if the server finishes processing the stream before the server sees the client close the stream (such as by starting a new stream). A way to force the server to finish the stream before closing is to use FlushEndOfStreamAndCloseStream() after QueueInputEndOfStream() before any new stream. Another way to force the server to finish the stream before closing is to wait for the OnOutputEndOfStream() before taking any action that closes the stream.

In addition to serving as an "EndOfStream" marker to make it obvious client-side when all input data has been processed, if a client never sends QueueInputEndOfStream(), no amount of waiting will necessarily result in all input data getting processed through to the output. Some stream processors have some internally-delayed data which only gets pushed through by additional input data or by this EndOfStream marker. In that sense, this message can be viewed as a flush-through at InputData domain level, but the flush-through only takes effect if the stream processor even gets that far before the stream is just closed at StreamControl domain level. This message is not alone sufficient to act as an overall flush-through at StreamControl level. For that, send this message first and then send FlushEndOfStreamAndCloseStream() (at which point it becomes possible to queue input data for a new stream without causing discard of this older stream's data), or wait for the OnOutputEndOfStream() before closing the current stream.

If a client sends QueueInputPacket(), QueueInputFormatDetails(), QueueInputEndOfStream() for this stream after the first QueueInputEndOfStream() for this stream, a server should close the StreamProcessor channel.
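
The rule above can be sketched as a minimal server-side model: once QueueInputEndOfStream() has been seen for a stream, any further input message on that same stream closes the channel. This is an illustrative Python model, not generated FIDL bindings; the names are hypothetical.

```python
class StreamEosModel:
    """Illustrative model of per-stream end-of-stream handling.

    After QueueInputEndOfStream() for a stream_lifetime_ordinal, any
    further input message for that same stream causes the server to
    close the StreamProcessor channel.
    """

    def __init__(self):
        self.eos_streams = set()
        self.channel_open = True

    def queue_input_packet(self, stream_lifetime_ordinal: int):
        self._check_input_allowed(stream_lifetime_ordinal)

    def queue_input_end_of_stream(self, stream_lifetime_ordinal: int):
        self._check_input_allowed(stream_lifetime_ordinal)
        self.eos_streams.add(stream_lifetime_ordinal)

    def _check_input_allowed(self, ordinal: int):
        if ordinal in self.eos_streams:
            # Input after EOS for the same stream: close the channel.
            self.channel_open = False
```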

Request

NameType
stream_lifetime_ordinal uint64

QueueInputFormatDetails

If the input format details are still the same as specified during StreamProcessor creation, this message is unnecessary and does not need to be sent.

If the stream doesn't exist yet, this message creates the stream.

The server won't send OnOutputConstraints() until after the client has sent at least one QueueInput* message.

All servers must permit QueueInputFormatDetails() at the start of a stream without failing, as long as the new format is supported by the StreamProcessor instance. Technically this allows for a server to only support the exact input format set during StreamProcessor creation, and that is by design. A client that tries to switch formats and gets a StreamProcessor channel failure should try again one more time with a fresh StreamProcessor instance created with CodecFactory using the new input format during creation, before giving up.

These format details override the format details specified during stream processor creation for this stream only. The next stream will default back to the format details set during stream processor creation.

This message is permitted at the start of the first stream (just like at the start of any stream). The format specified need not match what was specified during stream processor creation, but if it doesn't match, the StreamProcessor channel might close as described above.

Request

NameType
stream_lifetime_ordinal uint64
format_details FormatDetails

QueueInputPacket

This message queues input data to the stream processor for processing.

If the stream doesn't exist yet, this message creates the new stream.

The server won't send OnOutputConstraints() until after the client has sent at least one QueueInput* message.

The client must continue to deliver input data via this message even if the stream processor has not yet generated the first OnOutputConstraints(), and even if the StreamProcessor is generating OnFreeInputPacket() for previously-queued input packets. The input data must continue as long as there are free packets, to be assured that the server will ever generate the first OnOutputConstraints().

Request

NameType
packet Packet

RecycleOutputPacket

After the client is done with an output packet, the client needs to tell the stream processor that the output packet can be re-used for more output, via this method.

It's not permitted to recycle an output packet that's already free with the stream processor server. It's permitted but discouraged for a client to recycle an output packet that has been deallocated by an explicit or implicit output buffer de-configuration. See buffer_lifetime_ordinal for more on that. A server must ignore any such stale RecycleOutputPacket() calls.
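
The distinction between an erroneous recycle (packet already free) and a stale-but-ignorable recycle (from an ended buffer_lifetime_ordinal) can be sketched server-side as follows. This is an illustrative Python model under assumed names, not generated FIDL bindings.

```python
class RecycleModel:
    """Sketch of the server-side rules for RecycleOutputPacket().

    Recycling a packet that is already free with the server is a
    protocol error; recycling a packet whose buffer_lifetime_ordinal
    has ended is stale and must be ignored.
    """

    def __init__(self, buffer_lifetime_ordinal, packet_count):
        self.buffer_lifetime_ordinal = buffer_lifetime_ordinal
        self.free = set()                      # free with the server
        self.busy = set(range(packet_count))   # with the client

    def recycle(self, packet_buffer_lifetime_ordinal, packet_index):
        if packet_buffer_lifetime_ordinal != self.buffer_lifetime_ordinal:
            # Stale: references an ended buffer lifetime; must be ignored.
            return "ignored (stale buffer_lifetime_ordinal)"
        if packet_index in self.free:
            # Already free with the server: protocol error.
            return "protocol error: already free"
        self.busy.discard(packet_index)
        self.free.add(packet_index)
        return "recycled"
```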

Request

NameType
available_output_packet PacketHeader

SetInputBufferPartialSettings

This is the replacement for SetInputBufferSettings().

When the client is using sysmem to allocate buffers, this message is used instead of SetInputBufferSettings()+AddInputBuffer(). Instead, a single SetInputBufferPartialSettings() provides the StreamProcessor with the client-specified input settings and a BufferCollectionToken which the StreamProcessor will use to convey constraints to sysmem. Both the client and the StreamProcessor will be informed of the allocated buffers directly by sysmem via their BufferCollection channel (not via the StreamProcessor channel).

The client must not QueueInput...() until after sysmem informs the client that buffer allocation has completed and was successful.

The server should be prepared to see QueueInput...() before the server has necessarily heard from sysmem that the buffers are allocated - the server must tolerate either ordering, as the QueueInput...() and notification of sysmem allocation completion arrive on different channels, so the client having heard that allocation is complete doesn't mean the server knows that allocation is complete yet. However, the server can expect that allocation is in fact complete and can expect to get the allocation information from sysmem immediately upon requesting the information from sysmem.

Request

NameType
input_settings StreamBufferPartialSettings

SetInputBufferSettings

Clients should use SetInputBufferPartialSettings() instead.

Configuring input buffers (the old way) consists of calling SetInputBufferSettings() followed by a number of calls to AddInputBuffer() equal to the number of buffers set via SetInputBufferSettings(). In buffer-per-packet mode, this is the same as the number of packets. In single-buffer mode, this is 1.

After OnInputConstraints(), the client uses these two methods to set up input buffers and packets.

Configuring input buffers is required before QueueInputPacket().

The client can also re-set-up input buffers any time there is no current stream. The client need not wait until all previously-set-up input buffers are with the client via OnFreeInputPacket(). The old buffer_lifetime_ordinal just ends.

The recommended way to de-overlap resource usage (when/if the client wants to) is to send CloseCurrentStream() with release_input_buffers true then send Sync() and wait for its response before allocating any new buffers. How to cause other parts of the system to release their references on low-level buffers is outside the scope of this interface.

This call ends any previous buffer_lifetime_ordinal, and starts a new one.

Request

NameType
input_settings StreamBufferSettings

SetOutputBufferPartialSettings

This is the replacement for SetOutputBufferSettings().

When the client is using sysmem to allocate buffers, this message is used instead of SetOutputBufferSettings()+AddOutputBuffer(). Instead, a single SetOutputBufferPartialSettings() provides the StreamProcessor with the client-specified output settings and a BufferCollectionToken which the StreamProcessor will use to convey constraints to sysmem. Both the client and the StreamProcessor will be informed of the allocated buffers directly by sysmem via their BufferCollection channel (not via the StreamProcessor channel).

Configuring output buffers is required after OnOutputConstraints() is received by the client with buffer_constraints_action_required true and stream_lifetime_ordinal equal to the client's current stream_lifetime_ordinal (even if there is an active stream), and is permitted any time there is no current stream.

Closing the current stream occurs on the StreamControl ordering domain, so after a CloseCurrentStream() or FlushEndOfStreamAndCloseStream(), a subsequent Sync() completion must be received by the client before the client knows that there's no longer a current stream.

See also CompleteOutputBufferPartialSettings().

Request

NameType
output_settings StreamBufferPartialSettings

SetOutputBufferSettings

These are not permitted until after the first OnOutputConstraints().

Roughly speaking, these messages are sent in response to OnOutputConstraints() with buffer_constraints_action_required true.

Configuring output buffers consists of calling SetOutputBufferSettings() followed by a number of calls to AddOutputBuffer() equal to the number of buffers set via SetOutputBufferSettings(). In buffer-per-packet mode, this is the same as the number of packets. In single-buffer mode, this is 1.

Configuring output buffers is required after OnOutputConstraints() is received by the client with buffer_constraints_action_required true and stream_lifetime_ordinal equal to the client's current stream_lifetime_ordinal (even if there is an active stream), and is permitted any time there is no current stream.

Closing the current stream occurs on the StreamControl ordering domain, so after a CloseCurrentStream() or FlushEndOfStreamAndCloseStream(), a subsequent Sync() completion must be received by the client before the client knows that there's no longer a current stream.

Request

NameType
output_settings StreamBufferSettings

Sync

On completion, all previous StreamProcessor calls have done what they're going to do server-side, except for processing of data queued using QueueInputPacket().

The main purpose of this call is to enable the client to wait for CloseCurrentStream() with release_input_buffers and/or release_output_buffers set to true to take effect, before the client allocates new buffers and re-sets-up input and/or output buffers. This de-overlapping of resource usage can be worthwhile for media buffers, which can consume resource types whose overall pools aren't necessarily vast in comparison to the resources consumed, especially if a client reconfigures buffers multiple times.

Note that Sync() prior to allocating new media buffers is not alone sufficient to achieve non-overlap of media buffer resource usage system wide, but it can be a useful part of achieving that.

The Sync() transits the Output ordering domain and the StreamControl ordering domain, but not the InputData ordering domain.

This request can be used to avoid hitting kMaxInFlightStreams which is presently 10. A client that stays <= 8 in-flight streams will comfortably stay under the limit of 10. While the protocol permits repeated SetInputBufferSettings() and the like, a client that spams the channel can expect that the channel will just close if the server or the channel itself gets too far behind.

Request

NameType

Response

NameType

StreamSink

Defined in fuchsia.media/stream.fidl

Consumes a stream of packets. This interface is typically inherited along with StreamBufferSet to enable the transport of elementary streams from clients to services.

DiscardAllPackets

Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released. The response is sent after all packets have been released.

Request

NameType

Response

NameType

DiscardAllPacketsNoReply

Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released.

Request

NameType

EndOfStream

Indicates the stream has ended. The precise semantics of this method are determined by the inheriting interface.

Request

NameType

SendPacket

Sends a packet to the service. The response is sent when the service is done with the associated payload memory.

packet must be valid for the current buffer set, otherwise the service will close the connection.

Request

NameType
packet StreamPacket

Response

NameType

SendPacketNoReply

Sends a packet to the service. This interface doesn't define how the client knows when the sink is done with the associated payload memory. The inheriting interface must define that.

packet must be valid for the current buffer set, otherwise the service will close the connection.

Request

NameType
packet StreamPacket

StreamSource

Defined in fuchsia.media/stream.fidl

Produces a stream of packets. This interface is typically inherited along with StreamBufferSet to enable the transport of elementary streams from services to clients.

DiscardAllPackets

Request

NameType

Response

NameType

DiscardAllPacketsNoReply

Request

NameType

OnEndOfStream

Indicates that the stream has ended.

Response

NameType

OnPacketProduced

Delivers a packet produced by the service. When the client is done with the payload memory, the client must call ReleasePacket to release the payload memory.

Response

NameType
packet StreamPacket

ReleasePacket

Releases payload memory associated with a packet previously delivered via OnPacketProduced.

Request

NameType
packet StreamPacket

UsageGainListener

Defined in fuchsia.media/usage_reporter.fidl

A protocol for watching changes to usage gain settings.

The channel will close when the device is not present.

OnGainMuteChanged

Called immediately on connection and afterward any time the usage gain setting changes.

Clients must respond to acknowledge the event. Clients that do not acknowledge their events will eventually be disconnected.

Request

NameType
muted bool
gain_dbfs float32

Response

NameType

UsageGainReporter

Defined in fuchsia.media/usage_reporter.fidl

A protocol for setting up watchers of usage gain.

RegisterListener

Connects a listener to a stream of usage gain setting changes for usage on the device identified by device_token. Usage Gain is not set directly by any client; it is a translation of the usage volume setting for each device.

Devices may map the same volume level to different dbfs values, so a device_unique_id is needed to identify the device.

AudioDeviceEnumerator provides programmatic access to devices and their unique ids if it is necessary for a client to select an id at runtime.

Request

NameType
device_unique_id string[36]
usage Usage
usage_gain_listener UsageGainListener

UsageReporter

Defined in fuchsia.media/usage_reporter.fidl

A protocol for setting up watchers of audio usages.

Watch

Request

NameType
usage Usage
usage_watcher UsageWatcher

UsageWatcher

Defined in fuchsia.media/usage_reporter.fidl

A protocol for listening to changes to the policy state of an audio usage.

User actions, such as lowering the volume or muting a stream, are not reflected in this API.

OnStateChanged

Called on first connection and whenever the watched usage changes. The provided usage will always be the bound usage; it is provided so that an implementation of this protocol may be bound to more than one usage.

Clients must respond to acknowledge the event. Clients that do not acknowledge their events will eventually be disconnected.

Request

NameType
usage Usage
state UsageState

Response

NameType

STRUCTS

AacConstantBitRate

Defined in fuchsia.media/stream_common.fidl

NameTypeDescriptionDefault
bit_rate uint32

Bits per second

No default

AacEncoderSettings

Defined in fuchsia.media/stream_common.fidl

NameTypeDescriptionDefault
transport AacTransport No default
channel_mode AacChannelMode No default
bit_rate AacBitRate No default
aot AacAudioObjectType No default

AacTransportAdts

Defined in fuchsia.media/stream_common.fidl

AAC inside ADTS

NameTypeDescriptionDefault

AacTransportLatm

Defined in fuchsia.media/stream_common.fidl

AAC inside LATM

NameTypeDescriptionDefault
mux_config_present bool

Whether MuxConfiguration stream element is present

No default

AacTransportRaw

Defined in fuchsia.media/stream_common.fidl

Raw AAC access units.

NameTypeDescriptionDefault

AudioCompressedFormatAac

Defined in fuchsia.media/stream_common.fidl

NameTypeDescriptionDefault

AudioCompressedFormatSbc

Defined in fuchsia.media/stream_common.fidl

NameTypeDescriptionDefault

AudioDeviceInfo

Defined in fuchsia.media/audio_device_enumerator.fidl

NameTypeDescriptionDefault
name string No default
unique_id string No default
token_id uint64 No default
is_input bool No default
gain_info AudioGainInfo No default
is_default bool No default

AudioGainInfo

Defined in fuchsia.media/audio_device_enumerator.fidl

NameTypeDescriptionDefault
gain_db float32 No default
flags AudioGainInfoFlags No default

AudioStreamType

Defined in fuchsia.media/stream_type.fidl

Describes the type of an audio elementary stream.

NameTypeDescriptionDefault
sample_format AudioSampleFormat No default
channels uint32 No default
frames_per_second uint32 No default

Compression

Defined in fuchsia.media/stream_type.fidl

Describes the compression applied to a stream. This type can be used in conjunction with AudioStreamType or VideoStreamType to represent a medium-specific compressed type.

NameTypeDescriptionDefault
type CompressionType

The type of compression applied to the stream. This is generally one of the ENCODING values, though AUDIO_ENCODING_LPCM and VIDEO_ENCODING_UNCOMPRESSED must not be used, because those encodings are regarded as uncompressed.

No default
parameters vector<uint8>[8192]?

Type-specific, opaque ‘out-of-band’ parameters describing the compression of the stream.

No default

EncryptionPattern

Defined in fuchsia.media/stream_common.fidl

EncryptionPattern

Pattern encryption utilizes a pattern of encrypted and clear 16-byte blocks over the protected range of a subsample (the encrypted_bytes of a SubsampleEntry). This structure specifies the number of encrypted data blocks followed by the number of clear data blocks.

NameTypeDescriptionDefault
clear_blocks uint32 No default
encrypted_blocks uint32 No default
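
The repeating encrypted/clear block pattern described above can be expanded into concrete byte ranges. This is an illustrative sketch, not part of the FIDL API; it assumes a trailing partial pattern is simply truncated to the protected range, which is a simplifying assumption (encryption schemes may define their own treatment of partial blocks).

```python
def pattern_block_ranges(encrypted_blocks, clear_blocks, protected_size):
    """Expand an EncryptionPattern over a protected byte range.

    The protected range of a subsample is covered by a repeating
    pattern of `encrypted_blocks` encrypted 16-byte blocks followed
    by `clear_blocks` clear 16-byte blocks. Returns a list of
    (offset, size, is_encrypted) tuples.
    """
    if encrypted_blocks == 0 and clear_blocks == 0:
        raise ValueError("pattern must contain at least one block")
    BLOCK = 16
    ranges = []
    offset = 0
    while offset < protected_size:
        for count, is_encrypted in ((encrypted_blocks, True),
                                    (clear_blocks, False)):
            size = min(count * BLOCK, protected_size - offset)
            if size > 0:
                ranges.append((offset, size, is_encrypted))
                offset += size
            if offset >= protected_size:
                break
    return ranges
```

For example, the common 1:9 pattern over a 160-byte protected range yields one 16-byte encrypted block followed by 144 clear bytes.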

Metadata

Defined in fuchsia.media/metadata.fidl

NameTypeDescriptionDefault
properties vector<Property> No default

Parameter

Defined in fuchsia.media/stream_common.fidl

Parameter

Generic parameter.

We want to minimize use of this generic "Parameter" structure by natively defining as many stream-specific parameter semantics as we can.

NameTypeDescriptionDefault
scope string No default
name string No default
value Value No default

PcmFormat

Defined in fuchsia.media/stream_common.fidl

PcmFormat

PCM audio format details.

NameTypeDescriptionDefault
pcm_mode AudioPcmMode No default
bits_per_sample uint32 No default
frames_per_second uint32 No default
channel_map vector<AudioChannelId>[16] No default

Property

Defined in fuchsia.media/metadata.fidl

NameTypeDescriptionDefault
label string No default
value string No default

SbcEncoderSettings

Defined in fuchsia.media/stream_common.fidl

Settings for an SBC Encoder.

SBC encoders take signed little-endian 16-bit linear PCM samples and return encoded SBC frames. The encoder consumes PCM data in batches of sub_bands * block_count PCM frames. It accepts PCM data on arbitrary frame boundaries, but the output flushed when EOS is queued may be zero-padded to make a full batch for encoding.

NameTypeDescriptionDefault
sub_bands SbcSubBands SbcSubBands.SUB_BANDS_8
allocation SbcAllocation SbcAllocation.ALLOC_LOUDNESS
block_count SbcBlockCount SbcBlockCount.BLOCK_COUNT_4
channel_mode SbcChannelMode No default
bit_pool uint64

SBC bit pool value.

No default
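
The batching rule above (sub_bands * block_count PCM frames per batch, with zero-padding of a final partial batch at EOS) can be sketched as a small helper. This is illustrative arithmetic based on the description above, not part of the FIDL API.

```python
def sbc_batch_padding(pcm_frames, sub_bands=8, block_count=4):
    """Compute the zero-padding an SBC encoder may add at EOS.

    The encoder consumes PCM data in batches of sub_bands * block_count
    frames; a final partial batch flushed at end of stream may be
    zero-padded up to a full batch. Returns the number of padding frames.
    """
    batch = sub_bands * block_count
    remainder = pcm_frames % batch
    return 0 if remainder == 0 else batch - remainder
```

With the defaults shown above (SUB_BANDS_8, BLOCK_COUNT_4), the batch size is 32 frames, so flushing 100 queued frames at EOS could add 28 frames of zero padding.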

StreamPacket

Defined in fuchsia.media/stream.fidl

Describes a packet consumed by StreamSink or produced by StreamSource.

NameTypeDescriptionDefault
pts int64

Time at which the packet is to be presented, according to the presentation clock.

NO_TIMESTAMP
payload_buffer_id uint32

ID of the payload buffer used for this packet.

When this struct is used with StreamBufferSet, this field is the ID of a payload buffer provided via StreamBufferSet.AddPayloadBuffer. In that case, this value must identify a payload buffer in the current set. Other interfaces may define different semantics for this field.

No default
payload_offset uint64

Offset of the packet payload in the payload buffer.

This value plus the payload_size value must be less than or equal to the size of the referenced payload buffer.

No default
payload_size uint64

Size in bytes of the payload.

This value plus the payload_offset value must be less than or equal to the size of the referenced payload buffer.

No default
flags uint32

A bitwise-OR'ed set of flags (see constants below) describing properties of this packet.

0
buffer_config uint64

The buffer configuration associated with this packet. The semantics of this field depend on the interface with which this struct is used. In many contexts, this field is not used. This field is intended for situations in which buffer configurations (i.e. sets of payload buffers) are explicitly identified. In such cases, the payload_buffer_id refers to a payload buffer in the buffer configuration identified by this field.

0
stream_segment_id uint64

The stream segment associated with this packet. The semantics of this field depend on the interface with which this struct is used. In many contexts, this field is not used. This field is intended to distinguish contiguous segments of the stream where stream properties (e.g. encoding) may differ from segment to segment.

0
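
The payload-bounds rules above (payload_buffer_id must identify a buffer in the current set, and payload_offset + payload_size must not exceed that buffer's size) can be sketched as a validation helper. This is an illustrative sketch with hypothetical names; it models packets as dicts and the buffer set as a mapping from buffer ID to size.

```python
def validate_packet(packet, payload_buffers):
    """Check a StreamPacket against the current payload buffer set.

    `packet` is a dict with payload_buffer_id, payload_offset, and
    payload_size; `payload_buffers` maps buffer ID -> buffer size in
    bytes. A service closes the connection on an invalid packet.
    """
    size = payload_buffers.get(packet["payload_buffer_id"])
    if size is None:
        return False  # not a buffer in the current set
    return packet["payload_offset"] + packet["payload_size"] <= size
```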

StreamType

Defined in fuchsia.media/stream_type.fidl

Describes the type of an elementary stream.

NameTypeDescriptionDefault
medium_specific MediumSpecificStreamType

Medium-specific type information.

No default
encoding string[255]

Encoding (see constants below). This value is represented as a string so that new encodings can be introduced without modifying this file.

No default
encoding_parameters vector<uint8>?

Encoding-specific parameters, sometimes referred to as 'out-of-band data'. Typically, this data is associated with a compressed stream and provides parameters required to decompress the stream. This data is generally opaque to all parties except the producer and consumer of the stream.

No default

SubpictureStreamType

Defined in fuchsia.media/stream_type.fidl

Describes the type of a subpicture elementary stream.

NameTypeDescriptionDefault

SubsampleEntry

Defined in fuchsia.media/stream_common.fidl

SubsampleEntry

A subsample is a byte range within a sample consisting of a clear byte range followed by an encrypted byte range. This structure specifies the size of each range in the subsample.

NameTypeDescriptionDefault
clear_bytes uint32 No default
encrypted_bytes uint32 No default
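
The clear-then-encrypted layout described above can be turned into absolute byte ranges for a sample. This is an illustrative sketch, not part of the FIDL API; it represents each SubsampleEntry as a (clear_bytes, encrypted_bytes) tuple.

```python
def subsample_ranges(subsamples):
    """Compute absolute byte ranges from a list of SubsampleEntry.

    Each entry is (clear_bytes, encrypted_bytes); a sample's bytes are
    covered in order by the clear range of each subsample followed by
    its encrypted range. Returns (offset, size, is_encrypted) tuples.
    """
    ranges = []
    offset = 0
    for clear_bytes, encrypted_bytes in subsamples:
        if clear_bytes:
            ranges.append((offset, clear_bytes, False))
            offset += clear_bytes
        if encrypted_bytes:
            ranges.append((offset, encrypted_bytes, True))
            offset += encrypted_bytes
    return ranges
```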

TextStreamType

Defined in fuchsia.media/stream_type.fidl

Describes the type of a text elementary stream.

NameTypeDescriptionDefault

TimelineFunction

Defined in fuchsia.media/timeline_function.fidl

A TimelineFunction represents a relationship between a subject timeline and a reference timeline with a linear relation.

For example, consider a common use case in which reference time is the monotonic clock of a system and subject time is intended presentation time for some media such as a video.

reference_time is the value of the monotonic clock at the beginning of playback. subject_time is 0 assuming playback starts at the beginning of the media. We then choose a reference_delta and subject_delta so that subject_delta / reference_delta represents the desired playback rate, e.g. 0/1 for paused and 1/1 for normal playback.

Formulas

With a function we can determine the subject timeline value s in terms of reference timeline value r with this formula (where reference_delta > 0):

s = (r - reference_time) * (subject_delta / reference_delta) + subject_time

And similarly we can find the reference timeline value r in terms of subject timeline value s with this formula (where subject_delta > 0):

r = (s - subject_time) * (reference_delta / subject_delta) + reference_time
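
The two formulas above translate directly into code. This is an illustrative sketch, not the Fuchsia library implementation; it uses integer floor division as a simplifying assumption, matching the int64/uint32 field types.

```python
def reference_to_subject(r, subject_time, reference_time,
                         subject_delta, reference_delta):
    """s = (r - reference_time) * (subject_delta / reference_delta) + subject_time"""
    return (r - reference_time) * subject_delta // reference_delta + subject_time


def subject_to_reference(s, subject_time, reference_time,
                         subject_delta, reference_delta):
    """r = (s - subject_time) * (reference_delta / subject_delta) + reference_time"""
    return (s - subject_time) * reference_delta // subject_delta + reference_time
```

For example, with playback starting at monotonic time 1000 (subject_time 0) and a 1/1 rate, reference time 1500 maps to subject time 500; a 0/1 rate (paused) maps every reference time to subject_time.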

Choosing time values

Time values can be arbitrary and our linear relation will of course be the same, but we can use them to represent the bounds of pieces in a piecewise linear relation.

For example, if a user performs skip-chapter, we might want to describe this with a TimelineFunction whose subject_time is the time to skip to, reference_time is now plus some epsilon, and delta ratio is 1/1 for normal playback rate.

NameTypeDescriptionDefault
subject_time int64

A value from the subject timeline that correlates to reference_time.

0
reference_time int64

A value from the reference timeline that correlates to subject_time.

0
subject_delta uint32

The change in the subject timeline corresponding to reference_delta.

0
reference_delta uint32

The change in the reference timeline corresponding to subject_delta. Cannot be zero.

1

VideoStreamType

Defined in fuchsia.media/stream_type.fidl

Describes the type of a video elementary stream.

NameTypeDescriptionDefault
pixel_format fuchsia.images/PixelFormat No default
color_space ColorSpace No default
width uint32

Dimensions of the video frames as displayed in pixels.

No default
height uint32 No default
coded_width uint32

Dimensions of the video frames as encoded in pixels. These values must be equal to or greater than the respective width/height values.

No default
coded_height uint32 No default
pixel_aspect_ratio_width uint32

The aspect ratio of a single pixel as frames are intended to be displayed.

No default
pixel_aspect_ratio_height uint32 No default
stride uint32

The number of bytes per 'coded' row in the primary video plane.

No default

VideoUncompressedFormat

Defined in fuchsia.media/stream_common.fidl

VideoUncompressedFormat

Uncompressed video format details.

NameTypeDescriptionDefault
image_format fuchsia.sysmem/ImageFormat_2 No default
fourcc uint32 No default
primary_width_pixels uint32 No default
primary_height_pixels uint32 No default
secondary_width_pixels uint32 No default
secondary_height_pixels uint32 No default
planar bool No default
swizzled bool No default
primary_line_stride_bytes uint32 No default
secondary_line_stride_bytes uint32 No default
primary_start_offset uint32 No default
secondary_start_offset uint32 No default
tertiary_start_offset uint32 No default
primary_pixel_stride uint32 No default
secondary_pixel_stride uint32 No default
primary_display_width_pixels uint32 No default
primary_display_height_pixels uint32 No default
has_pixel_aspect_ratio bool false
pixel_aspect_ratio_width uint32 1
pixel_aspect_ratio_height uint32 1

Void

Defined in fuchsia.media/audio_consumer.fidl

NameTypeDescriptionDefault

ENUMS

AacAudioObjectType

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
MPEG2_AAC_LC 0

MPEG-2 Low Complexity

AacChannelMode

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
MONO 0
STEREO 2

AacVariableBitRate

Type: uint32

Defined in fuchsia.media/stream_common.fidl

Variable bit rate modes. The actual resulting bitrate varies based on input signal and other encoding settings.

See https://wiki.hydrogenaud.io/index.php?title=Fraunhofer_FDK_AAC#Bitrate_Modes

NameValueDescription
V1 1
V2 2
V3 3
V4 4
V5 5

AudioBitrateMode

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
UNSPECIFIED 0
CBR 1
VBR 2

AudioCaptureUsage

Type: uint32

Defined in fuchsia.media/audio_core.fidl

Usages annotating the purpose of the stream being used to capture audio. The AudioCaptureUsage is used by audio policy to dictate how audio streams interact with each other.

NameValueDescription
BACKGROUND 0

Stream is used to capture audio while in the background. These streams may be active at any time and are considered privileged. Example: Listening for Hotwords

FOREGROUND 1

Stream is intended to be used for normal capture functionality. Streams that are used for audio capture while the stream creator is in the foreground should use this. Example: Voice Recorder

SYSTEM_AGENT 2

Stream is for interaction with a system agent. This should only be used once a user has signalled their intent to have the interaction with an interested party. Examples: Assistant, Siri, Alexa

COMMUNICATION 3

Stream is intended to be used for some form of real time user to user communication. Voice/Video chat should use this.

AudioChannelId

Type: uint32

Defined in fuchsia.media/stream_common.fidl

AudioChannelId

Used in specifying which audio channel is for which speaker location / type.

TODO(dustingreen): Do we need more channel IDs than this?

NameValueDescription
SKIP 0
LF 1
RF 2
CF 3
LS 4
RS 5
LFE 6
CS 7
LR 8
RR 9
END_DEFINED 10
EXTENDED_CHANNEL_ID_BASE 1862270976
MAX 2147483647

AudioOutputRoutingPolicy

Type: uint32

Defined in fuchsia.media/audio_core.fidl

NameValueDescription
ALL_PLUGGED_OUTPUTS 0
LAST_PLUGGED_OUTPUT 1

AudioPcmMode

Type: uint32

Defined in fuchsia.media/stream_common.fidl

AudioPcmMode

NameValueDescription
LINEAR 0
ALAW 1
MULAW 2

AudioRenderUsage

Type: uint32

Defined in fuchsia.media/audio_core.fidl

Usage annotating the purpose of the stream being used to render audio. An AudioRenderer's usage cannot be changed after creation. The AudioRenderUsage is used by audio policy to dictate how audio streams interact with each other.

NameValueDescription
BACKGROUND 0

Stream is intended to be used for ambient or background sound. Streams that can be interrupted without consequence should use this.

MEDIA 1

Stream is intended to be used for normal functionality. Streams that are part of normal functionality should use this.

INTERRUPTION 2

Stream is intended to interrupt any ongoing function of the device. Streams that are used for interruptions like notifications should use this.

SYSTEM_AGENT 3

Stream is for interaction with a system agent. This should be used in response to a user initiated trigger.

COMMUNICATION 4

Stream is intended to be used for some form of real time user to user communication. Voice/Video chat should use this.

AudioSampleFormat

Type: uint32

Defined in fuchsia.media/stream_type.fidl

Enumerates the supported audio sample formats.

NameValueDescription
UNSIGNED_8 1

8-bit unsigned samples, sample size 1 byte.

SIGNED_16 2

16-bit signed samples, host-endian, sample size 2 bytes.

SIGNED_24_IN_32 3

24-bit signed samples in 32 bits, host-endian, sample size 4 bytes.

FLOAT 4

32-bit floating-point samples, sample size 4 bytes.
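The per-sample sizes listed above determine the size of an audio frame (one sample per channel at a single point in time). This is an illustrative sketch, not part of the FIDL API:

```python
# Per-sample sizes implied by the AudioSampleFormat descriptions above.
SAMPLE_SIZE_BYTES = {
    "UNSIGNED_8": 1,       # 8-bit unsigned samples
    "SIGNED_16": 2,        # 16-bit signed, host-endian
    "SIGNED_24_IN_32": 4,  # 24-bit signed samples carried in 32 bits
    "FLOAT": 4,            # 32-bit floating-point samples
}

def frame_size_bytes(sample_format: str, channels: int) -> int:
    # One audio frame holds one sample for every channel.
    return SAMPLE_SIZE_BYTES[sample_format] * channels

print(frame_size_bytes("SIGNED_16", 2))  # stereo 16-bit: 4 bytes per frame
```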

Behavior

Type: uint32

Defined in fuchsia.media/audio_core.fidl

The behaviors applied to streams when multiple are active.

NameValueDescription
NONE 0

Mix the streams.

DUCK 1

Apply a gain to duck the volume of one of the streams (-14.0 dB).

MUTE 2

Apply a gain to mute one of the streams (-160.0 dB).

ColorSpace

Type: uint32

Defined in fuchsia.media/stream_type.fidl

NameValueDescription
UNKNOWN 0
NOT_APPLICABLE 1
JPEG 2
HD_REC709 3
SD_REC601 4

SbcAllocation

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
ALLOC_LOUDNESS 0
ALLOC_SNR 1

SbcBlockCount

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
BLOCK_COUNT_4 4
BLOCK_COUNT_8 8
BLOCK_COUNT_12 12
BLOCK_COUNT_16 16

SbcChannelMode

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
MONO 0
DUAL 1
STEREO 2
JOINT_STEREO 3

SbcSubBands

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
SUB_BANDS_4 4
SUB_BANDS_8 8

StreamError

Type: uint32

Defined in fuchsia.media/stream_common.fidl

StreamError

This error code encapsulates various errors that might emanate from a StreamProcessor server. It can be sent either as an OnStreamFailed event or as an epitaph for the channel.

NameValueDescription
UNKNOWN 1

An internal error with an unspecified reason.

INVALID_INPUT_FORMAT_DETAILS 2

The client provided invalid input format details.

INCOMPATIBLE_BUFFERS_PROVIDED 3

The server received buffers that are not suitable for the operation to be performed. An example of this would be if a Decoder received output buffers that are too small to decode a frame into.

EOS_PROCESSING 4

Processing of input EOS (end of stream) failed, so the stream failed. Currently this can occur if a core codec watchdog fires while processing EOS.

DECODER_UNKNOWN 16777217

An internal decoder error with an unspecified reason.

DECODER_DATA_PARSING 16777218

Input data that can't be parsed. Only some parsing problems/errors are reported this way. Corrupt input data may be reported as other StreamError, or may not cause a StreamError.

ENCODER_UNKNOWN 33554433

An internal encoder error with an unspecified reason.

DECRYPTOR_UNKNOWN 50331649

An internal decryptor error with an unspecified reason.

DECRYPTOR_NO_KEY 50331650

The requested KeyId is not available for use by the Decryptor. The client may try again later if that key becomes available.

VideoColorSpace

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
INVALID 0

TABLES

AudioConsumerStatus

Defined in fuchsia.media/audio_consumer.fidl

Represents the status of the consumer. In the initial status, error and presentation_timeline are absent. The lead time fields are always present.

OrdinalNameTypeDescription
1 error AudioConsumerError

If present, indicates an error condition currently in effect. Absent if no error.

2 presentation_timeline TimelineFunction

If present, indicates the current relationship between the presentation timeline and local monotonic clock, both in nanosecond units. If not present, indicates there is no relationship. Absent initially.

'Presentation timeline' refers to the pts (presentation timestamp) values on the packets. This timeline function can be used to determine the local monotonic clock time that a packet will be presented based on that packet's pts value.

3 min_lead_time uint64

Indicates the minimum lead time in nanoseconds supported by this AudioConsumer; in other words, the smallest permitted gap between the media_time provided to AudioConsumer.Start and the pts on the first packet. Values outside this range will be clipped.

4 max_lead_time uint64

Indicates the maximum lead time in nanoseconds supported by this AudioConsumer; in other words, the largest permitted gap between the media_time provided to AudioConsumer.Start and the pts on the first packet. Values outside this range will be clipped.
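As the presentation_timeline description above notes, the timeline function maps the local monotonic clock (reference) to pts values (subject), so a client finds a packet's presentation time by inverting it. A hedged sketch; the helper name and values are hypothetical, not part of the FIDL bindings:

```python
# presentation_timeline relates pts (subject, nanoseconds) to the local
# monotonic clock (reference, nanoseconds). Inverting it gives the monotonic
# time at which a packet with a given pts will be presented.

def presentation_monotonic_time_ns(pts_ns, subject_time, reference_time,
                                   subject_delta, reference_delta):
    # Inverse of the TimelineFunction; requires subject_delta > 0 (i.e. the
    # presentation timeline is actually progressing).
    assert subject_delta > 0
    return (pts_ns - subject_time) * reference_delta // subject_delta + reference_time

# Example: playback of pts 0 began at monotonic time 10 s, at normal rate.
when = presentation_monotonic_time_ns(5_000_000_000, 0, 10_000_000_000, 1, 1)
print(when)  # the packet with pts = 5 s presents at monotonic 15 s
```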

DecryptedFormat

Defined in fuchsia.media/stream_common.fidl

DecryptedFormat

This describes the format of the decrypted content. It is required to be sent by the StreamProcessor server prior to the delivery of output packets. Currently, there are no additional format details for decrypted output.

OrdinalNameTypeDescription
1 ignore_this_field bool

EncryptedFormat

Defined in fuchsia.media/stream_common.fidl

EncryptedFormat

The stream format details payload of a decrypting stream processor. This is a sparsely populated table to specify parameters necessary for decryption other than the data stream. It is only necessary to update fields if they changed, but not an error if the same value is repeated.

OrdinalNameTypeDescription
1
2
7
6 scheme string

scheme specifies which encryption scheme to use, such as fuchsia.media.ENCRYPTION_SCHEME_CENC. Usage:

  • It is required to be set prior to delivery of input packets.
  • Changing the scheme mid-stream is only permitted in some scenarios. Once an encrypted scheme is selected for a stream, the scheme may only be set to fuchsia.media.ENCRYPTION_SCHEME_UNENCRYPTED or that same initial encrypted scheme. The scheme may be set to fuchsia.media.ENCRYPTION_SCHEME_UNENCRYPTED at any point.
8 key_id KeyId

key_id identifies the key that should be used for decrypting subsequent data. Usage:

  • It is required to be set prior to delivery of input packets to a decryptor.
  • This may be changed multiple times during a data stream.
3 init_vector InitVector

init_vector is used in combination with a key and a block of content to create the first cipher block in a chain and derive subsequent cipher blocks in a cipher block chain. Usage:

  • It is required to be set prior to the delivery of input packets to a decryptor.
  • This may be changed multiple times during a data stream.
4 subsamples vector<SubsampleEntry>

subsamples is used to identify the clear and encrypted portions of a subsample. Usage:

  • For whole sample encryption, this parameter should not be sent.
  • This may be changed multiple times during a data stream.
5 pattern EncryptionPattern

pattern is used to identify the clear and encrypted blocks for pattern based encryption. Usage:

  • This is not allowed for CENC and CBC1 and required for CENS and CBCS.
  • If required, it must be set prior to the delivery of input packets to a decryptor.
  • This may be changed multiple times during a data stream.

FormatDetails

Defined in fuchsia.media/stream_common.fidl

FormatDetails

This describes/details the format on input or output of a StreamProcessor (separate instances for input vs. output).

OrdinalNameTypeDescription
1 format_details_version_ordinal uint64
2 mime_type string
3 oob_bytes vector<uint8>
4 domain DomainFormat
5 pass_through_parameters vector<Parameter>
6 encoder_settings EncoderSettings

Instructs an encoder on how to encode raw data.

Decoders may ignore this field, but are entitled to reject requests with this field set, since it doesn't make sense for a decoder.

7 timebase uint64

The number of ticks of the timebase of input packet timestamp_ish values per second.

The timebase is only used for optional extrapolation of timestamp_ish values when an input timestamp which applies to byte 0 of the valid portion of the input packet does not correspond directly to byte 0 of the valid portion of any output packet.

Leave unset if timestamp extrapolation is not needed, either due to lack of timestamps on input, or due to input being provided in increments of the encoder's input chunk size (based on the encoder settings and calculated independently by the client). Set if timestamp extrapolation is known to be needed or known to be acceptable to the client.
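The extrapolation that timebase enables can be sketched as follows. This is a hedged illustration under an assumed model (a timestamp_ish value applying to byte 0 of a packet, and a known input byte rate); the helper name and figures are hypothetical, not part of the FIDL API.

```python
# Given a timestamp that applies to byte 0 of an input packet, extrapolate a
# timestamp for a later byte offset, expressed in ticks of the packet
# timestamp timebase (ticks per second, as set in FormatDetails.timebase).

def extrapolate_timestamp(ts_at_byte0: int, byte_offset: int,
                          bytes_per_second: int,
                          timebase_ticks_per_second: int) -> int:
    # byte_offset / bytes_per_second seconds elapse between byte 0 and the
    # offset; convert that duration into timebase ticks.
    return ts_at_byte0 + byte_offset * timebase_ticks_per_second // bytes_per_second

# 48 kHz stereo 16-bit PCM is 192,000 bytes/s; timestamps in a 90 kHz timebase.
print(extrapolate_timestamp(0, 192_000, 192_000, 90_000))  # one second -> 90000
```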

H264EncoderSettings

Defined in fuchsia.media/stream_common.fidl

Settings for H264 Encoders.

OrdinalNameTypeDescription
1 bit_rate uint32

Target bits per second for encoded stream. Defaults to 200,000 if omitted.

2 frame_rate uint32

Target frames per second for encoded stream. Defaults to 30 if omitted.

3 gop_size uint32

Number of pictures per keyframe. Defaults to 8 if omitted. Setting this to 0 disables key frame encoding, except when force_key_frame is set to true.

4 variable_frame_rate bool

Whether to enable frame rate adjustments in order to meet target bitrate. False if omitted.

5 min_frame_rate uint32

Lowest frame rate allowed if variable_frame_rate is enabled. Defaults to 10 if omitted.

6 force_key_frame bool

If true, next frame encoded will be a key frame. If omitted, defaults to false.

7 quantization_params H264QuantizationParameters

Allow customization of quantization parameters for encoding. Each frame submitted after setting this will use the new values. If omitted, no change from encoder defaults is made.

H264QuantizationParameters

Defined in fuchsia.media/stream_common.fidl

Customization of h264 encoder parameters for macroblock quantization. The values can range from 0 to 51, with lower numbers indicating higher quality/bitrate. While encoders should support these fields if feasible, some encoders may ignore these fields. It's ok to not set this table, or not set some of the fields in this table, as encoders can determine their own defaults. If the targeted bitrate can't be achieved with the specified values, then the user should expect the resulting encoded stream bitrate to differ from the requested bitrate.

OrdinalNameTypeDescription
1 i_base uint32

Starting value for quantization of key frames.

2 i_min uint32

Smallest allowed value for quantization of key frames.

3 i_max uint32

Largest allowed value for quantization of key frames.

4 p_base uint32

Starting value for quantization of predicted frames.

5 p_min uint32

Smallest allowed value for quantization of predicted frames.

6 p_max uint32

Largest allowed value for quantization of predicted frames.

HevcEncoderSettings

Defined in fuchsia.media/stream_common.fidl

Settings for HEVC/H265 Encoders.

OrdinalNameTypeDescription
1 bit_rate uint32

Target bits per second for encoded stream. Defaults to 200,000 if omitted.

2 frame_rate uint32

Target frames per second for encoded stream. Defaults to 30 if omitted.

3 gop_size uint32

Number of pictures per keyframe. Defaults to 8 if omitted.

InputAudioCapturerConfiguration

Defined in fuchsia.media/audio_capturer.fidl

Configuration for a capturer which will receive a stream from an input device.

OrdinalNameTypeDescription
1 usage AudioCaptureUsage

LoopbackAudioCapturerConfiguration

Defined in fuchsia.media/audio_capturer.fidl

Configuration for a capturer which will receive a loopback stream from a system output.

OrdinalNameTypeDescription

Packet

Defined in fuchsia.media/stream_processor.fidl

A Packet represents a chunk of input or output data to or from a stream processor.

stream processor output:

While the Packet is outstanding with the client via OnOutputPacket(), the stream processor will avoid modifying the referenced output data. After the client calls RecycleOutputPacket(packet_index), the stream processor is notified that the client is again ok with the referenced data changing.

stream processor input:

The client initially has all packet_index(es) available to fill, and later gets packet_index(s) that are again ready to fill via OnFreeInputPacket(). The client must not modify the referenced data in between QueueInputPacket() and OnFreeInputPacket().

OrdinalNameTypeDescription
1 header PacketHeader
2 buffer_index uint32

Which buffer this packet refers to. For single-buffer mode this will always be 0, but for multi-buffer mode, a given in-flight interval of a packet can refer to any buffer. The packet has an associated buffer only while the packet is in-flight, not while the packet is free.

The default value makes accidental inappropriate use of index 0 less likely (will tend to complain in an obvious way if not filled out instead of a non-obvious data corruption when decoding buffer 0 repeatedly instead of the correct buffers).

TODO(dustingreen): Try to make FIDL table defaults have meaning, and not complain about !has when accessing the field. For now the default specified here does nothing.

3 stream_lifetime_ordinal uint64

The value 1 is the lowest permitted value after stream processor creation. Values sent by the client must be odd. Values must only increase.

A stream_lifetime_ordinal represents the lifetime of a stream. All messages that are specific to a stream have the stream_lifetime_ordinal value and the value is the same for all messages relating to a given stream.

4 start_offset uint32

Which part of the relevant buffer is this packet using. These are valid for input data that's in-flight to the stream processor, and are valid for output data from the stream processor.

For compressed formats and uncompressed audio, the data in [start_offset, start_offset + valid_length_bytes) is the contiguously valid data referred to by this packet.

For uncompressed video frames, FormatDetails is the primary means of determining which bytes are relevant. The offsets in FormatDetails are relative to the start_offset here. The valid_length_bytes must be large enough to include the full last line of pixel data, including the full line stride of the last line (not just the width in pixels of the last line).

Despite these fields being filled out, some uncompressed video buffers are of types that are not readable by the CPU. The presence of these fields doesn't imply there's any way for the CPU to read an uncompressed frame.

5 valid_length_bytes uint32

This must be > 0.

The semantics for valid data per packet vary depending on data type as follows.

uncompressed video - A video frame can't be split across packets. Each packet is one video frame.

uncompressed audio - Regardless of float or int, linear or uLaw, or number of channels, a packet must contain a non-negative number of complete audio frames, where a single audio frame consists of data for all the channels for the same single point in time. Any stream-processor-specific internal details regarding lower-rate sampling for the LFE channel or the like should be hidden by the StreamProcessor server implementation.

compressed data input - A packet must contain at least one byte of data. See also stream_input_bytes_min. Splitting AUs at arbitrary byte boundaries is permitted, including at boundaries that are in AU headers.

compressed data output - The stream processor is not required to fully fill each output packet's buffer.

6 timestamp_ish uint64

This value is not strictly speaking a timestamp. It is an arbitrary unsigned 64-bit number that, under some circumstances, will be passed by a stream processor unmodified from an input packet to the exactly-corresponding output packet.

For timestamp_ish values to be propagated from input to output the following conditions must be true:

  • promise_separate_access_units_on_input must be true
  • has_timestamp_ish must be true for a given input packet, to have that timestamp_ish value (potentially) propagate through to an output
  • the StreamProcessor instance itself decides (async) that the input packet generates an output packet. If a given input never generates an output packet, the timestamp_ish value on that input will never show up on any output packet. Depending on the characteristics of the input and output formats, and on whether a decoder is willing to join mid-stream, this can be more or less likely to occur, but clients should be written to accommodate timestamp_ish values that are fed on input but never show up on output, at least to a reasonable degree (not crashing, not treating it as an error).
7 start_access_unit bool

If promise_separate_access_units_on_input (TODO(dustingreen): or any similar mode for output) is true, this bool must be set appropriately depending on whether byte 0 is or is not the start of an access unit. The client is required to know, and required to set this boolean properly. The server is allowed to infer that when this boolean is false, byte 0 is the first byte of a continuation of a previously-started AU. (The byte at start_offset is "byte 0".)

If promise_separate_access_units_on_input is false, this boolean is ignored.

8 known_end_access_unit bool

A client is never required to set this boolean to true.

If promise_separate_access_units_on_input is true, for input data, this boolean must be false if the last byte of this packet is not the last byte of an AU, and this boolean may be true if the last byte of this packet is the last byte of an AU. A client delivering one AU at a time that's interested in the lowest possible latency via the decoder should set this boolean to true when it can be set to true.

If promise_separate_access_units_on_input is false, this boolean is ignored.

9 key_frame bool

Used for compressed video packets. If not present should be assumed to be unknown. If false, indicates the packet is not part of a key frame. If true, indicates the packet is part of a key frame.
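The valid_length_bytes rules above can be checked client-side. This is an illustrative sketch, not part of the FIDL API: the value must be greater than 0, and for uncompressed audio a packet must hold a whole number of complete audio frames.

```python
# Client-side validation of valid_length_bytes for an uncompressed audio
# packet, per the Packet.valid_length_bytes rules (names illustrative).

def validate_uncompressed_audio_packet(valid_length_bytes: int,
                                       bytes_per_sample: int,
                                       channel_count: int) -> bool:
    if valid_length_bytes <= 0:        # valid_length_bytes must be > 0
        return False
    # One audio frame: data for all channels at a single point in time.
    frame_size = bytes_per_sample * channel_count
    return valid_length_bytes % frame_size == 0

print(validate_uncompressed_audio_packet(4096, 2, 2))  # 1024 stereo frames
print(validate_uncompressed_audio_packet(4095, 2, 2))  # partial frame
```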

PacketHeader

Defined in fuchsia.media/stream_processor.fidl

PacketHeader

When referring to a free packet, we use PacketHeader alone instead of Packet, since while a packet is free it doesn't really have meaningful offset or length etc.

A populated Packet also has a PacketHeader.

OrdinalNameTypeDescription
1 buffer_lifetime_ordinal uint64

This is which buffer configuration lifetime this header is referring to.

A packet_index is only really meaningful with respect to a particular buffer_lifetime_ordinal.

See StreamBufferSettings.buffer_lifetime_ordinal.

For QueueInputPacket(), a server receiving a buffer_lifetime_ordinal that isn't the current input buffer_lifetime_ordinal will close the channel.

For OnFreeInputPacket() and RecycleOutputPacket(), the receiver (client or server) must ignore a message with stale buffer_lifetime_ordinal.

2 packet_index uint32

The overall set of packet_index values is densely packed from 0..count-1 for input and output separately. They can be queued in any order.

Both the client and server should validate the packet_index against the known bound and disconnect if it's out of bounds.

When running in single-buffer mode, the buffer index is always 0.

The packet_index values don't imply anything about order of use of packets. The client should not expect the ordering to remain the same over time - the stream processor is free to hold on to an input or output packet for a while during which other packet_index values may be used multiple times.

For a given properly-functioning StreamProcessor instance, packet_index values will be unique among concurrently-outstanding packets. Servers should validate that a client isn't double-using a packet and clients should validate as necessary to avoid undefined or unexpected client behavior.

StreamBuffer

Defined in fuchsia.media/stream_processor.fidl

The StreamBuffer struct represents a pre-configured buffer.

Both input and output uses StreamBuffer(s), but the two sets of buffers are separate.

The client uses SetInputBufferSettings() + AddInputBuffer() * N to inform the stream processor about all the input buffers.

The client uses SetOutputBufferSettings() + AddOutputBuffer() * N to inform the stream processor about all the output buffers.

When single_buffer_mode is true, there is only buffer_index 0 shared by all packet(s) of the relevant input or output.

OrdinalNameTypeDescription
1 buffer_lifetime_ordinal uint64

When using AddOutputBuffer()/AddInputBuffer(), this must match the buffer_lifetime_ordinal of the most recent SetOutputBufferSettings()/SetInputBufferSettings().

2 buffer_index uint32

Buffers must be added via AddOutputBuffer() / AddInputBuffer() in order by buffer_index. The buffer_index is always equal to 0 if single_buffer_mode is true.

3 data StreamBufferData

For each new buffer_lifetime_ordinal, a client must use new low-level buffers. This rule exists for multiple very good reasons, and is relevant to mid-stream changes, changes on stream boundaries, and both input and output buffers. A new buffer_lifetime_ordinal needs new low-level buffers, not just new StreamBuffer(s). If you find yourself copying compressed input data into new low-level input buffers solely due to this rule, consider asking the source of the data for the ability to directly fill new VMOs. The rule exists for good reasons, even for input buffers.

The previous paragraph does not prohibit carving up VMOs into sub-pieces and using different sub-pieces as different StreamBuffer(s), with some VMOs used for more than one StreamBuffer and possibly others used for only one StreamBuffer. While this is permitted and enables some optimizations, it's not expected to be particularly common.

StreamBufferConstraints

Defined in fuchsia.media/stream_processor.fidl

This struct helps ensure that packet count and buffer space are sufficient to avoid major problems. For example, a video decoder needs sufficient video frame buffers to hold all potential reference frames concurrently, plus one more video buffer to decode into. Otherwise, the whole video decode pipe can easily deadlock.

The secondary purpose of this struct is to help ensure that packet count and buffer space are sufficient to achieve reasonably performant operation.

There are separate instances of this struct for stream input and stream output.

Notes about fields:

For uncompressed video, separate and complete frames in their separate buffers (buffer-per-packet mode) are always a requirement.

per_packet_buffer_bytes.*: These per-packet buffer bytes constraints apply to both buffer-per-packet mode and single-buffer mode (see single_buffer_mode). If buffer-per-packet mode, the constraints apply to each buffer separately. If single-buffer mode, the constraints need to be multiplied by the number of packets to determine the constraints on the single buffer.
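The per_packet_buffer_bytes rule above can be sketched as follows (an illustrative helper, not part of the FIDL API): in buffer-per-packet mode the constraint applies to each buffer directly, while in single-buffer mode it is multiplied by the packet count to size the one shared buffer.

```python
# Minimum buffer size implied by per_packet_buffer_bytes_min, per the
# single_buffer_mode note above (names illustrative).

def required_buffer_bytes(per_packet_buffer_bytes_min: int,
                          packet_count: int,
                          single_buffer_mode: bool) -> int:
    if single_buffer_mode:
        # The single shared buffer must cover every packet.
        return per_packet_buffer_bytes_min * packet_count
    # Buffer-per-packet mode: the constraint applies to each buffer separately.
    return per_packet_buffer_bytes_min

print(required_buffer_bytes(4096, 8, single_buffer_mode=True))   # 8 packets share one buffer
print(required_buffer_bytes(4096, 8, single_buffer_mode=False))  # each buffer sized alone
```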

OrdinalNameTypeDescription
1 buffer_constraints_version_ordinal uint64

This is a version number the server sets on the constraints to allow the server to determine when the client has caught up with the latest constraints sent by the server. The server won't emit output data until the client has configured output settings and buffers with a buffer_constraints_version_ordinal >= the latest buffer_constraints_version_ordinal that had buffer_constraints_action_required true. See buffer_constraints_action_required comments for more.

A buffer_constraints_version_ordinal of 0 is not permitted, to simplify initial state handling. Other than 0, both odd and even version ordinals are allowed (in contrast to the stream_lifetime_ordinal, neither the client nor server ever has a reason to consider the latest version to be stale, so there would be no benefit to disallowing even values).

2 default_settings StreamBufferSettings

These settings are "default" settings, not "recommended" settings.

These "default" settings can be passed to SetInputBufferSettings() / SetOutputBufferSettings() as-is without modification, but a client doing that must still obey the semantics of packet_count_for_client, despite the stream processor server not having any way to really know the proper setting for that field.

For StreamBufferConstraints fields whose names end in "recommended", the default_settings will have the corresponding setting field set to that recommended value.

The stream processor promises that these default settings as-is (except for buffer_lifetime_ordinal) are guaranteed to satisfy the constraints indicated by the other fields of StreamBufferConstraints. While client-side checking that these settings are within the constraints is likely unnecessary in the client, the client should still check that these values are within client-side reasonable-ness bounds before using these values, to avoid letting a stream processor server cause problems for the client.

This structure will always have single_buffer_mode false. See single_buffer_mode_allowed for whether single_buffer_mode true is allowed.

The client must set the buffer_lifetime_ordinal field to a proper value before sending back to the server. The 0 initially in this field will be rejected by the server if sent back as-is. See comments on StreamBufferSettings.buffer_lifetime_ordinal.

3 per_packet_buffer_bytes_min uint32

If a client is using buffer per packet mode, each buffer must be at least this large. If a client is using single-buffer mode, the one buffer must be at least per_packet_buffer_bytes_min * packet_count_for_server_min in size.

4 per_packet_buffer_bytes_recommended uint32

Must be >= per_packet_buffer_bytes_min. Delivering more than this per input packet might not perform any better, and in fact might perform worse.

5 per_packet_buffer_bytes_max uint32

Must be >= per_packet_buffer_bytes_recommended. Can be 0xFFFFFFFF if there is no explicitly-enforced limit.

6 packet_count_for_server_min uint32

Minimum value for packet_count_for_server.

Re. input and output:

This is a strict min for packet_count_for_server, but a client can use more packets overall if the client wants to, by using a larger value for packet_count_for_server and/or using a non-zero packets_for_client. A good reason to do the former would be if the client might occasionally deliver a few not-very-full buffers - or to have a few extra packets within which to satisfy stream_input_bytes_min. A good reason to do the latter would be if a client needs to hold onto some packets for any extra duration.

If a client specifies a larger packet_count_for_server value than packet_count_for_server_min, a server is permitted (but not encouraged) to not make progress until packet_count_for_server are with the server, not merely packet_count_for_server_min.

For decoder input and audio encoder input: The packet_count_for_server_min may or may not contain enough data to allow the stream processor to make progress without copying into an internal side buffer. If there isn't enough data delivered in packet_count_for_server_min packets to permit progress, the stream processor must copy into its own side buffer internally to make progress.

If a client intends to use extra packets for client-side purposes, the client should specify the extra packets in packets_for_client instead of packet_count_for_server, but packet_count_for_server must still be >= packet_count_for_server_min.

7 packet_count_for_server_recommended uint32

This must be at least packet_count_for_server_min and at most packet_count_for_server_recommended_max.

This value is likely to be used as-is by most clients, so if having one additional packet is a big performance win in a large percentage of scenarios, it can be good for the server to include that additional packet in this value.

8 packet_count_for_server_recommended_max uint32

This can be the same as packet_count_for_server_max or can be lower. Values above this value and <= packet_count_for_server_max are not recommended by the stream processor, but should still work given sufficient resources available to both the client and the stream processor.

9 packet_count_for_server_max uint32

This can be 0xFFFFFFFF if there's no stream processor-enforced max, but stream processors are encouraged to set a large but still plausibly-workable max, and clients are encouraged to request a number of packets that isn't excessively large for the client's scenario.

10 packet_count_for_client_min uint32

Normally this would be an implicit 0, but for now we have a min so we can force the total number of packets to be a specific number that we know works for the moment.

11 packet_count_for_client_max uint32

The client must set packet_count_for_client to be <= packet_count_for_client_max.

This value must be at least 1. This can be 0xFFFFFFFF if there's no stream-processor-enforced max. Clients are encouraged to request a number of packets that isn't excessively large for the client's scenario.

12 single_buffer_mode_allowed bool

Setting single_buffer_mode_allowed to false lets a stream processor that isn't required to support single-buffer mode for a given input or output decline to support single-buffer mode on that input/output.

All encoder output, regardless of audio or video: server support for single-buffer mode is optional.

Audio decoder output: server support for single-buffer mode is required.

Video decoder output: There is little reason for a video decoder to support single-buffer mode on output. Nearly all video decoders will set this to false for their output.

All decoder inputs: Servers must support single-buffer mode on input. The client is responsible for managing the input buffer space such that filling an input packet doesn't over-write any portion of an input packet already in flight to the stream processor.

Encoder inputs: Server support for single-buffer mode on encoder input is optional. This is more often useful for audio than for video.

Support for buffer-per-packet mode is always required on both input and output, regardless of stream processor type.

13 is_physically_contiguous_required bool

If true, the buffers need to be physically contiguous pages, such as those allocated using zx_vmo_create_contiguous().
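The packet-count constraints above interlock in a few ways (min ≤ recommended ≤ recommended_max ≤ max for the server counts; the client count must fit its own min/max and be at least 1). A minimal sketch of those checks, using the field names from StreamBufferConstraints and StreamBufferSettings — the validate_packet_counts helper itself is hypothetical, not part of the API:

```python
# Illustrative check of the packet-count rules described above. Field names
# come from StreamBufferConstraints/StreamBufferSettings; this helper is
# hypothetical.

def validate_packet_counts(constraints: dict, settings: dict) -> list:
    """Return a list of rule violations; empty means the settings satisfy
    the packet-count constraints."""
    errors = []
    c, s = constraints, settings
    if s["packet_count_for_server"] < c["packet_count_for_server_min"]:
        errors.append("packet_count_for_server below packet_count_for_server_min")
    if s["packet_count_for_server"] > c["packet_count_for_server_max"]:
        errors.append("packet_count_for_server above packet_count_for_server_max")
    # The recommended value must itself sit within [min, recommended_max].
    if not (c["packet_count_for_server_min"]
            <= c["packet_count_for_server_recommended"]
            <= c["packet_count_for_server_recommended_max"]):
        errors.append("packet_count_for_server_recommended out of range")
    # packet_count_for_client must be at least 1, and within [min, max].
    if s["packet_count_for_client"] < max(1, c["packet_count_for_client_min"]):
        errors.append("packet_count_for_client below min (must be at least 1)")
    if s["packet_count_for_client"] > c["packet_count_for_client_max"]:
        errors.append("packet_count_for_client above packet_count_for_client_max")
    return errors

constraints = {
    "packet_count_for_server_min": 2,
    "packet_count_for_server_recommended": 3,
    "packet_count_for_server_recommended_max": 6,
    "packet_count_for_server_max": 0xFFFFFFFF,
    "packet_count_for_client_min": 1,
    "packet_count_for_client_max": 0xFFFFFFFF,
}
# Using the recommended server count and a small client count, as suggested.
settings = {"packet_count_for_server": 3, "packet_count_for_client": 2}
assert validate_packet_counts(constraints, settings) == []
```

As the text above notes, most clients can simply adopt packet_count_for_server_recommended and size packet_count_for_client to cover their own worst-case camping.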

StreamBufferDataVmo

Defined in fuchsia.media/stream_processor.fidl

StreamBufferDataVmo

Details for a buffer backed by a VMO.

OrdinalNameTypeDescription
1 vmo_handle handle<vmo>

The same VMO can be used by more than one StreamBuffer (only within the same buffer_lifetime_ordinal), but each vmo_handle must be a separate handle.

2 vmo_usable_start uint64

Offset within the VMO of the first usable byte. Must be < the VMO's size in bytes.

3 vmo_usable_size uint64

VMO-relative offset that's one past the last usable byte. This can point one byte beyond the end of the VMO if desired. In other words, this can be equal to the VMO's size, to indicate that the last byte of the VMO is usable (and possibly many bytes before that, depending on vmo_usable_start).

StreamBufferPartialSettings

Defined in fuchsia.media/stream_processor.fidl

This struct is used instead of StreamBufferSettings when sysmem is used to allocate buffers. The settings in StreamBufferSettings that are missing from StreamBufferPartialSettings can be conveyed from the client directly to sysmem.

OrdinalNameTypeDescription
1 buffer_lifetime_ordinal uint64

The containing message starts a new buffer_lifetime_ordinal.

There is a separate buffer_lifetime_ordinal for input vs. output.

Re-use of the same value is not allowed. Values must be odd. Values must only increase (increasing by more than 2 is permitted).

A buffer_lifetime_ordinal lifetime starts at SetInputBufferSettings() or SetOutputBufferSettings(), and ends at the earlier of CloseCurrentStream() with release_input_buffers/release_output_buffers set or SetOutputBufferSettings() with new buffer_lifetime_ordinal in the case of mid-stream output config change.

2 buffer_constraints_version_ordinal uint64

This value indicates which version of constraints the client is/was aware of so far.

For input, this must always be 0 because constraints don't change for input (settings can change, but there's no settings vs current constraints synchronization issue on input).

For output, this allows the server to know when the client is sufficiently caught up before the server will generate any more output.

When there is no active stream, a client is permitted to re-configure buffers again using the same buffer_constraints_version_ordinal.

3 single_buffer_mode bool

If true, there is only one buffer, but still typically more than one packet. If false, the number of packets equals the number of buffers.

While it's possible to set up single_buffer_mode false with each buffer referring to the same underlying VMO, single_buffer_mode true is more efficient for that case since only one mapping is created.

This setting is specified by the client, and influences the constraints delivered from the StreamProcessor to sysmem (whether there's more than one buffer allocated overall or not). For single_buffer_mode true, the StreamProcessor is the one to ask sysmem for a buffer - the client should refrain from doing so or the StreamProcessor will just fail when more than one buffer gets allocated by sysmem.

4 packet_count_for_server uint32

When single_buffer_mode is false:

The actual packet count will be max(packet_count_for_server + packet_count_for_client, sysmem_buffers), where sysmem_buffers is BufferCollectionInfo.buffer_count from sysmem if using sysmem, or 0 if not using sysmem.

When single_buffer_mode is true:

The actual packet count is packet_count_for_server + packet_count_for_client.

If not using sysmem, or if using single_buffer_mode, these fields must be set and consistent with corresponding fields in StreamBufferConstraints.

If single_buffer_mode is false and sysmem is in use, these fields can either both be unset, or both be set and consistent with corresponding fields in StreamBufferConstraints. If not set, the value used for these fields in the "max" expression above is 0, so the actual packet count is sysmem's buffer_count.

5 packet_count_for_client uint32
6 sysmem_token fuchsia.sysmem/BufferCollectionToken

The client end of a BufferCollectionToken channel, which the StreamProcessor will use to deliver constraints to sysmem and learn of buffers allocated by sysmem.

The client guarantees that the token is already known to sysmem (via BufferCollectionToken.Sync(), BufferCollection.Sync(), or BufferCollectionEvents.OnDuplicatedTokensKnownByServer()).
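The "actual packet count" rule above can be condensed into a tiny sketch — the effective_packet_count helper is illustrative, not part of the FIDL API:

```python
# Sketch of the actual-packet-count rule for StreamBufferPartialSettings
# described above; effective_packet_count is a hypothetical helper name.

def effective_packet_count(packet_count_for_server: int,
                           packet_count_for_client: int,
                           single_buffer_mode: bool,
                           sysmem_buffer_count: int = 0) -> int:
    requested = packet_count_for_server + packet_count_for_client
    if single_buffer_mode:
        # Single-buffer mode: the actual packet count is just the sum.
        return requested
    # Buffer-per-packet mode: sysmem may allocate more buffers than the
    # requested packet count; sysmem_buffer_count is 0 when not using sysmem.
    return max(requested, sysmem_buffer_count)

assert effective_packet_count(3, 2, single_buffer_mode=False, sysmem_buffer_count=8) == 8
assert effective_packet_count(3, 2, single_buffer_mode=False, sysmem_buffer_count=4) == 5
assert effective_packet_count(3, 2, single_buffer_mode=True) == 5
```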

StreamBufferSettings

Defined in fuchsia.media/stream_processor.fidl

See relevant corresponding constraints in StreamBufferConstraints. The settings must satisfy the constraints.

The client informs the stream processor of these settings and then separately informs the stream processor of each buffer.

The total packet count is split into two pieces to disambiguate how many packets are allocated for the client to hold onto for whatever reason, vs. how many packets are allocated for the server to hold onto for whatever reason.

Extra packets to provide slack for performance reasons can be in either category, but typically packet_count_for_server_recommended will already include any performance-relevant slack for the server's benefit.

OrdinalNameTypeDescription
1 buffer_lifetime_ordinal uint64

The containing message starts a new buffer_lifetime_ordinal.

There is a separate buffer_lifetime_ordinal for input vs. output.

Re-use of the same value is not allowed. Values must be odd. Values must only increase (increasing by more than 2 is permitted).

A buffer_lifetime_ordinal lifetime starts at SetInputBufferSettings() or SetOutputBufferSettings(), and ends at the earlier of CloseCurrentStream() with release_input_buffers/release_output_buffers set or SetOutputBufferSettings() with new buffer_lifetime_ordinal in the case of mid-stream output config change.

2 buffer_constraints_version_ordinal uint64

This value indicates which version of constraints the client is/was aware of so far.

For input, this must always be 0 because constraints don't change for input (settings can change, but there's no settings vs current constraints synchronization issue on input).

For output, this allows the server to know when the client is sufficiently caught up before the server will generate any more output.

When there is no active stream, a client is permitted to re-configure buffers again using the same buffer_constraints_version_ordinal.

3 packet_count_for_server uint32

How many packets the client is allocating for the stream processor server's use. This must be >= StreamBufferConstraints.packet_count_for_server_min. If constraints change such that this would no longer be true, the server will send an OnOutputConstraints() event.

The stream processor server is allowed to demand that all of packet_count_for_server become free before making further progress, even if packet_count_for_server is > packet_count_for_server_min.

A reasonable value for this is StreamBufferConstraints.packet_count_for_server_recommended.

4 packet_count_for_client uint32

This must be at least 1. The server will close the channel if this is 0.

How many packets the client is allocating for the client's use. The client may hold onto this many packets for arbitrarily-long duration without handing these packets to the stream processor, and despite doing so, the stream processor will continue to make progress and function normally without getting stuck. The client holding onto additional packets transiently is ok, but the client needs to hand those additional packets back to the stream processor eventually if the client wants the stream processor to make further progress.

In addition to this value needing to include at least as many packets as the client ever intends to concurrently camp on indefinitely, any extra slack to benefit client-side performance should also be included here.

A typical value for this could be at least 2, but it depends strongly on client implementation and overall client buffering goals. It is up to the client to determine how many packets are needed in this category by any parts of the overall system that will be holding onto packets for any reason. Those parts of the system should have a documented and possibly queryable defined value to help determine this number. Setting this value lower than it actually needs to be can result in the stream processor not making progress as it sits waiting for packets, with the client unable to recycle any more packets to the stream processor. That situation can be difficult to diagnose, while excessively-large values here are wasteful, so care is warranted to set this value properly.

5 per_packet_buffer_bytes uint32

In buffer-per-packet mode, we require that each buffer have usable bytes equal to per_packet_buffer_bytes. Use of differently-sized low-level buffers is possible, but the size of the portion used via the StreamProcessor interface per StreamBuffer must be the same for all the buffers.

In single-buffer mode, we require the portion of the low-level buffer used via the StreamProcessor interface to be size (packet_count_for_server + packet_count_for_client) * per_packet_buffer_bytes.

6 single_buffer_mode bool

If true there is only one buffer, with index 0, which all packets must explicitly refer to with buffer_index == 0.

While it's possible to set up single_buffer_mode false with each buffer referring to the same underlying VMO, single_buffer_mode true is more efficient for that case since only one mapping is created.
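Two of the StreamBufferSettings rules above are easy to state as code: buffer_lifetime_ordinal values must be odd and strictly increasing (re-use forbidden, skipping ahead allowed), and in single-buffer mode the usable buffer size is (packet_count_for_server + packet_count_for_client) * per_packet_buffer_bytes. A minimal sketch, with hypothetical helper names:

```python
# Sketch of two StreamBufferSettings rules described above; the helper
# names are illustrative, not part of the FIDL API.

def ordinal_transition_ok(previous: int, proposed: int) -> bool:
    # Values must be odd and must only increase; increasing by more than 2
    # is permitted, re-use of the same value is not.
    return proposed % 2 == 1 and proposed > previous

def required_single_buffer_bytes(packet_count_for_server: int,
                                 packet_count_for_client: int,
                                 per_packet_buffer_bytes: int) -> int:
    # Single-buffer mode: one buffer sized for all packets.
    return (packet_count_for_server + packet_count_for_client) * per_packet_buffer_bytes

assert ordinal_transition_ok(1, 3)
assert ordinal_transition_ok(1, 7)       # skipping ahead is fine
assert not ordinal_transition_ok(3, 3)   # re-use not allowed
assert not ordinal_transition_ok(1, 4)   # even values not allowed
assert required_single_buffer_bytes(3, 2, 4096) == 20480
```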

StreamOutputConfig

Defined in fuchsia.media/stream_processor.fidl

OrdinalNameTypeDescription
1 stream_lifetime_ordinal uint64
2 buffer_constraints_action_required bool
3 buffer_constraints StreamBufferConstraints
4 format_details FormatDetails

StreamOutputConstraints

Defined in fuchsia.media/stream_processor.fidl

The stream-processor-controlled output configuration, including both StreamBufferConstraints for the output and FormatDetails for the output.

OrdinalNameTypeDescription
1 stream_lifetime_ordinal uint64

A client which always immediately re-configures output buffers on receipt of OnOutputConstraints() with buffer_constraints_action_required true can safely ignore this field.

A client is permitted to ignore an OnOutputConstraints() message even with buffer_constraints_action_required true if the client knows the server has already been told to discard the remainder of the stream with the same stream_lifetime_ordinal or if this stream_lifetime_ordinal field is set to 0. The server is required to re-send needed output config via OnOutputConstraints() with new stream_lifetime_ordinal and buffer_constraints_action_required true, if the most recent completed server-side output config isn't what the server wants/needs yet for the new stream.

2 buffer_constraints_action_required bool

When the buffer constraints are delivered, they indicate whether action is required. A false value here permits delivery of constraints which are fresher without forcing a buffer reconfiguration. If this is false, a client cannot assume that it's safe to immediately re-configure output buffers. If this is true, the client can assume it's safe to immediately configure output buffers once.

A client is permitted to ignore buffer constraint versions which have buffer_constraints_action_required false. The server is not permitted to change buffer_constraints_action_required from false to true for the same buffer_constraints_version_ordinal.

For each configuration, a client must use new buffers, never buffers that were previously used for anything else, and never buffers previously used for any other StreamProcessor purposes. This rule exists for multiple good reasons, relevant to both mid-stream changes, and changes on stream boundaries. A client should just use new buffers each time.

When this is true, the server has already de-refed as many low-level output buffers as the server can while still performing efficient transition to the new buffers and will de-ref the rest asap. A Sync() is not necessary to achieve non-overlap of resource usage to the extent efficiently permitted by the formats involved.

If buffer_constraints_action_required is true, the server must not deliver more output data until after output buffers have been configured (or re-configured) by the client.

3 buffer_constraints StreamBufferConstraints
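The client-side decision for an OnOutputConstraints() message described above can be sketched as follows. Note that when the message may be ignored, ignoring it is one permitted choice, not a requirement; the should_reconfigure_output helper is hypothetical:

```python
# Sketch of the OnOutputConstraints() handling rules described above;
# should_reconfigure_output is a hypothetical helper, and returning False
# where ignoring is "permitted" is one allowed policy, not the only one.

def should_reconfigure_output(action_required: bool,
                              msg_stream_lifetime_ordinal: int,
                              discarded_stream_ordinals: set) -> bool:
    if not action_required:
        # Constraints with action_required false may be ignored, and it is
        # not safe to assume immediate reconfiguration is OK.
        return False
    # Even with action_required true, the message may be ignored if the
    # stream was already discarded or the ordinal is 0; the server will
    # re-send needed config for a new stream if still required.
    if msg_stream_lifetime_ordinal == 0:
        return False
    if msg_stream_lifetime_ordinal in discarded_stream_ordinals:
        return False
    return True

assert should_reconfigure_output(True, 3, set()) is True
assert should_reconfigure_output(True, 3, {3}) is False
assert should_reconfigure_output(True, 0, set()) is False
assert should_reconfigure_output(False, 3, set()) is False
```

A client that always reconfigures immediately on action_required true can skip the ordinal checks entirely, as the stream_lifetime_ordinal description above notes.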

StreamOutputFormat

Defined in fuchsia.media/stream_processor.fidl

OrdinalNameTypeDescription
1 stream_lifetime_ordinal uint64

A client is permitted to ignore an OnOutputFormat() message even with buffer_constraints_action_required true if the client knows the server has already been told to discard the remainder of the stream with the same stream_lifetime_ordinal or if this stream_lifetime_ordinal field is set to 0. The server is required to re-send needed output config via OnOutputConstraints() with new stream_lifetime_ordinal and buffer_constraints_action_required true, if the most recent completed server-side output config isn't what the server wants/needs yet for the new stream.

The server is required to send an OnOutputFormat() before the first output packet of a stream.

2 format_details FormatDetails

If format_details.format_details_version_ordinal changes, the client should inspect the new format details and determine if it must adjust to the new format. The server guarantees that if the format has changed, then format_details.format_details_version_ordinal will change, but a change to format_details.format_details_version_ordinal does not guarantee that the format details actually changed. Servers are strongly encouraged to not change format_details.format_details_version_ordinal other than before the first output data of a stream unless there is a real mid-stream format change in the stream. Unnecessary mid-stream format changes can cause simpler clients that have no need to handle mid-stream format changes to just close the channel. Format changes before the first output data of a stream are not "mid-stream" in this context - those can be useful for stream format detection / setup reasons.

Note that in case output buffers don't really need to be re-configured despite a format change, a server is encouraged, but not required, to set buffer_constraints_action_required false on the message that conveys the new format details. Simpler servers may just treat the whole output situation as one big thing and demand output buffer reconfiguration on any change in the output situation.

A client may or may not actually handle a new buffer_constraints with buffer_constraints_action_required false, but the client should always track the latest format_details.

An updated format_details is ordered with respect to emitted output packets, and applies to all subsequent packets until the next format_details with a larger version_ordinal.

A simple client that does not intend to handle mid-stream format changes should still keep track of the most recently received format_details until the first output packet arrives, then lock down the format details, handle those format details, and verify that any format_details.format_details_version_ordinal received from the server is the same as the locked-down format_details, until the client is done with the stream. Even such a simple client must tolerate format_details.format_details_version_ordinal changing multiple times before the start of data output from a stream (any stream - the first stream or a subsequent stream).

This allows a stream processor to request that output buffers and output format be configured speculatively, and for the output config to be optionally adjusted by the server before the first data output from a stream, once the server knows everything it needs to know to fully establish the initial output format details. This simplifies stream processor server implementation, and allows a clever stream processor server to guess its output config for lower latency before any input data, while still being able to fix the output config (including format details) if the guess turns out to be wrong.

Whether the format_details.format_details_version_ordinal will actually change mid-stream is a per-stream-processor and per-stream detail that is not specified in comments here, and in most cases also depends on whether the format changes on the input to the stream processor. Probably it'll be fairly common for a client to use a format which technically supports mid-stream format change, but the client happens to know that none of the streams the client intends to process will ever have a mid-stream format change.
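The simple-client strategy described above — track the latest format_details until the first output packet, then lock it down and treat any later ordinal change as unsupported — can be sketched as a small state machine. The class is illustrative, not part of the FIDL API:

```python
# Sketch of the simple-client format tracking strategy described above.
# SimpleFormatTracker is a hypothetical illustration, not a FIDL binding.

class SimpleFormatTracker:
    def __init__(self):
        self.latest_ordinal = None
        self.locked_ordinal = None

    def on_output_format(self, version_ordinal: int) -> bool:
        """Returns False if the client should give up (mid-stream change)."""
        if self.locked_ordinal is None:
            # Before the first output data, the ordinal may change
            # multiple times; just track the most recent one.
            self.latest_ordinal = version_ordinal
            return True
        # After lock-down, only the locked ordinal is acceptable.
        return version_ordinal == self.locked_ordinal

    def on_first_output_packet(self):
        # First output packet arrived: lock down the format details.
        self.locked_ordinal = self.latest_ordinal

t = SimpleFormatTracker()
assert t.on_output_format(1)
assert t.on_output_format(2)      # pre-data ordinal changes are tolerated
t.on_first_output_packet()        # lock down ordinal 2
assert t.on_output_format(2)
assert not t.on_output_format(3)  # mid-stream change: unsupported by this client
```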

UsageStateDucked

Defined in fuchsia.media/usage_reporter.fidl

A state of audio usages in which a policy decision has been made to temporarily lower the volume of all streams with this usage.

OrdinalNameTypeDescription

UsageStateMuted

Defined in fuchsia.media/usage_reporter.fidl

A state of audio usages in which a policy decision has been made to temporarily mute the volume of all streams with this usage.

OrdinalNameTypeDescription

UsageStateUnadjusted

Defined in fuchsia.media/usage_reporter.fidl

A state of audio usages in which no policy actions are taken on any streams with the usage.

OrdinalNameTypeDescription

UNIONS

AacBitRate

Defined in fuchsia.media/stream_common.fidl

NameTypeDescription
constant AacConstantBitRate
variable AacVariableBitRate

AacTransport

Defined in fuchsia.media/stream_common.fidl

NameTypeDescription
raw AacTransportRaw
latm AacTransportLatm
adts AacTransportAdts

AudioCapturerConfiguration

Defined in fuchsia.media/audio_capturer.fidl

Configuration for an audio Capturer.

NameTypeDescription
loopback LoopbackAudioCapturerConfiguration
input InputAudioCapturerConfiguration

AudioCompressedFormat

Defined in fuchsia.media/stream_common.fidl

NameTypeDescription
aac AudioCompressedFormatAac
sbc AudioCompressedFormatSbc

AudioConsumerError

Defined in fuchsia.media/audio_consumer.fidl

Represents an AudioConsumer error condition.

NameTypeDescription
place_holder Void

AudioFormat

Defined in fuchsia.media/stream_common.fidl

AudioFormat

NameTypeDescription
compressed AudioCompressedFormat
uncompressed AudioUncompressedFormat

AudioUncompressedFormat

Defined in fuchsia.media/stream_common.fidl

AudioUncompressedFormat

NameTypeDescription
pcm PcmFormat

CryptoFormat

Defined in fuchsia.media/stream_common.fidl

CryptoFormat

Crypto (encrypted or decrypted) format details.

NameTypeDescription
encrypted EncryptedFormat
decrypted DecryptedFormat

DomainFormat

Defined in fuchsia.media/stream_common.fidl

DomainFormat

NameTypeDescription
audio AudioFormat
video VideoFormat
crypto CryptoFormat

EncoderSettings

Defined in fuchsia.media/stream_common.fidl

Settings for encoders that tell them how to encode raw formats.

NameTypeDescription
sbc SbcEncoderSettings
aac AacEncoderSettings
h264 H264EncoderSettings
hevc HevcEncoderSettings

MediumSpecificStreamType

Defined in fuchsia.media/stream_type.fidl

A union of all medium-specific stream type structs.

NameTypeDescription
audio AudioStreamType
video VideoStreamType
text TextStreamType
subpicture SubpictureStreamType

StreamBufferData

Defined in fuchsia.media/stream_processor.fidl

For the moment, a VMO per buffer is the only type of buffer.

This is extremely likely to change significantly when adding gralloc stuff, but the idea with this union is to have a struct per logical way of storing the data. Any multi-domain storage within a gralloc buffer will likely be only indirectly represented here.

NameTypeDescription
vmo StreamBufferDataVmo

Usage

Defined in fuchsia.media/audio_core.fidl

NameTypeDescription
render_usage AudioRenderUsage
capture_usage AudioCaptureUsage

UsageState

Defined in fuchsia.media/usage_reporter.fidl

The state of audio policy enforcement on a stream or set of streams.

NameTypeDescription
unadjusted UsageStateUnadjusted
ducked UsageStateDucked
muted UsageStateMuted

Value

Defined in fuchsia.media/stream_common.fidl

Value

Generic "value" for use within generic "Parameter" struct.

NameTypeDescription
bool_value bool
uint64_value uint64
int64_value int64
string_value string
bytes_value vector<uint8>

VideoCompressedFormat

Defined in fuchsia.media/stream_common.fidl

VideoCompressedFormat

Compressed video format details.

NameTypeDescription
temp_field_todo_remove uint32

VideoFormat

Defined in fuchsia.media/stream_common.fidl

VideoFormat

Video (compressed or uncompressed) format details. In this context, "uncompressed" can include block-based image compression formats that still permit fairly fast random access to image data.

NameTypeDescription
compressed VideoCompressedFormat
uncompressed VideoUncompressedFormat

BITS

AudioConsumerStartFlags

Type: uint32

Defined in fuchsia.media/audio_consumer.fidl

Flags passed to AudioConsumer.Start.

NameValueDescription
LOW_LATENCY 1

Indicates that latency should be kept as low as possible.

SUPPLY_DRIVEN 2

Indicates that the timing of packet delivery is determined by an external process rather than being demand-based. When this flag is set, the service should expect underflow or overflow due to a mismatch between packet arrival rate and presentation rate. When this flag is not set, packets arrive on demand.
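Since AudioConsumerStartFlags is a bits type, a client combines the values above with bitwise OR when calling AudioConsumer.Start. The flag names match the table; the Python constants here are just an illustration of the bit arithmetic:

```python
# Illustration of combining the AudioConsumerStartFlags bit values listed
# above; these Python constants mirror the table, not a generated binding.

LOW_LATENCY = 1
SUPPLY_DRIVEN = 2

flags = LOW_LATENCY | SUPPLY_DRIVEN
assert flags == 3
assert flags & LOW_LATENCY       # low-latency requested
assert flags & SUPPLY_DRIVEN     # packet delivery is externally timed
```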

AudioGainInfoFlags

Type: uint32

Defined in fuchsia.media/audio_device_enumerator.fidl

NameValueDescription
MUTE 1
AGC_SUPPORTED 2
AGC_ENABLED 4

AudioGainValidFlags

Type: uint32

Defined in fuchsia.media/audio_device_enumerator.fidl

NameValueDescription
GAIN_VALID 1
MUTE_VALID 2
AGC_VALID 4

CONSTANTS

NameValueTypeDescription
AUDIO_ENCODING_AAC fuchsia.media.aac String

Audio encodings.

AUDIO_ENCODING_AACLATM fuchsia.media.aaclatm String
AUDIO_ENCODING_AMRNB fuchsia.media.amrnb String
AUDIO_ENCODING_AMRWB fuchsia.media.amrwb String
AUDIO_ENCODING_APTX fuchsia.media.aptx String
AUDIO_ENCODING_FLAC fuchsia.media.flac String
AUDIO_ENCODING_GSMMS fuchsia.media.gsmms String
AUDIO_ENCODING_LPCM fuchsia.media.lpcm String
AUDIO_ENCODING_MP3 fuchsia.media.mp3 String
AUDIO_ENCODING_OPUS fuchsia.media.opus String
AUDIO_ENCODING_PCMALAW fuchsia.media.pcmalaw String
AUDIO_ENCODING_PCMMULAW fuchsia.media.pcmmulaw String
AUDIO_ENCODING_SBC fuchsia.media.sbc String
AUDIO_ENCODING_VORBIS fuchsia.media.vorbis String
CAPTURE_USAGE_COUNT 4 uint8
ENCRYPTION_SCHEME_CBC1 cbc1 String
ENCRYPTION_SCHEME_CBCS cbcs String
ENCRYPTION_SCHEME_CENC cenc String
ENCRYPTION_SCHEME_CENS cens String
ENCRYPTION_SCHEME_UNENCRYPTED unencrypted String
MAX_ENCRYPTION_SCHEME_SIZE 100 uint32
MAX_FRAMES_PER_RENDERER_PACKET 262143 int64

The maximum number of frames that may be contained within a single StreamPacket.

MAX_INIT_VECTOR_SIZE 16 uint32
MAX_KEY_ID_SIZE 16 uint32
MAX_PCM_CHANNEL_COUNT 8 uint32
MAX_PCM_FRAMES_PER_SECOND 192000 uint32
METADATA_LABEL_ALBUM fuchsia.media.album String
METADATA_LABEL_ARTIST fuchsia.media.artist String
METADATA_LABEL_COMPOSER fuchsia.media.composer String
METADATA_LABEL_EPISODE fuchsia.media.episode String
METADATA_LABEL_GENRE fuchsia.media.genre String
METADATA_LABEL_PUBLISHER fuchsia.media.publisher String
METADATA_LABEL_RELEASE_DATE fuchsia.media.release_date String
METADATA_LABEL_SEASON fuchsia.media.season String
METADATA_LABEL_STUDIO fuchsia.media.studio String
METADATA_LABEL_SUBTITLE fuchsia.media.subtitle String
METADATA_LABEL_TITLE fuchsia.media.title String
METADATA_LABEL_TRACK_NUMBER fuchsia.media.track_number String
METADATA_SOURCE_TITLE fuchsia.media.source_title String

The title of the source of the media, e.g. a player, streaming service, or website.

MIN_PCM_CHANNEL_COUNT 1 uint32

Permitted ranges for AudioRenderer and AudioCapturer

MIN_PCM_FRAMES_PER_SECOND 1000 uint32
NO_TIMESTAMP 9223372036854775807 int64

When used as a StreamPacket.pts value, indicates that the packet has no specific presentation timestamp. The effective presentation time of such a packet depends on the context in which the StreamPacket is used.

RENDER_USAGE_COUNT 5 uint8
STREAM_PACKET_FLAG_DISCONTINUITY 4 uint32

Indicates a discontinuity in an otherwise continuous-in-time sequence of packets. The precise semantics of this flag depend on the context in which the StreamPacket is used.

STREAM_PACKET_FLAG_DROPPABLE 2 uint32

Indicates that all other packets in the stream can be understood without reference to this packet. This is typically used in compressed streams to identify packets containing frames that may be discarded without affecting other frames.

STREAM_PACKET_FLAG_KEY_FRAME 1 uint32

Indicates that the packet can be understood without reference to other packets in the stream. This is typically used in compressed streams to identify packets that contain key frames.

VIDEO_ENCODING_H263 fuchsia.media.h263 String

Video encodings.

VIDEO_ENCODING_H264 fuchsia.media.h264 String
VIDEO_ENCODING_MPEG4 fuchsia.media.mpeg4 String
VIDEO_ENCODING_THEORA fuchsia.media.theora String
VIDEO_ENCODING_UNCOMPRESSED fuchsia.media.uncompressed_video String
VIDEO_ENCODING_VP3 fuchsia.media.vp3 String
VIDEO_ENCODING_VP8 fuchsia.media.vp8 String
VIDEO_ENCODING_VP9 fuchsia.media.vp9 String
kDefaultInputIsSingleBufferMode false bool

For input, this is the default on a fairly arbitrary basis.

kDefaultInputPacketCountForClient 2 uint32

Default values for input and output StreamBufferConstraints.default_settings.packet_count_for_client.

These are defined as "const" in FIDL to avoid all server implementations needing to separately define their own values, and these should be reasonable as default values, but strictly speaking this is not intended to promise that this value won't change from build to build. If a client cares about a specific number, the client should separately define what that number is and ensure that StreamBufferSettings.packet_count_for_client is at least large enough.

In contrast to packet_count_for_client, the packet_count_for_server is much more stream-processor-specific, so this file has no numbers for that - each stream processor will set those as appropriate for the specific stream processor.

These are not "recommended" values, only "default" values, in the sense that the stream processor doesn't really know what the correct setting for these values is for a given client, and if the default is not appropriate for a client, large problems could result such as deadlock. See the comments on packet_count_for_client.

Despite these defaults, every client should ideally care about the packet_count_for_client setting and should ensure that the setting is at least large enough to cover the number of packets the client might ever need to camp on for any non-transient duration concurrently. The defaults are only intended to be plausible for some clients, not all clients.

One for the client to be filling and one in transit.

kDefaultOutputIsSingleBufferMode false bool
kDefaultOutputPacketCountForClient 2 uint32

One for the client to be rendering, and one in transit.

kMaxOobBytesSize 8192 uint64

TYPE ALIASES

NameValueDescription
CompressionType string[256]

An identifier for compression types.

EncryptionScheme string[MAX_ENCRYPTION_SCHEME_SIZE]
InitVector vector[MAX_INIT_VECTOR_SIZE]
KeyId vector[MAX_KEY_ID_SIZE]