fuchsia.media

PROTOCOLS

Audio

Defined in fuchsia.media/audio.fidl

CreateAudioRenderer

Request

Name | Type
audio_renderer_request | request<AudioRenderer>

CreateAudioCapturer

Create an AudioCapturer which either captures from the current default audio input device, or loops back from the current default audio output device, based on the value passed for the loopback flag.

Request

Name | Type
audio_capturer_request | request<AudioCapturer>
loopback | bool

SetSystemMute

Request

Name | Type
muted | bool

SetSystemGain

Request

Name | Type
gain_db | float32

SystemGainMuteChanged

Response

Name | Type
gain_db | float32
muted | bool

SetRoutingPolicy

Request

Name | Type
policy | AudioOutputRoutingPolicy

AudioCapturer

Defined in fuchsia.media/audio_capturer.fidl

AudioCapturer

An AudioCapturer is an interface returned by fuchsia.media.Audio's CreateAudioCapturer method, which may be used by clients to capture audio from either the current default audio input device or the current default audio output device, depending on the flags passed during creation.

** Format support **

See (Get|Set)StreamType below. By default, the captured stream type will be initially determined by the currently configured stream type of the source that the AudioCapturer was bound to at creation time. Users may either fetch this type using GetStreamType, or they may choose to have the media resampled or converted to a type of their choosing by calling SetStreamType. Note: the stream type may only be set while the system is not running, meaning that there are no pending capture regions (specified using CaptureAt) and that the system is not currently running in 'async' capture mode.

** Buffers and memory management **

Audio data is captured into a shared memory buffer (a VMO) supplied by the user to the AudioCapturer during the AddPayloadBuffer call. Please note the following requirements related to the management of the payload buffer.

++ The payload buffer must be supplied before any capture operation may start. Any attempt to start capture (via either CaptureAt or StartAsyncCapture) before a payload buffer has been established is an error.
++ The payload buffer may not be changed while there are any capture operations pending.
++ The stream type may not be changed after the payload buffer has been set.
++ The payload buffer must be an integral number of audio frame sizes (in bytes).
++ When running in 'async' mode (see below), the payload buffer must be at least as large as twice the frames_per_packet size specified during StartAsyncCapture.
++ The handle to the payload buffer supplied by the user must be readable, writable, and mappable.
++ Users should always treat the payload buffer as read-only.
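The buffer rules above lend themselves to a pre-flight check before calling AddPayloadBuffer. The sketch below is purely illustrative (validate_payload_buffer and the 16-bit stereo framing are assumptions, not part of the FIDL API):

```python
# Hypothetical pre-flight check for an AudioCapturer payload buffer.
# It restates two documented rules: the buffer must hold a whole
# number of frames, and in 'async' mode it must hold at least two
# packets' worth of frames.

def validate_payload_buffer(buffer_bytes, bytes_per_frame,
                            async_mode=False, frames_per_packet=0):
    if buffer_bytes % bytes_per_frame != 0:
        raise ValueError("buffer must be an integral number of frame sizes")
    frames_in_buffer = buffer_bytes // bytes_per_frame
    if async_mode and frames_in_buffer < 2 * frames_per_packet:
        raise ValueError("async mode needs room for at least two packets")
    return frames_in_buffer

# 16-bit stereo LPCM: 4 bytes per frame, one second of audio at 48 kHz.
frames = validate_payload_buffer(48_000 * 4, 4,
                                 async_mode=True, frames_per_packet=480)
print(frames)  # 48000
```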

** Synchronous vs. Asynchronous capture mode **

The AudioCapturer interface can be used in one of two mutually exclusive modes: Synchronous and Asynchronous. A description of each mode and their tradeoffs is given below.

** Synchronous mode **

By default, AudioCapturer instances run in 'sync' mode. They will only capture data when a user supplies at least one region to capture into using the CaptureAt method. Regions supplied in this way will be filled in the order that they are received and returned to the client as StreamPackets via the return value of the CaptureAt method. If an AudioCapturer instance has data to capture, but no place to put it (because there are no more pending regions to fill), the next produced StreamPacket will indicate that there has been an overflow by having its Discontinuity flag set. Synchronous mode may not be used in conjunction with Asynchronous mode. It is an error to attempt to call StartAsyncCapture while there are still regions supplied by CaptureAt waiting to be filled.

If a user has supplied regions to be filled by the AudioCapturer instance in the past, but wishes to reclaim those regions, they may do so using the DiscardAllPackets method. Calling the DiscardAllPackets method will cause all pending regions to be returned, but with NO_TIMESTAMP as their StreamPacket's PTS. See "Timing and Overflows", below, for a discussion of timestamps and discontinuity flags. After a DiscardAllPackets operation, an OnEndOfStream event will be produced. While an AudioCapturer will never overwrite any region of the payload buffer after a completed region is returned, it may overwrite the unfilled portions of a partially filled buffer which has been returned as a result of a DiscardAllPackets operation.

** Asynchronous mode **

While running in 'async' mode, clients do not need to explicitly supply shared buffer regions to be filled by the AudioCapturer instance. Instead, a client enters into 'async' mode by calling StartAsyncCapture and supplying a callback interface and the number of frames to capture per-callback. Once running in async mode, the AudioCapturer instance will identify which payload buffer regions to capture into, capture the specified number of frames, then deliver those frames as StreamPackets using the OnPacketCapture FIDL event. Users may stop capturing and return the AudioCapturer instance to 'sync' mode using the StopAsyncCapture method.

It is considered an error to attempt any of the following operations.

++ To attempt to enter 'async' capture mode when no payload buffer has been established.
++ To specify a number of frames to capture per payload which does not permit at least two contiguous capture payloads to exist in the established shared payload buffer simultaneously.
++ To send a region to capture into using the CaptureAt method while the AudioCapturer instance is running in 'async' mode.
++ To attempt to call DiscardAllPackets while the AudioCapturer instance is running in 'async' mode.
++ To attempt to re-start 'async' mode capturing without having first stopped.
++ To attempt any operation except for SetGain while in the process of stopping.

** Synchronizing with a StopAsyncCapture operation **

Stopping asynchronous capture mode and returning to synchronous capture mode is an operation which takes time. Aside from SetGain, users may not call any other methods on the AudioCapturer interface after calling StopAsyncCapture (including calling StopAsyncCapture again) until after the stop operation has completed. Because of this, it is important for users to be able to synchronize with the stop operation. Two mechanisms are provided for doing so.

The first is to use the StopAsyncCaptureWithCallback method. When the user's callback has been called, they can be certain that the stop operation is complete and that the AudioCapturer instance has returned to synchronous operation mode.

The second way to determine that a stop operation has completed is to use the flags on the packets which get delivered via the user-supplied AudioCapturerCallback interface after calling StopAsyncCapture. When asked to stop, any partially filled packet will be returned to the user, and the final packet returned will always have the end-of-stream flag (kFlagsEos) set on it to indicate that this is the final frame in the sequence. If there is no partially filled packet to return, the AudioCapturer will synthesize an empty packet with no timestamp, and offset/length set to zero, in order to deliver a packet with the end-of-stream flag set on it. Once users have seen the end-of-stream flag after calling stop, the AudioCapturer has finished the stop operation and returned to synchronous operating mode.

** Timing and Overflows **

All media packets produced by an AudioCapturer instance will have their PTS field filled out with the capture time of the audio expressed as a timestamp given by the CLOCK_MONOTONIC timeline. Note: this timestamp is actually a capture timestamp, not a presentation timestamp (it is more of a CTS than a PTS) and is meant to represent the underlying system's best estimate of the capture time of the first frame of audio, including all outboard and hardware introduced buffering delay. As a result, all timestamps produced by an AudioCapturer should be expected to be in the past relative to 'now' on the CLOCK_MONOTONIC timeline.

The one exception to the "everything has an explicit timestamp" rule is when discarding submitted regions while operating in synchronous mode. Discarded packets have no data in them, but FIDL demands that all pending method-return-value callbacks be executed. Because of this, the regions will be returned to the user, but their timestamps will be set to NO_TIMESTAMP, and their payload sizes will be set to zero. Any partially filled payload will have a valid timestamp, but a payload size smaller than originally requested. The final discarded payload (if there were any to discard) will be followed by an OnEndOfStream event.

Two StreamPackets delivered by an AudioCapturer instance are 'continuous' if the first frame of audio contained in the second packet was captured exactly one nominal frame time after the final frame of audio in the first packet. If this relationship does not hold, the second StreamPacket will have the 'kFlagDiscontinuous' flag set in its flags field.

Even though explicit timestamps are provided on every StreamPacket produced, users who have very precise timing requirements are encouraged to always reason about time by counting frames delivered since the last discontinuity instead of simply using the raw capture timestamps. This is because the explicit timestamps written on continuous packets may have a small amount of rounding error based on whether or not the units of the capture timeline (CLOCK_MONOTONIC) are divisible by the chosen audio frame rate.
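To see why frame counting beats raw timestamps for precise work, consider the arithmetic directly (a plain-Python illustration, not Fuchsia API):

```python
# Illustrative arithmetic only: at 48 kHz, one frame lasts
# 1e9/48000 ns on the CLOCK_MONOTONIC timeline, which is not a whole
# number of nanoseconds. A PTS rounded to integer nanoseconds can
# therefore differ slightly from the exact frame position, while a
# frame count since the last discontinuity stays exact.

FRAME_RATE = 48_000
NS_PER_SEC = 1_000_000_000

def rounded_pts_ns(frame_index):
    # What a per-packet timestamp might look like after rounding.
    return round(frame_index * NS_PER_SEC / FRAME_RATE)

# Frame 470 sits at exactly 9,791,666.66... ns; the rounded PTS is
# off by a third of a nanosecond, while frame index 470 is exact.
print(rounded_pts_ns(470))  # 9791667
```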

Users should always expect the first StreamPacket produced by an AudioCapturer to have the discontinuous flag set on it (as there is no previous packet to be continuous with). Similarly, the first StreamPacket after a DiscardAllPackets or a Stop/Start cycle will always be discontinuous. After that, there are only two reasons that a StreamPacket will ever be discontinuous:

1) The user is operating in synchronous mode and does not supply regions to be filled quickly enough. If the next continuous frame of data has not been captured by the time it needs to be purged from the source buffers, an overflow has occurred and the AudioCapturer will flag the next captured region as discontinuous.
2) The user is operating in asynchronous mode and some internal error prevents the AudioCapturer instance from capturing the next frame of audio in a continuous fashion. This might be high system load or a hardware error, but in general it is something which should never normally happen. In practice, however, if it does, the next produced packet will be flagged as being discontinuous.

** Synchronous vs. Asynchronous Trade-offs **

The choice of operating in synchronous vs. asynchronous mode is up to the user, and depending on the user's requirements, there are some advantages and disadvantages to each choice.

Synchronous mode requires only a single Zircon channel under the hood and can achieve some small savings because of this. In addition, the user has complete control over the buffer management. Users specify exactly where audio will be captured to and in what order. Because of this, if users do not need to always be capturing, it is simple to stop and restart the capture later (just by ceasing to supply packets, then resuming later on). Payloads do not need to be uniform in size either; clients may specify payloads of whatever granularity is appropriate.

The primary downside of operating in synchronous mode is that two messages must be sent for every packet to be captured: one to inform the AudioCapturer of the region to capture into, and one to inform the user that the packet has been captured. This may end up increasing overhead and potentially complicating client designs.

Asynchronous mode has the advantage of requiring only half as many messages. However, when operating in 'async' mode, AudioCapturer instances have no way of knowing whether a user is processing the StreamPackets being sent in a timely fashion, and no way of automatically detecting an overflow condition. Users of 'async' mode should be careful to use a buffer large enough to ensure that they will be able to process their data before the AudioCapturer is forced to overwrite it.

AddPayloadBuffer

Adds a payload buffer to the current buffer set associated with the connection. A StreamPacket struct references a payload buffer in the current set by ID using the StreamPacket.payload_buffer_id field.

A buffer with ID id must not be in the current set when this method is invoked, otherwise the service will close the connection.

Request

Name | Type
id | uint32
payload_buffer | handle<vmo>

RemovePayloadBuffer

Removes a payload buffer from the current buffer set associated with the connection.

A buffer with ID id must exist in the current set when this method is invoked, otherwise the service will close the connection.

Request

Name | Type
id | uint32

OnPacketProduced

Delivers a packet produced by the service. When the client is done with the payload memory, the client must call ReleasePacket to release the payload memory.

Response

Name | Type
packet | StreamPacket

OnEndOfStream

Indicates that the stream has ended.

Response

Name | Type

ReleasePacket

Releases payload memory associated with a packet previously delivered via OnPacketProduced.

Request

Name | Type
packet | StreamPacket

DiscardAllPackets

Request

Name | Type

Response

Name | Type

DiscardAllPacketsNoReply

Request

Name | Type

SetPcmStreamType

Sets the stream type of the stream to be delivered. Causes the source material to be reformatted/resampled if needed in order to produce the requested stream type. Note that the stream type may not be changed after the payload buffer has been established.

Request

Name | Type
stream_type | AudioStreamType

CaptureAt

Explicitly specify a region of the shared payload buffer for the audio input to capture into.

Request

Name | Type
payload_buffer_id | uint32
payload_offset | uint32
frames | uint32

Response

Name | Type
captured_packet | StreamPacket

StartAsyncCapture

Place the AudioCapturer into 'async' capture mode and begin to produce packets of exactly 'frames_per_packet' number of frames each. The OnPacketProduced event (of StreamSink) will be used to inform the client of produced packets.

Request

Name | Type
frames_per_packet | uint32

StopAsyncCapture

Stop capturing in 'async' capture mode and (optionally) deliver a callback that may be used by the client if explicit synchronization is needed.

Request

Name | Type

Response

Name | Type

StopAsyncCaptureNoReply

Request

Name | Type

BindGainControl

Binds to the gain control for this AudioCapturer.

Request

Name | Type
gain_control_request | request<fuchsia.media.audio/GainControl>

SetUsage

Sets the usage of the capture stream. This may be changed on the fly, but packets in flight may be affected by the new usage. By default the Capturer is created with the FOREGROUND usage.

Request

Name | Type
usage | AudioCaptureUsage

GetStreamType

Gets the currently configured stream type. Note: for an AudioCapturer which was just created and has not yet had its stream type explicitly set, this will retrieve the stream type -- at the time the AudioCapturer was created -- of the source (input or looped-back output) to which the AudioCapturer is bound.

Request

Name | Type

Response

Name | Type
stream_type | StreamType

AudioCore

Defined in fuchsia.media/audio_core.fidl

CreateAudioRenderer

Create an AudioRenderer which outputs audio to the default device.

Request

Name | Type
audio_out_request | request<AudioRenderer>

CreateAudioCapturer

Create an AudioCapturer which either captures from the current default audio input device, or loops back from the current default audio output device, based on the value passed for the loopback flag.

Request

Name | Type
loopback | bool
audio_in_request | request<AudioCapturer>

SetSystemGain

System Gain and Mute

Fuchsia clients control the volume of individual audio streams via the fuchsia.media.audio.GainControl protocol. System Gain and Mute affect all audio output, and are controlled with methods that use the same concepts as GainControl, namely: independent gain and mute, with change notifications. Setting System Mute to true leads to the same outcome as setting System Gain to MUTED_GAIN_DB: all audio output across the system is silenced.

Sets the systemwide gain in decibels. gain_db values are clamped to the range -160 db to 0 db, inclusive. This setting is applied to all audio output devices. Audio input devices are unaffected. Does not affect System Mute.

Request

Name | Type
gain_db | float32

SetSystemMute

Sets/clears the systemwide 'Mute' state for audio output devices. Audio input devices are unaffected. Changes to the System Mute state do not affect the value of System Gain.

Request

Name | Type
muted | bool

SystemGainMuteChanged

Provides current values for systemwide Gain and Mute. When a client connects to AudioCore, the system immediately sends that client a SystemGainMuteChanged event with the current system Gain|Mute settings. Subsequent events will be sent when these Gain|Mute values change.

Response

Name | Type
gain_db | float32
muted | bool

SetRoutingPolicy

Request

Name | Type
policy | AudioOutputRoutingPolicy

EnableDeviceSettings

Request

Name | Type
enabled | bool

SetRenderUsageGain

Set the Usage gain applied to Renderers. By default, the gain for all render usages is set to Unity (0 db).

Request

Name | Type
usage | AudioRenderUsage
gain_db | float32

SetCaptureUsageGain

Set the Usage gain applied to Capturers. By default, the gain for all capture usages is set to Unity (0 db).

Request

Name | Type
usage | AudioCaptureUsage
gain_db | float32

SetInteraction

SetInteraction allows changing how audio_core handles interactions between multiple simultaneously active streams. If streams of the active Usage are processing audio while streams of the affected Usage are also active, the specified Behavior will be applied to the streams of the affected Usage.
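Conceptually this is a lookup table keyed on usage pairs. The sketch below is a hypothetical model, not audio_core's implementation; the usage and behavior names ("COMMUNICATION", "DUCK", etc.) are invented for illustration:

```python
# Hypothetical model of the interaction matrix: a map from
# (active usage, affected usage) to a behavior, consulted whenever
# streams of both usages are active at the same time.

rules = {}

def set_interaction(active, affected, behavior):
    rules[(active, affected)] = behavior

def behavior_for(active, affected):
    # Default when no rule was installed: no interaction ("NONE"),
    # matching the documented default after ResetInteractions.
    return rules.get((active, affected), "NONE")

set_interaction("COMMUNICATION", "MEDIA", "DUCK")
print(behavior_for("COMMUNICATION", "MEDIA"))  # DUCK
print(behavior_for("MEDIA", "BACKGROUND"))     # NONE
```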

Request

Name | Type
active | Usage
affected | Usage
behavior | Behavior

ResetInteractions

Re-initializes the set of rules that are currently governing the interaction of streams in audio_core. The default behavior is 'NONE'.

Request

Name | Type

LoadDefaults

Re-loads the platform policy configuration. Falls back to a default config if the platform does not provide a config.

Request

Name | Type

AudioDeviceEnumerator

Defined in fuchsia.media/audio_device_enumerator.fidl

GetDevices

Obtain the list of currently active audio devices.

Request

Name | Type

Response

Name | Type
devices | vector<AudioDeviceInfo>

OnDeviceAdded

Events sent when devices are added or removed, or when properties of a device change.

Response

Name | Type
device | AudioDeviceInfo

OnDeviceRemoved

Response

Name | Type
device_token | uint64

OnDeviceGainChanged

Response

Name | Type
device_token | uint64
gain_info | AudioGainInfo

OnDefaultDeviceChanged

Response

Name | Type
old_default_token | uint64
new_default_token | uint64

GetDeviceGain

Gain/Mute/AGC control

Note that each of these operations requires a device_token in order to target the proper input/output.

The Get command returns the device_token of the device whose gain is being reported, or ZX_KOID_INVALID in the case that the requested device_token was invalid or the device had been removed from the system before the Get command could be processed.

Set commands which are given an invalid device token are ignored and have no effect on the system. In addition, users do not need to control all of the gain settings for an audio device with each call. Only the settings with a corresponding flag set in the set_flags parameter will be affected. For example, passing SetAudioGainFlag_MuteValid will cause a SetDeviceGain call to care only about the mute setting in the gain_info structure, while passing (SetAudioGainFlag_GainValid | SetAudioGainFlag_MuteValid) will cause both the mute and the gain status to be changed simultaneously.
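The flag-gated update described above can be sketched in a few lines. The flag names mirror the text; their numeric values and the tuple-based gain state here are assumptions for illustration, not the real AudioGainInfo layout:

```python
# Illustrative sketch of SetDeviceGain's set_flags semantics: only
# fields whose "valid" flag is set are copied from the incoming
# gain_info into the stored device state.

SetAudioGainFlag_GainValid = 0x01  # assumed value
SetAudioGainFlag_MuteValid = 0x02  # assumed value

def apply_gain(current, incoming, set_flags):
    gain_db, muted = current
    if set_flags & SetAudioGainFlag_GainValid:
        gain_db = incoming[0]
    if set_flags & SetAudioGainFlag_MuteValid:
        muted = incoming[1]
    return (gain_db, muted)

state = (-12.0, False)
# Only the mute bit is marked valid, so the gain is left untouched:
state = apply_gain(state, (0.0, True), SetAudioGainFlag_MuteValid)
print(state)  # (-12.0, True)
```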

Request

Name | Type
device_token | uint64

Response

Name | Type
device_token | uint64
gain_info | AudioGainInfo

SetDeviceGain

Request

Name | Type
device_token | uint64
gain_info | AudioGainInfo
set_flags | uint32

GetDefaultInputDevice

Default Device

Fetch the device ID of the current default input or output device, or ZX_KOID_INVALID if no such device exists.

Request

Name | Type

Response

Name | Type
device_token | uint64

GetDefaultOutputDevice

Request

Name | Type

Response

Name | Type
device_token | uint64

AddDeviceByChannel

Request

Name | Type
device_channel | handle<channel>
device_name | string[256]
is_input | bool

AudioRenderer

Defined in fuchsia.media/audio_renderer.fidl

AudioRenderers can be in one of two states at any point in time, either the configurable state or the operational state. A renderer is considered to be operational any time it has packets queued and waiting to be rendered; otherwise it is considered to be in the configurable state. When an AudioRenderer has entered the operational state of its life, any attempt to call a config method in the interface is considered to be illegal and will result in termination of the interface's connection to the audio service.

If an AudioRenderer must be reconfigured, it is best practice to always call DiscardAllPackets on the AudioRenderer, before starting to reconfigure it.

AddPayloadBuffer

Adds a payload buffer to the current buffer set associated with the connection. A StreamPacket struct references a payload buffer in the current set by ID using the StreamPacket.payload_buffer_id field.

A buffer with ID id must not be in the current set when this method is invoked, otherwise the service will close the connection.

Request

Name | Type
id | uint32
payload_buffer | handle<vmo>

RemovePayloadBuffer

Removes a payload buffer from the current buffer set associated with the connection.

A buffer with ID id must exist in the current set when this method is invoked, otherwise the service will close the connection.

Request

Name | Type
id | uint32

SendPacket

Sends a packet to the service. The response is sent when the service is done with the associated payload memory.

packet must be valid for the current buffer set, otherwise the service will close the connection.

Request

Name | Type
packet | StreamPacket

Response

Name | Type

SendPacketNoReply

Sends a packet to the service. This interface doesn't define how the client knows when the sink is done with the associated payload memory. The inheriting interface must define that.

packet must be valid for the current buffer set, otherwise the service will close the connection.

Request

Name | Type
packet | StreamPacket

EndOfStream

Indicates the stream has ended. The precise semantics of this method are determined by the inheriting interface.

Request

Name | Type

DiscardAllPackets

Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released. The response is sent after all packets have been released.

Request

Name | Type

Response

Name | Type

DiscardAllPacketsNoReply

Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released.

Request

Name | Type

SetPcmStreamType

Sets the type of the stream to be delivered by the client. Using this method implies that the stream encoding is AUDIO_ENCODING_LPCM.

Request

Name | Type
type | AudioStreamType

SetStreamType

Sets the stream type to be delivered by the client. This method is used for compressed pass-through. The media_specific field must be of type audio. NOTE: Not currently implemented.

Request

Name | Type
type | StreamType

SetPtsUnits

Sets the units used by the presentation (media) timeline. By default, PTS units are nanoseconds (as if this were called with values of 1e9 and 1).

Request

Name | Type
tick_per_second_numerator | uint32
tick_per_second_denominator | uint32

SetPtsContinuityThreshold

Sets the maximum threshold (in frames) between an explicit PTS (user-provided) and an expected PTS (determined using interpolation). Beyond this threshold, a stream is no longer considered 'continuous' by the renderer.

Defaults to RoundUp((AudioFPS/PTSTicksPerSec) / 2.0) / AudioFPS. Most users should not need to change this value from its default.
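The default can be computed directly from that formula; a plain-Python sketch (the function name is illustrative, not part of the API):

```python
# The documented default: half of one PTS tick's worth of frames,
# rounded up, re-expressed in seconds. For 48 kHz audio with
# millisecond PTS ticks this gives 24 frames, i.e. 500 microseconds.

import math

def default_pts_continuity_threshold(audio_fps, pts_ticks_per_sec):
    frames_per_tick = audio_fps / pts_ticks_per_sec
    return math.ceil(frames_per_tick / 2.0) / audio_fps  # seconds

print(default_pts_continuity_threshold(48_000, 1_000))  # 0.0005
```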

Example: A user is playing back 48KHz audio from a container, which also contains video and needs to be synchronized with the audio. The timestamps are provided explicitly per packet by the container, and expressed in mSec units. This means that a single tick of the media timeline (1 mSec) represents exactly 48 frames of audio. The application in this scenario delivers packets of audio to the AudioRenderer, each with exactly 470 frames of audio, and each with an explicit timestamp set to the best possible representation of the presentation time (given this media clock's resolution). So, starting from zero, the timestamps would be:

[ 0, 10, 20, 29, 39, 49, 59, 69, 78, 88, ... ]

In this example, attempting to use the presentation time to compute the starting frame number of the audio in the packet would be wrong the majority of the time. The first timestamp is correct (by definition), but it will be 24 packets before the timestamps and frame numbers come back into alignment (the 24th packet would start with the 11280th audio frame and have a PTS of exactly 235).
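The worked example above can be reproduced numerically. This is a plain-Python illustration of the arithmetic, not Fuchsia API code:

```python
# 48 kHz audio, millisecond PTS ticks, packets of exactly 470 frames,
# each stamped with the closest representable millisecond timestamp.

FRAME_RATE = 48_000        # audio frames per second
FRAMES_PER_PACKET = 470
TICKS_PER_SEC = 1_000      # PTS expressed in milliseconds

def packet_pts(n):
    first_frame = n * FRAMES_PER_PACKET
    return round(first_frame * TICKS_PER_SEC / FRAME_RATE)

print([packet_pts(n) for n in range(10)])
# [0, 10, 20, 29, 39, 49, 59, 69, 78, 88]

# Packet 24 starts at frame 11280, and 11280 frames is exactly 235 ms,
# so the timestamps and frame numbers realign there:
print(packet_pts(24), 24 * FRAMES_PER_PACKET)  # 235 11280
```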

One way to fix this situation is to set the PTS continuity threshold (henceforth, CT) for the stream to be equal to 1/2 of the time taken by the number of frames contained within a single tick of the media clock, rounded up. In this scenario, that would be 24.0 frames of audio, or 500 uSec. Any packets whose expected PTS was within +/-CT frames of the explicitly provided PTS would be considered to be a continuation of the previous frame of audio.

Other possible uses: Users who are scheduling audio explicitly, relative to a clock which has not been configured as the reference clock, can use this value to control the maximum acceptable synchronization error before a discontinuity is introduced. E.g., if a user is scheduling audio based on a recovered common media clock, and has not published that clock as the reference clock, and they set the CT to 20mSec, then up to 20mSec of drift error can accumulate before the AudioRenderer deliberately inserts a presentation discontinuity to account for the error.

Users who need to deal with a container where their timestamps may be even less correct than +/- 1/2 of a PTS tick may set this value to something larger. This should be the maximum level of inaccuracy present in the container timestamps, if known. Failing that, it could be set to the maximum tolerable level of drift error before absolute timestamps are explicitly obeyed. Finally, a user could set this number to a very large value (86400.0 seconds, for example) to effectively cause all timestamps to be ignored after the first, thus treating all audio as continuous with previously delivered packets. Conversely, users who wish to always explicitly schedule their audio packets exactly may specify a CT of 0.

Request

Name | Type
threshold_seconds | float32

SetReferenceClock

Set the reference clock used to control playback rate.

Request

Name | Type
reference_clock | handle<handle>

Play

Immediately put the AudioRenderer into a playing state. Start the advance of the media timeline, using specific values provided by the caller (or default values if not specified). In an optional callback, return the timestamp values ultimately used -- these set the ongoing relationship between the media and reference timelines (i.e., how to translate between the domain of presentation timestamps, and the realm of local system time).

Local system time is specified in units of nanoseconds; media_time is specified in the units defined by the user in the SetPtsUnits function, or nanoseconds if SetPtsUnits is not called.

The act of placing an AudioRenderer into the playback state establishes a relationship between 1) the user-defined media (or presentation) timeline for this particular AudioRenderer, and 2) the real-world system reference timeline. To communicate how to translate between timelines, the Play() callback provides an equivalent timestamp in each time domain. The first value ('reference_time') is given in terms of the local system clock; the second value ('media_time') is what media instant exactly corresponds to that local time. Restated, the frame at 'media_time' in the audio stream should be presented at system local time 'reference_time'.

With the corresponding reference_time and media_time values, a user can translate arbitrary time values from one timeline into the other. After calling SetPtsUnits(pts_per_sec_numerator, pts_per_sec_denominator) and given the 'ref_start' and 'media_start' values from Play(), then for any 'ref_time':

media_time = ( (ref_time - ref_start) / 1e9 * (pts_per_sec_numerator / pts_per_sec_denominator) ) + media_start

Conversely, for any presentation timestamp 'media_time':

ref_time = ( (media_time - media_start) * (pts_per_sec_denominator / pts_per_sec_numerator) * 1e9 ) + ref_start
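The two formulas above can be written out and round-tripped directly (plain Python, with illustrative names; the values come from SetPtsUnits and the Play() return):

```python
# Timeline translation, exactly as given in the two formulas above.
# ref_time/ref_start are nanoseconds on the reference timeline;
# media_time/media_start are PTS ticks (num/den ticks per second).

def ref_to_media(ref_time, ref_start, media_start, num, den):
    return (ref_time - ref_start) / 1e9 * (num / den) + media_start

def media_to_ref(media_time, ref_start, media_start, num, den):
    return (media_time - media_start) * (den / num) * 1e9 + ref_start

# Millisecond PTS units (1000 ticks per second): one second of
# reference-clock time advances media time by 1000 ticks.
m = ref_to_media(2_000_000_000, 1_000_000_000, 0, 1000, 1)
print(m)                                          # 1000.0
print(media_to_ref(m, 1_000_000_000, 0, 1000, 1)) # 2000000000.0
```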

Users, depending on their use case, may optionally choose not to specify one or both of these timestamps. A timestamp may be omitted by supplying the special value 'NO_TIMESTAMP'. The AudioRenderer automatically deduces any omitted timestamp value using the following rules:

Reference Time If 'reference_time' is omitted, the AudioRenderer will select a "safe" reference time to begin presentation, based on the minimum lead times for the output devices that are currently bound to this AudioRenderer. For example, if an AudioRenderer is bound to an internal audio output requiring at least 3 mSec of lead time, and an HDMI output requiring at least 75 mSec of lead time, the AudioRenderer might (if 'reference_time' is omitted) select a reference time 80 mSec from now.

Media Time If media_time is omitted, the AudioRenderer will select one of two values.
- If the AudioRenderer is resuming from the paused state, and packets have not been discarded since being paused, then the AudioRenderer will use a media_time corresponding to the instant at which the presentation became paused.
- If the AudioRenderer is being placed into a playing state for the first time following startup or a 'discard packets' operation, the initial media_time will be set to the PTS of the first payload in the pending packet queue. If the pending queue is empty, initial media_time will be set to zero.
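The deduction rules can be sketched as a small decision procedure. Everything here is hypothetical scaffolding (in the real API NO_TIMESTAMP is a specific int64 sentinel, not a Python object):

```python
# Hedged sketch of the documented media_time deduction, in priority
# order: an explicit value wins; otherwise the paused instant (when
# resuming with no intervening discard); otherwise the first pending
# packet's PTS; otherwise zero.

NO_TIMESTAMP = object()  # stand-in sentinel for the sketch

def deduce_media_time(requested, paused_media_time, pending_pts_queue):
    if requested is not NO_TIMESTAMP:
        return requested
    if paused_media_time is not None:   # resuming from pause
        return paused_media_time
    if pending_pts_queue:               # first play / after discard
        return pending_pts_queue[0]
    return 0                            # empty pending queue

print(deduce_media_time(NO_TIMESTAMP, None, [120, 130]))  # 120
print(deduce_media_time(NO_TIMESTAMP, 500, [120]))        # 500
```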

Return Value When requested, the AudioRenderer will return the 'reference_time' and 'media_time' which were selected and used (whether they were explicitly specified or not) in the return value of the play call.

Examples

1. A user has queued some audio using SendPacket and simply wishes them to start playing as soon as possible. The user may call Play without providing explicit timestamps -- Play(NO_TIMESTAMP, NO_TIMESTAMP).

2. A user has queued some audio using SendPacket, and wishes to start playback at a specified 'reference_time', in sync with some other media stream, either initially or after discarding packets. The user would call Play(reference_time, NO_TIMESTAMP).

3. A user has queued some audio using SendPacket. The first of these packets has a PTS of zero, and the user wishes playback to begin as soon as possible, but wishes to skip all of the audio content between PTS 0 and PTS 'media_time'. The user would call Play(NO_TIMESTAMP, media_time).

4. A user has queued some audio using SendPacket and wants to present this media in sync with another player on a different device. The coordinator of the group of distributed players sends an explicit message to each player telling them to begin presentation of audio at PTS 'media_time', at the time (based on the group's shared reference clock) 'reference_time'. Here the user would call Play(reference_time, media_time).

Request

Name | Type
reference_time | int64
media_time | int64

Response

Name | Type
reference_time | int64
media_time | int64

PlayNoReply

Request

Name | Type
reference_time | int64
media_time | int64

Pause

Immediately put the AudioRenderer into the paused state and then report the relationship between the media and reference timelines which was established (if requested).

Request

NameType

Response

NameType
reference_time int64
media_time int64

PauseNoReply

Request

NameType

EnableMinLeadTimeEvents

Enable or disable notifications about changes to the minimum clock lead time (in nanoseconds) for this AudioRenderer. Calling this method with 'enabled' set to true will trigger an immediate OnMinLeadTimeChanged event with the current minimum lead time for the AudioRenderer. If the value changes, an OnMinLeadTimeChanged event will be raised with the new value. This behavior will continue until the user calls EnableMinLeadTimeEvents(false).

The minimum clock lead time is the amount of time ahead of the reference clock's understanding of "now" that packets need to arrive (relative to the playback clock transformation) in order for the mixer to be able to mix the packet. For example:

++ Let the PTS of packet X be P(X) ++ Let the function which transforms PTS -> RefClock be R(p) (this function is determined by the call to Play(...)) ++ Let the minimum lead time be MLT

If R(P(X)) < RefClock.Now() + MLT, then the packet is late, and some (or all) of the packet's payload will need to be skipped in order to present the packet at the scheduled time.
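The lateness rule above is a single comparison once the PTS-to-reference-clock mapping R(p) is known. A minimal sketch (the 1:1 mapping used below is a hypothetical example, as if Play had established normal-rate playback):

```python
def packet_is_late(pts, ref_now, min_lead_time, pts_to_ref):
    """Apply the rule above: a packet is late if its mapped presentation
    time falls inside the minimum lead-time window.

    pts_to_ref -- the function R(p) mapping PTS to reference-clock time,
                  established by the call to Play.
    """
    return pts_to_ref(pts) < ref_now + min_lead_time

# Hypothetical 1:1 mapping starting at reference time 1_000_000 ns,
# with a minimum lead time of 30 ms:
R = lambda p: 1_000_000 + p
print(packet_is_late(pts=5_000_000, ref_now=2_000_000,
                     min_lead_time=30_000_000, pts_to_ref=R))  # → True
```

A late packet is not dropped wholesale; only the portion that can no longer be presented on time is skipped.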

Request

NameType
enabled bool

OnMinLeadTimeChanged

Response

NameType
min_lead_time_nsec int64

GetMinLeadTime

Request

NameType

Response

NameType
min_lead_time_nsec int64

BindGainControl

Binds to the gain control for this AudioRenderer.

Request

NameType
gain_control_request request<fuchsia.media.audio/GainControl>

SetUsage

Sets the usage of the render stream. This may be changed on the fly, but packets in flight may be affected by the new usage.

Request

NameType
usage AudioRenderUsage

StreamBufferSet

Defined in fuchsia.media/stream.fidl

Manages a set of payload buffers for a stream. This interface is typically inherited along with StreamSink or StreamSource to enable the transport of elementary streams between clients and services.

AddPayloadBuffer

Adds a payload buffer to the current buffer set associated with the connection. A StreamPacket struct references a payload buffer in the current set by ID using the StreamPacket.payload_buffer_id field.

A buffer with ID id must not be in the current set when this method is invoked, otherwise the service will close the connection.

Request

NameType
id uint32
payload_buffer handle<vmo>

RemovePayloadBuffer

Removes a payload buffer from the current buffer set associated with the connection.

A buffer with ID id must exist in the current set when this method is invoked, otherwise the service will close the connection.

Request

NameType
id uint32

StreamSink

Defined in fuchsia.media/stream.fidl

Consumes a stream of packets. This interface is typically inherited along with StreamBufferSet to enable the transport of elementary streams from clients to services.

SendPacket

Sends a packet to the service. The response is sent when the service is done with the associated payload memory.

packet must be valid for the current buffer set, otherwise the service will close the connection.

Request

NameType
packet StreamPacket

Response

NameType

SendPacketNoReply

Sends a packet to the service. This interface doesn't define how the client knows when the sink is done with the associated payload memory. The inheriting interface must define that.

packet must be valid for the current buffer set, otherwise the service will close the connection.

Request

NameType
packet StreamPacket

EndOfStream

Indicates the stream has ended. The precise semantics of this method are determined by the inheriting interface.

Request

NameType

DiscardAllPackets

Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released. The response is sent after all packets have been released.

Request

NameType

Response

NameType

DiscardAllPacketsNoReply

Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released.

Request

NameType

StreamSource

Defined in fuchsia.media/stream.fidl

Produces a stream of packets. This interface is typically inherited along with StreamBufferSet to enable the transport of elementary streams from services to clients.

OnPacketProduced

Delivers a packet produced by the service. When the client is done with the payload memory, the client must call ReleasePacket to release the payload memory.

Response

NameType
packet StreamPacket

OnEndOfStream

Indicates that the stream has ended.

Response

NameType

ReleasePacket

Releases payload memory associated with a packet previously delivered via OnPacketProduced.

Request

NameType
packet StreamPacket

DiscardAllPackets

Request

NameType

Response

NameType

DiscardAllPacketsNoReply

Request

NameType

SimpleStreamSink

Defined in fuchsia.media/stream.fidl

A StreamSink that uses StreamBufferSet for buffer management.

AddPayloadBuffer

Adds a payload buffer to the current buffer set associated with the connection. A StreamPacket struct references a payload buffer in the current set by ID using the StreamPacket.payload_buffer_id field.

A buffer with ID id must not be in the current set when this method is invoked, otherwise the service will close the connection.

Request

NameType
id uint32
payload_buffer handle<vmo>

RemovePayloadBuffer

Removes a payload buffer from the current buffer set associated with the connection.

A buffer with ID id must exist in the current set when this method is invoked, otherwise the service will close the connection.

Request

NameType
id uint32

SendPacket

Sends a packet to the service. The response is sent when the service is done with the associated payload memory.

packet must be valid for the current buffer set, otherwise the service will close the connection.

Request

NameType
packet StreamPacket

Response

NameType

SendPacketNoReply

Sends a packet to the service. This interface doesn't define how the client knows when the sink is done with the associated payload memory. The inheriting interface must define that.

packet must be valid for the current buffer set, otherwise the service will close the connection.

Request

NameType
packet StreamPacket

EndOfStream

Indicates the stream has ended. The precise semantics of this method are determined by the inheriting interface.

Request

NameType

DiscardAllPackets

Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released. The response is sent after all packets have been released.

Request

NameType

Response

NameType

DiscardAllPacketsNoReply

Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released.

Request

NameType

StreamProcessor

Defined in fuchsia.media/stream_processor.fidl

EnableOnStreamFailed

Request

NameType

OnStreamFailed

Response

NameType
stream_lifetime_ordinal uint64
error StreamError

OnInputConstraints

Response

NameType
input_constraints StreamBufferConstraints

SetInputBufferSettings

Request

NameType
input_settings StreamBufferSettings

AddInputBuffer

Request

NameType
buffer StreamBuffer

SetInputBufferPartialSettings

Request

NameType
input_settings StreamBufferPartialSettings

OnOutputConstraints

Response

NameType
output_config StreamOutputConstraints

OnOutputFormat

Response

NameType
output_format StreamOutputFormat

SetOutputBufferSettings

Request

NameType
output_settings StreamBufferSettings

AddOutputBuffer

Request

NameType
buffer StreamBuffer

SetOutputBufferPartialSettings

Request

NameType
output_settings StreamBufferPartialSettings

CompleteOutputBufferPartialSettings

Request

NameType
buffer_lifetime_ordinal uint64

FlushEndOfStreamAndCloseStream

Request

NameType
stream_lifetime_ordinal uint64

CloseCurrentStream

Request

NameType
stream_lifetime_ordinal uint64
release_input_buffers bool
release_output_buffers bool

Sync

Request

NameType

Response

NameType

OnOutputPacket

Response

NameType
output_packet Packet
error_detected_before bool
error_detected_during bool

RecycleOutputPacket

Request

NameType
available_output_packet PacketHeader

OnOutputEndOfStream

Response

NameType
stream_lifetime_ordinal uint64
error_detected_before bool

QueueInputFormatDetails

Request

NameType
stream_lifetime_ordinal uint64
format_details FormatDetails

QueueInputPacket

Request

NameType
packet Packet

OnFreeInputPacket

Response

NameType
free_input_packet PacketHeader

QueueInputEndOfStream

Request

NameType
stream_lifetime_ordinal uint64

STRUCTS

AudioGainInfo

Defined in fuchsia.media/audio_device_enumerator.fidl

NameTypeDescriptionDefault
gain_db float32 No default
flags uint32 No default

AudioDeviceInfo

Defined in fuchsia.media/audio_device_enumerator.fidl

NameTypeDescriptionDefault
name string No default
unique_id string No default
token_id uint64 No default
is_input bool No default
gain_info AudioGainInfo No default
is_default bool No default

Metadata

Defined in fuchsia.media/metadata.fidl

NameTypeDescriptionDefault
properties vector<Property> No default

Property

Defined in fuchsia.media/metadata.fidl

NameTypeDescriptionDefault
label string No default
value string No default

StreamPacket

Defined in fuchsia.media/stream.fidl

Describes a packet consumed by StreamSink or produced by StreamSource.

NameTypeDescriptionDefault
pts int64 Time at which the packet is to be presented, according to the presentation clock. NO_TIMESTAMP
payload_buffer_id uint32 ID of the payload buffer used for this packet. When this struct is used with `StreamBufferSet`, this field is the ID of a payload buffer provided via `StreamBufferSet.AddPayloadBuffer`. In that case, this value must identify a payload buffer in the current set. Other interfaces may define different semantics for this field. No default
payload_offset uint64 Offset of the packet payload in the payload buffer. This value plus the `payload_size` value must be less than or equal to the size of the referenced payload buffer. No default
payload_size uint64 Size in bytes of the payload. This value plus the `payload_offset` value must be less than or equal to the size of the referenced payload buffer. No default
flags uint32 A bitwise-OR'd set of flags (see constants below) describing properties of this packet. 0
buffer_config uint64 The buffer configuration associated with this packet. The semantics of this field depend on the interface with which this struct is used. In many contexts, this field is not used. This field is intended for situations in which buffer configurations (i.e. sets of payload buffers) are explicitly identified. In such cases, the `payload_buffer_id` refers to a payload buffer in the buffer configuration identified by this field. 0
stream_segment_id uint64 The stream segment associated with this packet. The semantics of this field depend on the interface with which this struct is used. In many contexts, this field is not used. This field is intended to distinguish contiguous segments of the stream where stream properties (e.g. encoding) may differ from segment to segment. 0
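The payload_offset/payload_size invariant described above can be checked with simple arithmetic. A minimal sketch of the struct and its bounds check (illustrative Python, mirroring the fields and defaults listed above; the assumed NO_TIMESTAMP value is INT64_MAX):

```python
from dataclasses import dataclass

NO_TIMESTAMP = 0x7FFFFFFFFFFFFFFF  # assumed sentinel value

@dataclass
class StreamPacket:
    pts: int = NO_TIMESTAMP
    payload_buffer_id: int = 0
    payload_offset: int = 0
    payload_size: int = 0
    flags: int = 0
    buffer_config: int = 0
    stream_segment_id: int = 0

def packet_in_bounds(pkt, buffer_size):
    """Check the invariant above: payload_offset + payload_size must be
    less than or equal to the size of the referenced payload buffer."""
    return pkt.payload_offset + pkt.payload_size <= buffer_size

p = StreamPacket(payload_offset=4096, payload_size=1024)
print(packet_in_bounds(p, 8192))  # → True
print(packet_in_bounds(p, 4096))  # → False
```

A service receiving a packet that fails this check would treat the packet as invalid for the current buffer set and close the connection, per the SendPacket description above.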

Parameter

Defined in fuchsia.media/stream_common.fidl

Parameter

Generic parameter.

We want to minimize use of this generic "Parameter" structure by natively defining as many stream-specific parameter semantics as we can.

NameTypeDescriptionDefault
scope string No default
name string No default
value Value No default

AudioCompressedFormatAac

Defined in fuchsia.media/stream_common.fidl

NameTypeDescriptionDefault

AudioCompressedFormatSbc

Defined in fuchsia.media/stream_common.fidl

NameTypeDescriptionDefault

PcmFormat

Defined in fuchsia.media/stream_common.fidl

PcmFormat

PCM audio format details.

NameTypeDescriptionDefault
pcm_mode AudioPcmMode No default
bits_per_sample uint32 No default
frames_per_second uint32 No default
channel_map vector<AudioChannelId>[16] No default

VideoUncompressedFormat

Defined in fuchsia.media/stream_common.fidl

VideoUncompressedFormat

Uncompressed video format details.

NameTypeDescriptionDefault
image_format fuchsia.sysmem/ImageFormat_2 No default
fourcc uint32 No default
primary_width_pixels uint32 No default
primary_height_pixels uint32 No default
secondary_width_pixels uint32 No default
secondary_height_pixels uint32 No default
planar bool No default
swizzled bool No default
primary_line_stride_bytes uint32 No default
secondary_line_stride_bytes uint32 No default
primary_start_offset uint32 No default
secondary_start_offset uint32 No default
tertiary_start_offset uint32 No default
primary_pixel_stride uint32 No default
secondary_pixel_stride uint32 No default
primary_display_width_pixels uint32 No default
primary_display_height_pixels uint32 No default
has_pixel_aspect_ratio bool false
pixel_aspect_ratio_width uint32 1
pixel_aspect_ratio_height uint32 1

KeyId

Defined in fuchsia.media/stream_common.fidl

KeyId

An encryption key identifier.

NameTypeDescriptionDefault
data uint8[16] No default

SubsampleEntry

Defined in fuchsia.media/stream_common.fidl

SubsampleEntry

A subsample is a byte range within a sample consisting of a clear byte range followed by an encrypted byte range. This structure specifies the size of each range in the subsample.

NameTypeDescriptionDefault
clear_bytes uint32 No default
encrypted_bytes uint32 No default

EncryptionPattern

Defined in fuchsia.media/stream_common.fidl

EncryptionPattern

Pattern encryption utilizes a pattern of encrypted and clear 16 byte blocks over the protected range of a subsample (the encrypted_bytes of a SubsampleEntry). This structure specifies the number of encrypted data blocks followed by the number of clear data blocks.

NameTypeDescriptionDefault
clear_blocks uint32 No default
encrypted_blocks uint32 No default
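The pattern described above repeats over the protected range: encrypted_blocks encrypted 16-byte blocks followed by clear_blocks clear ones. A small sketch of that layout (illustrative only; the 1:9 ratio shown is the pattern commonly used by the CBCS scheme):

```python
def classify_blocks(encrypted_blocks, clear_blocks, total_blocks):
    """Lay out the repeating encryption pattern over a run of 16-byte
    blocks: `encrypted_blocks` encrypted ('E'), then `clear_blocks`
    clear ('C'), repeating until the protected range ends."""
    period = encrypted_blocks + clear_blocks
    return ['E' if (i % period) < encrypted_blocks else 'C'
            for i in range(total_blocks)]

# A 1:9 pattern (1 encrypted block, 9 clear) over 12 blocks:
print(''.join(classify_blocks(1, 9, 12)))  # → 'ECCCCCCCCCEC'
```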

SbcEncoderSettings

Defined in fuchsia.media/stream_common.fidl

Settings for an SBC Encoder.

SBC encoders take signed little-endian 16-bit linear PCM samples and return encoded SBC frames. SBC encoders consume PCM data in batches of sub_bands * block_count PCM frames. This encoder will accept PCM data on arbitrary frame boundaries, but the output flushed when EOS is queued may be zero-padded to make a full batch for encoding.

NameTypeDescriptionDefault
sub_bands SbcSubBands SbcSubBands.SUB_BANDS_8
allocation SbcAllocation SbcAllocation.ALLOC_LOUDNESS
block_count SbcBlockCount SbcBlockCount.BLOCK_COUNT_4
channel_mode SbcChannelMode No default
bit_pool uint64 SBC bit pool value. No default
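The batching and zero-padding behavior described above reduces to rounding the frame count up to a multiple of sub_bands * block_count. A sketch using the defaults from the table (SUB_BANDS_8, BLOCK_COUNT_4):

```python
def pad_to_batch(num_frames, sub_bands=8, block_count=4):
    """SBC encodes in batches of sub_bands * block_count PCM frames;
    a flush at EOS may zero-pad the final partial batch. Returns the
    padded frame count and the number of zero frames added."""
    batch = sub_bands * block_count
    padding = (-num_frames) % batch
    return num_frames + padding, padding

# 100 frames with the default 8 sub-bands x 4 blocks (batch of 32):
print(pad_to_batch(100))  # → (128, 28)
```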

AacTransportRaw

Defined in fuchsia.media/stream_common.fidl

Raw AAC access units.

NameTypeDescriptionDefault

AacConstantBitRate

Defined in fuchsia.media/stream_common.fidl

NameTypeDescriptionDefault
bit_rate uint32 Bits per second No default

AacEncoderSettings

Defined in fuchsia.media/stream_common.fidl

NameTypeDescriptionDefault
transport AacTransport No default
channel_mode AacChannelMode No default
bit_rate AacBitRate No default
aot AacAudioObjectType No default

StreamType

Defined in fuchsia.media/stream_type.fidl

Describes the type of an elementary stream.

NameTypeDescriptionDefault
medium_specific MediumSpecificStreamType Medium-specific type information. No default
encoding string[255] Encoding (see constants below). This value is represented as a string so that new encodings can be introduced without modifying this file. No default
encoding_parameters vector<uint8>? Encoding-specific parameters, sometimes referred to as 'out-of-band data'. Typically, this data is associated with a compressed stream and provides parameters required to decompress the stream. This data is generally opaque to all parties except the producer and consumer of the stream. No default

AudioStreamType

Defined in fuchsia.media/stream_type.fidl

Describes the type of an audio elementary stream.

NameTypeDescriptionDefault
sample_format AudioSampleFormat No default
channels uint32 No default
frames_per_second uint32 No default

VideoStreamType

Defined in fuchsia.media/stream_type.fidl

Describes the type of a video elementary stream.

NameTypeDescriptionDefault
pixel_format fuchsia.images/PixelFormat No default
color_space ColorSpace No default
width uint32 Dimensions of the video frames as displayed in pixels. No default
height uint32 No default
coded_width uint32 Dimensions of the video frames as encoded in pixels. These values must be equal to or greater than the respective width/height values. No default
coded_height uint32 No default
pixel_aspect_ratio_width uint32 The aspect ratio of a single pixel as frames are intended to be displayed. No default
pixel_aspect_ratio_height uint32 No default
stride uint32 The number of bytes per 'coded' row in the primary video plane. No default

TextStreamType

Defined in fuchsia.media/stream_type.fidl

Describes the type of a text elementary stream.

NameTypeDescriptionDefault

SubpictureStreamType

Defined in fuchsia.media/stream_type.fidl

Describes the type of a subpicture elementary stream.

NameTypeDescriptionDefault

TimelineFunction

Defined in fuchsia.media/timeline_function.fidl

A TimelineFunction represents a relationship between a subject timeline and a reference timeline with a linear relation.

For example, consider a common use case in which reference time is the monotonic clock of a system and subject time is intended presentation time for some media such as a video.

reference_time is the value of the monotonic clock at the beginning of playback. subject_time is 0 assuming playback starts at the beginning of the media. We then choose a reference_delta and subject_delta so that subject_delta / reference_delta represents the desired playback rate, e.g. 0/1 for paused and 1/1 for normal playback.

## Formulas

With a function we can determine the subject timeline value s in terms of reference timeline value r with this formula (where reference_delta > 0):

s = (r - reference_time) * (subject_delta / reference_delta) + subject_time

And similarly we can find the reference timeline value r in terms of subject timeline value s with this formula (where subject_delta > 0):

r = (s - subject_time) * (reference_delta / subject_delta) + reference_time
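Both formulas translate directly into code. A sketch using exact rational arithmetic so the delta ratio loses no precision:

```python
from fractions import Fraction

def subject_from_reference(r, subject_time, reference_time,
                           subject_delta, reference_delta):
    """s = (r - reference_time) * (subject_delta / reference_delta) + subject_time"""
    return (r - reference_time) * Fraction(subject_delta, reference_delta) \
        + subject_time

def reference_from_subject(s, subject_time, reference_time,
                           subject_delta, reference_delta):
    """r = (s - subject_time) * (reference_delta / subject_delta) + reference_time"""
    return (s - subject_time) * Fraction(reference_delta, subject_delta) \
        + reference_time

# Normal-rate playback (1/1) that started at reference time 10_000:
print(subject_from_reference(10_500, 0, 10_000, 1, 1))  # → 500
print(reference_from_subject(500, 0, 10_000, 1, 1))     # → 10500
```

Note the inverse mapping requires subject_delta > 0; when paused (subject_delta = 0), only the forward direction is defined, and every reference time maps to the same subject time.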

## Choosing time values

Time values can be arbitrary and our linear relation will of course be the same, but we can use them to represent the bounds of pieces in a piecewise linear relation.

For example, if a user performs skip-chapter, we might want to describe this with a TimelineFunction whose subject_time is the time to skip to, reference_time is now plus some epsilon, and delta ratio is 1/1 for normal playback rate.

NameTypeDescriptionDefault
subject_time int64 A value from the subject timeline that correlates to reference_time. 0
reference_time int64 A value from the reference timeline that correlates to subject_time. 0
subject_delta uint32 The change in the subject timeline corresponding to reference_delta. 0
reference_delta uint32 The change in the reference timeline corresponding to subject_delta. Cannot be zero. 1

ENUMS

AudioRenderUsage

Type: uint32

Defined in fuchsia.media/audio_core.fidl

Usage annotating the purpose of the stream being used to render audio. An AudioRenderer's usage cannot be changed after creation. The AudioRenderUsage is used by audio policy to dictate how audio streams interact with each other.

NameValueDescription
BACKGROUND 0
MEDIA 1
INTERRUPTION 2
SYSTEM_AGENT 3
COMMUNICATION 4

AudioCaptureUsage

Type: uint32

Defined in fuchsia.media/audio_core.fidl

Usages annotating the purpose of the stream being used to capture audio. The AudioCaptureUsage is used by audio policy to dictate how audio streams interact with each other.

NameValueDescription
BACKGROUND 0
FOREGROUND 1
SYSTEM_AGENT 2
COMMUNICATION 3

Behavior

Type: uint32

Defined in fuchsia.media/audio_core.fidl

The behaviors applied to streams when multiple are active.

NameValueDescription
NONE 0
DUCK 1
MUTE 2

AudioOutputRoutingPolicy

Type: uint32

Defined in fuchsia.media/audio_core.fidl

NameValueDescription
ALL_PLUGGED_OUTPUTS 0
LAST_PLUGGED_OUTPUT 1

StreamError

Type: uint32

Defined in fuchsia.media/stream_common.fidl

StreamError

This error code encapsulates various errors that might emanate from a StreamProcessor server. It can be sent either as an OnStreamFailed event or as an epitaph for the channel.

NameValueDescription
UNKNOWN 1
INVALID_INPUT_FORMAT_DETAILS 2
INCOMPATIBLE_BUFFERS_PROVIDED 3
DECODER_UNKNOWN 16777217
ENCODER_UNKNOWN 33554433
DECRYPTOR_UNKNOWN 50331649
DECRYPTOR_NO_KEY 50331650

AudioBitrateMode

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
UNSPECIFIED 0
CBR 1
VBR 2

AudioPcmMode

Type: uint32

Defined in fuchsia.media/stream_common.fidl

AudioPcmMode

NameValueDescription
LINEAR 0
ALAW 1
MULAW 2

AudioChannelId

Type: uint32

Defined in fuchsia.media/stream_common.fidl

AudioChannelId

Used in specifying which audio channel is for which speaker location / type.

TODO(dustingreen): Do we need more channel IDs than this?

NameValueDescription
SKIP 0
LF 1
RF 2
CF 3
LS 4
RS 5
LFE 6
CS 7
LR 8
RR 9
END_DEFINED 10
EXTENDED_CHANNEL_ID_BASE 1862270976
MAX 2147483647

VideoColorSpace

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
INVALID 0

SbcSubBands

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
SUB_BANDS_4 4
SUB_BANDS_8 8

SbcBlockCount

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
BLOCK_COUNT_4 4
BLOCK_COUNT_8 8
BLOCK_COUNT_12 12
BLOCK_COUNT_16 16

SbcAllocation

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
ALLOC_LOUDNESS 0
ALLOC_SNR 1

SbcChannelMode

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
MONO 0
DUAL 1
STEREO 2
JOINT_STEREO 3

AacChannelMode

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
MONO 0
STEREO 2

AacVariableBitRate

Type: uint32

Defined in fuchsia.media/stream_common.fidl

Variable bit rate modes. The actual resulting bitrate varies based on input signal and other encoding settings.

See https://wiki.hydrogenaud.io/index.php?title=Fraunhofer_FDK_AAC#Bitrate_Modes

NameValueDescription
V1 1
V2 2
V3 3
V4 4
V5 5

AacAudioObjectType

Type: uint32

Defined in fuchsia.media/stream_common.fidl

NameValueDescription
MPEG2_AAC_LC 0

AudioSampleFormat

Type: uint32

Defined in fuchsia.media/stream_type.fidl

Enumerates the supported audio sample formats.

NameValueDescription
UNSIGNED_8 1
SIGNED_16 2
SIGNED_24_IN_32 3
FLOAT 4

ColorSpace

Type: uint32

Defined in fuchsia.media/stream_type.fidl

NameValueDescription
UNKNOWN 0
NOT_APPLICABLE 1
JPEG 2
HD_REC709 3
SD_REC601 4

TABLES

EncryptedFormat

Defined in fuchsia.media/stream_common.fidl

EncryptedFormat

The stream format details payload of a decrypting stream processor. This is a sparsely populated table to specify parameters necessary for decryption other than the data stream. It is only necessary to update fields when they change, but it is not an error to repeat the same value.

OrdinalNameTypeDescription
1 mode string `mode` specifies which encryption scheme to use, such as `fuchsia.media.ENCRYPTION_MODE_CENC`. Usage: - It is required to be set prior to delivery of input packets. - This should only be changed at the beginning of a data stream.
2 key_id KeyId `key_id` identifies the key that should be used for decrypting subsequent data. Usage: - It is required to be set prior to delivery of input packets to a decryptor. - This may be changed multiple times during a data stream.
3 init_vector vector<uint8>[16] `init_vector` is used in combination with a key and a block of content to create the first cipher block in a chain and derive subsequent cipher blocks in a cipher block chain. Usage: - It is required to be set prior to the delivery of input packets to a decryptor. - This may be changed multiple times during a data stream.
4 subsamples vector<SubsampleEntry> `subsamples` is used to identify the clear and encrypted portions of a subsample. Usage: - For whole sample encryption, this parameter should not be sent. - This may be changed multiple times during a data stream.
5 pattern EncryptionPattern `pattern` is used to identify the clear and encrypted blocks for pattern based encryption. Usage: - This is not allowed for CENC and CBC1 and required for CENS and CBCS. - If required, it must be set prior to the delivery of input packets to a decryptor. - This may be changed multiple times during a data stream.

DecryptedFormat

Defined in fuchsia.media/stream_common.fidl

DecryptedFormat

This describes the format of the decrypted content. It is required to be sent by the StreamProcessor server prior to the delivery of output packets. Currently, there are no additional format details for decrypted output.

OrdinalNameTypeDescription
1 ignore_this_field bool

FormatDetails

Defined in fuchsia.media/stream_common.fidl

FormatDetails

This describes/details the format on input or output of a StreamProcessor (separate instances for input vs. output).

OrdinalNameTypeDescription
1 format_details_version_ordinal uint64
2 mime_type string
3 oob_bytes vector<uint8>
4 domain DomainFormat
5 pass_through_parameters vector<Parameter>
6 encoder_settings EncoderSettings Instructs an encoder on how to encode raw data. Decoders may ignore this field, but are entitled to reject requests with this field set, since it doesn't make sense.
7 timebase uint64 The number of ticks of the timebase of input packet timestamp_ish values per second. The timebase is only used for optional extrapolation of timestamp_ish values when an input timestamp which applies to byte 0 of the valid portion of the input packet does not correspond directly to byte 0 of the valid portion of any output packet. Leave unset if timestamp extrapolation is not needed, either due to lack of timestamps on input, or due to input being provided in increments of the encoder's input chunk size (based on the encoder settings and calculated independently by the client). Set if timestamp extrapolation is known to be needed or known to be acceptable to the client.

StreamBufferConstraints

Defined in fuchsia.media/stream_processor.fidl

OrdinalNameTypeDescription
1 buffer_constraints_version_ordinal uint64
2 default_settings StreamBufferSettings
3 per_packet_buffer_bytes_min uint32
4 per_packet_buffer_bytes_recommended uint32
5 per_packet_buffer_bytes_max uint32
6 packet_count_for_server_min uint32
7 packet_count_for_server_recommended uint32
8 packet_count_for_server_recommended_max uint32
9 packet_count_for_server_max uint32
10 packet_count_for_client_min uint32
11 packet_count_for_client_max uint32
12 single_buffer_mode_allowed bool
13 is_physically_contiguous_required bool
14 very_temp_kludge_bti_handle handle<handle>

StreamOutputConstraints

Defined in fuchsia.media/stream_processor.fidl

OrdinalNameTypeDescription
1 stream_lifetime_ordinal uint64
2 buffer_constraints_action_required bool
3 buffer_constraints StreamBufferConstraints

StreamOutputFormat

Defined in fuchsia.media/stream_processor.fidl

OrdinalNameTypeDescription
1 stream_lifetime_ordinal uint64
2 format_details FormatDetails

StreamOutputConfig

Defined in fuchsia.media/stream_processor.fidl

OrdinalNameTypeDescription
1 stream_lifetime_ordinal uint64
2 buffer_constraints_action_required bool
3 buffer_constraints StreamBufferConstraints
4 format_details FormatDetails

StreamBufferSettings

Defined in fuchsia.media/stream_processor.fidl

OrdinalNameTypeDescription
1 buffer_lifetime_ordinal uint64
2 buffer_constraints_version_ordinal uint64
3 packet_count_for_server uint32
4 packet_count_for_client uint32
5 per_packet_buffer_bytes uint32
6 single_buffer_mode bool

StreamBufferPartialSettings

Defined in fuchsia.media/stream_processor.fidl

OrdinalNameTypeDescription
1 buffer_lifetime_ordinal uint64
2 buffer_constraints_version_ordinal uint64
3 single_buffer_mode bool
4 packet_count_for_server uint32 When single_buffer_mode is false: The actual packet count will be max(packet_count_for_server + packet_count_for_client, sysmem_buffers). The sysmem_buffers is BufferCollectionInfo.buffer_count from sysmem if using sysmem, or 0 if not using sysmem. When single_buffer_mode is true: The actual packet count is packet_count_for_server + packet_count_for_client. If not using sysmem, or if using single_buffer_mode, these fields must be set and consistent with corresponding fields in StreamBufferConstraints. If single_buffer_mode is false and using sysmem, these fields can both be un-set, or can both be set and consistent with corresponding fields in StreamBufferConstraints. If not set, the value used for these fields in the "max" expression above is 0, so the actual packet count is sysmem's buffer_count.
5 packet_count_for_client uint32
6 sysmem_token fuchsia.sysmem/BufferCollectionToken
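The packet-count rules described for packet_count_for_server above reduce to a short computation. A sketch of that logic (illustrative only):

```python
def actual_packet_count(packet_count_for_server, packet_count_for_client,
                        single_buffer_mode, sysmem_buffers=0):
    """Combine the counts as described above.

    sysmem_buffers is BufferCollectionInfo.buffer_count when sysmem is
    in use, else 0. Unset client/server fields are treated as 0.
    """
    total = packet_count_for_server + packet_count_for_client
    if single_buffer_mode:
        return total                     # sysmem count not considered
    return max(total, sysmem_buffers)    # whichever is larger wins

print(actual_packet_count(2, 3, single_buffer_mode=False, sysmem_buffers=8))  # → 8
print(actual_packet_count(2, 3, single_buffer_mode=True))                     # → 5
```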

StreamBuffer

Defined in fuchsia.media/stream_processor.fidl

OrdinalNameTypeDescription
1 buffer_lifetime_ordinal uint64
2 buffer_index uint32
3 data StreamBufferData

StreamBufferDataVmo

Defined in fuchsia.media/stream_processor.fidl

StreamBufferDataVmo

Details for a buffer backed by a VMO.

OrdinalNameTypeDescription
1 vmo_handle handle<vmo>
2 vmo_usable_start uint64
3 vmo_usable_size uint64

PacketHeader

Defined in fuchsia.media/stream_processor.fidl

PacketHeader

When referring to a free packet, we use PacketHeader alone instead of Packet, since while a packet is free it doesn't really have meaningful offset or length etc.

A populated Packet also has a PacketHeader.

OrdinalNameTypeDescription
1 buffer_lifetime_ordinal uint64
2 packet_index uint32

Packet

Defined in fuchsia.media/stream_processor.fidl

OrdinalNameTypeDescription
1 header PacketHeader
2 buffer_index uint32 Which buffer this packet refers to. For single-buffer mode this will always be 0, but for multi-buffer mode, a given in-flight interval of a packet can refer to any buffer. The packet has an associated buffer only while the packet is in-flight, not while the packet is free. The default value makes accidental inappropriate use of index 0 less likely (will tend to complain in an obvious way if not filled out instead of a non-obvious data corruption when decoding buffer 0 repeatedly instead of the correct buffers). TODO(dustingreen): Try to make FIDL table defaults have meaning, and not complain about !has when accessing the field. For now the default specified here does nothing.
3 stream_lifetime_ordinal uint64
4 start_offset uint32
5 valid_length_bytes uint32
6 timestamp_ish uint64
7 start_access_unit bool
8 known_end_access_unit bool

UNIONS

Usage

Defined in fuchsia.media/audio_core.fidl

NameTypeDescription
render_usage AudioRenderUsage
capture_usage AudioCaptureUsage

Value

Defined in fuchsia.media/stream_common.fidl

Value

Generic "value" for use within generic "Parameter" struct.

NameTypeDescription
bool_value bool
uint64_value uint64
int64_value int64
string_value string
bytes_value vector<uint8>

AudioUncompressedFormat

Defined in fuchsia.media/stream_common.fidl

AudioUncompressedFormat

NameTypeDescription
pcm PcmFormat

AudioFormat

Defined in fuchsia.media/stream_common.fidl

AudioFormat

NameTypeDescription
compressed AudioCompressedFormat
uncompressed AudioUncompressedFormat

VideoCompressedFormat

Defined in fuchsia.media/stream_common.fidl

VideoCompressedFormat

Compressed video format details.

NameTypeDescription
temp_field_todo_remove uint32

VideoFormat

Defined in fuchsia.media/stream_common.fidl

VideoFormat

Video (compressed or uncompressed) format details. In this context, "uncompressed" can include block-based image compression formats that still permit fairly fast random access to image data.

NameTypeDescription
compressed VideoCompressedFormat
uncompressed VideoUncompressedFormat

DomainFormat

Defined in fuchsia.media/stream_common.fidl

DomainFormat

NameTypeDescription
audio AudioFormat
video VideoFormat
crypto CryptoFormat

AacBitRate

Defined in fuchsia.media/stream_common.fidl

NameTypeDescription
constant AacConstantBitRate
variable AacVariableBitRate

StreamBufferData

Defined in fuchsia.media/stream_processor.fidl

For the moment, a VMO per buffer is the only type of buffer.

This is extremely likely to change significantly when adding gralloc stuff, but the idea with this union is to have a struct per logical way of storing the data. Any multi-domain storage within a gralloc buffer will likely be only indirectly represented here.

Name | Type | Description
vmo StreamBufferDataVmo

MediumSpecificStreamType

Defined in fuchsia.media/stream_type.fidl

A union of all medium-specific stream type structs.

Name | Type | Description
audio AudioStreamType
video VideoStreamType
text TextStreamType
subpicture SubpictureStreamType

XUNIONS

AudioCompressedFormat

Defined in fuchsia.media/stream_common.fidl

Name | Type | Description
aac AudioCompressedFormatAac
sbc AudioCompressedFormatSbc

CryptoFormat

Defined in fuchsia.media/stream_common.fidl

Crypto (encrypted or decrypted) format details.

Name | Type | Description
encrypted EncryptedFormat
decrypted DecryptedFormat

AacTransport

Defined in fuchsia.media/stream_common.fidl

Name | Type | Description
raw AacTransportRaw

EncoderSettings

Defined in fuchsia.media/stream_common.fidl

Settings for encoders that tell them how to encode raw formats.

Name | Type | Description
sbc SbcEncoderSettings
aac AacEncoderSettings

CONSTANTS

Name | Value | Type | Description
MIN_PCM_CHANNEL_COUNT 1 uint32 Permitted ranges for AudioRenderer and AudioCapturer
MAX_PCM_CHANNEL_COUNT 8 uint32
MIN_PCM_FRAMES_PER_SECOND 1000 uint32
MAX_PCM_FRAMES_PER_SECOND 192000 uint32
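The four PCM constants above bound the formats an AudioRenderer or AudioCapturer will accept. A minimal sketch of a client-side range check, using the documented values; the helper function name is illustrative, not part of the API:

```python
# Permitted PCM ranges, copied from the constants table above.
MIN_PCM_CHANNEL_COUNT = 1
MAX_PCM_CHANNEL_COUNT = 8
MIN_PCM_FRAMES_PER_SECOND = 1000
MAX_PCM_FRAMES_PER_SECOND = 192000

def pcm_format_in_range(channels: int, frames_per_second: int) -> bool:
    """Return True if the PCM parameters fall within the permitted ranges."""
    return (MIN_PCM_CHANNEL_COUNT <= channels <= MAX_PCM_CHANNEL_COUNT
            and MIN_PCM_FRAMES_PER_SECOND <= frames_per_second <= MAX_PCM_FRAMES_PER_SECOND)

assert pcm_format_in_range(2, 48000)        # ordinary stereo 48 kHz is fine
assert not pcm_format_in_range(10, 48000)   # too many channels
assert not pcm_format_in_range(2, 500)      # frame rate below the minimum
```

Checking a format before sending it avoids a round trip that would otherwise end in the channel being closed with an error.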
RENDER_USAGE_COUNT 5 uint8
CAPTURE_USAGE_COUNT 4 uint8
AudioGainInfoFlag_Mute 1 uint32
AudioGainInfoFlag_AgcSupported 2 uint32
AudioGainInfoFlag_AgcEnabled 4 uint32
SetAudioGainFlag_GainValid 1 uint32
SetAudioGainFlag_MuteValid 2 uint32
SetAudioGainFlag_AgcValid 4 uint32
METADATA_LABEL_TITLE fuchsia.media.title String
METADATA_LABEL_ARTIST fuchsia.media.artist String
METADATA_LABEL_ALBUM fuchsia.media.album String
METADATA_LABEL_TRACK_NUMBER fuchsia.media.track_number String
METADATA_LABEL_PUBLISHER fuchsia.media.publisher String
METADATA_LABEL_GENRE fuchsia.media.genre String
METADATA_LABEL_COMPOSER fuchsia.media.composer String
METADATA_LABEL_SUBTITLE fuchsia.media.subtitle String
METADATA_LABEL_RELEASE_DATE fuchsia.media.release_date String
METADATA_LABEL_EPISODE fuchsia.media.episode String
METADATA_LABEL_SEASON fuchsia.media.season String
METADATA_LABEL_STUDIO fuchsia.media.studio String
METADATA_SOURCE_TITLE fuchsia.media.source_title String The title of the source of the media, e.g. a player, streaming service, or website.
NO_TIMESTAMP 9223372036854775807 int64 When used as a `StreamPacket.pts` value, indicates that the packet has no specific presentation timestamp. The effective presentation time of such a packet depends on the context in which the `StreamPacket` is used.
STREAM_PACKET_FLAG_KEY_FRAME 1 uint32 Indicates that the packet can be understood without reference to other packets in the stream. This is typically used in compressed streams to identify packets that contain key frames.
STREAM_PACKET_FLAG_DROPPABLE 2 uint32 Indicates that all other packets in the stream can be understood without reference to this packet. This is typically used in compressed streams to identify packets containing frames that may be discarded without affecting other frames.
STREAM_PACKET_FLAG_DISCONTINUITY 4 uint32 Indicates a discontinuity in an otherwise continuous-in-time sequence of packets. The precise semantics of this flag depend on the context in which the `StreamPacket` is used.
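NO_TIMESTAMP and the STREAM_PACKET_FLAG_* constants above are typically consumed together when inspecting a packet. A small sketch using the documented values; the `has_explicit_pts` helper is illustrative, not part of the API:

```python
# Constants from the table above.
NO_TIMESTAMP = 9223372036854775807  # int64 max: packet carries no specific pts
STREAM_PACKET_FLAG_KEY_FRAME = 1
STREAM_PACKET_FLAG_DROPPABLE = 2
STREAM_PACKET_FLAG_DISCONTINUITY = 4

def has_explicit_pts(pts: int) -> bool:
    # Hypothetical helper: NO_TIMESTAMP means the effective presentation
    # time depends on context rather than being carried by the packet.
    return pts != NO_TIMESTAMP

assert not has_explicit_pts(NO_TIMESTAMP)
assert has_explicit_pts(0)

# Flags combine and test bitwise, like the gain flags.
flags = STREAM_PACKET_FLAG_KEY_FRAME | STREAM_PACKET_FLAG_DISCONTINUITY
assert flags & STREAM_PACKET_FLAG_KEY_FRAME
assert not (flags & STREAM_PACKET_FLAG_DROPPABLE)
```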
KEY_ID_SIZE 16 uint32
MAX_INIT_VECTOR_SIZE 16 uint32
kMaxOobBytesSize 8192 uint64
kDefaultInputPacketCountForClient 2 uint32
kDefaultOutputPacketCountForClient 2 uint32
kDefaultInputIsSingleBufferMode false bool
kDefaultOutputIsSingleBufferMode false bool
AUDIO_ENCODING_AAC fuchsia.media.aac String Audio encodings.
AUDIO_ENCODING_AMRNB fuchsia.media.amrnb String
AUDIO_ENCODING_AMRWB fuchsia.media.amrwb String
AUDIO_ENCODING_APTX fuchsia.media.aptx String
AUDIO_ENCODING_FLAC fuchsia.media.flac String
AUDIO_ENCODING_GSMMS fuchsia.media.gsmms String
AUDIO_ENCODING_LPCM fuchsia.media.lpcm String
AUDIO_ENCODING_MP3 fuchsia.media.mp3 String
AUDIO_ENCODING_PCMALAW fuchsia.media.pcmalaw String
AUDIO_ENCODING_PCMMULAW fuchsia.media.pcmmulaw String
AUDIO_ENCODING_SBC fuchsia.media.sbc String
AUDIO_ENCODING_VORBIS fuchsia.media.vorbis String
VIDEO_ENCODING_H263 fuchsia.media.h263 String Video encodings.
VIDEO_ENCODING_H264 fuchsia.media.h264 String
VIDEO_ENCODING_MPEG4 fuchsia.media.mpeg4 String
VIDEO_ENCODING_THEORA fuchsia.media.theora String
VIDEO_ENCODING_UNCOMPRESSED fuchsia.media.uncompressed_video String
VIDEO_ENCODING_VP3 fuchsia.media.vp3 String
VIDEO_ENCODING_VP8 fuchsia.media.vp8 String
VIDEO_ENCODING_VP9 fuchsia.media.vp9 String