PROTOCOLS
ActivityReporter
Defined in fuchsia.media/activity_reporter.fidl
A protocol for monitoring the usage activity of AudioRenderers and AudioCapturers.
WatchCaptureActivity
Notifies the client whenever there is a change in the set of active AudioCaptureUsages. It returns immediately the first time that it is called.
Request
<EMPTY>
Response
Name | Type |
---|---|
active_usages | vector<AudioCaptureUsage>:4 |
WatchRenderActivity
Notifies the client whenever there is a change in the set of active AudioRenderUsages. It returns immediately the first time that it is called.
Request
<EMPTY>
Response
Name | Type |
---|---|
active_usages | vector<AudioRenderUsage>:5 |
Audio
Defined in fuchsia.media/audio.fidl
CreateAudioCapturer
Creates an AudioCapturer which either captures from the current default audio input device, or loops-back from the current default audio output device based on value passed for the loopback flag.
Request
Name | Type |
---|---|
audio_capturer_request | server_end:AudioCapturer |
loopback | bool |
CreateAudioRenderer
Request
Name | Type |
---|---|
audio_renderer_request | server_end:AudioRenderer |
AudioCapturer
Defined in fuchsia.media/audio_capturer.fidl
AudioCapturer
An AudioCapturer is an interface returned from the fuchsia.media.Audio protocol's CreateAudioCapturer method, which may be used by clients to capture audio from either the current default audio input device, or the current default audio output device depending on the flags passed during creation.
Format support
See (Get|Set)StreamType below. By default, the captured stream type will be initially determined by the currently configured stream type of the source that the AudioCapturer was bound to at creation time. Users may either fetch this type using GetStreamType, or they may choose to have the media resampled or converted to a type of their choosing by calling SetStreamType. Note: the stream type may only be set while the system is not running, meaning that there are no pending capture regions (specified using CaptureAt) and that the system is not currently running in 'async' capture mode.
Buffers and memory management
Audio data is captured into a shared memory buffer (a VMO) supplied by the user to the AudioCapturer during the AddPayloadBuffer call. Please note the following requirements related to the management of the payload buffer.
- The payload buffer must be supplied before any capture operation may start. Any attempt to start capture (via either CaptureAt or StartAsyncCapture) before a payload buffer has been established is an error.
- The payload buffer may not be changed while there are any capture operations pending.
- The stream type may not be changed after the payload buffer has been set.
- The payload buffer size (in bytes) must be an integral number of audio frame sizes.
- When running in 'async' mode (see below), the payload buffer must be at least as large as twice the frames_per_packet size specified during StartAsyncCapture.
- The handle to the payload buffer supplied by the user must be readable, writable, mappable and transferable.
- Users should always treat the payload buffer as read-only.
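The sizing rules above can be sketched as a small check. This is an illustration of the documented arithmetic only; the function and parameter names are not part of the FIDL API.

```python
# Sketch of the payload-buffer sizing rules: the buffer must hold an
# integral number of frames, and in 'async' mode it must hold at least
# 2 * frames_per_packet frames. All names here are illustrative.

def frame_size_bytes(channels, bytes_per_sample):
    """Size of one audio frame in bytes."""
    return channels * bytes_per_sample

def valid_payload_buffer(buffer_bytes, channels, bytes_per_sample,
                         async_frames_per_packet=None):
    """Check a payload buffer against the documented constraints."""
    fsize = frame_size_bytes(channels, bytes_per_sample)
    if buffer_bytes % fsize != 0:
        return False  # not an integral number of frames
    if async_frames_per_packet is not None:
        if buffer_bytes < 2 * async_frames_per_packet * fsize:
            return False  # too small for two contiguous async packets
    return True

# Example: stereo 16-bit audio has 4-byte frames. A 4096-byte buffer
# holds 1024 frames, enough for async capture of 480-frame packets
# (which needs at least 2 * 480 * 4 = 3840 bytes).
```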
Synchronous vs. Asynchronous capture mode
The AudioCapturer interface can be used in one of two mutually exclusive modes: Synchronous and Asynchronous. A description of each mode and their tradeoffs is given below.
Synchronous mode
By default, AudioCapturer instances are running in 'sync' mode. They will only capture data when a user supplies at least one region to capture into using the CaptureAt method. Regions supplied in this way will be filled in the order that they are received and returned to the client as StreamPackets via the return value of the CaptureAt method. If an AudioCapturer instance has data to capture, but no place to put it (because there are no more pending regions to fill), the next payload generated will indicate that there has been an overflow by setting the Discontinuity flag on the next produced StreamPacket. Synchronous mode may not be used in conjunction with Asynchronous mode. It is an error to attempt to call StartAsyncCapture while the system still has regions supplied by CaptureAt waiting to be filled.
If a user has supplied regions to be filled by the AudioCapturer instance in the past, but wishes to reclaim those regions, they may do so using the DiscardAllPackets method. Calling the DiscardAllPackets method will cause all pending regions to be returned, but with NO_TIMESTAMP as their StreamPacket's PTS. See "Timing and Overflows", below, for a discussion of timestamps and discontinuity flags. After a DiscardAllPackets operation, an OnEndOfStream event will be produced. While an AudioCapturer will never overwrite any region of the payload buffer after a completed region is returned, it may overwrite the unfilled portions of a partially filled buffer which has been returned as a result of a DiscardAllPackets operation.
Asynchronous mode
While running in 'async' mode, clients do not need to explicitly supply shared buffer regions to be filled by the AudioCapturer instance. Instead, a client enters into 'async' mode by calling StartAsyncCapture and supplying a callback interface and the number of frames to capture per-callback. Once running in async mode, the AudioCapturer instance will identify which payload buffer regions to capture into, capture the specified number of frames, then deliver those frames as StreamPackets using the OnPacketCapture FIDL event. Users may stop capturing and return the AudioCapturer instance to 'sync' mode using the StopAsyncCapture method.
It is an error to attempt any of the following operations:
- Entering 'async' capture mode when no payload buffer has been established.
- Specifying a number of frames to capture per payload which does not permit at least two contiguous capture payloads to exist in the established shared payload buffer simultaneously.
- Sending a region to capture into using the CaptureAt method while the AudioCapturer instance is running in 'async' mode.
- Calling DiscardAllPackets while the AudioCapturer instance is running in 'async' mode.
- Re-starting 'async' mode capturing without having first stopped.
- Attempting any operation except for SetGain while in the process of stopping.
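These error conditions can be summarized as a precondition check for StartAsyncCapture. This is a simplified model of the documented rules, not real service code; every name here is illustrative.

```python
# Illustrative precondition check for StartAsyncCapture, mirroring the
# error conditions listed above. Returns None when the call would be
# legal, otherwise a description of the violated rule.

def start_async_capture_allowed(has_payload_buffer, buffer_frames,
                                frames_per_packet, pending_capture_regions,
                                already_async, stopping):
    if stopping:
        return "only SetGain is allowed while stopping"
    if not has_payload_buffer:
        return "no payload buffer established"
    if 2 * frames_per_packet > buffer_frames:
        return "buffer must fit at least two contiguous packets"
    if pending_capture_regions:
        return "CaptureAt regions still pending"
    if already_async:
        return "already in 'async' mode"
    return None
```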
Synchronizing with a StopAsyncCapture operation
Stopping asynchronous capture mode and returning to synchronous capture mode is an operation which takes time. Aside from SetGain, users may not call any other methods on the AudioCapturer interface after calling StopAsyncCapture (including calling StopAsyncCapture again) until after the stop operation has completed. Because of this, it is important for users to be able to synchronize with the stop operation. Two mechanisms are provided for doing so.
The first is to use StopAsyncCapture (not the NoReply variant). When the user's callback has been called, they can be certain that the stop operation is complete and that the AudioCapturer instance has returned to synchronous operation mode.
The second way to determine that a stop operation has completed is to use the flags on the packets which get delivered via the user-supplied AudioCapturerCallback interface after calling StopAsyncCapture. When asked to stop, any partially filled packet will be returned to the user, and the final packet returned will always have the end-of-stream flag (kFlagsEos) set on it to indicate that this is the final frame in the sequence. If there is no partially filled packet to return, the AudioCapturer will synthesize an empty packet with no timestamp, and offset/length set to zero, in order to deliver a packet with the end-of-stream flag set on it. Once users have seen the end-of-stream flag after calling stop, the AudioCapturer has finished the stop operation and returned to synchronous operating mode.
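The second mechanism amounts to scanning delivered packets for the end-of-stream flag. The sketch below models that logic; the flag value and packet shape are illustrative, not the real FIDL constants.

```python
# Sketch of stop-synchronization via the end-of-stream flag: after
# StopAsyncCapture, the final delivered packet (possibly a synthesized
# empty one) carries the EOS flag. Flag value and dict layout are
# illustrative stand-ins for the FIDL types.
FLAG_EOS = 1 << 0

def stop_is_complete(packets):
    """True once a packet carrying the end-of-stream flag is seen."""
    return any(p["flags"] & FLAG_EOS for p in packets)

# A partially filled final packet, or a synthesized empty packet,
# terminates the sequence:
packets = [
    {"flags": 0, "length": 1920},
    {"flags": FLAG_EOS, "length": 0},  # synthesized empty EOS packet
]
```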
Timing and Overflows
All media packets produced by an AudioCapturer instance will have their PTS field filled out with the capture time of the audio expressed as a timestamp given by the reference clock timeline. Note: this timestamp is actually a capture timestamp, not a presentation timestamp (it is more of a CTS than a PTS) and is meant to represent the underlying system's best estimate of the capture time of the first frame of audio, including all outboard and hardware introduced buffering delay. As a result, all timestamps produced by an AudioCapturer should be expected to be in the past relative to 'now' on the stream's reference clock timeline.
The one exception to the "everything has an explicit timestamp" rule is when discarding submitted regions while operating in synchronous mode. Discarded packets have no data in them, but FIDL demands that all pending method-return-value callbacks be executed. Because of this, the regions will be returned to the user, but their timestamps will be set to NO_TIMESTAMP, and their payload sizes will be set to zero. Any partially filled payload will have a valid timestamp, but a payload size smaller than originally requested. The final discarded payload (if there were any to discard) will be followed by an OnEndOfStream event.
Two StreamPackets delivered by an AudioCapturer instance are 'continuous' if the first frame of audio contained in the second packet was captured exactly one nominal frame time after the final frame of audio in the first packet. If this relationship does not hold, the second StreamPacket will have the STREAM_PACKET_FLAG_DISCONTINUITY bit set in its flags field.
Even though explicit timestamps are provided on every StreamPacket produced, users who have very precise timing requirements are encouraged to always reason about time by counting frames delivered since the last discontinuity, rather than simply using the raw capture timestamps. This is because the explicit timestamps written on continuous packets may have a small amount of rounding error based on whether or not the units of the capture timeline reference clock are divisible by the chosen audio frame rate.
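The frame-counting approach recommended above can be sketched as follows. The function name and the nanosecond units are illustrative assumptions, not part of the API.

```python
# Derive a frame's capture time from the last discontinuity instead of
# trusting each packet's (possibly rounded) PTS. Integer arithmetic in
# nanoseconds; names are illustrative.

def frame_capture_time_ns(discontinuity_pts_ns, frames_since_discontinuity,
                          frame_rate):
    """Capture time of a frame, counted from the last discontinuity."""
    return discontinuity_pts_ns + (
        frames_since_discontinuity * 1_000_000_000) // frame_rate

# At 48000 Hz, frame 48000 lands exactly one second after the
# discontinuity; at 44100 Hz the per-frame period (1e9/44100 ns) does
# not divide evenly, which is exactly the rounding the text warns about.
```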
Users should always expect the first StreamPacket produced by an AudioCapturer to have the discontinuous flag set on it (as there is no previous packet to be continuous with). Similarly, the first StreamPacket after a DiscardAllPackets or a Stop/Start cycle will always be discontinuous. After that, there are only two reasons that a StreamPacket will ever be discontinuous:
- The user is operating in synchronous mode and does not supply regions to be filled quickly enough. If the next continuous frame of data has not been captured by the time it needs to be purged from the source buffers, an overflow has occurred and the AudioCapturer will flag the next captured region as discontinuous.
- The user is operating in asynchronous mode and some internal error prevents the AudioCapturer instance from capturing the next frame of audio in a continuous fashion. This might be high system load or a hardware error, but in general it is something which should never normally happen. In practice, however, if it does, the next produced packet will be flagged as being discontinuous.
Synchronous vs. Asynchronous Trade-offs
The choice of operating in synchronous vs. asynchronous mode is up to the user, and depending on the user's requirements, there are some advantages and disadvantages to each choice.
Synchronous mode requires only a single Zircon channel under the hood and can achieve some small savings because of this. In addition, the user has complete control over the buffer management. Users specify exactly where audio will be captured to and in what order. Because of this, if users do not need to always be capturing, it is simple to stop and restart the capture later (just by ceasing to supply packets, then resuming later on). Payloads do not need to be uniform in size either; clients may specify payloads of whatever granularity is appropriate.
The primary downside of operating in synchronous mode is that two messages must be sent for every packet to be captured: one to inform the AudioCapturer of the region to capture into, and one to inform the user that the packet has been captured. This may end up increasing overhead and potentially complicating client designs.
Asynchronous mode has the advantage of requiring only half the messages; however, when operating in 'async' mode, AudioCapturer instances have no way of knowing if a user is processing the StreamPackets being sent in a timely fashion, and no way of automatically detecting an overflow condition. Users of 'async' mode should be careful to use a buffer large enough to ensure that they will be able to process their data before an AudioCapturer will be forced to overwrite it.
AddPayloadBuffer
Adds a payload buffer to the current buffer set associated with the connection. A StreamPacket struct references a payload buffer in the current set by ID using the StreamPacket.payload_buffer_id field.
A buffer with ID id must not be in the current set when this method is invoked, otherwise the service will close the connection.
Request
Name | Type |
---|---|
id | uint32 |
payload_buffer | handle<vmo> |
BindGainControl
Binds to the gain control for this AudioCapturer.
Request
Name | Type |
---|---|
gain_control_request | server_end:fuchsia.media.audio/GainControl |
CaptureAt
Explicitly specifies a region of the shared payload buffer for the audio input to capture into.
Request
Name | Type |
---|---|
payload_buffer_id | uint32 |
payload_offset | uint32 |
frames | uint32 |
Response
Name | Type |
---|---|
captured_packet | StreamPacket |
DiscardAllPackets
Request
<EMPTY>
Response
<EMPTY>
DiscardAllPacketsNoReply
Request
<EMPTY>
GetReferenceClock
Retrieves the stream's reference clock. The returned handle will have READ, DUPLICATE and TRANSFER rights, and will refer to a zx::clock that is MONOTONIC and CONTINUOUS.
Request
<EMPTY>
Response
Name | Type |
---|---|
reference_clock | handle<clock> |
GetStreamType
Gets the currently configured stream type. Note: for an AudioCapturer which was just created and has not yet had its stream type explicitly set, this will retrieve the stream type -- at the time the AudioCapturer was created -- of the source (input or looped-back output) to which the AudioCapturer is bound. Even if this matches the client's desired format, SetPcmStreamType must still be called.
Request
<EMPTY>
Response
Name | Type |
---|---|
stream_type | StreamType |
OnEndOfStream
Indicates that the stream has ended.
Response
<EMPTY>
OnPacketProduced
Delivers a packet produced by the service. When the client is done with
the payload memory, the client must call ReleasePacket
to release the
payload memory.
Response
Name | Type |
---|---|
packet | StreamPacket |
ReleasePacket
Releases payload memory associated with a packet previously delivered
via OnPacketProduced
.
Request
Name | Type |
---|---|
packet | StreamPacket |
RemovePayloadBuffer
Removes a payload buffer from the current buffer set associated with the connection.
A buffer with ID id must exist in the current set when this method is invoked, otherwise the service will close the connection.
Request
Name | Type |
---|---|
id | uint32 |
SetPcmStreamType
Sets the stream type of the stream to be delivered. Causes the source material to be reformatted/resampled if needed in order to produce the requested stream type. Must be called before the payload buffer is established.
Request
Name | Type |
---|---|
stream_type | AudioStreamType |
SetReferenceClock
Sets the reference clock that controls this capturer's playback rate. If the input
parameter is a valid zx::clock, it must have READ, DUPLICATE, TRANSFER rights and
refer to a clock that is both MONOTONIC and CONTINUOUS. If instead an invalid clock
is passed (such as the uninitialized zx::clock()
), this indicates that the stream
will use a 'flexible' clock generated by AudioCore that tracks the audio device.
SetReferenceClock
cannot be called after the capturer payload buffer has been
added. It also cannot be called a second time (even before capture).
If the client wants a reference clock that is initially CLOCK_MONOTONIC
but may
diverge at some later time, they should create a clone of the monotonic clock, set
this as the stream's reference clock, then rate-adjust it subsequently as needed.
Request
Name | Type |
---|---|
reference_clock | handle<clock>? |
SetUsage
Sets the usage of the capture stream. This may be changed on the fly, but packets in flight may be affected by the new usage. By default the Capturer is created with the FOREGROUND usage.
Request
Name | Type |
---|---|
usage | AudioCaptureUsage |
StartAsyncCapture
Places the AudioCapturer into 'async' capture mode and begins to produce packets of exactly 'frames_per_packet' frames each. The OnPacketProduced event (of StreamSink) will be used to inform the client of produced packets.
Request
Name | Type |
---|---|
frames_per_packet | uint32 |
StopAsyncCapture
Stops capturing in 'async' capture mode and (optionally) delivers a callback that may be used by the client if explicit synchronization is needed.
Request
<EMPTY>
Response
<EMPTY>
StopAsyncCaptureNoReply
Request
<EMPTY>
AudioConsumer
Defined in fuchsia.media/audio_consumer.fidl
Interface for playing and controlling audio.
BindVolumeControl
Binds to this AudioConsumer's volume control for control and notifications.
Request
Name | Type |
---|---|
volume_control_request | server_end:fuchsia.media.audio/VolumeControl |
CreateStreamSink
Creates a StreamSink
for the consumer with the indicated properties.
Multiple stream sinks may be acquired using this method, but they are intended to be used
sequentially rather than concurrently. The first stream sink that's created using this
method is used as the sole source of packets incoming to the logical consumer until that
stream sink is closed or the EndOfStream
method is called on that sink. At that point,
the second stream sink is used, and so on.
If an unsupported compression type is supplied, the
stream_sink_request
request will be closed with an epitaph value of
ZX_ERR_INVALID_ARGS
.
Request
Name | Type |
---|---|
buffers | vector<handle<vmo>>:16 |
stream_type | AudioStreamType |
compression | Compression? |
stream_sink_request | server_end:StreamSink |
OnEndOfStream
Indicates that the last packet prior to the end of the stream has been rendered.
Response
<EMPTY>
SetRate
Requests to change the playback rate of the renderer. 1.0 means normal
playback. Negative rates are not supported. The new rate will be
reflected in the updated status. The default rate of any newly created StreamSink
is 1.0.
Request
Name | Type |
---|---|
rate |
float32
|
Start
Starts rendering as indicated by flags.
media_time indicates the packet timestamp that corresponds to reference_time. Typically, this is the timestamp of the first packet that will be rendered. If packets will be supplied with no timestamps, this value should be NO_TIMESTAMP. Passing a media_time value of NO_TIMESTAMP chooses the default media time, established as follows:
1. When starting for the first time, the default media time is the timestamp on the first packet sent to the stream sink.
2. When resuming after stop, the default media time is the media time at which the stream stopped.
reference_time is the monotonic system time at which rendering should be started. For supply-driven sources, this must be the time at which the first packet was (or will be) sent plus a lead time, which must be in the range indicated in the AudioConsumerStatus. For demand-driven sources, the client must ensure that the lead time requirement is met at the start time. Passing the default value of 0 for reference_time causes the consumer to choose a start time based on the availability of packets, the lead time requirements, and whether LOW_LATENCY has been specified.
The actual start time will be reflected in the updated status.
Request
Name | Type |
---|---|
flags | AudioConsumerStartFlags |
reference_time | zx/Time |
media_time | int64 |
Stop
Stops rendering as soon as possible after this method is called. The actual stop time will be reflected in the updated status.
Request
<EMPTY>
WatchStatus
Gets the current status of the consumer using the long get pattern. The consumer responds to this method when the status changes - initially with respect to the initial status value and thereafter with respect to the previously-reported status value.
Request
<EMPTY>
Response
Name | Type |
---|---|
status | AudioConsumerStatus |
AudioCore
Defined in fuchsia.media/audio_core.fidl
BindUsageVolumeControl
Binds to a volume control protocol for the given usage.
Request
Name | Type |
---|---|
usage | Usage |
volume_control | server_end:fuchsia.media.audio/VolumeControl |
CreateAudioCapturer
Creates an AudioCapturer which either captures from the current default audio input device, or loops-back from the current default audio output device based on value passed for the loopback flag.
Request
Name | Type |
---|---|
loopback | bool |
audio_in_request | server_end:AudioCapturer |
CreateAudioCapturerWithConfiguration
Creates an AudioCapturer according to the given requirements.
pcm_stream_type sets the stream type of the stream to be delivered. It causes the source material to be reformatted/resampled if needed in order to produce the requested stream type.
usage is used by Fuchsia to make decisions about user experience. See AudioCaptureUsage for more details.
configuration must be initialized to a variant, or no capturer can be created.
Request
Name | Type |
---|---|
stream_type | AudioStreamType |
configuration | AudioCapturerConfiguration |
audio_capturer_request | server_end:AudioCapturer |
CreateAudioRenderer
Creates an AudioRenderer which outputs audio to the default device.
Request
Name | Type |
---|---|
audio_out_request | server_end:AudioRenderer |
EnableDeviceSettings
Request
Name | Type |
---|---|
enabled | bool |
GetDbFromVolume
Queries the decibel value that maps to a volume percentage [0, 1] for a particular usage
.
This is the same mapping as used by the VolumeControl from BindUsageVolumeControl
.
Request
Name | Type |
---|---|
usage | Usage |
volume | float32 |
Response
Name | Type |
---|---|
gain_db | float32 |
GetVolumeFromDb
Queries the volume percentage [0, 1] that maps to a gain_db value for a particular usage. This is the same mapping as used by the VolumeControl from BindUsageVolumeControl.
Request
Name | Type |
---|---|
usage | Usage |
gain_db | float32 |
Response
Name | Type |
---|---|
volume | float32 |
LoadDefaults
Re-loads the platform policy configuration. Falls back to a default config if the platform does not provide a config.
Request
<EMPTY>
ResetInteractions
Re-initializes the set of rules that are currently governing the interaction of streams in audio_core. The default behavior is 'NONE'.
Request
<EMPTY>
SetCaptureUsageGain
Sets the Usage gain applied to Capturers. By default, the gain for all capture usages is set to unity (0 dB).
Request
Name | Type |
---|---|
usage | AudioCaptureUsage |
gain_db | float32 |
SetInteraction
Sets how audio_core handles interactions of multiple active streams simultaneously. If streams of Usage active are processing audio, and streams of Usage affected are as well, the Behavior specified will be applied to the streams of Usage affected.
Request
Name | Type |
---|---|
active | Usage |
affected | Usage |
behavior | Behavior |
SetRenderUsageGain
Sets the Usage gain applied to Renderers. By default, the gain for all render usages is set to unity (0 dB).
Request
Name | Type |
---|---|
usage | AudioRenderUsage |
gain_db | float32 |
AudioDeviceEnumerator
Defined in fuchsia.media/audio_device_enumerator.fidl
AddDeviceByChannel
Deprecation
StreamConfig is no longer supported; use an Audio Composite instead. See Audio Drivers Architecture.
Request
Name | Type |
---|---|
device_name | string:256 |
is_input | bool |
channel | client_end:fuchsia.hardware.audio/StreamConfig |
GetDeviceGain
Gain/Mute/AGC control
Note that each of these operations requires a device_token in order to target the proper input/output.
The Get command returns the device_token of the device whose gain is being reported, or ZX_KOID_INVALID in the case that the requested device_token was invalid or the device had been removed from the system before the Get command could be processed.
Set commands which are given an invalid device token are ignored and have no effect on the system. In addition, users do not need to control all of the gain settings for an audio device with each call. Only the settings with a corresponding flag set in the set_flags parameter will be affected. For example, passing SetAudioGainFlag_MuteValid will cause a SetDeviceGain call to care only about the mute setting in the gain_info structure, while passing (SetAudioGainFlag_GainValid | SetAudioGainFlag_MuteValid) will cause both the mute and the gain status to be changed simultaneously.
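The flag-masking behavior described above can be sketched as a partial update. The flag values and field names below are illustrative stand-ins for the AudioGainInfo/AudioGainValidFlags types, not the real constants.

```python
# Sketch of how the set_flags mask selects which gain_info fields a
# SetDeviceGain call actually changes. Flag values and field names are
# illustrative, not the real FIDL constants.
GAIN_VALID = 1 << 0
MUTE_VALID = 1 << 1
AGC_VALID = 1 << 2

def apply_gain(current, requested, set_flags):
    """Copy only the flagged fields from requested into the device state."""
    updated = dict(current)
    if set_flags & GAIN_VALID:
        updated["gain_db"] = requested["gain_db"]
    if set_flags & MUTE_VALID:
        updated["muted"] = requested["muted"]
    if set_flags & AGC_VALID:
        updated["agc_enabled"] = requested["agc_enabled"]
    return updated

# Passing only MUTE_VALID leaves gain_db untouched:
state = {"gain_db": -6.0, "muted": False, "agc_enabled": False}
req = {"gain_db": 0.0, "muted": True, "agc_enabled": False}
state = apply_gain(state, req, MUTE_VALID)
```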
Request
Name | Type |
---|---|
device_token | uint64 |
Response
Name | Type |
---|---|
device_token | uint64 |
gain_info | AudioGainInfo |
GetDevices
Obtains the list of currently active audio devices.
Request
<EMPTY>
Response
Name | Type |
---|---|
devices | vector<AudioDeviceInfo> |
OnDefaultDeviceChanged
Response
Name | Type |
---|---|
old_default_token | uint64 |
new_default_token | uint64 |
OnDeviceAdded
Events sent when devices are added or removed, or when properties of a device change.
Response
Name | Type |
---|---|
device | AudioDeviceInfo |
OnDeviceGainChanged
Response
Name | Type |
---|---|
device_token | uint64 |
gain_info | AudioGainInfo |
OnDeviceRemoved
Response
Name | Type |
---|---|
device_token | uint64 |
SetDeviceGain
Request
Name | Type |
---|---|
device_token | uint64 |
gain_info | AudioGainInfo |
valid_flags | AudioGainValidFlags |
AudioRenderer
Defined in fuchsia.media/audio_renderer.fidl
AudioRenderers can be in one of two states at any time: configurable or operational. A renderer is considered operational whenever it has packets queued to be rendered; otherwise it is configurable. Once an AudioRenderer enters the operational state, calls to "configuring" methods are disallowed and will cause the audio service to disconnect the client's connection.
The following are considered configuring methods: AddPayloadBuffer, SetPcmStreamType, SetStreamType, SetPtsUnits, SetPtsContinuityThreshold.
If an AudioRenderer must be reconfigured, the client must ensure that no packets are still enqueued when these "configuring" methods are called. Thus it is best practice to call DiscardAllPackets on the AudioRenderer (and ideally Stop before DiscardAllPackets), prior to reconfiguring the renderer.
AddPayloadBuffer
Adds a payload buffer to the current buffer set associated with the connection. A StreamPacket struct references a payload buffer in the current set by ID using the StreamPacket.payload_buffer_id field.
A buffer with ID id must not be in the current set when this method is invoked, otherwise the service will close the connection.
Request
Name | Type |
---|---|
id | uint32 |
payload_buffer | handle<vmo> |
BindGainControl
Binds to the gain control for this AudioRenderer.
Request
Name | Type |
---|---|
gain_control_request | server_end:fuchsia.media.audio/GainControl |
DiscardAllPackets
Discards packets previously sent via SendPacket
or SendPacketNoReply
and not yet released. The response is sent after all packets have been
released.
Request
<EMPTY>
Response
<EMPTY>
DiscardAllPacketsNoReply
Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released.
Request
<EMPTY>
EnableMinLeadTimeEvents
Enables or disables notifications about changes to the minimum clock lead time (in nanoseconds) for this AudioRenderer. Calling this method with 'enabled' set to true will trigger an immediate OnMinLeadTimeChanged event with the current minimum lead time for the AudioRenderer. If the value changes, an OnMinLeadTimeChanged event will be raised with the new value. This behavior will continue until the user calls EnableMinLeadTimeEvents(false).
The minimum clock lead time is the amount of time ahead of the reference clock's understanding of "now" that packets need to arrive (relative to the playback clock transformation) in order for the mixer to be able to mix the packet. For example...
- Let the PTS of packet X be P(X)
- Let the function which transforms PTS -> RefClock be R(p) (this function is determined by the call to Play(...))
- Let the minimum lead time be MLT
If R(P(X)) < RefClock.Now() + MLT, then the packet is late, and some (or all) of the packet's payload will need to be skipped in order to present the packet at the scheduled time.
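The lateness rule above reduces to a single comparison. In this sketch, `packet_ref_time_ns` stands in for R(P(X)), the packet's PTS already mapped onto the reference clock; all names are illustrative.

```python
# Sketch of the lateness rule: a packet is late when its reference-clock
# presentation time falls inside the minimum-lead-time window, i.e.
# R(P(X)) < now + MLT. Parameter names are illustrative.

def packet_is_late(packet_ref_time_ns, now_ns, min_lead_time_ns):
    """True when the packet cannot be fully mixed in time."""
    return packet_ref_time_ns < now_ns + min_lead_time_ns
```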
The value min_lead_time_nsec = 0 is a special value which indicates that the AudioRenderer is not yet routed to an output device. If Play is called before the AudioRenderer is routed, any played packets will be dropped. Clients should wait until min_lead_time_nsec > 0 before calling Play.
Request
Name | Type |
---|---|
enabled | bool |
EndOfStream
Indicates the stream has ended. The precise semantics of this method are determined by the inheriting interface.
Request
<EMPTY>
GetMinLeadTime
While it is possible to call GetMinLeadTime before SetPcmStreamType, there's little reason to do so. This is because lead time is a function of format/rate, so lead time will be recalculated after SetPcmStreamType. If min lead time events are enabled before SetPcmStreamType (with EnableMinLeadTimeEvents(true)), then an event will be generated in response to SetPcmStreamType.
Request
<EMPTY>
Response
Name | Type |
---|---|
min_lead_time_nsec | int64 |
GetReferenceClock
Retrieves the stream's reference clock. The returned handle will have READ, DUPLICATE and TRANSFER rights, and will refer to a zx::clock that is MONOTONIC and CONTINUOUS.
Request
<EMPTY>
Response
Name | Type |
---|---|
reference_clock | handle<clock> |
OnMinLeadTimeChanged
Response
Name | Type |
---|---|
min_lead_time_nsec | int64 |
Pause
Immediately puts the AudioRenderer into the paused state and then reports the relationship between the media and reference timelines which was established (if requested).
If the AudioRenderer is already in the paused state when this is called, the previously-established timeline values are returned (if requested).
Request
<EMPTY>
Response
Name | Type |
---|---|
reference_time | int64 |
media_time | int64 |
PauseNoReply
Request
<EMPTY>
Play
Immediately puts the AudioRenderer into a playing state. Starts the advance of the media timeline, using specific values provided by the caller (or default values if not specified). In an optional callback, returns the timestamp values ultimately used -- these set the ongoing relationship between the media and reference timelines (i.e., how to translate between the domain of presentation timestamps, and the realm of local system time).
Local system time is specified in units of nanoseconds; media_time is specified in the units defined by the user in the SetPtsUnits function, or nanoseconds if SetPtsUnits is not called.
The act of placing an AudioRenderer into the playback state establishes a relationship between 1) the user-defined media (or presentation) timeline for this particular AudioRenderer, and 2) the real-world system reference timeline. To communicate how to translate between timelines, the Play() callback provides an equivalent timestamp in each time domain. The first value ('reference_time') is given in terms of this renderer's reference clock; the second value ('media_time') is what media instant exactly corresponds to that local time. Restated, the frame at 'media_time' in the audio stream should be presented at 'reference_time' according to the reference clock.
Note: on calling this API, media_time immediately starts advancing. It is possible (if uncommon) for a caller to specify a system time that is far in the past, or far into the future. This, along with the specified media time, is simply used to determine what media time corresponds to 'now', and THAT media time is then intersected with presentation timestamps of packets already submitted, to determine which media frames should be presented next.
With the corresponding reference_time and media_time values, a user can translate arbitrary time values from one timeline into the other. After calling SetPtsUnits(pts_per_sec_numerator, pts_per_sec_denominator) and given the 'ref_start' and 'media_start' values from Play, then for any 'ref_time':
media_time = ( (ref_time - ref_start) / 1e9 * (pts_per_sec_numerator / pts_per_sec_denominator) ) + media_start
Conversely, for any presentation timestamp 'media_time':
ref_time = ( (media_time - media_start) * (pts_per_sec_denominator / pts_per_sec_numerator) * 1e9 ) + ref_start
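The two translation formulas above can be checked with a short sketch. The helper names and the example numbers (1000 PTS ticks per second, a ref_start of 5 seconds) are illustrative assumptions, not part of the FIDL API:

```python
def media_from_ref(ref_time, ref_start, media_start, num, den):
    """Map a reference-clock time (ns) to a media timestamp (PTS ticks)."""
    return (ref_time - ref_start) / 1e9 * (num / den) + media_start

def ref_from_media(media_time, ref_start, media_start, num, den):
    """Map a media timestamp (PTS ticks) back to reference-clock time (ns)."""
    return (media_time - media_start) * (den / num) * 1e9 + ref_start

# Example: PTS units of 1000 ticks/sec (milliseconds), playback started at
# ref_start = 5e9 ns with media_start = 0.
num, den = 1000, 1
ref_start, media_start = 5_000_000_000, 0

# 250 ms after ref_start, the media timeline reads 250 ticks.
assert media_from_ref(ref_start + 250_000_000, ref_start, media_start, num, den) == 250
# The inverse maps 250 ticks back to the same reference time.
assert ref_from_media(250, ref_start, media_start, num, den) == ref_start + 250_000_000
```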
Users, depending on their use case, may optionally choose not to specify one or both of these timestamps. A timestamp may be omitted by supplying the special value 'NO_TIMESTAMP'. The AudioRenderer automatically deduces any omitted timestamp value using the following rules:
Reference Time If 'reference_time' is omitted, the AudioRenderer will select a "safe" reference time to begin presentation, based on the minimum lead times for the output devices that are currently bound to this AudioRenderer. For example, if an AudioRenderer is bound to an internal audio output requiring at least 3 mSec of lead time, and an HDMI output requiring at least 75 mSec of lead time, the AudioRenderer might (if 'reference_time' is omitted) select a reference time 80 mSec from now.
Media Time If media_time is omitted, the AudioRenderer will select one of two values.
- If the AudioRenderer is resuming from the paused state, and packets have not been discarded since being paused, then the AudioRenderer will use a media_time corresponding to the instant at which the presentation became paused.
- If the AudioRenderer is being placed into a playing state for the first time following startup or a 'discard packets' operation, the initial media_time will be set to the PTS of the first payload in the pending packet queue. If the pending queue is empty, initial media_time will be set to zero.
Return Value When requested, the AudioRenderer will return the 'reference_time' and 'media_time' which were selected and used (whether they were explicitly specified or not) in the return value of the play call.
Examples

- A user has queued some audio using SendPacket and simply wishes it to start playing as soon as possible. The user may call Play without providing explicit timestamps: Play(NO_TIMESTAMP, NO_TIMESTAMP).
- A user has queued some audio using SendPacket, and wishes to start playback at a specified 'reference_time', in sync with some other media stream, either initially or after discarding packets. The user would call Play(reference_time, NO_TIMESTAMP).
- A user has queued some audio using SendPacket. The first of these packets has a PTS of zero, and the user wishes playback to begin as soon as possible, but wishes to skip all of the audio content between PTS 0 and PTS 'media_time'. The user would call Play(NO_TIMESTAMP, media_time).
- A user has queued some audio using SendPacket and wants to present this media in sync with another player on a different device. The coordinator of the group of distributed players sends an explicit message to each player telling them to begin presentation of audio at PTS 'media_time', at the time (based on the group's shared reference clock) 'reference_time'. Here the user would call Play(reference_time, media_time).
Request
Name | Type |
---|---|
reference_time | int64 |
media_time | int64 |
Response
Name | Type |
---|---|
reference_time | int64 |
media_time | int64 |
PlayNoReply
Request
Name | Type |
---|---|
reference_time | int64 |
media_time | int64 |
RemovePayloadBuffer
Removes a payload buffer from the current buffer set associated with the connection.
A buffer with ID id must exist in the current set when this method is invoked, otherwise the service will close the connection.
Request
Name | Type |
---|---|
id | uint32 |
SendPacket
Sends a packet to the service. The response is sent when the service is done with the associated payload memory.
packet must be valid for the current buffer set, otherwise the service will close the connection.
Request
Name | Type |
---|---|
packet | StreamPacket |
Response
<EMPTY>
SendPacketNoReply
Sends a packet to the service. This interface doesn't define how the client knows when the sink is done with the associated payload memory. The inheriting interface must define that.
packet must be valid for the current buffer set, otherwise the service will close the connection.
Request
Name | Type |
---|---|
packet | StreamPacket |
SetPcmStreamType
Sets the type of the stream to be delivered by the client. Using this method implies that the stream encoding is AUDIO_ENCODING_LPCM.
This must be called before Play or PlayNoReply. After a call to SetPcmStreamType, the client must then send an AddPayloadBuffer request, then the various StreamSink methods such as SendPacket/SendPacketNoReply.
Request
Name | Type |
---|---|
type | AudioStreamType |
SetPtsContinuityThreshold
Sets the maximum threshold (in seconds) between explicit user-provided PTS and expected PTS (determined using interpolation). Beyond this threshold, a stream is no longer considered 'continuous' by the renderer.
Defaults to an interval of half a PTS 'tick', using the currently-defined PTS units. Most users should not need to change this value from its default.
Example: A user is playing back 48KHz audio from a container, which also contains video and needs to be synchronized with the audio. The timestamps are provided explicitly per packet by the container, and expressed in mSec units. This means that a single tick of the media timeline (1 mSec) represents exactly 48 frames of audio. The application in this scenario delivers packets of audio to the AudioRenderer, each with exactly 470 frames of audio, and each with an explicit timestamp set to the best possible representation of the presentation time (given this media clock's resolution). So, starting from zero, the timestamps would be:
[ 0, 10, 20, 29, 39, 49, 59, 69, 78, 88, ... ]
In this example, attempting to use the presentation time to compute the starting frame number of the audio in the packet would be wrong the majority of the time. The first timestamp is correct (by definition), but it will be 24 packets before the timestamps and frame numbers come back into alignment (the 24th packet would start with the 11280th audio frame and have a PTS of exactly 235).
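The arithmetic in this example can be reproduced directly; this sketch only restates the numbers given above (48 frames per 1-mSec tick, 470-frame packets):

```python
# 48 kHz audio with 1-mSec PTS ticks: 48 frames per tick.
FRAMES_PER_TICK = 48
FRAMES_PER_PACKET = 470

def packet_pts(packet_index):
    """Best 1-mSec representation of the packet's true presentation time."""
    return round(packet_index * FRAMES_PER_PACKET / FRAMES_PER_TICK)

# The first ten timestamps match the sequence in the text.
assert [packet_pts(k) for k in range(10)] == [0, 10, 20, 29, 39, 49, 59, 69, 78, 88]

# Timestamps and frame numbers realign at the 24th packet: it starts at
# frame 11280, whose presentation time is exactly 235 ticks.
assert 24 * FRAMES_PER_PACKET == 11280
assert 11280 / FRAMES_PER_TICK == 235.0
```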
One way to fix this situation is to set the PTS continuity threshold (henceforth, CT) for the stream to be equal to 1/2 of the time taken by the number of frames contained within a single tick of the media clock, rounded up. In this scenario, that would be 24.0 frames of audio, or 500 uSec. Any packets whose expected PTS was within +/-CT frames of the explicitly provided PTS would be considered to be a continuation of the previous frame of audio. For this example, calling 'SetPtsContinuityThreshold(0.0005)' would work well.
Other possible uses: Users who are scheduling audio explicitly, relative to a clock which has not been configured as the reference clock, can use this value to control the maximum acceptable synchronization error before a discontinuity is introduced. E.g., if a user is scheduling audio based on a recovered common media clock, and has not published that clock as the reference clock, and they set the CT to 20mSec, then up to 20mSec of drift error can accumulate before the AudioRenderer deliberately inserts a presentation discontinuity to account for the error.
Users who need to deal with a container whose timestamps may be even less correct than +/- 1/2 of a PTS tick may set this value to something larger. This should be the maximum level of inaccuracy present in the container timestamps, if known. Failing that, it could be set to the maximum tolerable level of drift error before absolute timestamps are explicitly obeyed. Finally, a user could set this number to a very large value (86400.0 seconds, for example) to effectively cause all timestamps to be ignored after the first, thus treating all audio as continuous with previously delivered packets. Conversely, users who wish to always explicitly schedule their audio packets exactly may specify a CT of 0.
Note: explicitly specifying high-frequency PTS units reduces the default continuity threshold accordingly. Internally, this threshold is stored as an integer of 1/8192 subframes. The default threshold is computed as follows: RoundUp((AudioFPS/PTSTicksPerSec) * 4096) / (AudioFPS * 8192) For this reason, specifying PTS units with a frequency greater than 8192x the frame rate (or NOT calling SetPtsUnits, which accepts the default PTS unit of 1 nanosec) will result in a default continuity threshold of zero.
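The default-threshold formula above can be sketched as follows; the function name is illustrative, and only the 48 kHz / 1 mSec case from the earlier example is checked:

```python
import math

def default_continuity_threshold(audio_fps, pts_ticks_per_sec):
    """Default CT in seconds: half a PTS tick, rounded up to an integer
    number of 1/8192-frame subframes, per the formula in the text."""
    subframes = math.ceil((audio_fps / pts_ticks_per_sec) * 4096)
    return subframes / (audio_fps * 8192)

# 48 kHz audio with 1-mSec PTS ticks: half a tick is 500 uSec, matching the
# SetPtsContinuityThreshold(0.0005) suggestion in the earlier example.
assert default_continuity_threshold(48000, 1000) == 0.0005
```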
Request
Name | Type |
---|---|
threshold_seconds | float32 |
SetPtsUnits
Sets the units used by the presentation (media) timeline. By default, PTS units are nanoseconds (as if this were called with numerator of 1e9 and denominator of 1). This ratio must lie between 1/60 (1 tick per minute) and 1e9/1 (1ns per tick).
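A minimal sketch of the documented bounds on the tick ratio (the helper name is hypothetical; the actual service-side validation is not specified here):

```python
def pts_units_valid(numerator, denominator):
    """True if ticks-per-second lies between 1/60 (one tick per minute)
    and 1e9 (one nanosecond per tick), per the documented bounds."""
    ratio = numerator / denominator
    return 1 / 60 <= ratio <= 1e9

assert pts_units_valid(1_000_000_000, 1)  # the default: nanoseconds
assert pts_units_valid(44100, 1)          # one tick per audio frame at 44.1 kHz
assert not pts_units_valid(1, 120)        # slower than one tick per minute
```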
Request
Name | Type |
---|---|
tick_per_second_numerator | uint32 |
tick_per_second_denominator | uint32 |
SetReferenceClock
Sets the reference clock that controls this renderer's playback rate. If the input parameter is a valid zx::clock, it must have READ, DUPLICATE, TRANSFER rights and refer to a clock that is both MONOTONIC and CONTINUOUS. If instead an invalid clock is passed (such as the uninitialized zx::clock()), this indicates that the stream will use a 'flexible' clock generated by AudioCore that tracks the audio device.
SetReferenceClock cannot be called once SetPcmStreamType is called. It also cannot be called a second time (even if the renderer format has not yet been set).
If a client wants a reference clock that is initially CLOCK_MONOTONIC but may diverge at some later time, they should create a clone of the monotonic clock, set this as the stream's reference clock, then rate-adjust it subsequently as needed.
Request
Name | Type |
---|---|
reference_clock | handle<clock>? |
SetUsage
Sets the usage of the render stream. This method may not be called after SetPcmStreamType is called. The default usage is MEDIA.
Request
Name | Type |
---|---|
usage | AudioRenderUsage |
ProfileProvider
Defined in fuchsia.media/profile_provider.fidl
RegisterHandlerWithCapacity
Register a thread as a media thread. This notifies the media subsystem that this thread should have an elevated scheduling profile applied to it in order to meet audio or video deadlines.
name is the name of a system scheduling role to apply to the thread given by thread_handle -- different products may customize the underlying scheduling strategy based on the requested role. period is the suggested interval to be scheduled at. period may be zero if the thread has no preferred scheduling interval. capacity is the proportion of the scheduling interval the thread needs to be running to achieve good performance or to meet the scheduling deadline defined by period. capacity may be zero if the workload has no firm runtime requirements. Note that capacity should be a good-faith estimate based on the worst-case runtime the thread requires each period. Excessive capacity requests may be rejected or result in scaling back the performance of other threads to fit resource limits.
Capacity, max runtime, and period have the following relationship:
capacity = max runtime / period
where 0 <= max runtime <= period and 0 <= capacity <= 1.
For heterogeneous systems, the capacity should be planned / measured against the highest performance processor(s) in the system. The system will automatically adjust the effective capacity to account for slower processors and operating points and will avoid processors and operating points that are too slow to meet the requested scheduling parameters (provided they are reasonable).
Returns the period and capacity (actually maximum runtime) that was applied, either of which may be zero to indicate not applicable.
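The relationship above can be illustrated with hypothetical numbers; the 10 ms period and 2 ms worst-case runtime below are assumptions for the example, not values from the protocol:

```python
def capacity_for(max_runtime_ns, period_ns):
    """capacity = max runtime / period, as defined above."""
    return max_runtime_ns / period_ns

# Hypothetical audio mixing thread: scheduled every 10 ms, needing at most
# 2 ms of CPU per interval in the worst case.
capacity = capacity_for(2_000_000, 10_000_000)
assert capacity == 0.2
# The documented invariants hold for these numbers.
assert 0.0 <= capacity <= 1.0
```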
Request
Name | Type |
---|---|
thread_handle | handle<thread> |
name | string:64 |
period | zx/Duration |
capacity | float32 |
Response
Name | Type |
---|---|
period | zx/Duration |
capacity | zx/Duration |
RegisterMemoryRange
Register a memory range as being used for media processing. This notifies the media subsystem that this memory should have an elevated memory profile applied to it in order to meet audio or video deadlines.
name is the name of a system memory role to apply to the memory given by vmar_handle -- different products may customize the underlying memory strategy based on the requested role.
Request
Name | Type |
---|---|
vmar_handle | handle<vmar> |
name | string:64 |
Response
<EMPTY>
UnregisterHandler
Reset a thread's scheduling profile to the default.
Request
Name | Type |
---|---|
thread_handle | handle<thread> |
name | string:64 |
Response
<EMPTY>
UnregisterMemoryRange
Reset a memory range's memory profile.
Request
Name | Type |
---|---|
vmar_handle | handle<vmar> |
Response
<EMPTY>
SessionAudioConsumerFactory
Defined in fuchsia.media/audio_consumer.fidl
Interface for creating audio consumers bound to a session.
CreateAudioConsumer
Creates an AudioConsumer, which is an interface for playing audio, bound to a particular session. session_id is the identifier of the media session for which audio is to be rendered.
Request
Name | Type |
---|---|
session_id | uint64 |
audio_consumer_request | server_end:AudioConsumer |
SimpleStreamSink
Defined in fuchsia.media/stream.fidl
A StreamSink that uses StreamBufferSet for buffer management.
AddPayloadBuffer
Adds a payload buffer to the current buffer set associated with the connection. A StreamPacket struct references a payload buffer in the current set by ID using the StreamPacket.payload_buffer_id field.
A buffer with ID id must not be in the current set when this method is invoked, otherwise the service will close the connection.
Request
Name | Type |
---|---|
id | uint32 |
payload_buffer | handle<vmo> |
DiscardAllPackets
Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released. The response is sent after all packets have been released.
Request
<EMPTY>
Response
<EMPTY>
DiscardAllPacketsNoReply
Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released.
Request
<EMPTY>
EndOfStream
Indicates the stream has ended. The precise semantics of this method are determined by the inheriting interface.
Request
<EMPTY>
RemovePayloadBuffer
Removes a payload buffer from the current buffer set associated with the connection.
A buffer with ID id must exist in the current set when this method is invoked, otherwise the service will close the connection.
Request
Name | Type |
---|---|
id | uint32 |
SendPacket
Sends a packet to the service. The response is sent when the service is done with the associated payload memory.
packet must be valid for the current buffer set, otherwise the service will close the connection.
Request
Name | Type |
---|---|
packet | StreamPacket |
Response
<EMPTY>
SendPacketNoReply
Sends a packet to the service. This interface doesn't define how the client knows when the sink is done with the associated payload memory. The inheriting interface must define that.
packet must be valid for the current buffer set, otherwise the service will close the connection.
Request
Name | Type |
---|---|
packet | StreamPacket |
StreamBufferSet
Defined in fuchsia.media/stream.fidl
Manages a set of payload buffers for a stream. This interface is typically inherited along with StreamSink or StreamSource to enable the transport of elementary streams between clients and services.
AddPayloadBuffer
Adds a payload buffer to the current buffer set associated with the connection. A StreamPacket struct references a payload buffer in the current set by ID using the StreamPacket.payload_buffer_id field.
A buffer with ID id must not be in the current set when this method is invoked, otherwise the service will close the connection.
Request
Name | Type |
---|---|
id | uint32 |
payload_buffer | handle<vmo> |
RemovePayloadBuffer
Removes a payload buffer from the current buffer set associated with the connection.
A buffer with ID id must exist in the current set when this method is invoked, otherwise the service will close the connection.
Request
Name | Type |
---|---|
id | uint32 |
StreamProcessor
Defined in fuchsia.media/stream_processor.fidl
Overview of operation:
- Create
- create via CodecFactory - see CodecFactory
- create via LicenseSession - see LicenseSession
- Get input constraints
- OnInputConstraints() - sent unsolicited by stream processor shortly after stream processor creation.
- Provide input buffers
- SetInputBufferPartialSettings()
- Deliver input data
- QueueInputPacket() + OnFreeInputPacket(), for as long as it takes, possibly working through all input packets repeatedly before...
- Get output constraints and format
- OnOutputConstraints()
- This is not sent until after at least one QueueInput* message is sent by the client, even if the underlying processor behind the StreamProcessor doesn't fundamentally need any input data to determine its output constraints. This server behavior prevents clients taking an incorrect dependency on the output constraints showing up before input is delivered.
- A client must tolerate this arriving as late as after substantial input data has been delivered, including lots of input packet recycling via OnFreeInputPacket().
- This message can arrive more than once before the first output data.
- Provide output buffers
- SetOutputBufferPartialSettings() / CompleteOutputBufferPartialSettings()
- Data flows, with optional EndOfStream
- OnOutputPacket() / RecycleOutputPacket() / QueueInputPacket() / OnFreeInputPacket() / QueueInputEndOfStream() / OnOutputEndOfStream()
Semi-trusted StreamProcessor server - SW decoders run in an isolate (with very few capabilities) just in case the decoding SW has a vulnerability which could be used to take over the StreamProcessor server. Clients of the stream processor interface using decoders and processing streams of separate security contexts, to a greater extent than some other interfaces, need to protect themselves against invalid server behavior, such as double-free of a packet_index and any other invalid server behavior. Having fed in compressed data of one security context, don't place too much trust in a single StreamProcessor instance to not mix data among any buffers that StreamProcessor server has ever been told about. Instead, create separate StreamProcessor instances for use by security-separate client-side contexts. While the picture for HW-based decoders looks somewhat different and is out of scope of this paragraph, the client should always use separate StreamProcessor instances for security-separate client-side contexts.
Descriptions of actions taken by methods of this protocol and the states of things are given as if the methods are synchronously executed by the stream processor server, but in reality, as is typical of FIDL interfaces, the message processing is async. The states described are to be read as the state from the client's point of view unless otherwise stated. Events coming back from the server are of course delivered async, and a client that processes more than one stream per StreamProcessor instance needs to care whether a given event is from the current stream vs. some older soon-to-be-gone stream.
The Sync() method's main purpose is to enable the client to robustly prevent having both old and new buffers allocated in the system at the same time, since media buffers can be significantly large. The Sync() method achieves this by only delivering its response when all previous calls to the StreamProcessor protocol have actually taken effect in the StreamControl ordering domain. Sync() can also be used to wait for the stream processor server to catch up if there's a possibility that a client might otherwise get too far ahead of the StreamProcessor server, for example by requesting creation of a large number of streams in a row. It can also be used during debugging to ensure that a stream processor server hasn't gotten stuck. Calling Sync() is entirely optional and never required for correctness - only potentially required to de-overlap resource usage.
It's possible to re-use a StreamProcessor instance for another stream, and doing so can sometimes skip over re-allocation of buffers. This can be a useful thing to do for cases like seeking to a new location - at the StreamProcessor interface that can look like switching to a new stream.
CloseCurrentStream
This "closes" the current stream, leaving no current stream. In addition, this message can optionally release input buffers or output buffers.
If there has never been any active stream, the stream_lifetime_ordinal must be zero or the server will close the channel. If there has been an active stream, the stream_lifetime_ordinal must be the most recent active stream whether that stream is still active or not. Else the server will close the channel.
Sending this message multiple times without any new active stream in between is not an error. This allows a client to use this message to close the current stream to stop wasting processing power on a stream the user no longer cares about, then later decide that buffers should be released and send this message again with release_input_buffers and/or release_output_buffers true to get the buffers released, if the client is interested in trying to avoid overlap in resource usage between old buffers and new buffers (not all clients are).
See also Sync().
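The stream_lifetime_ordinal rules above can be sketched as a small validation function; the function name and shape are hypothetical, not part of the FIDL bindings:

```python
def close_current_stream_valid(most_recent_ordinal, requested_ordinal):
    """Return True if a CloseCurrentStream() request is valid, False if the
    server must close the channel. most_recent_ordinal is 0 when there has
    never been any active stream."""
    if most_recent_ordinal == 0:
        # Never any active stream: the requested ordinal must be zero.
        return requested_ordinal == 0
    # Otherwise it must name the most recent stream, active or not.
    return requested_ordinal == most_recent_ordinal

assert close_current_stream_valid(0, 0) is True
assert close_current_stream_valid(0, 1) is False   # server closes channel
assert close_current_stream_valid(3, 3) is True    # repeats are allowed
assert close_current_stream_valid(3, 1) is False   # stale ordinal
```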
Request
Name | Type |
---|---|
stream_lifetime_ordinal | uint64 |
release_input_buffers | bool |
release_output_buffers | bool |
CompleteOutputBufferPartialSettings
After SetOutputBufferPartialSettings(), the server won't send OnOutputConstraints(), OnOutputFormat(), OnOutputPacket(), or OnOutputEndOfStream() until after the client sends CompleteOutputBufferPartialSettings().
Some clients may be able to send CompleteOutputBufferPartialSettings() immediately after SetOutputBufferPartialSettings() - in that case the client needs to be prepared to receive output without knowing the buffer count or packet count yet - such clients may internally delay processing the received output until the client has heard from sysmem (which is when the client will learn the buffer count and packet count).
Other clients may first wait for sysmem to allocate, prepare to receive output, and then send CompleteOutputBufferPartialSettings().
Request
Name | Type |
---|---|
buffer_lifetime_ordinal | uint64 |
EnableOnStreamFailed
Permit the server to use OnStreamFailed() instead of the server just closing the whole StreamProcessor channel on stream failure.
If the server hasn't seen this message by the time a stream fails, the server will close the StreamProcessor channel instead of sending OnStreamFailed().
Request
<EMPTY>
FlushEndOfStreamAndCloseStream
This message is optional.
This message is only valid after QueueInputEndOfStream() for this stream. The stream_lifetime_ordinal input parameter must match the stream_lifetime_ordinal of the QueueInputEndOfStream(), else the server will close the channel.
A client can use this message to flush through (not discard) the last input data of a stream so that the stream processor server generates corresponding output data for all the input data before the server moves on to the next stream, without forcing the client to wait for OnOutputEndOfStream() before queueing data of another stream.
The difference between QueueInputEndOfStream() and FlushEndOfStreamAndCloseStream(): QueueInputEndOfStream() is a promise from the client that there will not be any more input data for the stream (and this info is needed by some stream processors for the stream processor to ever emit the very last output data). The QueueInputEndOfStream() having been sent doesn't prevent the client from later completely discarding the rest of the current stream by closing the current stream (with or without a stream switch). In contrast, FlushEndOfStreamAndCloseStream() is a request from the client that all the previously-queued input data be processed including the logical "EndOfStream" showing up as OnOutputEndOfStream() (in success case) before moving on to any newer stream - this essentially changes the close-stream handling from discard to flush-through for this stream only.
A client using this message can start providing input data for a new stream without that causing discard of old stream data. That's the purpose of this message - to allow a client to flush through (not discard) the old stream's last data (instead of the default when closing or switching streams which is discard).
Because the old stream is not done processing yet and the old stream's data is not being discarded, the client must be prepared to continue to process OnOutputConstraints() messages until the stream_lifetime_ordinal is done. The client will know the stream_lifetime_ordinal is done when OnOutputEndOfStream(), OnStreamFailed(), or the StreamProcessor channel closes.
Request
Name | Type |
---|---|
stream_lifetime_ordinal | uint64 |
OnFreeInputPacket
The server sends this message when the stream processor is done consuming the data in this packet (but not necessarily done processing the data) and the packet can be re-filled by the client.
This is not sent for all packets when a new buffer_lifetime_ordinal starts as in that case all the packets are initially free with the client.
After receiving the available input packet via this event, the stream processor client can later call QueueInputPacket() with appropriate offset and length set, with the same packet_index, to re-use the packet_index.
OnFreeInputPacket() does not imply that the data in the input packet has been processed successfully, only that the input data is no longer needed by the StreamProcessor. If a client needs to know which input data has generated corresponding output, using timestamp_ish values for that is recommended.
Any reliance on the relative order of OnFreeInputPacket() and OnStreamFailed() is discouraged and deprecated. Instead, use timestamp_ish values to establish which input packets generated corresponding output packets. Note that even using timestamp_ish values doesn't necessarily imply that the processing of input data with a given timestamp_ish value is fully complete, as in some StreamProcessor(s) the data derived from an input packet can be kept for reference purposes for a long time (in general indefinitely) after the input data has generated its primary output data (the output data to which the timestamp_ish value is attached). The StreamProcessor interface currently does not provide any way to determine when all data derived from an input packet has been discarded by the StreamProcessor, and if such a mechanism is ever added to the StreamProcessor protocol, it would be an optional StreamProcessor capability, since it would be infeasible to implement for some StreamProcessor implementations that rely on external means to process data, where the external means won't necessarily provide info regarding when an input packet's derived data is fully discarded. An input packet's derived data will never generate or contribute to any output data for a different stream.
The order of OnFreeInputPacket() is not guaranteed to be the same as the order of QueueInputPacket(). Any reliance on the order being the same is strongly discouraged and deprecated. Clients are expected to work properly even if the order of OnFreeInputPacket() messages is intentionally scrambled with respect to each other (but not scrambled across OnStreamFailed(), for now).
Response
Name | Type |
---|---|
free_input_packet | PacketHeader |
OnInputConstraints
The server sends this shortly after StreamProcessor creation to indicate input buffer constraints. The "min" and "max" input constraints don't change for the life of the StreamProcessor.
The "max" values for buffer size and count are large enough to support the most demanding format the server supports on input. The "recommended" values should be workable for use with the input FormatDetails conveyed during StreamProcessor creation. The "recommended" values are not necessarily suitable if the client uses QueueInputFormatDetails() to change the input format. In that case it's up to the client to determine suitable values, either by creating a new StreamProcessor instance instead, or knowing suitable values outside the scope of this protocol.
See comments on StreamBufferConstraints.
This message is guaranteed to be sent unsolicited to the StreamProcessor client during or shortly after StreamProcessor creation. Clients should not depend on this being the very first message to arrive at the client.
The "min" and "max" input constraints are guaranteed not to change for a given StreamProcessor instance. The "recommended" values may effectively change when the server processes QueueInputFormatDetails(). There is not any way in the protocol short of creating a new StreamProcessor instance for the client to get those new "recommended" values.
Response
Name | Type |
---|---|
input_constraints | StreamBufferConstraints |
OnOutputConstraints
This event informs the client of new output constraints.
This message is ordered with respect to other output (such as output packets, output format, output end-of-stream).
Before the first OnOutputPacket() of a stream, the server guarantees that at least one OnOutputConstraints() and exactly one OnOutputFormat() will be sent. The server may not set buffer_constraints_action_required true in OnOutputConstraints() if the buffer config is already suitable for the stream (buffer_constraints_action_required false means the buffer config is already fine). The client must tolerate multiple OnOutputConstraints() (and 1 OnOutputFormat() message) before the first output packet. As long as the client hasn't moved to a new stream, the server won't send another OnOutputConstraints() until after the client has configured output buffers.
This message can be sent mid-stream by a server. If buffer_constraints_action_required false, the message is safe to ignore, but a client may choose to stash the new constraints for later use the next time the client wants to unilaterally re-configure buffers (when allowed). If later the server needs the output config to change, the server may send a new OnOutputConstraints() with buffer_constraints_action_required true.
On buffer_constraints_action_required true, a client that does not wish to fully handle mid-stream output buffer config changes should either give up completely on the processing, or at least re-config the output as specified before starting a new stream (and possibly re-delivering input data, if the client wants). This avoids useless retry with a new stream starting from just before the output buffer config change which would hit the same mid-stream output config change again.
Similarly, some servers may only partly support mid-stream format changes, or only support a mid-stream format change if the buffers are already large enough to handle both before and after the format change. Such servers should still indicate buffer_constraints_action_required true, but then send OnStreamFailed() after the client has re-configured output buffers (seamlessly dealing with the mid-stream output config change is even better of course, but is not always feasible depending on format). When the client retries with a new stream starting from a nearby location in the client's logical overall media timeline, the output buffers will already be suitable for the larger size output, so the new stream will not need any mid-stream output buffer re-config, only a mid-stream OnOutputFormat(). This strategy avoids the problem that would otherwise occur if a client were to retry with a new stream starting just before the mid-stream output buffer config change (the retry wouldn't be effective since the same need for an output buffer config change would be hit again). Servers are discouraged from sending OnStreamFailed() solely due to a mid-stream need for different output buffer config without first sending OnOutputConstraints() with buffer_constraints_action_required true and waiting for the client to re-configure output buffers (to avoid the useless client retry with a new stream from a logical location before the config change).
When buffer_constraints_action_required true, the server will not send any OnOutputPacket() for this stream until after the client has configured/re-configured output buffers.
A client that gives up on processing a stream on any mid-stream OnOutputConstraints() or mid-stream OnOutputFormat() should completely ignore any OnOutputConstraints() with buffer_constraints_action_required false. Otherwise the client may needlessly fail processing, or server implementations might not be able to use buffer_constraints_action_required false for fear of simpler clients just disconnecting.
All clients, even those which don't want to support any mid-stream output buffer re-config or mid-stream OnOutputFormat(), are required to deal with one or more OnOutputConstraints() messages before the first output packet, and one OnOutputFormat() message before the first output packet.
This message is ordered with respect to output packets, and with respect to OnOutputFormat().
Response
Name | Type |
---|---|
output_config | StreamOutputConstraints |
OnOutputEndOfStream
After QueueInputEndOfStream() is sent by the StreamProcessor client, within a reasonable duration the corresponding OnOutputEndOfStream() will be sent by the StreamProcessor server. Similar to QueueInputEndOfStream(), OnOutputEndOfStream() is sent a maximum of once per stream.
No more stream data for this stream will be sent after this message. All input data for this stream was processed.
While a StreamProcessor client is not required to QueueInputEndOfStream() (unless the client wants to use FlushEndOfStreamAndCloseStream()), if a StreamProcessor server receives QueueInputEndOfStream(), and the client hasn't closed the stream, the StreamProcessor server must generate a corresponding OnOutputEndOfStream() if nothing went wrong, or must send OnStreamFailed(), or must close the server end of the StreamProcessor channel. An ideal StreamProcessor server would handle and report stream errors via the error_ flags and complete stream processing without sending OnStreamFailed(), but in any case, the above-listed options are the only ways that an OnOutputEndOfStream() won't happen after QueueInputEndOfStream().
There will be no more OnOutputPacket() or OnOutputConstraints() messages for this stream_lifetime_ordinal after this message - if a server doesn't follow this rule, a client should close the StreamProcessor channel.
The error_detected_before bool has the same semantics as the error_detected_before bool in OnOutputPacket().
Response
Name | Type |
---|---|
stream_lifetime_ordinal | uint64 |
error_detected_before | bool |
OnOutputFormat
This message is sent by the server before the first output packet of any stream, and potentially mid-stream between output packets of the stream, ordered with respect to output packets, and ordered with respect to OnOutputConstraints().
The server guarantees that the first packet of every stream will be preceded by an OnOutputFormat().
The server guarantees that there will be an OnOutputFormat() between an OnOutputConstraints() with buffer_constraints_action_required true and an OnOutputPacket(). In other words, the client is essentially allowed to forget what the output format is on any OnOutputConstraints() with buffer_constraints_action_required true, because the server promises a subsequent OnOutputFormat() before any OnOutputPacket().
If the server sets buffer_constraints_action_required true in OnOutputConstraints(), the server won't send OnOutputFormat() (and therefore also won't send OnOutputPacket()) until the client has re-configured output buffers.
The server is allowed to send an OnOutputFormat() mid-stream between two output packets.
A server won't send two adjacent OnOutputFormat() messages without any output packet in between. However, an OnOutputFormat() message doesn't guarantee a subsequent packet, because for example the server could send OnOutputEndOfStream() or OnStreamFailed() instead.
A client that does not wish to seamlessly handle mid-stream output format changes should either ensure that no stream processed by the client ever has any mid-stream format change, or the client should ensure that any retry of processing starts the new attempt at a point logically at or after the point where the old format has ended and the new format starts, else the client could just hit the same mid-stream format change again.
An example of this message being sent mid-stream is mid-stream change of dimensions of video frames output from a video decoder.
Not all servers will support seamless handling of format change. Those that do support seamless handling of format change may require that the format change not also require output buffer re-config, in order for the handling to be seamless. See the comment block for OnOutputConstraints() for more discussion of how servers and clients should behave - in particular when they don't seamlessly handle output constraint change and/or output format change.
If this message isn't being sent by the server when expected at the start of a stream, the most common reason is that an OnOutputConstraints() with buffer_constraints_action_required true hasn't been processed by the client (by configuring output buffers using SetOutputBufferPartialSettings() etc).
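A minimal client-side sketch of these ordering rules (plain Python modeling the protocol state; the class and method names are illustrative, not real FIDL bindings):

```python
# Illustrative model of the client-visible ordering rules for
# OnOutputConstraints() / OnOutputFormat() / OnOutputPacket().
# Not real Fuchsia FIDL bindings; names are assumptions for this sketch.

class OutputStreamModel:
    def __init__(self):
        self.format = None               # last OnOutputFormat() payload
        self.needs_buffer_config = True  # true until output buffers configured

    def on_output_constraints(self, action_required):
        if action_required:
            # The client may forget the format here: the server promises a
            # fresh OnOutputFormat() before any further OnOutputPacket().
            self.format = None
            self.needs_buffer_config = True
        # action_required == False is safe to ignore entirely.

    def configure_output_buffers(self):
        # Stands in for SetOutputBufferPartialSettings() + sysmem allocation.
        self.needs_buffer_config = False

    def on_output_format(self, fmt):
        # The server never sends this while buffer re-config is pending.
        assert not self.needs_buffer_config
        self.format = fmt

    def on_output_packet(self, packet):
        # Every output packet is preceded by at least one OnOutputFormat().
        assert self.format is not None
        return (self.format, packet)
```

For example, after a constraints message with action required, the client reconfigures buffers, receives a format, and only then receives packets interpreted under that format.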
Response
Name | Type |
---|---|
output_format | StreamOutputFormat |
OnOutputPacket
This is how the stream processor emits an output packet to the stream processor client.
Order is significant.
The client should eventually call RecycleOutputPacket() (possibly after switching streams multiple times), unless the buffer_lifetime_ordinal has moved on. A stream change doesn't change which packets are busy with the client vs. free with the server.
The relevant buffer is always the one specified in the packet's buffer_index field.
For low-level buffer types that support it, a StreamProcessor is free to emit an output packet before the low-level buffer actually has any usable data in the buffer, with the mechanism for signalling the presence of data separate from the OnOutputPacket() message. For such low-level buffer types, downstream consumers of data from the emitted packet must participate in the low-level buffer signalling mechanism to know when it's safe to consume the data. This is most likely to be relevant when using a video decoder and gralloc-style buffers.
The error_ bool(s) allow (but do not require) a StreamProcessor server to report errors that happen during an AU or between AUs.
The scope of error_detected_before starts at the end of the last delivered output packet on this stream, or the start of stream if there were no previous output packets on this stream. The scope ends at the start of the output_packet.
The error_detected_before bool is separate so that discontinuities can be indicated separately from whether the current packet is damaged.
The scope of error_detected_during is from the start to the end of this output_packet.
Response
Name | Type |
---|---|
output_packet | Packet |
error_detected_before | bool |
error_detected_during | bool |
OnStreamFailed
The stream has failed, but the StreamProcessor instance is still usable for a new stream.
This message is only ever sent by the server if the client previously sent EnableOnStreamFailed(). If the client didn't send EnableOnStreamFailed() then the server closes the StreamProcessor channel instead.
StreamProcessor server implementations are encouraged to handle stream errors (and ideally to also report them via error_ bools of OnOutputPacket() and OnOutputEndOfStream()) without failing the whole stream. If a stream processor server is unable to do that but can still cleanly contain the failure to the stream, the stream processor server can (assuming EnableOnStreamFailed() was called) use OnStreamFailed() to indicate the stream failure to the client without closing the StreamProcessor channel.
An ideal StreamProcessor server handles problems with input data without sending this message, but sending this message is preferred vs. closing the server end of the StreamProcessor channel if the StreamProcessor server can 100% reliably contain the stream failure to the stream, without any adverse impact to any later stream.
No further messages will arrive from the server regarding the failed stream. This includes any OnOutputEndOfStream() that the client would have otherwise expected.
Response
Name | Type |
---|---|
stream_lifetime_ordinal | uint64 |
error | StreamError |
QueueInputEndOfStream
Inform the server that all QueueInputPacket() messages for this stream have been sent.
If the stream isn't closed first (by the client, or by OnStreamFailed(), or StreamProcessor channel closing), there will later be a corresponding OnOutputEndOfStream().
The corresponding OnOutputEndOfStream() message will be generated only if the server finishes processing the stream before the server sees the client close the stream (such as by starting a new stream). A way to force the server to finish the stream before closing is to use FlushEndOfStreamAndCloseStream() after QueueInputEndOfStream() before any new stream. Another way to force the server to finish the stream before closing is to wait for the OnOutputEndOfStream() before taking any action that closes the stream.
In addition to serving as an "EndOfStream" marker to make it obvious client-side when all input data has been processed, if a client never sends QueueInputEndOfStream(), no amount of waiting will necessarily result in all input data getting processed through to the output. Some stream processors have some internally-delayed data which only gets pushed through by additional input data or by this EndOfStream marker. In that sense, this message can be viewed as a flush-through at InputData domain level, but the flush-through only takes effect if the stream processor even gets that far before the stream is just closed at StreamControl domain level. This message is not alone sufficient to act as an overall flush-through at StreamControl level. For that, send this message first and then send FlushEndOfStreamAndCloseStream() (at which point it becomes possible to queue input data for a new stream without causing discard of this older stream's data), or wait for the OnOutputEndOfStream() before closing the current stream.
If a client sends QueueInputPacket(), QueueInputFormatDetails(), or QueueInputEndOfStream() for this stream after the first QueueInputEndOfStream() for this stream, a server should close the StreamProcessor channel.
Request
Name | Type |
---|---|
stream_lifetime_ordinal | uint64 |
QueueInputFormatDetails
If the input format details are still the same as specified during StreamProcessor creation, this message is unnecessary and does not need to be sent.
If the stream doesn't exist yet, this message creates the stream.
The server won't send OnOutputConstraints() until after the client has sent at least one QueueInput* message.
All servers must permit QueueInputFormatDetails() at the start of a stream without failing, as long as the new format is supported by the StreamProcessor instance. Technically this allows for a server to only support the exact input format set during StreamProcessor creation, and that is by design. A client that tries to switch formats and gets a StreamProcessor channel failure should try again one more time with a fresh StreamProcessor instance created with CodecFactory using the new input format during creation, before giving up.
These format details override the format details specified during stream processor creation for this stream only. The next stream will default back to the format details set during stream processor creation.
This message is permitted at the start of the first stream (just like at the start of any stream). The format specified need not match what was specified during stream processor creation, but if it doesn't match, the StreamProcessor channel might close as described above.
Request
Name | Type |
---|---|
stream_lifetime_ordinal | uint64 |
format_details | FormatDetails |
QueueInputPacket
This message queues input data to the stream processor for processing.
If the stream doesn't exist yet, this message creates the new stream.
The server won't send OnOutputConstraints() until after the client has sent at least one QueueInput* message.
The client must continue to deliver input data via this message even if the stream processor has not yet generated the first OnOutputConstraints(), and even if the StreamProcessor is generating OnFreeInputPacket() for previously-queued input packets. To be assured that the server will ever generate the first OnOutputConstraints(), the client must continue delivering input data as long as there are free input packets.
Request
Name | Type |
---|---|
packet | Packet |
RecycleOutputPacket
After the client is done with an output packet, the client needs to tell the stream processor that the output packet can be re-used for more output, via this method.
It's not permitted to recycle an output packet that's already free with the stream processor server. It's permitted but discouraged for a client to recycle an output packet that has been deallocated by an explicit or implicit output buffer de-configuration. See buffer_lifetime_ordinal for more on that. A server must ignore any such stale RecycleOutputPacket() calls.
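A sketch of how a server might track packet ownership and ignore stale recycles, using a plain dict as a stand-in for PacketHeader (illustrative only; not real Fuchsia code):

```python
# Illustrative server-side handling of RecycleOutputPacket(): a recycle
# carrying a stale buffer_lifetime_ordinal (from a de-configured buffer
# set) is silently ignored. Names here are assumptions for this sketch.

class OutputPacketPool:
    def __init__(self, buffer_lifetime_ordinal, packet_count):
        self.buffer_lifetime_ordinal = buffer_lifetime_ordinal
        self.free = set(range(packet_count))  # indexes free with the server

    def emit_packet(self):
        index = self.free.pop()  # packet becomes busy with the client
        return {"buffer_lifetime_ordinal": self.buffer_lifetime_ordinal,
                "packet_index": index}

    def recycle_output_packet(self, header):
        if header["buffer_lifetime_ordinal"] != self.buffer_lifetime_ordinal:
            return  # stale recycle: must be ignored, not treated as an error
        if header["packet_index"] in self.free:
            # Recycling an already-free packet is a protocol violation.
            raise ValueError("packet already free with server")
        self.free.add(header["packet_index"])
```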
Request
Name | Type |
---|---|
available_output_packet | PacketHeader |
SetInputBufferPartialSettings
This is the replacement for SetInputBufferSettings().
When the client is using sysmem to allocate buffers, this message is used instead of SetInputBufferSettings()+AddInputBuffer(). A single SetInputBufferPartialSettings() provides the StreamProcessor with the client-specified input settings and a BufferCollectionToken which the StreamProcessor will use to convey constraints to sysmem. Both the client and the StreamProcessor will be informed of the allocated buffers directly by sysmem via their BufferCollection channel (not via the StreamProcessor channel).
The client must not QueueInput...() until after sysmem informs the client that buffer allocation has completed and was successful.
The server should be prepared to see QueueInput...() before the server has necessarily heard from sysmem that the buffers are allocated - the server must tolerate either ordering, as the QueueInput...() and notification of sysmem allocation completion arrive on different channels, so the client having heard that allocation is complete doesn't mean the server knows that allocation is complete yet. However, the server can expect that allocation is in fact complete and can expect to get the allocation information from sysmem immediately upon requesting the information from sysmem.
Request
Name | Type |
---|---|
input_settings | StreamBufferPartialSettings |
SetOutputBufferPartialSettings
This is the replacement for SetOutputBufferSettings().
When the client is using sysmem to allocate buffers, this message is used instead of SetOutputBufferSettings()+AddOutputBuffer(). A single SetOutputBufferPartialSettings() provides the StreamProcessor with the client-specified output settings and a BufferCollectionToken which the StreamProcessor will use to convey constraints to sysmem. Both the client and the StreamProcessor will be informed of the allocated buffers directly by sysmem via their BufferCollection channel (not via the StreamProcessor channel).
Configuring output buffers is required after OnOutputConstraints() is received by the client with buffer_constraints_action_required true and stream_lifetime_ordinal equal to the client's current stream_lifetime_ordinal (even if there is an active stream), and is permitted any time there is no current stream.
Closing the current stream occurs on the StreamControl ordering domain, so after a CloseCurrentStream() or FlushEndOfStreamAndCloseStream(), a subsequent Sync() completion must be received by the client before the client knows that there's no longer a current stream.
See also CompleteOutputBufferPartialSettings().
Request
Name | Type |
---|---|
output_settings | StreamBufferPartialSettings |
Sync
On completion, all previous StreamProcessor calls have done what they're going to do server-side, except for processing of data queued using QueueInputPacket().
The main purpose of this call is to enable the client to wait until a CloseCurrentStream() with release_input_buffers and/or release_output_buffers set to true has taken effect, before the client allocates new buffers and re-sets-up input and/or output buffers. This de-overlapping of resource usage can be worthwhile for media buffers, which can consume resource types whose overall pools aren't necessarily vast in comparison to the resources consumed, especially if a client is reconfiguring buffers multiple times.
Note that Sync() prior to allocating new media buffers is not alone sufficient to achieve non-overlap of media buffer resource usage system wide, but it can be a useful part of achieving that.
The Sync() transits the Output ordering domain and the StreamControl ordering domain, but not the InputData ordering domain.
This request can be used to avoid hitting kMaxInFlightStreams, which is presently 10. A client that stays at or below 8 in-flight streams will comfortably stay under the limit of 10. While the protocol permits repeated SetInputBufferSettings() and the like, a client that spams the channel can expect that the channel will just close if the server or the channel itself gets too far behind.
Request
<EMPTY>
Response
<EMPTY>
StreamSink
Defined in fuchsia.media/stream.fidl
Consumes a stream of packets. This interface is typically inherited along with StreamBufferSet to enable the transport of elementary streams from clients to services.
DiscardAllPackets
Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released. The response is sent after all packets have been released.
Request
<EMPTY>
Response
<EMPTY>
DiscardAllPacketsNoReply
Discards packets previously sent via SendPacket or SendPacketNoReply and not yet released.
Request
<EMPTY>
EndOfStream
Indicates the stream has ended. The precise semantics of this method are determined by the inheriting interface.
Request
<EMPTY>
SendPacket
Sends a packet to the service. The response is sent when the service is done with the associated payload memory.
packet must be valid for the current buffer set, otherwise the service will close the connection.
Request
Name | Type |
---|---|
packet | StreamPacket |
Response
<EMPTY>
SendPacketNoReply
Sends a packet to the service. This interface doesn't define how the client knows when the sink is done with the associated payload memory. The inheriting interface must define that.
packet must be valid for the current buffer set, otherwise the service will close the connection.
Request
Name | Type |
---|---|
packet | StreamPacket |
StreamSource
Defined in fuchsia.media/stream.fidl
Produces a stream of packets. This interface is typically inherited along with StreamBufferSet to enable the transport of elementary streams from services to clients.
DiscardAllPackets
Request
<EMPTY>
Response
<EMPTY>
DiscardAllPacketsNoReply
Request
<EMPTY>
OnEndOfStream
Indicates that the stream has ended.
Response
<EMPTY>
OnPacketProduced
Delivers a packet produced by the service. When the client is done with the payload memory, the client must call ReleasePacket to release the payload memory.
Response
Name | Type |
---|---|
packet | StreamPacket |
ReleasePacket
Releases payload memory associated with a packet previously delivered via OnPacketProduced.
Request
Name | Type |
---|---|
packet | StreamPacket |
UsageAudioConsumerFactory
Defined in fuchsia.media/audio_consumer.fidl
Interface for creating audio consumers for local rendering.
CreateAudioConsumer
Creates an AudioConsumer, which is an interface for playing audio, given a usage value. Audio submitted to such a consumer is always rendered locally.
Request
Name | Type |
---|---|
usage | AudioRenderUsage |
audio_consumer_request | server_end:AudioConsumer |
UsageGainListener
Defined in fuchsia.media/usage_reporter.fidl
A protocol for watching changes to usage gain settings.
The channel will close when the device is not present.
OnGainMuteChanged
Called immediately on connection and afterward any time the usage gain setting changes.
Clients must respond to acknowledge the event. Clients that do not acknowledge their events will eventually be disconnected.
Note: This API does not have mute reporting implemented; muted is always false.
Request
Name | Type |
---|---|
muted | bool |
gain_dbfs | float32 |
Response
<EMPTY>
UsageGainReporter
Defined in fuchsia.media/usage_reporter.fidl
A protocol for setting up watchers of usage gain.
RegisterListener
Connects a listener to a stream of usage gain setting changes for usage on the device identified by device_token. Usage Gain is not set directly by any client; it is a translation of the usage volume setting for each device, summed with active muting/ducking gain adjustments.
Devices may map the same volume level to different dbfs, so a device_unique_id is needed to identify the device. AudioDeviceEnumerator provides programmatic access to devices and their unique ids if it is necessary for a client to select an id at runtime.
Request
Name | Type |
---|---|
device_unique_id | string:36 |
usage | Usage |
usage_gain_listener | client_end:UsageGainListener |
UsageReporter
Defined in fuchsia.media/usage_reporter.fidl
A protocol for setting up watchers of audio usages.
Watch
Request
Name | Type |
---|---|
usage | Usage |
usage_watcher | client_end:UsageWatcher |
UsageWatcher
Defined in fuchsia.media/usage_reporter.fidl
A protocol for listening to changes to the policy state of an audio usage.
User actions, such as lowering the volume or muting a stream, are not reflected in this API.
OnStateChanged
Called on first connection and whenever the watched usage changes. The provided usage will always be the bound usage; it is provided so that an implementation of this protocol may be bound to more than one usage.
Clients must respond to acknowledge the event. Clients that do not acknowledge their events will eventually be disconnected.
Request
Name | Type |
---|---|
usage | Usage |
state | UsageState |
Response
<EMPTY>
STRUCTS
AacConstantBitRate
Defined in fuchsia.media/stream_common.fidl
Field | Type | Description | Default |
---|---|---|---|
bit_rate | uint32 | Bits per second | No default |
AacEncoderSettings
Defined in fuchsia.media/stream_common.fidl
Field | Type | Description | Default |
---|---|---|---|
transport | AacTransport | | No default |
channel_mode | AacChannelMode | | No default |
bit_rate | AacBitRate | | No default |
aot | AacAudioObjectType | | No default |
AacTransportAdts
Defined in fuchsia.media/stream_common.fidl
AAC inside ADTS
<EMPTY>
AacTransportLatm
Defined in fuchsia.media/stream_common.fidl
AAC inside LATM
Field | Type | Description | Default |
---|---|---|---|
mux_config_present | bool | Whether MuxConfiguration stream element is present | No default |
AacTransportRaw
Defined in fuchsia.media/stream_common.fidl
Raw AAC access units.
<EMPTY>
AudioCompressedFormatAac
Defined in fuchsia.media/stream_common.fidl
<EMPTY>
AudioCompressedFormatSbc
Defined in fuchsia.media/stream_common.fidl
<EMPTY>
AudioDeviceInfo
Defined in fuchsia.media/audio_device_enumerator.fidl
Field | Type | Description | Default |
---|---|---|---|
name | string | | No default |
unique_id | string | | No default |
token_id | uint64 | | No default |
is_input | bool | | No default |
gain_info | AudioGainInfo | | No default |
is_default | bool | | No default |
AudioGainInfo
Defined in fuchsia.media/audio_device_enumerator.fidl
Field | Type | Description | Default |
---|---|---|---|
gain_db | float32 | | No default |
flags | AudioGainInfoFlags | | No default |
AudioStreamType
Defined in fuchsia.media/stream_type.fidl
Describes the type of an audio elementary stream.
Field | Type | Description | Default |
---|---|---|---|
sample_format | AudioSampleFormat | | No default |
channels | uint32 | | No default |
frames_per_second | uint32 | | No default |
Compression
Defined in fuchsia.media/stream_type.fidl
Describes the compression applied to a stream. This type can be used in conjunction with AudioStreamType or VideoStreamType to represent a medium-specific compressed type.
Field | Type | Description | Default |
---|---|---|---|
type | CompressionType | The type of compression applied to the stream. This is generally one of the ENCODING values, though | No default |
parameters | vector<uint8>:8192? | Type-specific, opaque 'out-of-band' parameters describing the compression of the stream. | No default |
EncryptionPattern
Defined in fuchsia.media/stream_common.fidl
EncryptionPattern
Pattern encryption utilizes a pattern of encrypted and clear 16 byte blocks over the protected range of a subsample (the encrypted_bytes of a SubsampleEntry). This structure specifies the number of encrypted data blocks followed by the number of clear data blocks.
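A minimal sketch of this pattern scheme, assuming the pattern repeats over the protected range starting with the encrypted blocks (function name and layout are illustrative, not part of the API):

```python
# Illustrative sketch of an EncryptionPattern over a subsample's protected
# range: `encrypted_blocks` encrypted 16-byte blocks followed by
# `clear_blocks` clear 16-byte blocks, repeating.

BLOCK_SIZE = 16  # pattern granularity per the description above

def block_is_encrypted(block_index, encrypted_blocks, clear_blocks):
    """Return True if the 16-byte block at block_index is encrypted."""
    period = encrypted_blocks + clear_blocks
    if period == 0:
        return False  # degenerate pattern: nothing encrypted
    return (block_index % period) < encrypted_blocks
```

For example, with encrypted_blocks=1 and clear_blocks=9 (a common 1:9 pattern), every tenth 16-byte block of the protected range is encrypted.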
Field | Type | Description | Default |
---|---|---|---|
clear_blocks | uint32 | | No default |
encrypted_blocks | uint32 | | No default |
Metadata
Defined in fuchsia.media/metadata.fidl
Field | Type | Description | Default |
---|---|---|---|
properties | vector<Property> | | No default |
Parameter
Defined in fuchsia.media/stream_common.fidl
Parameter
Generic parameter.
We want to minimize use of this generic "Parameter" structure by natively defining as many stream-specific parameter semantics as we can.
Field | Type | Description | Default |
---|---|---|---|
scope | string | | No default |
name | string | | No default |
value | Value | | No default |
PcmFormat
Defined in fuchsia.media/stream_common.fidl
PcmFormat
PCM audio format details.
Field | Type | Description | Default |
---|---|---|---|
pcm_mode | AudioPcmMode | | No default |
bits_per_sample | uint32 | | No default |
frames_per_second | uint32 | | No default |
channel_map | vector<AudioChannelId>:16 | | No default |
Property
Defined in fuchsia.media/metadata.fidl
Field | Type | Description | Default |
---|---|---|---|
label | string | | No default |
value | string | | No default |
SbcEncoderSettings
Defined in fuchsia.media/stream_common.fidl
Settings for an SBC Encoder.
SBC encoders take signed little-endian 16-bit linear PCM samples and return encoded SBC frames. The encoder consumes PCM data in batches of sub_bands * block_count PCM frames. It will accept PCM data on arbitrary frame boundaries, but the output flushed when EOS is queued may be zero-padded to make a full batch for encoding.
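The batching rule above can be sketched as a small helper that computes the zero padding needed at EOS (illustrative only; the function is not part of the API):

```python
# Illustrative sketch of SBC input batching: the encoder consumes PCM in
# batches of sub_bands * block_count frames, so the tail flushed at EOS
# may be zero-padded up to a full batch.

def sbc_eos_padding_frames(total_frames, sub_bands, block_count):
    """PCM frames of zero padding needed to complete the final batch."""
    batch = sub_bands * block_count
    remainder = total_frames % batch
    return 0 if remainder == 0 else batch - remainder
```

For example, with the defaults of 8 sub-bands and a larger block count the batch grows accordingly; with sub_bands=8 and block_count=4 the batch is 32 frames, so 100 input frames need 28 frames of padding at EOS.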
Field | Type | Description | Default |
---|---|---|---|
sub_bands | SbcSubBands | | SbcSubBands.SUB_BANDS_8 |
allocation | SbcAllocation | | SbcAllocation.ALLOC_LOUDNESS |
block_count | SbcBlockCount | | SbcBlockCount.BLOCK_COUNT_4 |
channel_mode | SbcChannelMode | | No default |
bit_pool | uint64 | SBC bit pool value. | No default |
StreamPacket
Defined in fuchsia.media/stream.fidl
Describes a packet consumed by StreamSink or produced by StreamSource.
Field | Type | Description | Default |
---|---|---|---|
pts | int64 | Time at which the packet is to be presented, according to the presentation clock. | NO_TIMESTAMP |
payload_buffer_id | uint32 | ID of the payload buffer used for this packet. When this struct is used with | No default |
payload_offset | uint64 | Offset of the packet payload in the payload buffer. This value plus the | No default |
payload_size | uint64 | Size in bytes of the payload. This value plus the | No default |
flags | uint32 | A bitwise-OR'ed set of flags (see constants below) describing properties of this packet. | 0 |
buffer_config | uint64 | The buffer configuration associated with this packet. The semantics of this field depend on the interface with which this struct is used. In many contexts, this field is not used. This field is intended for situations in which buffer configurations (i.e. sets of payload buffers) are explicitly identified. In such cases, the | 0 |
stream_segment_id | uint64 | The stream segment associated with this packet. The semantics of this field depend on the interface with which this struct is used. In many contexts, this field is not used. This field is intended to distinguish contiguous segments of the stream where stream properties (e.g. encoding) may differ from segment to segment. | 0 |
StreamType
Defined in fuchsia.media/stream_type.fidl
Describes the type of an elementary stream.
Field | Type | Description | Default |
---|---|---|---|
medium_specific | MediumSpecificStreamType | Medium-specific type information. | No default |
encoding | string:255 | Encoding (see constants below). This value is represented as a string so that new encodings can be introduced without modifying this file. | No default |
encoding_parameters | vector<uint8>? | Encoding-specific parameters, sometimes referred to as 'out-of-band data'. Typically, this data is associated with a compressed stream and provides parameters required to decompress the stream. This data is generally opaque to all parties except the producer and consumer of the stream. | No default |
SubpictureStreamType
Defined in fuchsia.media/stream_type.fidl
Describes the type of a subpicture elementary stream.
<EMPTY>
SubsampleEntry
Defined in fuchsia.media/stream_common.fidl
SubsampleEntry
A subsample is a byte range within a sample consisting of a clear byte range followed by an encrypted byte range. This structure specifies the size of each range in the subsample.
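A minimal sketch of walking a sample using these ranges (illustrative Python; `split_sample` is a hypothetical helper, not part of the API):

```python
# Illustrative sketch of SubsampleEntry semantics: each subsample is
# clear_bytes of clear data followed by encrypted_bytes of encrypted data.

def split_sample(sample, subsamples):
    """Return (is_encrypted, bytes) runs covering a sample.

    `subsamples` is a list of (clear_bytes, encrypted_bytes) tuples whose
    sizes must sum to len(sample).
    """
    runs = []
    offset = 0
    for clear_bytes, encrypted_bytes in subsamples:
        if clear_bytes:
            runs.append((False, sample[offset:offset + clear_bytes]))
            offset += clear_bytes
        if encrypted_bytes:
            runs.append((True, sample[offset:offset + encrypted_bytes]))
            offset += encrypted_bytes
    assert offset == len(sample), "subsample sizes must cover the sample"
    return runs
```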
Field | Type | Description | Default |
---|---|---|---|
clear_bytes | uint32 | | No default |
encrypted_bytes | uint32 | | No default |
TextStreamType
Defined in fuchsia.media/stream_type.fidl
Describes the type of a text elementary stream.
<EMPTY>
TimelineFunction
Defined in fuchsia.media/timeline_function.fidl
A TimelineFunction represents a relationship between a subject timeline and a reference timeline with a linear relation.
For example, consider a common use case in which reference time is the monotonic clock of a system and subject time is intended presentation time for some media such as a video.
reference_time is the value of the monotonic clock at the beginning of playback. subject_time is 0, assuming playback starts at the beginning of the media. We then choose a reference_delta and subject_delta so that subject_delta / reference_delta represents the desired playback rate, e.g. 0/1 for paused and 1/1 for normal playback.
Formulas
With a function we can determine the subject timeline value s in terms of reference timeline value r with this formula (where reference_delta > 0):
s = (r - reference_time) * (subject_delta / reference_delta) + subject_time
And similarly we can find the reference timeline value r in terms of subject timeline value s with this formula (where subject_delta > 0):
r = (s - subject_time) * (reference_delta / subject_delta) + reference_time
Choosing time values
Time values can be arbitrary and our linear relation will of course be the same, but we can use them to represent the bounds of pieces in a piecewise linear relation.
For example, if a user performs skip-chapter, we might want to describe this with a TimelineFunction whose subject_time is the time to skip to, reference_time is now plus some epsilon, and delta ratio is 1/1 for normal playback rate.
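The two formulas above can be sketched with exact rational arithmetic (illustrative Python; not the actual fuchsia.media implementation, and function names are assumptions):

```python
# Illustrative sketch of the TimelineFunction formulas, using Fraction
# for exact rational math so the two mappings invert each other.
from fractions import Fraction

def reference_to_subject(r, subject_time, reference_time,
                         subject_delta, reference_delta):
    # s = (r - reference_time) * (subject_delta / reference_delta) + subject_time
    return ((r - reference_time) * Fraction(subject_delta, reference_delta)
            + subject_time)

def subject_to_reference(s, subject_time, reference_time,
                         subject_delta, reference_delta):
    # r = (s - subject_time) * (reference_delta / subject_delta) + reference_time
    return ((s - subject_time) * Fraction(reference_delta, subject_delta)
            + reference_time)
```

For example, with playback starting at monotonic time 1000 (subject_time 0, rate 1/1), reference time 1500 maps to subject time 500, and the second formula maps it back; with rate 0/1 (paused), every reference time maps to subject_time.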
Field | Type | Description | Default |
---|---|---|---|
subject_time | int64 | A value from the subject timeline that correlates to reference_time. | 0 |
reference_time | int64 | A value from the reference timeline that correlates to subject_time. | 0 |
subject_delta | uint32 | The change in the subject timeline corresponding to reference_delta. | 0 |
reference_delta | uint32 | The change in the reference timeline corresponding to subject_delta. Cannot be zero. | 1 |
VideoStreamType
Defined in fuchsia.media/stream_type.fidl
Describes the type of a video elementary stream.
Field | Type | Description | Default |
---|---|---|---|
pixel_format | fuchsia.images/PixelFormat | Deprecated: 13 | No default |
color_space | ColorSpace | | No default |
width | uint32 | Dimensions of the video frames as displayed in pixels. | No default |
height | uint32 | | No default |
coded_width | uint32 | Dimensions of the video frames as encoded in pixels. These values must be equal to or greater than the respective width/height values. | No default |
coded_height | uint32 | | No default |
pixel_aspect_ratio_width | uint32 | The aspect ratio of a single pixel as frames are intended to be displayed. | No default |
pixel_aspect_ratio_height | uint32 | | No default |
stride | uint32 | The number of bytes per 'coded' row in the primary video plane. | No default |
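The pixel aspect ratio fields combine with the frame dimensions to give the display aspect ratio. A minimal sketch of that relationship (the helper name is illustrative, not part of the API):

```python
from fractions import Fraction

def display_aspect_ratio(width, height, par_width=1, par_height=1):
    """Display aspect ratio of a frame, given its pixel dimensions and the
    aspect ratio (par_width:par_height) of an individual pixel."""
    return Fraction(width * par_width, height * par_height)

# NTSC-style 720x480 content with 8:9 pixels displays at 4:3:
print(display_aspect_ratio(720, 480, 8, 9))   # 4/3
# Square pixels (the default 1:1) leave the ratio unchanged:
print(display_aspect_ratio(1920, 1080))       # 16/9
```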
VideoUncompressedFormat
Defined in fuchsia.media/stream_common.fidl
VideoUncompressedFormat
Uncompressed video format details.
Field | Type | Description | Default |
---|---|---|---|
image_format | fuchsia.sysmem/ImageFormat_2 | | No default |
fourcc | uint32 | | No default |
primary_width_pixels | uint32 | | No default |
primary_height_pixels | uint32 | | No default |
secondary_width_pixels | uint32 | | No default |
secondary_height_pixels | uint32 | | No default |
planar | bool | | No default |
swizzled | bool | | No default |
primary_line_stride_bytes | uint32 | | No default |
secondary_line_stride_bytes | uint32 | | No default |
primary_start_offset | uint32 | | No default |
secondary_start_offset | uint32 | | No default |
tertiary_start_offset | uint32 | | No default |
primary_pixel_stride | uint32 | | No default |
secondary_pixel_stride | uint32 | | No default |
primary_display_width_pixels | uint32 | | No default |
primary_display_height_pixels | uint32 | | No default |
has_pixel_aspect_ratio | bool | | false |
pixel_aspect_ratio_width | uint32 | | 1 |
pixel_aspect_ratio_height | uint32 | | 1 |
Void
Defined in fuchsia.media/audio_consumer.fidl
<EMPTY>
ENUMS
AacAudioObjectType strict
Type: uint32
Defined in fuchsia.media/stream_common.fidl
Name | Value | Description |
---|---|---|
MPEG2_AAC_LC | 0 | MPEG-2 Low Complexity |
MPEG4_AAC_LC | 1 | MPEG-4 Low Complexity |
AacChannelMode strict
Type: uint32
Defined in fuchsia.media/stream_common.fidl
Name | Value | Description |
---|---|---|
MONO | 0 | |
STEREO | 2 | |
AacVariableBitRate strict
Type: uint32
Defined in fuchsia.media/stream_common.fidl
Variable bit rate modes. The actual resulting bitrate varies based on input signal and other encoding settings.
See https://wiki.hydrogenaud.io/index.php?title=Fraunhofer_FDK_AAC#Bitrate_Modes
Name | Value | Description |
---|---|---|
V1 | 1 | |
V2 | 2 | |
V3 | 3 | |
V4 | 4 | |
V5 | 5 | |
AudioBitrateMode strict
Type: uint32
Defined in fuchsia.media/stream_common.fidl
Name | Value | Description |
---|---|---|
UNSPECIFIED | 0 | |
CBR | 1 | |
VBR | 2 | |
AudioCaptureUsage strict
Type: uint32
Defined in fuchsia.media/audio_core.fidl
Usages annotating the purpose of the stream being used to capture audio. The AudioCaptureUsage is used by audio policy to dictate how audio streams interact with each other.
Name | Value | Description |
---|---|---|
BACKGROUND | 0 | Stream is used to capture audio while in the background. These streams may be active at any time and are considered privileged. Example: Listening for Hotwords |
FOREGROUND | 1 | Stream is intended to be used for normal capture functionality. Streams that are used for audio capture while the stream creator is in the foreground should use this. Example: Voice Recorder |
SYSTEM_AGENT | 2 | Stream is for interaction with a system agent. This should only be used once a user has signalled their intent to have the interaction with an interested party. Examples: Assistant, Siri, Alexa |
COMMUNICATION | 3 | Stream is intended to be used for some form of real time user to user communication. Voice/Video chat should use this. |
AudioChannelId strict
Type: uint32
Defined in fuchsia.media/stream_common.fidl
AudioChannelId
Used in specifying which audio channel is for which speaker location / type.
TODO(dustingreen): Do we need more channel IDs than this?
Name | Value | Description |
---|---|---|
SKIP | 0 | |
LF | 1 | |
RF | 2 | |
CF | 3 | |
LS | 4 | |
RS | 5 | |
LFE | 6 | |
CS | 7 | |
LR | 8 | |
RR | 9 | |
END_DEFINED | 10 | |
EXTENDED_CHANNEL_ID_BASE | 1862270976 | |
MAX | 2147483647 | |
AudioOutputRoutingPolicy strict
Type: uint32
Defined in fuchsia.media/audio_core.fidl
Name | Value | Description |
---|---|---|
ALL_PLUGGED_OUTPUTS | 0 | |
LAST_PLUGGED_OUTPUT | 1 | |
AudioPcmMode strict
Type: uint32
Defined in fuchsia.media/stream_common.fidl
AudioPcmMode
Name | Value | Description |
---|---|---|
LINEAR | 0 | |
ALAW | 1 | |
MULAW | 2 | |
AudioRenderUsage strict
Type: uint32
Defined in fuchsia.media/audio_core.fidl
Usage annotating the purpose of the stream being used to render audio. An AudioRenderer's usage cannot be changed after creation. The AudioRenderUsage is used by audio policy to dictate how audio streams interact with each other.
Name | Value | Description |
---|---|---|
BACKGROUND | 0 | Stream is intended to be used for ambient or background sound. Streams that can be interrupted without consequence should use this. |
MEDIA | 1 | Stream is intended to be used for normal functionality. Streams that are part of normal functionality should use this. |
INTERRUPTION | 2 | Stream is intended to interrupt any ongoing function of the device. Streams that are used for interruptions like notifications should use this. |
SYSTEM_AGENT | 3 | Stream is for interaction with a system agent. This should be used in response to a user initiated trigger. |
COMMUNICATION | 4 | Stream is intended to be used for some form of real time user to user communication. Voice/Video chat should use this. |
AudioSampleFormat strict
Type: uint32
Defined in fuchsia.media/stream_type.fidl
Enumerates the supported audio sample formats.
Name | Value | Description |
---|---|---|
UNSIGNED_8 | 1 | 8-bit unsigned samples, sample size 1 byte. |
SIGNED_16 | 2 | 16-bit signed samples, host-endian, sample size 2 bytes. |
SIGNED_24_IN_32 | 3 | 24-bit signed samples in 32 bits, host-endian, sample size 4 bytes. |
FLOAT | 4 | 32-bit floating-point samples, sample size 4 bytes. |
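These sample sizes determine how a PCM byte stream is laid out. A small sketch using the enum values and sizes from the table above (the helper name is illustrative):

```python
# Sample sizes in bytes, keyed by AudioSampleFormat value from the table above.
SAMPLE_SIZE = {
    1: 1,  # UNSIGNED_8
    2: 2,  # SIGNED_16
    3: 4,  # SIGNED_24_IN_32 (24 significant bits carried in 32)
    4: 4,  # FLOAT
}

def bytes_per_frame(sample_format, channels):
    """Bytes in one audio frame: one sample per channel at one point in time."""
    return SAMPLE_SIZE[sample_format] * channels

# 16-bit stereo is 4 bytes per frame, so 48 kHz stereo is 192,000 bytes/second.
print(bytes_per_frame(2, 2) * 48_000)  # 192000
```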
Behavior strict
Type: uint32
Defined in fuchsia.media/audio_core.fidl
The behaviors applied to streams when multiple are active.
Name | Value | Description |
---|---|---|
NONE | 0 | Mix the streams. |
DUCK | 1 | Apply a gain to duck the volume of one of the streams. (-14.0 dB) |
MUTE | 2 | Apply a gain to mute one of the streams. (-160.0 dB) |
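The duck and mute gains are expressed in decibels; converting a dB gain to a linear amplitude multiplier uses the standard 10^(dB/20) relation. A sketch of that conversion (the function name is illustrative, not part of the API):

```python
def db_to_scale(gain_db):
    """Convert a decibel gain to a linear amplitude multiplier."""
    return 10.0 ** (gain_db / 20.0)

# DUCK applies -14.0 dB, roughly one fifth of the original amplitude.
# MUTE applies -160.0 dB, an amplitude of about 1e-8, inaudible in practice.
print(round(db_to_scale(-14.0), 3))  # 0.2
```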
CodecProfile flexible
Type: uint32
Defined in fuchsia.media/stream_common.fidl
A list of permitted codec profiles. This list should be flexible since codecs can and will be added in the future. This can contain both video and audio profiles if needed.
Name | Value | Description |
---|---|---|
H264PROFILE_BASELINE | 0 | |
H264PROFILE_MAIN | 1 | |
H264PROFILE_EXTENDED | 2 | |
H264PROFILE_HIGH | 3 | |
H264PROFILE_HIGH10PROFILE | 4 | |
H264PROFILE_HIGH422PROFILE | 5 | |
H264PROFILE_HIGH444PREDICTIVEPROFILE | 6 | |
H264PROFILE_SCALABLEBASELINE | 7 | |
H264PROFILE_SCALABLEHIGH | 8 | |
H264PROFILE_STEREOHIGH | 9 | |
H264PROFILE_MULTIVIEWHIGH | 10 | |
VP8PROFILE_ANY | 11 | |
VP9PROFILE_PROFILE0 | 12 | |
VP9PROFILE_PROFILE1 | 13 | |
VP9PROFILE_PROFILE2 | 14 | |
VP9PROFILE_PROFILE3 | 15 | |
HEVCPROFILE_MAIN | 16 | |
HEVCPROFILE_MAIN10 | 17 | |
HEVCPROFILE_MAIN_STILL_PICTURE | 18 | |
MJPEG_BASELINE | 19 | |
ColorSpace strict
Type: uint32
Defined in fuchsia.media/stream_type.fidl
Name | Value | Description |
---|---|---|
UNKNOWN | 0 | |
NOT_APPLICABLE | 1 | |
JPEG | 2 | |
HD_REC709 | 3 | |
SD_REC601 | 4 | |
Lc3FrameDuration flexible
Type: uint32
Defined in fuchsia.media/stream_common.fidl
Name | Value | Description |
---|---|---|
D10_MS | 1 | |
D7P5_MS | 2 | |
SbcAllocation strict
Type: uint32
Defined in fuchsia.media/stream_common.fidl
Name | Value | Description |
---|---|---|
ALLOC_LOUDNESS | 0 | |
ALLOC_SNR | 1 | |
SbcBlockCount strict
Type: uint32
Defined in fuchsia.media/stream_common.fidl
Name | Value | Description |
---|---|---|
BLOCK_COUNT_4 | 4 | |
BLOCK_COUNT_8 | 8 | |
BLOCK_COUNT_12 | 12 | |
BLOCK_COUNT_16 | 16 | |
SbcChannelMode strict
Type: uint32
Defined in fuchsia.media/stream_common.fidl
Name | Value | Description |
---|---|---|
MONO | 0 | |
DUAL | 1 | |
STEREO | 2 | |
JOINT_STEREO | 3 | |
SbcSubBands strict
Type: uint32
Defined in fuchsia.media/stream_common.fidl
Name | Value | Description |
---|---|---|
SUB_BANDS_4 | 4 | |
SUB_BANDS_8 | 8 | |
StreamError strict
Type: uint32
Defined in fuchsia.media/stream_common.fidl
StreamError
This error code encapsulates various errors that might emanate from a StreamProcessor server. It can be sent either as an OnStreamFailed event or as an epitaph for the channel.
Name | Value | Description |
---|---|---|
UNKNOWN | 1 | An internal error with an unspecified reason. |
INVALID_INPUT_FORMAT_DETAILS | 2 | The client provided invalid input format details. |
INCOMPATIBLE_BUFFERS_PROVIDED | 3 | The server received buffers that are not suitable for the operation to be performed. An example of this would be if a Decoder received output buffers that are too small to decode a frame into. |
EOS_PROCESSING | 4 | Processing of input EOS (end of stream) failed, so the stream failed. Currently this can occur if a core codec watchdog fires while processing EOS. |
DECODER_UNKNOWN | 16777217 | An internal decoder error with an unspecified reason. |
DECODER_DATA_PARSING | 16777218 | Input data that can't be parsed. Only some parsing problems/errors are reported this way. Corrupt input data may be reported as another StreamError, or may not cause a StreamError. |
ENCODER_UNKNOWN | 33554433 | An internal encoder error with an unspecified reason. |
DECRYPTOR_UNKNOWN | 50331649 | An internal decryptor error with an unspecified reason. |
DECRYPTOR_NO_KEY | 50331650 | The requested KeyId is not available for use by the Decryptor. The client may try again later if that key becomes available. |
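The numeric values above follow a visible pattern: the high byte of the 32-bit value identifies the subsystem (0x01 decoder, 0x02 encoder, 0x03 decryptor, 0x00 common), and the low bytes identify the specific error. A sketch that decodes this pattern (the mapping is inferred from the values in the table, not a documented API):

```python
# Subsystem names keyed by the high byte of a StreamError value.
SUBSYSTEM = {0x00: "common", 0x01: "decoder", 0x02: "encoder", 0x03: "decryptor"}

def stream_error_subsystem(value):
    """Name the subsystem a StreamError value belongs to, per the table above."""
    return SUBSYSTEM.get(value >> 24, "unknown")

print(stream_error_subsystem(16777217))  # DECODER_UNKNOWN -> decoder
print(stream_error_subsystem(50331650))  # DECRYPTOR_NO_KEY -> decryptor
```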
VideoColorSpace strict
Type: uint32
Defined in fuchsia.media/stream_common.fidl
Name | Value | Description |
---|---|---|
INVALID | 0 | |
TABLES
AudioCompressedFormatCvsd
Defined in fuchsia.media/stream_common.fidl
AudioCompressedFormatCvsd contains no fields for now since we will be using the parameter values recommended by Bluetooth Core Spec v5.3 section 9.2.
Ordinal | Field | Type | Description |
---|---|---|---|
AudioCompressedFormatLc3
Defined in fuchsia.media/stream_common.fidl
AudioCompressedFormatLc3 contains no fields. The required parameters for setting up the decoder would be embedded as Codec_Specific_Configuration (Assigned Numbers section 6.12.5) in FormatDetails.oob_bytes.
Ordinal | Field | Type | Description |
---|---|---|---|
AudioConsumerStatus
Defined in fuchsia.media/audio_consumer.fidl
Represents the status of the consumer. In the initial status, error and presentation_timeline are absent. The lead time fields are always present.
Ordinal | Field | Type | Description |
---|---|---|---|
1 | error | AudioConsumerError | If present, indicates an error condition currently in effect. Absent if no error. |
2 | presentation_timeline | TimelineFunction | If present, indicates the current relationship between the presentation timeline and local monotonic clock, both in nanosecond units. If not present, indicates there is no relationship. Absent initially. 'Presentation timeline' refers to the … |
3 | min_lead_time | uint64 | Indicates the minimum lead time in nanoseconds supported by this AudioConsumer. |
4 | max_lead_time | uint64 | Indicates the maximum lead time in nanoseconds supported by this AudioConsumer. |
CvsdEncoderSettings
Defined in fuchsia.media/stream_common.fidl
Settings for CVSD Encoders. It contains no fields for now since we will be using the parameter values recommended by Bluetooth Core Spec v5.3 section 9.2.
Ordinal | Field | Type | Description |
---|---|---|---|
DecryptedFormat
Defined in fuchsia.media/stream_common.fidl
DecryptedFormat
This describes the format of the decrypted content. It is required to be sent by the StreamProcessor server prior to the delivery of output packets. Currently, there are no additional format details for decrypted output.
Ordinal | Field | Type | Description |
---|---|---|---|
1 | ignore_this_field | bool | |
EncryptedFormat
Defined in fuchsia.media/stream_common.fidl
EncryptedFormat
The stream format details payload of a decrypting stream processor. This is a sparsely populated table to specify parameters necessary for decryption other than the data stream. It is only necessary to update fields if they have changed, but it is not an error if the same value is repeated.
Ordinal | Field | Type | Description |
---|---|---|---|
3 | init_vector | InitVector | |
4 | subsamples | vector&lt;SubsampleEntry&gt; | |
5 | pattern | EncryptionPattern | |
6 | scheme | string | |
8 | key_id | KeyId | |
FormatDetails
Defined in fuchsia.media/stream_common.fidl
FormatDetails
This describes/details the format on input or output of a StreamProcessor (separate instances for input vs. output).
Ordinal | Field | Type | Description |
---|---|---|---|
1 | format_details_version_ordinal | uint64 | |
2 | mime_type | string | |
3 | oob_bytes | vector&lt;uint8&gt; | |
4 | domain | DomainFormat | |
5 | pass_through_parameters | vector&lt;Parameter&gt; | |
6 | encoder_settings | EncoderSettings | Instructs an encoder on how to encode raw data. Decoders may ignore this field, but are entitled to reject requests with this field set because it doesn't make sense. |
7 | timebase | uint64 | The number of ticks of the timebase of input packet timestamp_ish values per second. The timebase is only used for optional extrapolation of timestamp_ish values when an input timestamp which applies to byte 0 of the valid portion of the input packet does not correspond directly to byte 0 of the valid portion of any output packet. Leave unset if timestamp extrapolation is not needed, either due to lack of timestamps on input, or due to input being provided in increments of the encoder's input chunk size (based on the encoder settings and calculated independently by the client). Set if timestamp extrapolation is known to be needed or known to be acceptable to the client. |
8 | profile | CodecProfile | The codec profile for the given encoder or decoder. For encoders: this value is completely optional. A client may want to specify the codec profile used for protocol compatibility reasons (i.e. WebRTC); however, if the value is not set, then the encoder is free to choose any supported codec profile. For decoders: this value is optional, but providing it is best practice (at least for unencrypted streams). Container formats include the encoded profile, and this should be provided to the decoder. Certain formats like VP9 and AV1 include the encoded profile in their elementary bitstream, so it is possible for those decoders to figure out the profile the stream is encoded in. Regardless, clients should provide the encoded profile if possible. |
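The timebase field enables the extrapolation described above: when a timestamp applies to byte 0 of an input packet, a timestamp for a later byte offset can be derived from the raw-format byte rate. A sketch under the assumption of uncompressed PCM input (the helper name and the bytes_per_second parameter are illustrative, not part of the API):

```python
def extrapolate_timestamp(base_timestamp, byte_offset, timebase, bytes_per_second):
    """Extrapolate a timestamp_ish value for a byte offset within a stream,
    given the timebase (ticks per second) and the raw byte rate.
    Integer math, since timestamp_ish values are unsigned 64-bit integers.
    """
    return base_timestamp + byte_offset * timebase // bytes_per_second

# 48 kHz 16-bit stereo PCM is 192,000 bytes/second. With a 48,000 ticks/second
# timebase (one tick per audio frame), 192,000 bytes past a timestamp of 0 is
# one second later: 48,000 ticks.
print(extrapolate_timestamp(0, 192_000, 48_000, 192_000))  # 48000
```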
H264EncoderSettings
Defined in fuchsia.media/stream_common.fidl
Settings for H264 Encoders.
Ordinal | Field | Type | Description |
---|---|---|---|
1 | bit_rate | uint32 | Target bits per second for encoded stream. If omitted, interpreted as 200,000. |
2 | frame_rate | uint32 | Target frames per second for encoded stream. If omitted, interpreted as 30. |
3 | gop_size | uint32 | Number of pictures per keyframe. Setting to 0 will disable key frame encoding, except when force_key_frame is set to true. If omitted, interpreted as 8. |
4 | variable_frame_rate | bool | Whether to enable frame rate adjustments in order to meet target bitrate. If omitted, interpreted as false. |
5 | min_frame_rate | uint32 | Lowest frame rate allowed if variable_frame_rate is enabled. |
6 | force_key_frame | bool | If true, next frame encoded will be a key frame. If omitted, interpreted as false. |
7 | quantization_params | H264QuantizationParameters | Allow customization of quantization parameters for encoding. Each frame submitted after setting this will use the new values. If omitted, no change from encoder defaults is made. |
H264QuantizationParameters
Defined in fuchsia.media/stream_common.fidl
Customization of h264 encoder parameters for macroblock quantization. The values can range from 0 to 51, with lower numbers indicating higher quality/bitrate. While encoders should support these fields if feasible, some encoders may ignore these fields. It's ok to not set this table, or not set some of the fields in this table, as encoders can determine their own defaults. If the targeted bitrate can't be achieved with the specified values, then the user should expect the resulting encoded stream bitrate to differ from the requested bitrate.
Ordinal | Field | Type | Description |
---|---|---|---|
1 | i_base | uint32 | Starting value for quantization of key frames. |
2 | i_min | uint32 | Smallest allowed value for quantization of key frames. |
3 | i_max | uint32 | Largest allowed value for quantization of key frames. |
4 | p_base | uint32 | Starting value for quantization of predicted frames. |
5 | p_min | uint32 | Smallest allowed value for quantization of predicted frames. |
6 | p_max | uint32 | Largest allowed value for quantization of predicted frames. |
HevcEncoderSettings
Defined in fuchsia.media/stream_common.fidl
Settings for HEVC/H265 Encoders.
Ordinal | Field | Type | Description |
---|---|---|---|
1 | bit_rate | uint32 | Target bits per second for encoded stream. Defaults to 200,000 if omitted. |
2 | frame_rate | uint32 | Target frames per second for encoded stream. Defaults to 30 if omitted. |
3 | gop_size | uint32 | Number of pictures per keyframe. Defaults to 8 if omitted. |
InputAudioCapturerConfiguration
Defined in fuchsia.media/audio_capturer.fidl
Configuration for a capturer which will receive a stream from an input device.
Ordinal | Field | Type | Description |
---|---|---|---|
1 | usage | AudioCaptureUsage | |
Lc3EncoderSettings
Defined in fuchsia.media/stream_common.fidl
Settings for LC3 Encoders. Contains parameters stated in LC3 Specification v1.0. Contains fields that are not represented by PcmFormat.
Ordinal | Field | Type | Description |
---|---|---|---|
1 | nbytes | uint16 | External byte count values to be used for the frame encoding of audio. According to the LC3 Specification v1.0, each audio channel could have a different output byte size value, but for the Fuchsia use case we use the same value for all channels. |
2 | frame_duration | Lc3FrameDuration | Frame duration is used together with sampling frequency to determine the frame size. |
LoopbackAudioCapturerConfiguration
Defined in fuchsia.media/audio_capturer.fidl
Configuration for a capturer which will receive a loopback stream from a system output.
Ordinal | Field | Type | Description |
---|---|---|---|
MSbcEncoderSettings
Defined in fuchsia.media/stream_common.fidl
Settings for the mSBC encoder. There are no settings as the mSBC encoder uses the values required by HFP Spec v1.8.
Ordinal | Field | Type | Description |
---|---|---|---|
Packet
Defined in fuchsia.media/stream_processor.fidl
A Packet represents a chunk of input or output data to or from a stream processor.
Stream processor output: while the Packet is outstanding with the client via OnOutputPacket(), the stream processor will avoid modifying the referenced output data. After the client calls RecycleOutputPacket(packet_index), the stream processor is notified that the client is again ok with the referenced data changing.
Stream processor input: the client initially has all packet_index values available to fill, and later gets packet_index values that are again ready to fill via OnFreeInputPacket(). The client must not modify the referenced data between QueueInputPacket() and OnFreeInputPacket().
Ordinal | Field | Type | Description |
---|---|---|---|
1 | header | PacketHeader | |
2 | buffer_index | uint32 | Which buffer this packet refers to. For single-buffer mode this will always be 0, but for multi-buffer mode, a given in-flight interval of a packet can refer to any buffer. The packet has an associated buffer only while the packet is in-flight, not while the packet is free. The default value makes accidental inappropriate use of index 0 less likely (will tend to complain in an obvious way if not filled out instead of a non-obvious data corruption when decoding buffer 0 repeatedly instead of the correct buffers). TODO(dustingreen): Try to make FIDL table defaults have meaning, and not complain about !has when accessing the field. For now the default specified here does nothing. |
3 | stream_lifetime_ordinal | uint64 | The value 1 is the lowest permitted value after stream processor creation. Values sent by the client must be odd. Values must only increase. A stream_lifetime_ordinal represents the lifetime of a stream. All messages that are specific to a stream have the stream_lifetime_ordinal value and the value is the same for all messages relating to a given stream. |
4 | start_offset | uint32 | Which part of the relevant buffer is this packet using. These are valid for input data that's in-flight to the stream processor, and are valid for output data from the stream processor. For compressed formats and uncompressed audio, the data in [start_offset, start_offset + valid_length_bytes) is the contiguously valid data referred to by this packet. For uncompressed video frames, FormatDetails is the primary means of determining which bytes are relevant. The offsets in FormatDetails are relative to the start_offset here. The valid_length_bytes must be large enough to include the full last line of pixel data, including the full line stride of the last line (not just the width in pixels of the last line). Despite these being filled out, some uncompressed video buffers are of types that are not readable by the CPU. These fields being here don't imply there's any way for the CPU to read an uncompressed frame. |
5 | valid_length_bytes | uint32 | This must be > 0. The semantics for valid data per packet vary depending on data type as follows. uncompressed video - A video frame can't be split across packets. Each packet is one video frame. uncompressed audio - Regardless of float or int, linear or uLaw, or number of channels, a packet must contain a non-negative number of complete audio frames, where a single audio frame consists of data for all the channels for the same single point in time. Any stream-processor-specific internal details re. lower rate sampling for LFE channel or the like should be hidden by the StreamProcessor server implementation. compressed data input - A packet must contain at least one byte of data. See also stream_input_bytes_min. Splitting AUs at arbitrary byte boundaries is permitted, including at boundaries that are in AU headers. compressed data output - The stream processor is not required to fully fill each output packet's buffer. |
6 | timestamp_ish | uint64 | This value is not strictly speaking a timestamp. It is an arbitrary unsigned 64-bit number that, under some circumstances, will be passed by a stream processor unmodified from an input packet to the exactly-corresponding output packet. For timestamp_ish values to be propagated from input to output the following conditions must be true: |
7 | start_access_unit | bool | If promise_separate_access_units_on_input (TODO(dustingreen): or any similar mode for output) is true, this bool must be set appropriately depending on whether byte 0 is or is not the start of an access unit. The client is required to know, and required to set this boolean properly. The server is allowed to infer that when this boolean is false, byte 0 is the first byte of a continuation of a previously-started AU. (The byte at start_offset is "byte 0".) If promise_separate_access_units_on_input is false, this boolean is ignored. |
8 | known_end_access_unit | bool | A client is never required to set this boolean to true. If promise_separate_access_units_on_input is true, for input data, this boolean must be false if the last byte of this packet is not the last byte of an AU, and this boolean may be true if the last byte of this packet is the last byte of an AU. A client delivering one AU at a time that's interested in the lowest possible latency via the decoder should set this boolean to true when it can be set to true. If promise_separate_access_units_on_input is false, this boolean is ignored. |
9 | key_frame | bool | Used for compressed video packets. If not present, should be assumed to be unknown. If false, indicates the packet is not part of a key frame. If true, indicates the packet is part of a key frame. |
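Several of the per-packet rules above (odd, non-decreasing stream_lifetime_ordinal; valid_length_bytes > 0) are mechanical enough to check on either side of the protocol. A sketch of such validation (the function is illustrative, not part of the FIDL bindings):

```python
def validate_input_packet(stream_lifetime_ordinal, valid_length_bytes,
                          previous_stream_lifetime_ordinal=0):
    """Check the packet-field invariants stated in the table above."""
    if stream_lifetime_ordinal < 1 or stream_lifetime_ordinal % 2 == 0:
        return False  # must be odd and >= 1
    if stream_lifetime_ordinal < previous_stream_lifetime_ordinal:
        return False  # ordinals must only increase
    if valid_length_bytes == 0:
        return False  # valid_length_bytes must be > 0
    return True

print(validate_input_packet(3, 4096, previous_stream_lifetime_ordinal=1))  # True
print(validate_input_packet(2, 4096))  # False: even ordinal
```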
PacketHeader
Defined in fuchsia.media/stream_processor.fidl
PacketHeader
When referring to a free packet, we use PacketHeader alone instead of Packet, since while a packet is free it doesn't really have meaningful offset or length etc.
A populated Packet also has a PacketHeader.
Ordinal | Field | Type | Description |
---|---|---|---|
1 | buffer_lifetime_ordinal | uint64 | This is which buffer configuration lifetime this header is referring to. A packet_index is only really meaningful with respect to a particular buffer_lifetime_ordinal. See StreamBufferPartialSettings.buffer_lifetime_ordinal. For QueueInputPacket(), a server receiving a buffer_lifetime_ordinal that isn't the current input buffer_lifetime_ordinal will close the channel. For OnFreeInputPacket() and RecycleOutputPacket(), the receiver (client or server) must ignore a message with stale buffer_lifetime_ordinal. |
2 | packet_index | uint32 | The overall set of packet_index values is densely packed from 0..count-1 for input and output separately. They can be queued in any order. Both the client and server should validate the packet_index against the known bound and disconnect if it's out of bounds. When running in single-buffer mode, the buffer index is always 0. The packet_index values don't imply anything about order of use of packets. The client should not expect the ordering to remain the same over time - the stream processor is free to hold on to an input or output packet for a while during which other packet_index values may be used multiple times. For a given properly-functioning StreamProcessor instance, packet_index values will be unique among concurrently-outstanding packets. Servers should validate that a client isn't double-using a packet and clients should validate as necessary to avoid undefined or unexpected client behavior. |
StreamBufferConstraints
Defined in fuchsia.media/stream_processor.fidl
This struct conveys the buffer_constraints_version_ordinal.
Historically this table conveyed more fields than it currently does, but those fields are all deprecated in favor of using sysmem instead.
There are separate instances of this struct for stream input and stream output.
Notes about fields:
For uncompressed video, separate and complete frames in their separate buffers (buffer-per-packet mode) are always a requirement.
Ordinal | Field | Type | Description |
---|---|---|---|
1 | buffer_constraints_version_ordinal | uint64 | This is a version number the server sets on the constraints to allow the server to determine when the client has caught up with the latest constraints sent by the server. The server won't emit output data until the client has configured output settings and buffers with a buffer_constraints_version_ordinal >= the latest buffer_constraints_version_ordinal that had buffer_constraints_action_required true. See buffer_constraints_action_required comments for more. A buffer_constraints_version_ordinal of 0 is not permitted, to simplify initial state handling. Other than 0, both odd and even version ordinals are allowed (in contrast to the stream_lifetime_ordinal, neither the client nor server ever has a reason to consider the latest version to be stale, so there would be no benefit to disallowing even values). |
2 | default_settings | StreamBufferSettings | DEPRECATED - Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() |
3 | per_packet_buffer_bytes_min | uint32 | DEPRECATED - Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() |
4 | per_packet_buffer_bytes_recommended | uint32 | DEPRECATED - Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() |
5 | per_packet_buffer_bytes_max | uint32 | DEPRECATED - Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() |
6 | packet_count_for_server_min | uint32 | DEPRECATED - Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() |
7 | packet_count_for_server_recommended | uint32 | DEPRECATED - Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() |
8 | packet_count_for_server_recommended_max | uint32 | DEPRECATED - Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() |
9 | packet_count_for_server_max | uint32 | DEPRECATED - Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() |
10 | packet_count_for_client_min | uint32 | DEPRECATED - Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() |
11 | packet_count_for_client_max | uint32 | DEPRECATED - Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() |
12 | single_buffer_mode_allowed | bool | DEPRECATED - Ignore. Obsolete. |
13 | is_physically_contiguous_required | bool | DEPRECATED - Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() |
StreamBufferPartialSettings resource
Defined in fuchsia.media/stream_processor.fidl
Ordinal | Field | Type | Description |
---|---|---|---|
1 | buffer_lifetime_ordinal | uint64 | The containing message starts a new buffer_lifetime_ordinal. There is a separate buffer_lifetime_ordinal for input vs. output. Re-use of the same value is not allowed. Values must be odd. Values must only increase (increasing by more than 2 is permitted). A buffer_lifetime_ordinal lifetime starts at SetInputBufferSettings() or SetOutputBufferSettings(), and ends at the earlier of CloseCurrentStream() with release_input_buffers/release_output_buffers set or SetOutputBufferSettings() with new buffer_lifetime_ordinal in the case of mid-stream output config change. |
2 | buffer_constraints_version_ordinal | uint64 | This value indicates which version of constraints the client is/was aware of so far. For input, this must always be 0 because constraints don't change for input (settings can change, but there's no settings vs current constraints synchronization issue on input). For output, this allows the server to know when the client is sufficiently caught up before the server will generate any more output. When there is no active stream, a client is permitted to re-configure buffers again using the same buffer_constraints_version_ordinal. |
3 | single_buffer_mode | bool | Deprecated: 23. DEPRECATED - Ignore. Obsolete. |
4 | packet_count_for_server | uint32 | Deprecated: 23. DEPRECATED - Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() |
5 | packet_count_for_client | uint32 | Deprecated: 23. DEPRECATED - Ignore. Use fuchsia.sysmem.BufferCollection.SetConstraints() |
6 | sysmem_token | client_end:fuchsia.sysmem/BufferCollectionToken | Deprecated: 23 |
7 | sysmem2_token | client_end:fuchsia.sysmem2/BufferCollectionToken | The client end of a BufferCollectionToken channel, which the StreamProcessor will use to deliver constraints to sysmem and learn of buffers allocated by sysmem. The client guarantees that the token is already known to sysmem (via BufferCollectionToken.Sync(), BufferCollection.Sync(), or BufferCollectionEvents.OnDuplicatedTokensKnownByServer()). Added: 23 |
StreamBufferSettings
Defined in fuchsia.media/stream_processor.fidl
Deprecated. Use SetStreamBufferPartialSettings() and StreamBufferPartialSettings instead.
Ordinal | Field | Type | Description |
---|---|---|---|
1 | buffer_lifetime_ordinal | uint64 | DEPRECATED - Ignore. Use SetStreamBufferPartialSettings instead. |
2 | buffer_constraints_version_ordinal | uint64 | DEPRECATED - Ignore. Use SetStreamBufferPartialSettings instead. |
3 | packet_count_for_server | uint32 | DEPRECATED - Ignore. Use SetStreamBufferPartialSettings instead. |
4 | packet_count_for_client | uint32 | DEPRECATED - Ignore. Use SetStreamBufferPartialSettings instead. |
5 | per_packet_buffer_bytes | uint32 | DEPRECATED - Ignore. Use SetStreamBufferPartialSettings instead. |
6 | single_buffer_mode | bool | DEPRECATED - Ignore. Use SetStreamBufferPartialSettings instead. |
StreamOutputConstraints
Defined in fuchsia.media/stream_processor.fidl
The stream-processor-controlled output configuration, including both StreamBufferConstraints for the output and FormatDetails for the output.
Ordinal | Field | Type | Description |
---|---|---|---|
1 | stream_lifetime_ordinal | uint64 | A client which always immediately re-configures output buffers on receipt of OnOutputConstraints() with buffer_constraints_action_required true can safely ignore this field. A client is permitted to ignore an OnOutputConstraints() message even with buffer_constraints_action_required true if the client knows the server has already been told to discard the remainder of the stream with the same stream_lifetime_ordinal or if this stream_lifetime_ordinal field is set to 0. The server is required to re-send needed output config via OnOutputConstraints() with new stream_lifetime_ordinal and buffer_constraints_action_required true, if the most recent completed server-side output config isn't what the server wants/needs yet for the new stream. |
2 | buffer_constraints_action_required | bool | When the buffer constraints are delivered, they indicate whether action is required. A false value here permits delivery of constraints which are fresher without forcing a buffer reconfiguration. If this is false, a client cannot assume that it's safe to immediately re-configure output buffers. If this is true, the client can assume it's safe to immediately configure output buffers once. A client is permitted to ignore buffer constraint versions which have buffer_constraints_action_required false. The server is not permitted to change buffer_constraints_action_required from false to true for the same buffer_constraints_version_ordinal. For each configuration, a client must use new buffers, never buffers that were previously used for anything else, and never buffers previously used for any other StreamProcessor purposes. This rule exists for multiple good reasons, relevant to both mid-stream changes, and changes on stream boundaries. A client should just use new buffers each time. When this is true, the server has already de-refed as many low-level output buffers as the server can while still performing efficient transition to the new buffers and will de-ref the rest asap. A Sync() is not necessary to achieve non-overlap of resource usage to the extent efficiently permitted by the formats involved. If buffer_constraints_action_required is true, the server must not deliver more output data until after output buffers have been configured (or re-configured) by the client. |
3 | buffer_constraints | StreamBufferConstraints | |
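The `buffer_constraints_action_required` rules above amount to a simple branch in a client's OnOutputConstraints() handler. The sketch below is an illustrative model only, not the real FIDL API: `on_output_constraints` and `allocate_new_buffers` are hypothetical names, and constraints are passed as a plain dict.

```python
# Illustrative client-side handling of OnOutputConstraints() (hypothetical
# helper names, not real FIDL bindings).

def allocate_new_buffers(buffer_constraints):
    # Placeholder for a real sysmem allocation; always returns FRESH buffer
    # objects, per the rule that buffers must never be reused across
    # configurations.
    return [object() for _ in range(4)]

def on_output_constraints(constraints, state):
    """constraints: dict with 'buffer_constraints_action_required' (bool)
    and 'buffer_constraints' (opaque). state: mutable client state dict."""
    if not constraints["buffer_constraints_action_required"]:
        # Fresher constraints with no forced reconfiguration; a client may
        # ignore these, but can still record them.
        state["latest_constraints"] = constraints["buffer_constraints"]
        return "ignored"
    # Action required: the server will deliver no more output until the
    # client (re-)configures output buffers, always with new buffers.
    state["buffers"] = allocate_new_buffers(constraints["buffer_constraints"])
    return "reconfigured"


state = {"buffers": None, "latest_constraints": None}
r1 = on_output_constraints(
    {"buffer_constraints_action_required": False, "buffer_constraints": "v2"},
    state)
assert r1 == "ignored"
r2 = on_output_constraints(
    {"buffer_constraints_action_required": True, "buffer_constraints": "v3"},
    state)
assert r2 == "reconfigured" and len(state["buffers"]) == 4
```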
StreamOutputFormat
Defined in fuchsia.media/stream_processor.fidl
Ordinal | Field | Type | Description |
---|---|---|---|
1 | stream_lifetime_ordinal | uint64 | A client is permitted to ignore an OnOutputFormat() message even with buffer_constraints_action_required true if the client knows the server has already been told to discard the remainder of the stream with the same stream_lifetime_ordinal or if this stream_lifetime_ordinal field is set to 0. The server is required to re-send needed output config via OnOutputConstraints() with new stream_lifetime_ordinal and buffer_constraints_action_required true, if the most recent completed server-side output config isn't what the server wants/needs yet for the new stream. The server is required to send an OnOutputFormat() before the first output packet of a stream. |
2 | format_details | FormatDetails | If format_details.format_details_version_ordinal changes, the client should inspect the new format details and determine if it must adjust to the new format. The server guarantees that if the format has changed, then format_details.format_details_version_ordinal will change, but a change to format_details.format_details_version_ordinal does not guarantee that the format details actually changed. Servers are strongly encouraged to not change format_details.format_details_version_ordinal other than before the first output data of a stream unless there is a real mid-stream format change in the stream. Unnecessary mid-stream format changes can cause simpler clients that have no need to handle mid-stream format changes to just close the channel. Format changes before the first output data of a stream are not "mid-stream" in this context - those can be useful for stream format detection / setup reasons. Note that in case output buffers don't really need to be re-configured despite a format change, a server is encouraged, but not required, to set buffer_constraints_action_required false on the message that conveys the new format details. Simpler servers may just treat the whole output situation as one big thing and demand output buffer reconfiguration on any change in the output situation. A client may or may not actually handle a new buffer_constraints with buffer_constraints_action_required false, but the client should always track the latest format_details. An updated format_details is ordered with respect to emitted output packets, and applies to all subsequent packets until the next format_details with larger version_ordinal. |
A simple client that does not intend to handle mid-stream format changes should still keep track of the most recently received format_details until the first output packet arrives, then lock down the format details, handle those format details, and verify that any format_details.format_details_version_ordinal received from the server is the same as the locked-down format_details, until the client is done with the stream. Even such a simple client must tolerate format_details.format_details_version_ordinal changing multiple times before the start of data output from a stream (any stream - the first stream or a subsequent stream). This allows a stream processor to request that output buffers and output format be configured speculatively, and for the output config to be optionally adjusted by the server before the first data output from a stream after the server knows everything it needs to know to fully establish the initial output format details. This simplifies stream processor server implementation, and allows a clever stream processor server to guess its output config for lower latency before any input data, while still being able to fix the output config (including format details) if the guess turns out to be wrong. Whether the format_details.format_details_version_ordinal will actually change mid-stream is a per-stream-processor and per-stream detail that is not specified in comments here, and in most cases also depends on whether the format changes on the input to the stream processor. It will probably be fairly common for a client to use a format which technically supports mid-stream format change, even though the client happens to know that none of the streams the client intends to process will ever have a mid-stream format change.
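The "simple client" strategy above can be sketched as a small state machine. This is a hypothetical model in Python, not real FIDL bindings; `SimpleFormatTracker` and its method names are invented for illustration: track the latest format details until the first output packet, then lock them down and treat any later ordinal change as unsupported.

```python
# Sketch (hypothetical helper, NOT real bindings) of a client that does not
# handle mid-stream format changes: tolerate ordinal changes before the first
# output packet, then lock down the format.

class SimpleFormatTracker:
    def __init__(self):
        self.latest = None   # (version_ordinal, details) most recently seen
        self.locked = None   # set at first output packet

    def on_output_format(self, version_ordinal, details):
        if self.locked is not None and version_ordinal != self.locked[0]:
            # A real mid-stream format change; a simple client may just
            # close the channel instead of adapting.
            raise RuntimeError("mid-stream format change not supported")
        self.latest = (version_ordinal, details)

    def on_first_output_packet(self):
        # Lock down whatever format was most recently announced.
        self.locked = self.latest


t = SimpleFormatTracker()
t.on_output_format(1, "speculative guess")  # before any output data
t.on_output_format(2, "final")              # server adjusts pre-data: allowed
t.on_first_output_packet()
assert t.locked == (2, "final")
```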
UsageStateDucked
Defined in fuchsia.media/usage_reporter.fidl
A state of audio usages in which a policy decision has been made to temporarily lower the volume of all streams with this usage.
&lt;EMPTY&gt;
UsageStateMuted
Defined in fuchsia.media/usage_reporter.fidl
A state of audio usages in which a policy decision has been made to temporarily mute the volume of all streams with this usage.
&lt;EMPTY&gt;
UsageStateUnadjusted
Defined in fuchsia.media/usage_reporter.fidl
A state of audio usages in which no policy actions are taken on any streams with the usage.
&lt;EMPTY&gt;
UNIONS
AacBitRate strict
Defined in fuchsia.media/stream_common.fidl
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | constant | AacConstantBitRate | |
2 | variable | AacVariableBitRate | |
AacTransport flexible
Defined in fuchsia.media/stream_common.fidl
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | raw | AacTransportRaw | |
2 | latm | AacTransportLatm | |
3 | adts | AacTransportAdts | |
AudioCapturerConfiguration strict
Defined in fuchsia.media/audio_capturer.fidl
Configuration for an audio capturer.
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | loopback | LoopbackAudioCapturerConfiguration | |
2 | input | InputAudioCapturerConfiguration | |
AudioCompressedFormat flexible
Defined in fuchsia.media/stream_common.fidl
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | aac | AudioCompressedFormatAac | |
2 | sbc | AudioCompressedFormatSbc | |
3 | cvsd | AudioCompressedFormatCvsd | Added: HEAD |
4 | lc3 | AudioCompressedFormatLc3 | Added: HEAD |
AudioConsumerError strict
Defined in fuchsia.media/audio_consumer.fidl
Represents an AudioConsumer error condition.
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | place_holder | Void | |
AudioFormat strict
Defined in fuchsia.media/stream_common.fidl
AudioFormat
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | compressed | AudioCompressedFormat | |
2 | uncompressed | AudioUncompressedFormat | |
AudioUncompressedFormat strict
Defined in fuchsia.media/stream_common.fidl
AudioUncompressedFormat
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | pcm | PcmFormat | |
CryptoFormat flexible
Defined in fuchsia.media/stream_common.fidl
CryptoFormat
Crypto (encrypted or decrypted) format details.
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | encrypted | EncryptedFormat | |
2 | decrypted | DecryptedFormat | |
DomainFormat strict
Defined in fuchsia.media/stream_common.fidl
DomainFormat
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | audio | AudioFormat | |
2 | video | VideoFormat | |
3 | crypto | CryptoFormat | |
EncoderSettings flexible
Defined in fuchsia.media/stream_common.fidl
Settings for encoders that tell them how to encode raw formats.
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | sbc | SbcEncoderSettings | |
2 | aac | AacEncoderSettings | |
3 | h264 | H264EncoderSettings | |
4 | hevc | HevcEncoderSettings | |
5 | cvsd | CvsdEncoderSettings | Added: HEAD |
6 | lc3 | Lc3EncoderSettings | Added: HEAD |
7 | msbc | MSbcEncoderSettings | Added: HEAD |
MediumSpecificStreamType strict
Defined in fuchsia.media/stream_type.fidl
A union of all medium-specific stream type structs.
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | audio | AudioStreamType | |
2 | video | VideoStreamType | |
3 | text | TextStreamType | |
4 | subpicture | SubpictureStreamType | |
Usage strict
Defined in fuchsia.media/audio_core.fidl
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | render_usage | AudioRenderUsage | |
2 | capture_usage | AudioCaptureUsage | |
UsageState flexible
Defined in fuchsia.media/usage_reporter.fidl
The state of audio policy enforcement on a stream or set of streams.
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | unadjusted | UsageStateUnadjusted | |
2 | ducked | UsageStateDucked | |
3 | muted | UsageStateMuted | |
Value strict
Defined in fuchsia.media/stream_common.fidl
Value
Generic "value" for use within generic "Parameter" struct.
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | bool_value | bool | |
2 | uint64_value | uint64 | |
3 | int64_value | int64 | |
4 | string_value | string | |
5 | bytes_value | vector&lt;uint8&gt; | |
VideoCompressedFormat strict
Defined in fuchsia.media/stream_common.fidl
VideoCompressedFormat
Compressed video format details.
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | temp_field_todo_remove | uint32 | |
VideoFormat strict
Defined in fuchsia.media/stream_common.fidl
VideoFormat
Video (compressed or uncompressed) format details. In this context, "uncompressed" can include block-based image compression formats that still permit fairly fast random access to image data.
Ordinal | Variant | Type | Description |
---|---|---|---|
1 | compressed | VideoCompressedFormat | |
2 | uncompressed | VideoUncompressedFormat | |
BITS
AudioConsumerStartFlags strict
Type: uint32
Defined in fuchsia.media/audio_consumer.fidl
Flags passed to AudioConsumer.Start.
Name | Value | Description |
---|---|---|
LOW_LATENCY | 1 | Indicates that latency should be kept as low as possible. |
SUPPLY_DRIVEN | 2 | Indicates that the timing of packet delivery is determined by an external process rather than being demand-based. When this flag is set, the service should expect underflow or overflow due to a mismatch between packet arrival rate and presentation rate. When this flag is not set, packets arrive on demand. |
AudioGainInfoFlags strict
Type: uint32
Defined in fuchsia.media/audio_device_enumerator.fidl
Name | Value | Description |
---|---|---|
MUTE | 1 | |
AGC_SUPPORTED | 2 | |
AGC_ENABLED | 4 | |
AudioGainValidFlags strict
Type: uint32
Defined in fuchsia.media/audio_device_enumerator.fidl
Name | Value | Description |
---|---|---|
GAIN_VALID | 1 | |
MUTE_VALID | 2 | |
AGC_VALID | 4 | |
CONSTANTS
Name | Value | Type | Description |
---|---|---|---|
AUDIO_ENCODING_AAC | fuchsia.media.aac | String | Audio encodings. |
AUDIO_ENCODING_AACLATM | fuchsia.media.aaclatm | String | |
AUDIO_ENCODING_AMRNB | fuchsia.media.amrnb | String | |
AUDIO_ENCODING_AMRWB | fuchsia.media.amrwb | String | |
AUDIO_ENCODING_APTX | fuchsia.media.aptx | String | |
AUDIO_ENCODING_FLAC | fuchsia.media.flac | String | |
AUDIO_ENCODING_GSMMS | fuchsia.media.gsmms | String | |
AUDIO_ENCODING_LPCM | fuchsia.media.lpcm | String | |
AUDIO_ENCODING_MP3 | fuchsia.media.mp3 | String | |
AUDIO_ENCODING_OPUS | fuchsia.media.opus | String | |
AUDIO_ENCODING_PCMALAW | fuchsia.media.pcmalaw | String | |
AUDIO_ENCODING_PCMMULAW | fuchsia.media.pcmmulaw | String | |
AUDIO_ENCODING_SBC | fuchsia.media.sbc | String | |
AUDIO_ENCODING_VORBIS | fuchsia.media.vorbis | String | |
CAPTURE_USAGE_COUNT | 4 | uint8 | |
ENCRYPTION_SCHEME_CBC1 | cbc1 | String | |
ENCRYPTION_SCHEME_CBCS | cbcs | String | |
ENCRYPTION_SCHEME_CENC | cenc | String | |
ENCRYPTION_SCHEME_CENS | cens | String | |
ENCRYPTION_SCHEME_UNENCRYPTED | unencrypted | String | |
MAX_ENCRYPTION_SCHEME_SIZE | 100 | uint32 | |
MAX_FRAMES_PER_RENDERER_PACKET | 262143 | int64 | The maximum number of frames that may be contained within a single StreamPacket. |
MAX_INIT_VECTOR_SIZE | 16 | uint32 | |
MAX_KEY_ID_SIZE | 16 | uint32 | |
MAX_PCM_CHANNEL_COUNT | 8 | uint32 | |
MAX_PCM_FRAMES_PER_SECOND | 192000 | uint32 | |
METADATA_LABEL_ALBUM | fuchsia.media.album | String | |
METADATA_LABEL_ARTIST | fuchsia.media.artist | String | |
METADATA_LABEL_COMPOSER | fuchsia.media.composer | String | |
METADATA_LABEL_EPISODE | fuchsia.media.episode | String | |
METADATA_LABEL_GENRE | fuchsia.media.genre | String | |
METADATA_LABEL_PUBLISHER | fuchsia.media.publisher | String | |
METADATA_LABEL_RELEASE_DATE | fuchsia.media.release_date | String | |
METADATA_LABEL_SEASON | fuchsia.media.season | String | |
METADATA_LABEL_STUDIO | fuchsia.media.studio | String | |
METADATA_LABEL_SUBTITLE | fuchsia.media.subtitle | String | |
METADATA_LABEL_TITLE | fuchsia.media.title | String | |
METADATA_LABEL_TRACK_NUMBER | fuchsia.media.track_number | String | |
METADATA_SOURCE_TITLE | fuchsia.media.source_title | String | The title of the source of the media, e.g. a player, streaming service, or website. |
MIN_PCM_CHANNEL_COUNT | 1 | uint32 | Permitted ranges for AudioRenderer and AudioCapturer |
MIN_PCM_FRAMES_PER_SECOND | 1000 | uint32 | |
NO_TIMESTAMP | 9223372036854775807 | int64 | When used as a |
RENDER_USAGE_COUNT | 5 | uint8 | |
STREAM_PACKET_FLAG_DISCONTINUITY | 4 | uint32 | Indicates a discontinuity in an otherwise continuous-in-time sequence of packets. The precise semantics of this flag depend on the context in which the |
STREAM_PACKET_FLAG_DROPPABLE | 2 | uint32 | Indicates that all other packets in the stream can be understood without reference to this packet. This is typically used in compressed streams to identify packets containing frames that may be discarded without affecting other frames. |
STREAM_PACKET_FLAG_KEY_FRAME | 1 | uint32 | Indicates that the packet can be understood without reference to other packets in the stream. This is typically used in compressed streams to identify packets that contain key frames. |
VIDEO_ENCODING_H263 | fuchsia.media.h263 | String | Video encodings. |
VIDEO_ENCODING_H264 | fuchsia.media.h264 | String | |
VIDEO_ENCODING_MPEG4 | fuchsia.media.mpeg4 | String | |
VIDEO_ENCODING_THEORA | fuchsia.media.theora | String | |
VIDEO_ENCODING_UNCOMPRESSED | fuchsia.media.uncompressed_video | String | |
VIDEO_ENCODING_VP3 | fuchsia.media.vp3 | String | |
VIDEO_ENCODING_VP8 | fuchsia.media.vp8 | String | |
VIDEO_ENCODING_VP9 | fuchsia.media.vp9 | String | |
kMaxOobBytesSize | 8192 | uint64 | |
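The STREAM_PACKET_FLAG_* constants above are bit flags that are OR'd together in a packet's flags field. A minimal sketch (plain Python constants mirroring the values in the table; no FIDL bindings involved):

```python
# Bit-flag values copied from the constants table above.
STREAM_PACKET_FLAG_KEY_FRAME = 1
STREAM_PACKET_FLAG_DROPPABLE = 2
STREAM_PACKET_FLAG_DISCONTINUITY = 4

# A packet that is a key frame following a discontinuity combines two flags.
flags = STREAM_PACKET_FLAG_KEY_FRAME | STREAM_PACKET_FLAG_DISCONTINUITY

assert flags & STREAM_PACKET_FLAG_KEY_FRAME          # key-frame bit is set
assert flags & STREAM_PACKET_FLAG_DISCONTINUITY      # discontinuity bit is set
assert not (flags & STREAM_PACKET_FLAG_DROPPABLE)    # droppable bit is not
```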
ALIASES
Name | Value | Description |
---|---|---|
CompressionType | string[256] | An identifier for compression types. |
EncryptionScheme | string[MAX_ENCRYPTION_SCHEME_SIZE] | |
InitVector | vector[MAX_INIT_VECTOR_SIZE] | |
KeyId | vector[MAX_KEY_ID_SIZE] | |