fuchsia.sysmem2

Added: 12

PROTOCOLS

Allocator

Defined in fuchsia.sysmem2/allocator.fidl

Allocates system memory buffers.

Epitaphs are not used in this protocol.

Added: 19

AllocateNonSharedCollection

Allocates a buffer collection on behalf of a single client (aka initiator) who is also the only participant (from the point of view of sysmem).

This call exists mainly for temp/testing purposes. This call skips the fuchsia.sysmem2/BufferCollectionToken stage, so there's no way to allow another participant to specify its constraints.

Real clients are encouraged to use fuchsia.sysmem2/Allocator.AllocateSharedCollection instead, and to let relevant participants directly convey their own constraints to sysmem by sending BufferCollectionTokens to those participants.

Request

Name | Type
payload | AllocatorAllocateNonSharedCollectionRequest

AllocateSharedCollection

Creates a root fuchsia.sysmem2/BufferCollectionToken.

The BufferCollectionToken can be "duplicated" for distribution to participants by using fuchsia.sysmem2/BufferCollectionToken.Duplicate. Each BufferCollectionToken can be converted into a fuchsia.sysmem2/BufferCollection using fuchsia.sysmem2/Allocator.BindSharedCollection.

Buffer constraints can be set via fuchsia.sysmem2/BufferCollection.SetConstraints.

Success/failure to populate the buffer collection with buffers can be determined from fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated.

Closing the client end of a BufferCollectionToken or BufferCollection (without Release first) will fail all client ends in the same failure domain, which by default is all client ends of the buffer collection. See fuchsia.sysmem2/BufferCollection.SetDispensable and fuchsia.sysmem2/BufferCollection.AttachToken for ways to create separate failure domains within a buffer collection.
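The setup flow described above can be sketched as FIDL declarations. The method shapes below are an illustrative reconstruction from this page's descriptions, not the authoritative definitions in allocator.fidl; field names, ordinals, and types are assumptions.

```fidl
// Sketch only; see allocator.fidl for the real definitions.
protocol Allocator {
    // Creates the root BufferCollectionToken for a new buffer collection.
    AllocateSharedCollection(resource table {
        1: token_request server_end:BufferCollectionToken;
    });
    // Converts a (synced) token into a BufferCollection client end.
    BindSharedCollection(resource table {
        1: token client_end:BufferCollectionToken;
        2: buffer_collection_request server_end:BufferCollection;
    });
};
```

A typical initiator then sends BufferCollectionToken.Duplicate once per additional participant, syncs, distributes the duplicates, converts its own token via BindSharedCollection, sends BufferCollection.SetConstraints, and waits on BufferCollection.WaitForAllBuffersAllocated.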

Request

Name | Type
payload | AllocatorAllocateSharedCollectionRequest

BindSharedCollection

Convert a fuchsia.sysmem2/BufferCollectionToken into a fuchsia.sysmem2/BufferCollection.

At the time of sending this message, the buffer collection hasn't yet been populated with buffers - the participant must first also send fuchsia.sysmem2/BufferCollection.SetConstraints via the BufferCollection client end.

All BufferCollectionToken(s) duplicated from a root BufferCollectionToken (created via AllocateSharedCollection) must be "turned in" via BindSharedCollection (or closed via fuchsia.sysmem2/BufferCollectionToken.Release), and all existing BufferCollection client ends must have sent SetConstraints before the logical BufferCollection will be populated with buffers (or will fail if the overall set of constraints can't be satisfied).

Request

Name | Type
payload | AllocatorBindSharedCollectionRequest

GetVmoInfo

Given a handle to a sysmem-provided VMO, this returns additional info about the corresponding sysmem logical buffer.

Most callers will duplicate a VMO handle first and send the duplicate to this call.

If the client has created a child VMO of a sysmem-provided VMO, that child VMO isn't considered a "sysmem VMO" for purposes of this call.

  • request vmo A handle to a sysmem-provided VMO (or see errors).
  • response buffer_collection_id The buffer collection ID, which is unique per logical buffer collection per boot.
  • response buffer_index The buffer index of the buffer within the buffer collection. This is the same as the index of the buffer within fuchsia.sysmem2/BufferCollectionInfo.buffers. The buffer_index is the same for all sysmem-delivered VMOs corresponding to the same logical buffer, even if the VMO koids differ. The buffer_index is only unique across buffers of a buffer collection. For a given buffer, the combination of buffer_collection_id and buffer_index is unique per boot.
  • response close_weak_asap Iff vmo is a handle to a weak sysmem VMO, the close_weak_asap field will be set in the response. This handle will signal ZX_EVENTPAIR_PEER_CLOSED when all weak VMO handles to the buffer should be closed as soon as possible. This is signalled shortly after all strong sysmem VMOs to the buffer are closed (including any held indirectly via strong BufferCollectionToken or strong BufferCollection). Failure to close all weak sysmem VMO handles to the buffer quickly upon ZX_EVENTPAIR_PEER_CLOSED is considered a VMO leak caused by the client still holding a weak sysmem VMO handle and results in loud complaints to the log by sysmem. The buffers of a collection can be freed independently of each other. The ZX_EVENTPAIR_PEER_CLOSED may already be signalled before the response arrives at the client. A client that isn't prepared to handle weak sysmem VMOs, on seeing this field set, can close all handles to the buffer and fail any associated request.
  • error [fuchsia.sysmem2/Error.NOT_FOUND] - the vmo isn't a sysmem VMO. Both strong and weak sysmem VMOs can be passed to this call, and the VMO handle passed in to this call itself keeps the VMO's info alive for purposes of responding to this call. Because of this, ZX_ERR_NOT_FOUND errors are unambiguous (even if there are no other handles to the VMO when calling; even if other handles are closed before the GetVmoInfo response arrives at the client).
  • error [fuchsia.sysmem2/Error.HANDLE_ACCESS_DENIED] The vmo isn't capable of being used with GetVmoInfo due to rights/capability attenuation. The VMO needs to be usable with zx_vmo_get_info with topic ZX_INFO_HANDLE_BASIC.
  • error [fuchsia.sysmem2/Error.UNSPECIFIED] The request failed for an unspecified reason. See the log for more info.
  • error [fuchsia.sysmem2/Error.PROTOCOL_DEVIATION] The vmo field wasn't set, or there was some other problem with the request field(s).
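The request/response fields listed above can be sketched as FIDL tables. This is a hypothetical reconstruction from the field list on this page; ordinals and exact types are assumptions, and allocator.fidl is authoritative.

```fidl
// Sketch only; ordinals and types are assumptions.
type AllocatorGetVmoInfoRequest = resource table {
    1: vmo zx.Handle:VMO;
};
type AllocatorGetVmoInfoResponse = resource table {
    1: buffer_collection_id uint64;
    2: buffer_index uint64;
    // Set iff `vmo` is a weak sysmem VMO; ZX_EVENTPAIR_PEER_CLOSED
    // signals that all weak VMO handles should be closed asap.
    3: close_weak_asap zx.Handle:EVENTPAIR;
};
```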

Request

Name | Type
payload | AllocatorGetVmoInfoRequest

Response

Name | Type
payload | Allocator_GetVmoInfo_Result

SetDebugClientInfo

Set information about the current client that can be used by sysmem to help diagnose leaking memory and allocation stalls waiting for a participant to send fuchsia.sysmem2/BufferCollection.SetConstraints.

This sets the debug client info on all fuchsia.sysmem2/Node(s) subsequently created by this fuchsia.sysmem2/Allocator including any fuchsia.sysmem2/BufferCollection(s) created via fuchsia.sysmem2/Allocator.BindSharedCollection (in the absence of any prior call to fuchsia.sysmem2/Allocator.SetDebugClientInfo, these BufferCollection(s) have the same initial debug client info as the token turned in to create the BufferCollection).

This info can be subsequently overridden on a per-Node basis by sending fuchsia.sysmem2/Node.SetDebugClientInfo.

Sending fuchsia.sysmem2/Allocator.SetDebugClientInfo once per Allocator is the most efficient way to ensure that all fuchsia.sysmem2/Node(s) will have at least some debug client info set, and is also more efficient than separately sending the same debug client info via fuchsia.sysmem2/Node.SetDebugClientInfo for each created fuchsia.sysmem2/Node.

  • request name This can be an arbitrary string, but the current process name (see fsl::GetCurrentProcessName) is a good default.
  • request id This can be an arbitrary id, but the current process ID (see fsl::GetCurrentProcessKoid) is a good default.
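The two request fields above can be sketched as a FIDL table. This is an illustrative reconstruction; ordinals and types are assumptions.

```fidl
// Sketch only; ordinals and types are assumptions.
type AllocatorSetDebugClientInfoRequest = table {
    1: name string:MAX; // e.g. the current process name
    2: id uint64;       // e.g. the current process koid
};
```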

Request

Name | Type
payload | AllocatorSetDebugClientInfoRequest

ValidateBufferCollectionToken

Checks whether a fuchsia.sysmem2/BufferCollectionToken is known to the sysmem server.

With this call, the client can determine whether an incoming token is a real sysmem token that is known to the sysmem server, without any risk of getting stuck waiting forever on a potentially fake token to complete fuchsia.sysmem2/BufferCollectionToken.DuplicateSync or fuchsia.sysmem2/BufferCollectionToken.Sync (or any other two-way FIDL message). In cases where the client trusts the source of the token to provide a real token, this call is not typically needed outside of debugging.

If the validate fails sometimes but succeeds other times, the source of the token may itself not be calling fuchsia.sysmem2/BufferCollectionToken.Sync or fuchsia.sysmem2/BufferCollection.Sync after creating/duplicating the token but before sending the token to the current client. It may be more convenient for the source to use fuchsia.sysmem2/BufferCollectionToken.DuplicateSync to duplicate token(s), since that call has the sync step built in. Or, the buffer collection may be failing before this call is processed by the sysmem server, as buffer collection failure cleans up sysmem's tracking of associated tokens.

This call has no effect on any token.

  • request token_server_koid The koid of the server end of a channel that might be a BufferCollectionToken channel. This can be obtained via zx_object_get_info ZX_INFO_HANDLE_BASIC related_koid.
  • response is_known true means sysmem knew of the token at the time sysmem processed the request, but doesn't guarantee that the token is still valid by the time the client receives the reply. What it does guarantee is that the token at least was a real token, so a two-way call to the token won't stall forever (it will fail or succeed fairly quickly rather than stalling). This can already be known implicitly if the source of the token can be trusted to provide a real token. false means the token wasn't known to sysmem at the time sysmem processed this call; the token may have previously been valid, may yet become valid, or, if the sender of the token isn't trusted to provide a real token, may be fake. It's the responsibility of the sender to sync with sysmem to ensure that previously created/duplicated token(s) are known to sysmem, before sending the token(s) to other participants.
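The request/response pair above can be sketched as FIDL tables. This is an illustrative reconstruction; ordinals and types (including whether the koid is typed as zx.Koid or a plain uint64) are assumptions.

```fidl
// Sketch only; ordinals and types are assumptions.
type AllocatorValidateBufferCollectionTokenRequest = table {
    // Obtained via zx_object_get_info ZX_INFO_HANDLE_BASIC related_koid.
    1: token_server_koid zx.Koid;
};
type AllocatorValidateBufferCollectionTokenResponse = table {
    1: is_known bool;
};
```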

Request

Name | Type
payload | AllocatorValidateBufferCollectionTokenRequest

Response

Name | Type
payload | Allocator_ValidateBufferCollectionToken_Result

BufferCollection

Defined in fuchsia.sysmem2/collection.fidl

fuchsia.sysmem2/BufferCollection is a connection directly from a participant to sysmem regarding a buffer collection; often the buffer collection is shared with other participants which have their own BufferCollection client end(s) associated with the same buffer collection. In other words, an instance of the BufferCollection interface is a view of a buffer collection, not the buffer collection itself.

The BufferCollection connection exists to facilitate async indication of when the buffer collection has been populated with buffers.

Also, the channel's closure by the sysmem server is an indication to the client that the client should close all VMO handles that were obtained from the BufferCollection ASAP.

Some buffer collections can use enough memory that it can be worth avoiding allocation overlap (in time) using fuchsia.sysmem2/BufferCollection.AttachLifetimeTracking so that the initiator can tell when enough buffers of the buffer collection have been fully deallocated prior to the initiator allocating a new buffer collection.

Epitaphs are not used in this protocol.

Added: 19

AttachLifetimeTracking

Set up an eventpair to be signalled (ZX_EVENTPAIR_PEER_CLOSED) when buffers have been allocated and only the specified number of buffers (or fewer) remain in the buffer collection.

fuchsia.sysmem2/BufferCollection.AttachLifetimeTracking allows a client to wait until an old buffer collection is fully or mostly deallocated before attempting allocation of a new buffer collection. The eventpair is only signalled when the other buffers have been fully deallocated (not just un-referenced by clients, but all the memory consumed by those buffers has been fully reclaimed/recycled), or when allocation or logical allocation fails for the tree or subtree including this fuchsia.sysmem2/BufferCollection.

The eventpair won't be signalled until allocation or logical allocation has completed; until then, the collection's current buffer count is ignored.

If logical allocation fails for an attached subtree (using fuchsia.sysmem2/BufferCollection.AttachToken), the server end of the eventpair will close during that failure regardless of the number of buffers potentially allocated in the overall buffer collection. This is for logical allocation consistency with normal allocation.

The lifetime signalled by this event includes asynchronous cleanup of allocated buffers, and this asynchronous cleanup cannot occur until all holders of VMO handles to the buffers have closed those VMO handles. Therefore, clients should take care not to become blocked forever waiting for ZX_EVENTPAIR_PEER_CLOSED to be signalled if any of the participants using the logical buffer collection are less trusted or less reliable. Failure to allocate a new/replacement buffer collection is much better than getting stuck forever.

This mechanism is meant to be compatible with other protocols with a similar AttachLifetimeTracking message; duplicates of the same eventpair handle (server end) can be sent via more than one AttachLifetimeTracking message to different protocols, and the ZX_EVENTPAIR_PEER_CLOSED will be signalled for the client end when all the conditions are met (all holders of duplicates have closed their server end handle(s)). Also, thanks to how eventpair endpoints work, the client end can be duplicated without preventing the ZX_EVENTPAIR_PEER_CLOSED signal.

The server intentionally doesn't "trust" any signals on the server_end. This mechanism intentionally uses only ZX_EVENTPAIR_PEER_CLOSED which can't be set "early", and is only set when all handles to the server end eventpair are closed. No meaning is associated with any of the other signals, and clients should ignore any other signal bits on either end of the eventpair.

The server_end may lack ZX_RIGHT_SIGNAL or ZX_RIGHT_SIGNAL_PEER, but must have ZX_RIGHT_DUPLICATE (and must have ZX_RIGHT_TRANSFER to transfer without causing BufferCollection channel failure).

All table fields are currently required.

  • request server_end This eventpair handle will be closed by the sysmem server when buffers have been allocated initially and the number of buffers is then less than or equal to buffers_remaining.
  • request buffers_remaining Wait for all but buffers_remaining (or fewer) buffers to be fully deallocated. A number greater than zero can be useful in situations where a known number of buffers are intentionally not closed so that the data can continue to be used, such as for keeping the last available video frame displayed in the UI even if the video stream was using protected output buffers. It's outside the scope of the BufferCollection interface (at least for now) to determine how many buffers may be held without closing, but it'll typically be in the range 0-2.
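The request fields above can be sketched as a FIDL table. This is an illustrative reconstruction; ordinals and types (e.g. the width of buffers_remaining) are assumptions.

```fidl
// Sketch only; ordinals and types are assumptions.
type BufferCollectionAttachLifetimeTrackingRequest = resource table {
    // Server end of the eventpair; closed by sysmem (signalling
    // ZX_EVENTPAIR_PEER_CLOSED on the client end) once buffers are
    // allocated and <= buffers_remaining of them remain.
    1: server_end zx.Handle:EVENTPAIR;
    2: buffers_remaining uint32;
};
```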

Request

Name | Type
payload | BufferCollectionAttachLifetimeTrackingRequest

AttachToken

Create a new token to add a new participant to an existing logical buffer collection, if the existing collection's buffer counts, constraints, and participants allow.

This can be useful in replacing a failed participant, and/or in adding/re-adding a participant after buffers have already been allocated.

When fuchsia.sysmem2/BufferCollection.AttachToken is used, the sub tree rooted at the attached fuchsia.sysmem2/BufferCollectionToken goes through the normal procedure of setting constraints or closing fuchsia.sysmem2/Node(s), and then appearing to allocate buffers from clients' point of view, despite the possibility that all the buffers were actually allocated previously. This process is called "logical allocation". Most instances of "allocation" in docs for other messages can also be read as "allocation or logical allocation" while remaining valid, but we just say "allocation" in most places for brevity/clarity of explanation, with the details of "logical allocation" left for the docs here on AttachToken.

Failure of an attached Node does not propagate to the parent of the attached Node. More generally, failure of a child Node is blocked from reaching its parent Node if the child is attached, or if the child is dispensable and the failure occurred after logical allocation (see fuchsia.sysmem2/BufferCollectionToken.SetDispensable).

A participant may in some scenarios choose to initially use a dispensable token for a given instance of a delegate participant, and then later if the first instance of that delegate participant fails, a new second instance of that delegate participant may be given a token created with AttachToken.

From the point of view of the fuchsia.sysmem2/BufferCollectionToken client end, the token acts like any other token. The client can fuchsia.sysmem2/BufferCollectionToken.Duplicate the token as needed, and can send the token to a different process/participant. The BufferCollectionToken Node should be converted to a BufferCollection Node as normal by sending fuchsia.sysmem2/Allocator.BindSharedCollection, or can be closed without causing subtree failure by sending fuchsia.sysmem2/BufferCollectionToken.Release. Assuming the former, the fuchsia.sysmem2/BufferCollection.SetConstraints message or fuchsia.sysmem2/BufferCollection.Release message should be sent to the BufferCollection.

Within the subtree, a success result from fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated means the subtree participants' constraints were satisfiable using the already-existing buffer collection, the already-established fuchsia.sysmem2/BufferCollectionInfo including image format constraints, and the already-existing other participants (already added via successful logical allocation) and their specified buffer counts in their constraints. A failure result means the new participants' constraints cannot be satisfied using the existing buffer collection and its already-added participants. Creating a new collection instead may allow all participants' constraints to be satisfied, assuming SetDispensable is used in place of AttachToken, or a normal token is used.

A token created with AttachToken performs constraints aggregation with all constraints currently in effect on the buffer collection, plus the attached token under consideration plus child tokens under the attached token which are not themselves an attached token or under such a token. Further subtrees under this subtree are considered for logical allocation only after this subtree has completed logical allocation.

Assignment of existing buffers to participants' fuchsia.sysmem2/BufferCollectionConstraints.min_buffer_count_for_camping etc is first-come first-served, but a child can't logically allocate before all its parents have sent SetConstraints.

See also fuchsia.sysmem2/BufferCollectionToken.SetDispensable, which in contrast to AttachToken, has the created token Node + child Node(s) (in the created subtree but not in any subtree under this subtree) participate in constraints aggregation along with its parent during the parent's allocation or logical allocation.

Similar to fuchsia.sysmem2/BufferCollectionToken.Duplicate, the newly created token needs to be synced to sysmem (see fuchsia.sysmem2/Node.Sync) before the new token can be passed to BindSharedCollection. The Sync of the new token can be accomplished with fuchsia.sysmem2/BufferCollection.Sync after converting the created BufferCollectionToken to a BufferCollection. Alternately, fuchsia.sysmem2/BufferCollectionToken.Sync on the new token also works, as does fuchsia.sysmem2/BufferCollectionToken.DuplicateSync. As usual, a BufferCollectionToken.Sync can be started after any BufferCollectionToken.Duplicate messages have been sent via the newly created token, to also sync those additional tokens to sysmem using a single round-trip.

All table fields are currently required.

  • request rights_attenuation_mask This allows attenuating the VMO rights of the subtree. These values for rights_attenuation_mask result in no attenuation (note that 0 is not on this list):
    • ZX_RIGHT_SAME_RIGHTS (preferred)
    • 0xFFFFFFFF (this is reasonable when an attenuation mask is computed)
  • request token_request The server end of the BufferCollectionToken channel. The client retains the client end.
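The request fields above can be sketched as a FIDL table. This is an illustrative reconstruction; ordinals and types are assumptions.

```fidl
// Sketch only; ordinals and types are assumptions.
type BufferCollectionAttachTokenRequest = resource table {
    // ZX_RIGHT_SAME_RIGHTS (preferred) or 0xFFFFFFFF for no attenuation.
    1: rights_attenuation_mask uint32;
    // Server end of the new BufferCollectionToken channel; the client
    // retains the client end.
    2: token_request server_end:BufferCollectionToken;
};
```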

Request

Name | Type
payload | BufferCollectionAttachTokenRequest

CheckAllBuffersAllocated

Checks whether all the buffers have been allocated, in a polling fashion.

Added: HEAD

Request

<EMPTY>

Response

Name | Type
payload | BufferCollection_CheckAllBuffersAllocated_Result

GetBufferCollectionId

Get the buffer collection ID. This ID is also available from fuchsia.sysmem2/Allocator.GetVmoInfo (along with the buffer_index within the collection).

This call is mainly useful in situations where we can't convey a fuchsia.sysmem2/BufferCollectionToken or fuchsia.sysmem2/BufferCollection directly, but can only convey a VMO handle, which can be joined back up with a BufferCollection client end that was created via a different path. Prefer to convey a BufferCollectionToken or BufferCollection directly when feasible.

Trusting a buffer_collection_id value from a source other than sysmem is analogous to trusting a koid value from a source other than zircon. Both should be avoided unless really necessary, and both require caution. In some situations it may be reasonable to refer to a pre-established BufferCollection by buffer_collection_id via a protocol for efficiency reasons, but an incoming value purporting to be a buffer_collection_id is not sufficient alone to justify granting the sender of the buffer_collection_id any capability. The sender must first prove to a receiver that the sender has/had a VMO or has/had a BufferCollectionToken to the same collection by sending a handle that sysmem confirms is a valid sysmem handle and which sysmem maps to the buffer_collection_id value. The receiver should take care to avoid assuming that a sender had a BufferCollectionToken in cases where the sender has only proven that the sender had a VMO.

  • response buffer_collection_id This ID is unique per buffer collection per boot. Each buffer is uniquely identified by the buffer_collection_id and buffer_index together.

Request

<EMPTY>

Response

Name | Type
payload | Node_GetBufferCollectionId_Result

GetNodeRef

This gets a handle that can be used as a parameter to fuchsia.sysmem2/Node.IsAlternateFor called on any fuchsia.sysmem2/Node. This handle is only for use as proof that the client obtained this handle from this Node.

Because this is a get not a set, no fuchsia.sysmem2/Node.Sync is needed between the GetNodeRef and the call to IsAlternateFor, despite the two calls typically being on different channels.

See also fuchsia.sysmem2/Node.IsAlternateFor.

All table fields are currently required.

  • response node_ref This handle can be sent via IsAlternateFor on a different Node channel, to prove that the client obtained the handle from this Node.

Request

<EMPTY>

Response

Name | Type
payload | Node_GetNodeRef_Result

IsAlternateFor

Check whether the calling fuchsia.sysmem2/Node is in a subtree rooted at a different child token of a common parent fuchsia.sysmem2/BufferCollectionTokenGroup, in relation to the passed-in node_ref.

This call is for assisting with admission control de-duplication, and with debugging.

The node_ref must be obtained using fuchsia.sysmem2/Node.GetNodeRef.

The node_ref can be a duplicated handle; it's not necessary to call GetNodeRef for every call to fuchsia.sysmem2/Node.IsAlternateFor.

If a calling token may not actually be a valid token at all due to a potentially hostile/untrusted provider of the token, call fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken first instead of potentially getting stuck indefinitely if IsAlternateFor never responds due to a calling token not being a real token (not really talking to sysmem). Another option is to call fuchsia.sysmem2/Allocator.BindSharedCollection with this token first which also validates the token along with converting it to a fuchsia.sysmem2/BufferCollection, then call IsAlternateFor.

All table fields are currently required.

  • response is_alternate
    • true: The first parent node in common between the calling node and the node_ref Node is a BufferCollectionTokenGroup. This means that the calling Node and the node_ref Node will not have both their constraints apply - rather sysmem will choose one or the other of the constraints - never both. This is because only one child of a BufferCollectionTokenGroup is selected during logical allocation, with only that one child's subtree contributing to constraints aggregation.
    • false: The first parent node in common between the calling Node and the node_ref Node is not a BufferCollectionTokenGroup. Currently, this means the first parent node in common is a BufferCollectionToken or BufferCollection (regardless of whether Release has been sent). This means that the calling Node and the node_ref Node may have both their constraints apply during constraints aggregation of the logical allocation, if both Node(s) are selected by any parent BufferCollectionTokenGroup(s) involved. In this case, there is no BufferCollectionTokenGroup that will directly prevent the two Node(s) from both being selected and their constraints both aggregated, but even when false, one or both Node(s) may still be eliminated from consideration if one or both Node(s) has a direct or indirect parent BufferCollectionTokenGroup which selects a child subtree other than the subtree containing the calling Node or node_ref Node.
  • error [fuchsia.sysmem2/Error.NOT_FOUND] The node_ref wasn't associated with the same buffer collection as the calling Node. Another reason for this error is that the node_ref is a zx.Handle:EVENT handle with sufficient rights, but isn't actually a real node_ref obtained from GetNodeRef.
  • error [fuchsia.sysmem2/Error.PROTOCOL_DEVIATION] The caller passed a node_ref that isn't a zx.Handle:EVENT handle, or doesn't have the needed rights expected on a real node_ref.
  • No other failing status codes are returned by this call. However, sysmem may add additional codes in future, so the client should have sensible default handling for any failing status code.

Request

Name | Type
payload | NodeIsAlternateForRequest

Response

Name | Type
payload | Node_IsAlternateFor_Result

Release

On a fuchsia.sysmem2/BufferCollectionToken channel:

Normally a participant will convert a BufferCollectionToken into a fuchsia.sysmem2/BufferCollection, but a participant can instead send Release via the token (and then close the channel immediately or shortly later in response to server closing the server end), which avoids causing buffer collection failure. Without a prior Release, closing the BufferCollectionToken client end will cause buffer collection failure.

On a fuchsia.sysmem2/BufferCollection channel:

By default the server handles unexpected closure of a fuchsia.sysmem2/BufferCollection client end (without Release first) by failing the buffer collection. Partly this is to expedite closing VMO handles to reclaim memory when any participant fails. If a participant would like to cleanly close a BufferCollection without causing buffer collection failure, the participant can send Release before closing the BufferCollection client end. The Release can occur before or after SetConstraints. If before SetConstraints, the buffer collection won't require constraints from this node in order to allocate. If after SetConstraints, the constraints are retained and aggregated, despite the lack of BufferCollection connection at the time of constraints aggregation.

On a fuchsia.sysmem2/BufferCollectionTokenGroup channel:

By default, unexpected closure of a BufferCollectionTokenGroup client end (without Release first) will trigger failure of the buffer collection. To close a BufferCollectionTokenGroup channel without failing the buffer collection, ensure that AllChildrenPresent() has been sent, and send Release before closing the BufferCollectionTokenGroup client end.

If Release occurs before fuchsia.sysmem2/BufferCollectionTokenGroup.AllChildrenPresent, the buffer collection will fail (triggered by reception of Release without prior AllChildrenPresent). This is intentionally not analogous to how fuchsia.sysmem2/BufferCollection.Release without fuchsia.sysmem2/BufferCollection.SetConstraints first doesn't cause buffer collection failure. For a BufferCollectionTokenGroup, clean close requires AllChildrenPresent (if not already sent), then Release, then closing the client end.

If Release occurs after AllChildrenPresent, the children and all their constraints remain intact (just as they would if the BufferCollectionTokenGroup channel had remained open), and the client end close doesn't trigger buffer collection failure.

On all fuchsia.sysmem2/Node channels (any of the above):

For brevity, the per-channel-protocol paragraphs above ignore the separate failure domain created by fuchsia.sysmem2/BufferCollectionToken.SetDispensable or fuchsia.sysmem2/BufferCollection.AttachToken. When a client end unexpectedly closes (without Release first) and that client end is under a failure domain, instead of failing the whole buffer collection, the failure domain is failed, but the buffer collection itself is isolated from failure of the failure domain. Such failure domains can be nested, in which case only the inner-most failure domain in which the Node resides fails.

Request

<EMPTY>

SetConstraints

Provide fuchsia.sysmem2/BufferCollectionConstraints to the buffer collection.

A participant may only call fuchsia.sysmem2/BufferCollection.SetConstraints up to once per fuchsia.sysmem2/BufferCollection.

For buffer allocation to be attempted, all holders of a BufferCollection client end need to call SetConstraints before sysmem will attempt to allocate buffers.

  • request constraints These are the constraints on the buffer collection imposed by the sending client/participant. The constraints field is not required to be set. If not set, the client is not setting any actual constraints, but is indicating that the client has no constraints to set. A client that doesn't set the constraints field won't receive any VMO handles, but can still find out how many buffers were allocated and can still refer to buffers by their buffer_index.
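The request above can be sketched as a FIDL table. This is an illustrative reconstruction; the ordinal is an assumption, and the full shape of BufferCollectionConstraints is defined elsewhere in this library.

```fidl
// Sketch only; ordinal is an assumption.
type BufferCollectionSetConstraintsRequest = table {
    // Optional: omitting this field means "no constraints to set";
    // such a participant receives no VMO handles but can still learn
    // the allocated buffer count and refer to buffers by buffer_index.
    1: constraints BufferCollectionConstraints;
};
```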

Request

Name | Type
payload | BufferCollectionSetConstraintsRequest

SetDebugClientInfo

Set information about the current client that can be used by sysmem to help diagnose leaking memory and allocation stalls waiting for a participant to send fuchsia.sysmem2/BufferCollection.SetConstraints.

This sets the debug client info on this fuchsia.sysmem2/Node and all Node(s) derived from this Node, unless overriden by fuchsia.sysmem2/Allocator.SetDebugClientInfo or a later fuchsia.sysmem2/Node.SetDebugClientInfo.

Sending fuchsia.sysmem2/Allocator.SetDebugClientInfo once per Allocator is the most efficient way to ensure that all fuchsia.sysmem2/Node(s) will have at least some debug client info set, and is also more efficient than separately sending the same debug client info via fuchsia.sysmem2/Node.SetDebugClientInfo for each created fuchsia.sysmem2/Node.

Also used when verbose logging is enabled (see SetVerboseLogging) to indicate which client is closing their channel first, leading to subtree failure (which can be normal if the purpose of the subtree is over, but if happening earlier than expected, the client-channel-specific name can help diagnose where the failure is first coming from, from sysmem's point of view).

All table fields are currently required.

  • request name This can be an arbitrary string, but the current process name (see fsl::GetCurrentProcessName) is a good default.
  • request id This can be an arbitrary id, but the current process ID (see fsl::GetCurrentProcessKoid) is a good default.

Request

Name | Type
payload | NodeSetDebugClientInfoRequest

SetDebugTimeoutLogDeadline

Sysmem logs a warning if sysmem hasn't seen fuchsia.sysmem2/BufferCollection.SetConstraints from all clients within 5 seconds after creation of a new collection.

Clients can call this method to change when the log is printed. If multiple clients set the deadline, it's unspecified which deadline will take effect.

In most cases the default works well.

All table fields are currently required.

  • request deadline The time at which sysmem will start trying to log the warning, unless all constraints are with sysmem by then.

Request

NameType
payload NodeSetDebugTimeoutLogDeadlineRequest

SetName

Set a name for VMOs in this buffer collection.

If the name doesn't fit in ZX_MAX_NAME_LEN, the name of the VMO itself will be truncated to fit. The name of the VMO will be suffixed with the buffer index within the collection (if the suffix fits within ZX_MAX_NAME_LEN). The name specified here (without truncation) will be listed in the inspect data.

The name only affects VMOs allocated after the name is set; this call does not rename existing VMOs. If multiple clients set different names then the larger priority value will win. Setting a new name with the same priority as a prior name doesn't change the name.

All table fields are currently required.

  • request priority The name is only set if this is the first SetName or if priority is greater than any previous priority value in prior SetName calls across all Node(s) of this buffer collection.
  • request name The name for VMOs created under this buffer collection.

Request

NameType
payload NodeSetNameRequest
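The priority and truncation rules above can be sketched as follows. This is a hedged illustration: ZX_MAX_NAME_LEN is assumed to be 32 (including the terminating NUL), and the exact suffix format is an assumption, not the documented one.

```python
# Hypothetical sketch of the SetName rules: highest priority wins (ties keep
# the earlier name), and per-VMO names are suffixed with the buffer index
# when the suffixed name fits, else truncated.

ZX_MAX_NAME_LEN = 32  # assumption: Zircon's limit, including the NUL byte

def winning_name(set_name_calls):
    """Pick the collection name from (priority, name) pairs in call order.
    A later call only wins with a strictly greater priority."""
    best = None
    for priority, name in set_name_calls:
        if best is None or priority > best[0]:
            best = (priority, name)
    return best[1] if best else None

def vmo_name(base, buffer_index):
    """Suffix with the buffer index if the result fits; otherwise truncate
    the base name alone to fit. The ':' separator is illustrative."""
    usable = ZX_MAX_NAME_LEN - 1
    suffixed = f"{base}:{buffer_index}"
    if len(suffixed) <= usable:
        return suffixed
    return base[:usable]

name = winning_name([(0, "camera-pool"), (10, "display-pool"), (10, "ignored")])
print(name)               # display-pool
print(vmo_name(name, 3))  # display-pool:3
```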

SetVerboseLogging

This enables verbose logging for the buffer collection.

Verbose logging includes constraints set via fuchsia.sysmem2/BufferCollection.SetConstraints from each client along with info set via fuchsia.sysmem2/Node.SetDebugClientInfo (or fuchsia.sysmem2/Allocator.SetDebugClientInfo) and the structure of the tree of Node(s).

Normally sysmem prints only a single line complaint when aggregation fails, with just the specific detailed reason that aggregation failed, with little surrounding context. While this is often enough to diagnose a problem if only a small change was made and everything was working before the small change, it's often not particularly helpful for getting a new buffer collection to work for the first time. Especially with more complex trees of nodes, involving things like fuchsia.sysmem2/BufferCollection.AttachToken, fuchsia.sysmem2/BufferCollectionToken.SetDispensable, fuchsia.sysmem2/BufferCollectionTokenGroup nodes, and associated subtrees of nodes, verbose logging may help in diagnosing what the tree looks like and why it's failing a logical allocation, or why a tree or subtree is failing sooner than expected.

The intent of the extra logging is to be acceptable from a performance point of view, under the assumption that verbose logging is only enabled on a low number of buffer collections. Clients should only send this message when actively tracking down a bug.

Request

<EMPTY>

SetWeak

Sets the current fuchsia.sysmem2/Node and all child Node(s) created after this message to weak, which means that a client's Node client end (or a child created after this message) is not alone sufficient to keep allocated VMOs alive.

All VMOs obtained from weak Node(s) are weak sysmem VMOs. See also close_weak_asap.

This message is only permitted before the Node becomes ready for allocation (else the server closes the channel with ZX_ERR_BAD_STATE):

  • BufferCollectionToken: any time
  • BufferCollection: before SetConstraints
  • BufferCollectionTokenGroup: before AllChildrenPresent

Currently, no conversion from strong Node to weak Node after ready for allocation is provided, but a client can simulate that by creating an additional Node before allocation and setting that additional Node to weak, and then potentially at some point later sending Release and closing the client end of the client's strong Node, but keeping the client's weak Node.

Zero strong Node(s) and zero strong VMO handles will result in buffer collection failure (all Node client end(s) will see ZX_CHANNEL_PEER_CLOSED and all close_weak_asap client_end(s) will see ZX_EVENTPAIR_PEER_CLOSED), but sysmem (intentionally) won't notice this situation until all Node(s) are ready for allocation. For initial allocation to succeed, at least one strong Node is required to exist at allocation time, but after that client receives VMO handles, that client can BufferCollection.Release and close the client end without causing this type of failure.

This implies fuchsia.sysmem2/Node.SetWeakOk as well, but does not imply SetWeakOk with for_child_nodes_also true, which can be sent separately as appropriate.

Request

<EMPTY>

SetWeakOk

This indicates to sysmem that the client is prepared to pay attention to close_weak_asap.

If sent, this message must be before fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated.

All participants using a weak fuchsia.sysmem2/BufferCollection must send this message before WaitForAllBuffersAllocated, or a parent Node must have sent fuchsia.sysmem2/Node.SetWeakOk with for_child_nodes_also true, else the WaitForAllBuffersAllocated will trigger buffer collection failure.

This message is necessary because weak sysmem VMOs have not always been a thing, so older clients are not aware of the need to pay attention to close_weak_asap ZX_EVENTPAIR_PEER_CLOSED and close all remaining sysmem weak VMO handles asap. By having this message and requiring participants to indicate their acceptance of this aspect of the overall protocol, we avoid situations where an older client is delivered a weak VMO without any way for sysmem to get that VMO to close quickly later (and on a per-buffer basis).

A participant that doesn't handle close_weak_asap and also doesn't retrieve any VMO handles via WaitForAllBuffersAllocated doesn't need to send SetWeakOk (and doesn't need to have a parent Node send SetWeakOk with for_child_nodes_also true either). However, if that same participant has a child/delegate which does retrieve VMOs, that child/delegate will need to send SetWeakOk before WaitForAllBuffersAllocated.

  • request for_child_nodes_also If present and true, this means direct child nodes of this node created after this message plus all descendants of those nodes will behave as if SetWeakOk was sent on those nodes. Any child node of this node that was created before this message is not included. This setting is "sticky" in the sense that a subsequent SetWeakOk without this bool set to true does not reset the server-side bool. If this creates a problem for a participant, a workaround is to SetWeakOk with for_child_nodes_also true on child tokens instead, as appropriate. A participant should only set for_child_nodes_also true if the participant can really promise to obey close_weak_asap both for its own weak VMO handles, and for all weak VMO handles held by participants holding the corresponding child Node(s). When for_child_nodes_also is set, descendant Node(s) which are using sysmem(1) can be weak, despite the clients of those sysmem1 Node(s) not having any direct way to SetWeakOk or any direct way to find out about close_weak_asap. This only applies to descendants of this Node which are using sysmem(1), not to this Node when converted directly from a sysmem2 token to a sysmem(1) token, which will fail allocation unless an ancestor of this Node specified for_child_nodes_also true.

Request

NameType
payload NodeSetWeakOkRequest
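The for_child_nodes_also semantics described above (applies only to children created after the message, and the server-side bool is sticky) can be modeled with a short sketch. This is an assumption-level model of the documented behavior, not the server's code.

```python
# Hypothetical model of SetWeakOk propagation: for_child_nodes_also affects
# only children created AFTER the message, and a later SetWeakOk without
# the flag does not clear the sticky server-side bool.

class Node:
    def __init__(self, inherited_weak_ok=False):
        self.weak_ok = inherited_weak_ok
        self.weak_ok_for_children = inherited_weak_ok

    def set_weak_ok(self, for_child_nodes_also=False):
        self.weak_ok = True
        if for_child_nodes_also:  # sticky: never reset back to False here
            self.weak_ok_for_children = True

    def create_child(self):
        # Descendants of an affected child are also affected.
        return Node(inherited_weak_ok=self.weak_ok_for_children)

parent = Node()
early_child = parent.create_child()           # created before the message
parent.set_weak_ok(for_child_nodes_also=True)
parent.set_weak_ok()                          # does NOT clear the sticky bool
late_child = parent.create_child()

print(early_child.weak_ok)  # False: pre-existing children are not included
print(late_child.weak_ok)   # True: created after, inherits weak-ok
```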

Sync

Ensure that previous messages have been received server side. This is particularly useful after previous messages that created new tokens, because a token must be known to the sysmem server before sending the token to another participant.

Calling fuchsia.sysmem2/BufferCollectionToken.Sync on a token that isn't/wasn't a valid token risks the Sync stalling forever. See fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken for one way to mitigate the possibility of a hostile/fake fuchsia.sysmem2/BufferCollectionToken at the cost of one round trip. Another way is to pass the token to fuchsia.sysmem2/Allocator.BindSharedCollection, which also validates the token as part of exchanging it for a fuchsia.sysmem2/BufferCollection channel, and fuchsia.sysmem2/BufferCollection.Sync can then be used without risk of stalling.

After creating one or more fuchsia.sysmem2/BufferCollectionToken(s) and then starting and completing a Sync, it's then safe to send the BufferCollectionToken client ends to other participants knowing the server will recognize the tokens when they're sent by the other participants to sysmem in a fuchsia.sysmem2/Allocator.BindSharedCollection message. This is an efficient way to create tokens while avoiding unnecessary round trips.

Other options include waiting for each fuchsia.sysmem2/BufferCollectionToken.Duplicate to complete individually (using a separate call to Sync after each), or calling fuchsia.sysmem2/BufferCollection.Sync after a token has been converted to a BufferCollection via fuchsia.sysmem2/Allocator.BindSharedCollection, or using fuchsia.sysmem2/BufferCollectionToken.DuplicateSync which includes the sync step and can create multiple tokens at once.

Request

<EMPTY>

Response

NameType
payload Node_Sync_Result
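The "duplicate several tokens, then one Sync" pattern above can be modeled with an in-memory stand-in for the sysmem server. All names here are illustrative, not the FIDL bindings; the point is only the ordering guarantee: a two-way Sync completing implies the server has processed the one-way Duplicate messages sent before it.

```python
# Hedged sketch: ordered channel delivery means that once Sync round-trips,
# the server must have seen all earlier Duplicate messages, so the tokens
# can safely be handed to other participants.

class SysmemServer:
    def __init__(self):
        self.known_tokens = set()
        self.inbox = []  # ordered, not-yet-processed channel messages

    def deliver(self, msg):
        self.inbox.append(msg)

    def sync(self):
        # Responding to Sync requires draining everything sent before it.
        for kind, token in self.inbox:
            if kind == "duplicate":
                self.known_tokens.add(token)
        self.inbox.clear()

    def bind_shared_collection(self, token):
        return token in self.known_tokens

server = SysmemServer()
server.deliver(("duplicate", "token-a"))  # one-way Duplicate
server.deliver(("duplicate", "token-b"))  # one-way Duplicate
server.sync()                             # single round trip for both

print(server.bind_shared_collection("token-a"))  # True
print(server.bind_shared_collection("token-b"))  # True
```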

WaitForAllBuffersAllocated

Wait until all buffers are allocated.

This FIDL call completes when buffers have been allocated, or completes with some failure detail if allocation has been attempted but failed.

The following must occur before buffers will be allocated:

  • All fuchsia.sysmem2/BufferCollectionToken(s) of the buffer collection must be turned in via fuchsia.sysmem2/Allocator.BindSharedCollection or fuchsia.sysmem2/Node.Release.
  • All fuchsia.sysmem2/BufferCollection(s) of the buffer collection must have had fuchsia.sysmem2/BufferCollection.SetConstraints sent on them, or had Release sent on them.

  • result buffer_collection_info The VMO handles and other related info.
  • error [fuchsia.sysmem2/Error.NO_MEMORY] The request is valid but cannot be fulfilled due to resource exhaustion.
  • error [fuchsia.sysmem2/Error.PROTOCOL_DEVIATION] The request is malformed.
  • error [fuchsia.sysmem2/Error.CONSTRAINTS_INTERSECTION_EMPTY] The request is valid but cannot be satisfied, perhaps due to hardware limitations. This can happen if participants have incompatible constraints (empty intersection, roughly speaking). See the log for more info. In cases where a participant could potentially be treated as optional, see BufferCollectionTokenGroup. When using fuchsia.sysmem2/BufferCollection.AttachToken, this will be the error code if there aren't enough buffers in the pre-existing collection to satisfy the constraints set on the attached token and any sub-tree of tokens derived from the attached token.

Request

<EMPTY>

Response

NameType
payload BufferCollection_WaitForAllBuffersAllocated_Result

BufferCollectionToken

Defined in fuchsia.sysmem2/collection.fidl

A fuchsia.sysmem2/BufferCollectionToken is not a buffer collection, but rather is a way to identify a specific potential shared buffer collection, and a way to distribute that potential shared buffer collection to additional participants prior to the buffer collection allocating any buffers.

Epitaphs are not used in this protocol.

We use a channel for the BufferCollectionToken instead of a single eventpair because this way we can detect error conditions like a participant failing mid-create.

Added: 19

CreateBufferCollectionTokenGroup

Create a logical OR among a set of tokens, called a fuchsia.sysmem2/BufferCollectionTokenGroup.

Most sysmem clients and many participants don't need to care about this message or about BufferCollectionTokenGroup(s). However, in some cases a participant wants to attempt to include one set of delegate participants, but if constraints don't combine successfully that way, fall back to a different (possibly overlapping) set of delegate participants, and/or fall back to a less demanding strategy (in terms of how strict the fuchsia.sysmem2/BufferCollectionConstraints are, across all involved delegate participants). In such cases, a BufferCollectionTokenGroup is useful.

A BufferCollectionTokenGroup is used to create a 1 of N OR among N child fuchsia.sysmem2/BufferCollectionToken(s). The child tokens which are not selected during aggregation will fail (close), which a potential participant should notice when their BufferCollection channel client endpoint sees PEER_CLOSED, allowing the participant to clean up the speculative usage that didn't end up happening (this is similar to a normal BufferCollection server end closing on failure to allocate a logical buffer collection or later async failure of a buffer collection).

See comments on protocol BufferCollectionTokenGroup.

Any rights_attenuation_mask or AttachToken/SetDispensable to be applied to the whole group can be achieved with a BufferCollectionToken for this purpose as a direct parent of the BufferCollectionTokenGroup.

All table fields are currently required.

  • request group_request The server end of a BufferCollectionTokenGroup channel to be served by sysmem.

Request

NameType
payload BufferCollectionTokenCreateBufferCollectionTokenGroupRequest

Duplicate

Create an additional fuchsia.sysmem2/BufferCollectionToken from this one, referring to the same buffer collection.

The created token is a child of this token in the fuchsia.sysmem2/Node hierarchy.

This method can be used to add a participant, by transferring the newly created token to another participant.

This one-way message can be used instead of the two-way fuchsia.sysmem2/BufferCollectionToken.DuplicateSync FIDL call in performance sensitive cases where it would be undesirable to wait for sysmem to respond to fuchsia.sysmem2/BufferCollectionToken.DuplicateSync or when the client code isn't structured to make it easy to duplicate all the needed tokens at once.

After sending one or more Duplicate messages, and before sending the newly created child tokens to other participants (or to other fuchsia.sysmem2/Allocator channels), the client must send a fuchsia.sysmem2/Node.Sync and wait for the Sync response. The Sync call can be made on the token, or on the BufferCollection obtained by passing this token to BindSharedCollection. Either will ensure that the server knows about the tokens created via Duplicate before the other participant sends the token to the server via separate Allocator channel.

All tokens must be turned in via fuchsia.sysmem2/Allocator.BindSharedCollection or fuchsia.sysmem2/Node.Release for a BufferCollection to successfully allocate buffers.

All table fields are currently required.

  • request rights_attenuation_mask The rights bits that are zero in this mask will be absent in the buffer VMO rights obtainable via the client end of token_request. This allows an initiator or intermediary participant to attenuate the rights available to a delegate participant. This does not allow a participant to gain rights that the participant doesn't already have. The value ZX_RIGHT_SAME_RIGHTS can be used to specify that no attenuation should be applied.
    • These values for rights_attenuation_mask result in no attenuation:
      • ZX_RIGHT_SAME_RIGHTS (preferred)
      • 0xFFFFFFFF (this is reasonable when an attenuation mask is computed)
      • 0 (deprecated - do not use 0 - an ERROR will go to the log)
  • request token_request is the server end of a BufferCollectionToken channel. The client end of this channel acts as another participant in the shared buffer collection.

Request

NameType
payload BufferCollectionTokenDuplicateRequest
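The attenuation arithmetic described for rights_attenuation_mask is simple bit masking: the child's obtainable rights are the parent's rights AND the mask, so attenuation can remove rights but never add them. The sketch below illustrates that; the specific right bit positions are placeholders, not the real Zircon assignments.

```python
# Illustration of rights attenuation. ZX_RIGHT_SAME_RIGHTS and 0xFFFFFFFF
# act as "no attenuation" values per the list above; bit values here are
# placeholders for illustration only.

ZX_RIGHT_SAME_RIGHTS = 1 << 31  # sentinel meaning "no attenuation"
ZX_RIGHT_READ = 1 << 0          # placeholder bit
ZX_RIGHT_WRITE = 1 << 1         # placeholder bit

def attenuate(parent_rights, rights_attenuation_mask):
    if rights_attenuation_mask in (ZX_RIGHT_SAME_RIGHTS, 0xFFFFFFFF):
        return parent_rights  # no attenuation applied
    # Bits that are zero in the mask are absent in the child's rights;
    # a child can never gain a right the parent lacks.
    return parent_rights & rights_attenuation_mask

rw = ZX_RIGHT_READ | ZX_RIGHT_WRITE
print(attenuate(rw, ZX_RIGHT_SAME_RIGHTS) == rw)      # True: unchanged
print(attenuate(rw, ZX_RIGHT_READ) == ZX_RIGHT_READ)  # True: write removed
print(attenuate(ZX_RIGHT_READ, rw) == ZX_RIGHT_READ)  # True: can't gain write
```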

DuplicateSync

Create additional fuchsia.sysmem2/BufferCollectionToken(s) from this one, referring to the same buffer collection.

The created tokens are children of this token in the fuchsia.sysmem2/Node hierarchy.

This method can be used to add more participants, by transferring the newly created tokens to additional participants.

A new token will be returned for each entry in the rights_attenuation_masks array.

If the called token may not actually be a valid token due to a potentially hostile/untrusted provider of the token, consider using fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken first instead of potentially getting stuck indefinitely if fuchsia.sysmem2/BufferCollectionToken.DuplicateSync never responds due to the calling token not being a real token.

In contrast to fuchsia.sysmem2/BufferCollectionToken.Duplicate, no separate fuchsia.sysmem2/Node.Sync is needed after calling this method, because the sync step is included in this call, at the cost of a round trip during this call.

All tokens must be turned in to sysmem via fuchsia.sysmem2/Allocator.BindSharedCollection or fuchsia.sysmem2/Node.Release for a BufferCollection to successfully allocate buffers (or to logically allocate buffers in the case of subtrees involving fuchsia.sysmem2/BufferCollectionToken.AttachToken).

All table fields are currently required.

  • request rights_attenuation_masks In each entry of rights_attenuation_masks, rights bits that are zero will be absent in the buffer VMO rights obtainable via the corresponding returned token. This allows an initiator or intermediary participant to attenuate the rights available to a participant. This does not allow a participant to gain rights that the participant doesn't already have. The value ZX_RIGHT_SAME_RIGHTS can be used to specify that no attenuation should be applied.
  • response tokens The client ends of each newly created token.

Request

NameType
payload BufferCollectionTokenDuplicateSyncRequest

Response

NameType
payload BufferCollectionToken_DuplicateSync_Result

GetBufferCollectionId

Get the buffer collection ID. This ID is also available from fuchsia.sysmem2/Allocator.GetVmoInfo (along with the buffer_index within the collection).

This call is mainly useful in situations where we can't convey a fuchsia.sysmem2/BufferCollectionToken or fuchsia.sysmem2/BufferCollection directly, but can only convey a VMO handle, which can be joined back up with a BufferCollection client end that was created via a different path. Prefer to convey a BufferCollectionToken or BufferCollection directly when feasible.

Trusting a buffer_collection_id value from a source other than sysmem is analogous to trusting a koid value from a source other than zircon. Both should be avoided unless really necessary, and both require caution. In some situations it may be reasonable to refer to a pre-established BufferCollection by buffer_collection_id via a protocol for efficiency reasons, but an incoming value purporting to be a buffer_collection_id is not sufficient alone to justify granting the sender of the buffer_collection_id any capability. The sender must first prove to a receiver that the sender has/had a VMO or has/had a BufferCollectionToken to the same collection by sending a handle that sysmem confirms is a valid sysmem handle and which sysmem maps to the buffer_collection_id value. The receiver should take care to avoid assuming that a sender had a BufferCollectionToken in cases where the sender has only proven that the sender had a VMO.

  • response buffer_collection_id This ID is unique per buffer collection per boot. Each buffer is uniquely identified by the buffer_collection_id and buffer_index together.

Request

<EMPTY>

Response

NameType
payload Node_GetBufferCollectionId_Result

GetNodeRef

This gets a handle that can be used as a parameter to fuchsia.sysmem2/Node.IsAlternateFor called on any fuchsia.sysmem2/Node. This handle is only for use as proof that the client obtained this handle from this Node.

Because this is a get not a set, no fuchsia.sysmem2/Node.Sync is needed between the GetNodeRef and the call to IsAlternateFor, despite the two calls typically being on different channels.

See also fuchsia.sysmem2/Node.IsAlternateFor.

All table fields are currently required.

  • response node_ref This handle can be sent via IsAlternateFor on a different Node channel, to prove that the client obtained the handle from this Node.

Request

<EMPTY>

Response

NameType
payload Node_GetNodeRef_Result

IsAlternateFor

Check whether the calling fuchsia.sysmem2/Node is in a subtree rooted at a different child token of a common parent fuchsia.sysmem2/BufferCollectionTokenGroup, in relation to the passed-in node_ref.

This call is for assisting with admission control de-duplication, and with debugging.

The node_ref must be obtained using fuchsia.sysmem2/Node.GetNodeRef.

The node_ref can be a duplicated handle; it's not necessary to call GetNodeRef for every call to fuchsia.sysmem2/Node.IsAlternateFor.

If a calling token may not actually be a valid token at all due to a potentially hostile/untrusted provider of the token, call fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken first instead of potentially getting stuck indefinitely if IsAlternateFor never responds due to a calling token not being a real token (not really talking to sysmem). Another option is to call fuchsia.sysmem2/Allocator.BindSharedCollection with this token first which also validates the token along with converting it to a fuchsia.sysmem2/BufferCollection, then call IsAlternateFor.

All table fields are currently required.

  • response is_alternate
    • true: The first parent node in common between the calling node and the node_ref Node is a BufferCollectionTokenGroup. This means that the calling Node and the node_ref Node will not have both their constraints apply - rather sysmem will choose one or the other of the constraints - never both. This is because only one child of a BufferCollectionTokenGroup is selected during logical allocation, with only that one child's subtree contributing to constraints aggregation.
    • false: The first parent node in common between the calling Node and the node_ref Node is not a BufferCollectionTokenGroup. Currently, this means the first parent node in common is a BufferCollectionToken or BufferCollection (regardless of whether Release has been sent). This means that the calling Node and the node_ref Node may have both their constraints apply during constraints aggregation of the logical allocation, if both Node(s) are selected by any parent BufferCollectionTokenGroup(s) involved. In this case, there is no BufferCollectionTokenGroup that will directly prevent the two Node(s) from both being selected and their constraints both aggregated, but even when false, one or both Node(s) may still be eliminated from consideration if one or both Node(s) has a direct or indirect parent BufferCollectionTokenGroup which selects a child subtree other than the subtree containing the calling Node or node_ref Node.
  • error [fuchsia.sysmem2/Error.NOT_FOUND] The node_ref wasn't associated with the same buffer collection as the calling Node. Another reason for this error is if the node_ref is a zx.Handle:EVENT handle with sufficient rights, but isn't actually a real node_ref obtained from GetNodeRef.
  • error [fuchsia.sysmem2/Error.PROTOCOL_DEVIATION] The caller passed a node_ref that isn't a zx.Handle:EVENT handle, or doesn't have the needed rights expected on a real node_ref.
  • No other failing status codes are returned by this call. However, sysmem may add additional codes in the future, so the client should have sensible default handling for any failing status code.

Request

NameType
payload NodeIsAlternateForRequest

Response

NameType
payload Node_IsAlternateFor_Result
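The is_alternate rule above reduces to: find the first common ancestor of the two Nodes and check whether it is a BufferCollectionTokenGroup. The sketch below models that rule (an assumption-level model, not the server's logic), including the NOT_FOUND case for Nodes in different collections.

```python
# Hedged model of IsAlternateFor: walk up from both Nodes, take the first
# common ancestor, and report True iff it is a token group.

class Node:
    def __init__(self, kind, parent=None):
        self.kind = kind    # "token", "collection", or "group"
        self.parent = parent

def ancestors(node):
    chain = []
    while node is not None:
        chain.append(node)
        node = node.parent
    return chain

def is_alternate_for(a, b):
    a_ids = {id(n) for n in ancestors(a)}
    for n in ancestors(b):
        if id(n) in a_ids:
            return n.kind == "group"  # first common ancestor decides
    # Different buffer collections: models the NOT_FOUND error.
    raise LookupError("NOT_FOUND")

root = Node("token")
group = Node("group", parent=root)
left = Node("token", parent=group)
right = Node("token", parent=group)
sibling = Node("token", parent=root)

print(is_alternate_for(left, right))    # True: common parent is the group
print(is_alternate_for(left, sibling))  # False: common parent is a token
```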

Release

On a fuchsia.sysmem2/BufferCollectionToken channel:

Normally a participant will convert a BufferCollectionToken into a fuchsia.sysmem2/BufferCollection, but a participant can instead send Release via the token (and then close the channel immediately or shortly later in response to server closing the server end), which avoids causing buffer collection failure. Without a prior Release, closing the BufferCollectionToken client end will cause buffer collection failure.

On a fuchsia.sysmem2/BufferCollection channel:

By default the server handles unexpected closure of a fuchsia.sysmem2/BufferCollection client end (without Release first) by failing the buffer collection. Partly this is to expedite closing VMO handles to reclaim memory when any participant fails. If a participant would like to cleanly close a BufferCollection without causing buffer collection failure, the participant can send Release before closing the BufferCollection client end. The Release can occur before or after SetConstraints. If before SetConstraints, the buffer collection won't require constraints from this node in order to allocate. If after SetConstraints, the constraints are retained and aggregated, despite the lack of BufferCollection connection at the time of constraints aggregation.

On a fuchsia.sysmem2/BufferCollectionTokenGroup channel:

By default, unexpected closure of a BufferCollectionTokenGroup client end (without Release first) will trigger failure of the buffer collection. To close a BufferCollectionTokenGroup channel without failing the buffer collection, ensure that AllChildrenPresent() has been sent, and send Release before closing the BufferCollectionTokenGroup client end.

If Release occurs before fuchsia.sysmem2/BufferCollectionTokenGroup.AllChildrenPresent, the buffer collection will fail (triggered by reception of Release without prior AllChildrenPresent). This is intentionally not analogous to how fuchsia.sysmem2/BufferCollection.Release without fuchsia.sysmem2/BufferCollection.SetConstraints first doesn't cause buffer collection failure. For a BufferCollectionTokenGroup, clean close requires AllChildrenPresent (if not already sent), then Release, then closing the client end.

If Release occurs after AllChildrenPresent, the children and all their constraints remain intact (just as they would if the BufferCollectionTokenGroup channel had remained open), and the client end close doesn't trigger buffer collection failure.

On all fuchsia.sysmem2/Node channels (any of the above):

For brevity, the per-channel-protocol paragraphs above ignore the separate failure domain created by fuchsia.sysmem2/BufferCollectionToken.SetDispensable or fuchsia.sysmem2/BufferCollection.AttachToken. When a client end unexpectedly closes (without Release first) and that client end is under a failure domain, instead of failing the whole buffer collection, the failure domain is failed, but the buffer collection itself is isolated from failure of the failure domain. Such failure domains can be nested, in which case only the inner-most failure domain in which the Node resides fails.

Request

<EMPTY>

SetDebugClientInfo

Set information about the current client that can be used by sysmem to help diagnose leaking memory and allocation stalls waiting for a participant to send fuchsia.sysmem2/BufferCollection.SetConstraints.

This sets the debug client info on this fuchsia.sysmem2/Node and all Node(s) derived from this Node, unless overridden by fuchsia.sysmem2/Allocator.SetDebugClientInfo or a later fuchsia.sysmem2/Node.SetDebugClientInfo.

Sending fuchsia.sysmem2/Allocator.SetDebugClientInfo once per Allocator is the most efficient way to ensure that all fuchsia.sysmem2/Node(s) will have at least some debug client info set, and is also more efficient than separately sending the same debug client info via fuchsia.sysmem2/Node.SetDebugClientInfo for each created fuchsia.sysmem2/Node.

Also used when verbose logging is enabled (see SetVerboseLogging) to indicate which client is closing their channel first, leading to subtree failure (which can be normal if the purpose of the subtree is over, but if happening earlier than expected, the client-channel-specific name can help diagnose where the failure is first coming from, from sysmem's point of view).

All table fields are currently required.

  • request name This can be an arbitrary string, but the current process name (see fsl::GetCurrentProcessName) is a good default.
  • request id This can be an arbitrary id, but the current process ID (see fsl::GetCurrentProcessKoid) is a good default.

Request

NameType
payload NodeSetDebugClientInfoRequest

SetDebugTimeoutLogDeadline

Sysmem logs a warning if sysmem hasn't seen fuchsia.sysmem2/BufferCollection.SetConstraints from all clients within 5 seconds after creation of a new collection.

Clients can call this method to change when the log is printed. If multiple clients set the deadline, it's unspecified which deadline will take effect.

In most cases the default works well.

All table fields are currently required.

  • request deadline The time at which sysmem will start trying to log the warning, unless all constraints are with sysmem by then.

Request

NameType
payload NodeSetDebugTimeoutLogDeadlineRequest

SetDispensable

Set this fuchsia.sysmem2/BufferCollectionToken to dispensable.

When the BufferCollectionToken is converted to a fuchsia.sysmem2/BufferCollection, the dispensable status applies to the BufferCollection also.

Normally, if a client closes a fuchsia.sysmem2/BufferCollection client end without having sent fuchsia.sysmem2/BufferCollection.Release first, the BufferCollection fuchsia.sysmem2/Node will fail, which also propagates failure to the parent fuchsia.sysmem2/Node and so on up to the root Node, which fails the whole buffer collection. In contrast, a dispensable Node can fail after buffers are allocated without causing failure of its parent in the fuchsia.sysmem2/Node hierarchy.

The dispensable Node participates in constraints aggregation along with its parent before buffer allocation. If the dispensable Node fails before buffers are allocated, the failure propagates to the dispensable Node's parent.

After buffers are allocated, failure of the dispensable Node (or any child of the dispensable Node) does not propagate to the dispensable Node's parent. Failure does propagate from a normal child of a dispensable Node to the dispensable Node. Failure of a child is blocked from reaching its parent if the child is attached using fuchsia.sysmem2/BufferCollection.AttachToken, or if the child is dispensable and the failure occurred after allocation.

A dispensable Node can be used in cases where a participant needs to provide constraints, but after buffers are allocated, the participant can fail without causing buffer collection failure from the parent Node's point of view.

In contrast, BufferCollection.AttachToken can be used to create a BufferCollectionToken which does not participate in constraints aggregation with its parent Node, and whose failure at any time does not propagate to its parent Node, and whose potential delay providing constraints does not prevent the parent Node from completing its buffer allocation.

An initiator (creator of the root Node using fuchsia.sysmem2/Allocator.AllocateSharedCollection) may in some scenarios choose to initially use a dispensable Node for a first instance of a participant, and then later if the first instance of that participant fails, a new second instance of that participant may be given a BufferCollectionToken created with AttachToken.

Normally a client will SetDispensable on a BufferCollectionToken shortly before sending the dispensable BufferCollectionToken to a delegate participant. Because SetDispensable prevents propagation of child Node failure to parent Node(s), a client that was relying on noticing child failure via failure of the parent Node it retains may instead need to notice failure via other means. If other means aren't available/convenient, the client can instead retain the dispensable Node and create a child Node under it to send to the delegate participant. By retaining the dispensable Node, the client can notice failure of the subtree rooted at that Node via the Node's ZX_CHANNEL_PEER_CLOSED signal, and take whatever action is appropriate (e.g. starting a new instance of the delegate participant and handing it a BufferCollectionToken created using fuchsia.sysmem2/BufferCollection.AttachToken, or propagating the failure and cleaning up in a client-specific way).

While it is possible (and potentially useful) to SetDispensable on a direct child of a BufferCollectionTokenGroup Node, it isn't possible to later replace a failed dispensable Node that was a direct child of a BufferCollectionTokenGroup with a new token using AttachToken (since there's no AttachToken on a group). Instead, to enable AttachToken replacement in this case, create an additional non-dispensable token that's a direct child of the group and make the existing dispensable token a child of the additional token. This way, the additional token that is a direct child of the group has BufferCollection.AttachToken which can be used to replace the failed dispensable token.

SetDispensable on an already-dispensable token is idempotent.

Request

<EMPTY>

SetName

Set a name for VMOs in this buffer collection.

If the name doesn't fit in ZX_MAX_NAME_LEN, the name of the VMO itself will be truncated to fit. The name of the VMO will be suffixed with the buffer index within the collection (if the suffix fits within ZX_MAX_NAME_LEN). The name specified here (without truncation) will be listed in the inspect data.

The name only affects VMOs allocated after the name is set; this call does not rename existing VMOs. If multiple clients set different names then the larger priority value will win. Setting a new name with the same priority as a prior name doesn't change the name.

All table fields are currently required.

  • request priority The name is only set if this is the first SetName or if priority is greater than any previous priority value in prior SetName calls across all Node(s) of this buffer collection.
  • request name The name for VMOs created under this buffer collection.
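The naming rules above can be sketched with a small illustrative model (this is not sysmem code; the `:` suffix separator is an assumption, while ZX_MAX_NAME_LEN is Zircon's actual limit of 32 bytes including the NUL terminator):

```python
# Illustrative model of the VMO naming rules described above; not real sysmem code.
ZX_MAX_NAME_LEN = 32  # Zircon's limit, including the terminating NUL


def vmo_name(collection_name: str, buffer_index: int) -> str:
    """Truncate the collection name to fit, then append the buffer index if it fits."""
    usable = ZX_MAX_NAME_LEN - 1  # leave room for the NUL terminator
    name = collection_name[:usable]
    # The ':' separator is an assumption for illustration; the docs only say
    # the name "will be suffixed with the buffer index".
    suffixed = f"{name}:{buffer_index}"
    return suffixed if len(suffixed) <= usable else name


def set_name(state: dict, priority: int, name: str) -> None:
    """Highest priority wins; an equal-priority later call doesn't change the name."""
    if state.get("priority") is None or priority > state["priority"]:
        state["priority"] = priority
        state["name"] = name
```

For example, a long collection name is truncated and loses its index suffix, while a short one keeps it; and a second `set_name` at the same priority as the current name is a no-op.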

Request

NameType
payload NodeSetNameRequest

SetVerboseLogging

This enables verbose logging for the buffer collection.

Verbose logging includes constraints set via fuchsia.sysmem2/BufferCollection.SetConstraints from each client along with info set via fuchsia.sysmem2/Node.SetDebugClientInfo (or fuchsia.sysmem2/Allocator.SetDebugClientInfo) and the structure of the tree of Node(s).

Normally sysmem prints only a single line complaint when aggregation fails, with just the specific detailed reason that aggregation failed, with little surrounding context. While this is often enough to diagnose a problem if only a small change was made and everything was working before the small change, it's often not particularly helpful for getting a new buffer collection to work for the first time. Especially with more complex trees of nodes, involving things like fuchsia.sysmem2/BufferCollection.AttachToken, fuchsia.sysmem2/BufferCollectionToken.SetDispensable, fuchsia.sysmem2/BufferCollectionTokenGroup nodes, and associated subtrees of nodes, verbose logging may help in diagnosing what the tree looks like and why it's failing a logical allocation, or why a tree or subtree is failing sooner than expected.

The intent of the extra logging is to be acceptable from a performance point of view, under the assumption that verbose logging is only enabled on a low number of buffer collections. If we're not tracking down a bug, we shouldn't send this message.

Request

<EMPTY>

SetWeak

Sets the current fuchsia.sysmem2/Node and all child Node(s) created after this message to weak, which means that a client's Node client end (or a child created after this message) is not alone sufficient to keep allocated VMOs alive.

All VMOs obtained from weak Node(s) are weak sysmem VMOs. See also close_weak_asap.

This message is only permitted before the Node becomes ready for allocation (else the server closes the channel with ZX_ERR_BAD_STATE):

  • BufferCollectionToken: any time
  • BufferCollection: before SetConstraints
  • BufferCollectionTokenGroup: before AllChildrenPresent

Currently, no conversion from strong Node to weak Node after ready for allocation is provided, but a client can simulate that by creating an additional Node before allocation and setting that additional Node to weak, and then potentially at some point later sending Release and closing the client end of the client's strong Node, but keeping the client's weak Node.

Zero strong Node(s) and zero strong VMO handles will result in buffer collection failure (all Node client end(s) will see ZX_CHANNEL_PEER_CLOSED and all close_weak_asap client_end(s) will see ZX_EVENTPAIR_PEER_CLOSED), but sysmem (intentionally) won't notice this situation until all Node(s) are ready for allocation. For initial allocation to succeed, at least one strong Node is required to exist at allocation time, but once a client has received VMO handles, that client can send BufferCollection.Release and close the client end without causing this type of failure.

This implies fuchsia.sysmem2/Node.SetWeakOk as well, but does not imply SetWeakOk with for_child_nodes_also true, which can be sent separately as appropriate.

Request

<EMPTY>

SetWeakOk

This indicates to sysmem that the client is prepared to pay attention to close_weak_asap.

If sent, this message must be before fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated.

All participants using a weak fuchsia.sysmem2/BufferCollection must send this message before WaitForAllBuffersAllocated, or a parent Node must have sent fuchsia.sysmem2/Node.SetWeakOk with for_child_nodes_also true, else the WaitForAllBuffersAllocated will trigger buffer collection failure.

This message is necessary because weak sysmem VMOs have not always been a thing, so older clients are not aware of the need to pay attention to close_weak_asap ZX_EVENTPAIR_PEER_CLOSED and close all remaining sysmem weak VMO handles asap. By having this message and requiring participants to indicate their acceptance of this aspect of the overall protocol, we avoid situations where an older client is delivered a weak VMO without any way for sysmem to get that VMO to close quickly later (and on a per-buffer basis).

A participant that doesn't handle close_weak_asap and also doesn't retrieve any VMO handles via WaitForAllBuffersAllocated doesn't need to send SetWeakOk (and doesn't need to have a parent Node send SetWeakOk with for_child_nodes_also true either). However, if that same participant has a child/delegate which does retrieve VMOs, that child/delegate will need to send SetWeakOk before WaitForAllBuffersAllocated.

  • request for_child_nodes_also If present and true, this means direct child nodes of this node created after this message plus all descendants of those nodes will behave as if SetWeakOk was sent on those nodes. Any child node of this node that was created before this message is not included. This setting is "sticky" in the sense that a subsequent SetWeakOk without this bool set to true does not reset the server-side bool. If this creates a problem for a participant, a workaround is to SetWeakOk with for_child_nodes_also true on child tokens instead, as appropriate. A participant should only set for_child_nodes_also true if the participant can really promise to obey close_weak_asap both for its own weak VMO handles, and for all weak VMO handles held by participants holding the corresponding child Node(s). When for_child_nodes_also is set, descendent Node(s) which are using sysmem(1) can be weak, despite the clients of those sysmem1 Node(s) not having any direct way to SetWeakOk or any direct way to find out about close_weak_asap. This only applies to descendents of this Node which are using sysmem(1), not to this Node when converted directly from a sysmem2 token to a sysmem(1) token, which will fail allocation unless an ancestor of this Node specified for_child_nodes_also true.
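The for_child_nodes_also semantics (covers only children created after the message, covers all their descendants, and is sticky) can be modeled with a small sketch; this is illustrative only, not sysmem code:

```python
# Illustrative model (not sysmem code) of for_child_nodes_also propagation:
# it covers direct children created after the message plus all of their
# descendants, and a later SetWeakOk without the flag doesn't clear it.

class Node:
    def __init__(self, covered: bool = False):
        self.sent_weak_ok = False
        self.cover_future_children = False  # sticky server-side bool
        self.covered = covered              # behaves as if SetWeakOk was sent here

    def set_weak_ok(self, for_child_nodes_also: bool = False) -> None:
        self.sent_weak_ok = True
        # Sticky: a later call with False doesn't reset the server-side bool.
        self.cover_future_children = (
            self.cover_future_children or for_child_nodes_also
        )

    def create_child(self) -> "Node":
        # A child is covered if created after for_child_nodes_also took effect
        # here, or if this node is itself covered (all descendants of covered
        # nodes are covered).
        return Node(covered=self.cover_future_children or self.covered)

    def weak_ok(self) -> bool:
        return self.sent_weak_ok or self.covered
```

A child created before the SetWeakOk message is not covered; children (and grandchildren) created afterwards are, even across a later SetWeakOk with the flag false.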

Request

NameType
payload NodeSetWeakOkRequest

Sync

Ensure that previous messages have been received server side. This is particularly useful after previous messages that created new tokens, because a token must be known to the sysmem server before sending the token to another participant.

Calling fuchsia.sysmem2/BufferCollectionToken.Sync on a token that isn't/wasn't a valid token risks the Sync stalling forever. See fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken for one way to mitigate the possibility of a hostile/fake fuchsia.sysmem2/BufferCollectionToken at the cost of one round trip. Another way is to pass the token to fuchsia.sysmem2/Allocator.BindSharedCollection, which also validates the token as part of exchanging it for a fuchsia.sysmem2/BufferCollection channel, and fuchsia.sysmem2/BufferCollection.Sync can then be used without risk of stalling.

After creating one or more fuchsia.sysmem2/BufferCollectionToken(s) and then starting and completing a Sync, it's then safe to send the BufferCollectionToken client ends to other participants knowing the server will recognize the tokens when they're sent by the other participants to sysmem in a fuchsia.sysmem2/Allocator.BindSharedCollection message. This is an efficient way to create tokens while avoiding unnecessary round trips.

Other options include waiting for each fuchsia.sysmem2/BufferCollectionToken.Duplicate to complete individually (using separate call to Sync after each), or calling fuchsia.sysmem2/BufferCollection.Sync after a token has been converted to a BufferCollection via fuchsia.sysmem2/Allocator.BindSharedCollection, or using fuchsia.sysmem2/BufferCollectionToken.DuplicateSync which includes the sync step and can create multiple tokens at once.

Request

<EMPTY>

Response

NameType
payload Node_Sync_Result

BufferCollectionTokenGroup

Defined in fuchsia.sysmem2/collection.fidl

The sysmem implementation is consistent with a logical / conceptual model of allocation / logical allocation as follows:

As usual, a logical allocation considers either the root and all nodes with connectivity to the root that don't transit a fuchsia.sysmem2/Node created with fuchsia.sysmem2/BufferCollection.AttachToken, or a subtree rooted at an AttachToken Node and all Node(s) with connectivity to that subtree that don't transit another AttachToken. This is called the logical allocation pruned subtree, or pruned subtree for short.

During constraints aggregation, each fuchsia.sysmem2/BufferCollectionTokenGroup will select a single child Node among its direct children. The rest of the children will appear to fail the logical allocation, while the selected child may succeed.

When more than one BufferCollectionTokenGroup exists in the overall logical allocation pruned subtree, the relative priority between two groups is equivalent to their ordering in a DFS pre-order iteration of the tree, with parents higher priority than children, and left children higher priority than right children.

When a particular child of a group is selected (whether provisionally during a constraints aggregation attempt, or as a final selection), the non-selection of other children of the group will "hide" any other groups under those non-selected children.

Within a logical allocation, aggregation is attempted first by provisionally selecting child 0 of the highest-priority group, and child 0 of the next highest-priority group that isn't hidden by the provisional selections so far, etc.

If that aggregation attempt fails, aggregation will be attempted with the ordinal 0 child of all the same groups except the lowest priority non-hidden group which will provisionally select its ordinal 1 child (and then child 2 and so on). If a new lowest-priority group is un-hidden as provisional selections are updated, that newly un-hidden lowest-priority group has all its children considered in order, before changing the provisional selection in the former lowest-priority group. In terms of result, this is equivalent to systematic enumeration of all possible combinations of choices in a counting-like order updating the lowest-priority group the most often and the highest-priority group the least often. Rather than actually attempting aggregation with all the combinations, we can skip over combinations which are redundant/equivalent due to hiding without any change to the result.

Attempted constraint aggregations of enumerated non-equivalent combinations of choices continue in this manner until either (a) all aggregation attempts fail in which case the overall logical allocation fails, or (b) until an attempted aggregation succeeds, in which case buffer allocation (if needed; if this is the pruned subtree rooted at the overall root Node) is attempted once. If buffer allocation based on the first successful constraints aggregation fails, the overall logical allocation fails (there is no buffer allocation retry / re-attempt). If buffer allocation succeeds (or is not needed due to being a pruned subtree that doesn't include the root), the logical allocation succeeds.
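Ignoring hiding (which only prunes redundant/equivalent combinations without changing the result), the enumeration order described above can be sketched as follows; this is an illustrative model, not the sysmem implementation:

```python
# Illustrative sketch (not the sysmem implementation) of the counting-like
# enumeration order: groups are listed from highest to lowest priority, and
# the lowest-priority group's provisional selection changes most often.
from itertools import product


def combinations_in_priority_order(child_counts):
    """child_counts[i] is the child count of the i-th highest-priority group.
    Yields tuples of selected child ordinals, lowest-priority varying fastest."""
    yield from product(*(range(n) for n in child_counts))


def first_successful(child_counts, aggregation_succeeds):
    """Return the first combination whose constraints aggregation succeeds,
    or None if all attempts fail. Mirroring the text above, buffer allocation
    would then be attempted once, for this first success only (no retry)."""
    for combo in combinations_in_priority_order(child_counts):
        if aggregation_succeeds(combo):
            return combo
    return None
```

For two groups with two children each, the attempted order is (0,0), (0,1), (1,0), (1,1): the lowest-priority group is updated most often and the highest-priority group least often, exactly the counting-like order described above.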

If this prioritization scheme cannot reasonably work for your usage of sysmem, please don't hesitate to contact sysmem folks to discuss potentially adding a way to achieve what you need.

Please avoid creating a large number of BufferCollectionTokenGroup(s) per logical allocation, especially with large number of children overall, and especially in cases where aggregation may reasonably be expected to often fail using ordinal 0 children and possibly with later children as well. Sysmem mitigates potentially high time complexity of evaluating too many child combinations/selections across too many groups by simply failing logical allocation beyond a certain (fairly high, but not huge) max number of considered group child combinations/selections. More advanced (and more complicated) mitigation is not anticipated to be practically necessary or worth the added complexity. Please contact sysmem folks if the max limit is getting hit or if you anticipate it getting hit, to discuss potential options.

Prefer to use multiple fuchsia.sysmem2/ImageFormatConstraints in a single fuchsia.sysmem2/BufferCollectionConstraints when feasible (when a participant just needs to express the ability to work with more than a single fuchsia.images2/PixelFormat, with sysmem choosing which PixelFormat to use among those supported by all participants).

Similar to fuchsia.sysmem2/BufferCollectionToken and fuchsia.sysmem2/BufferCollection, closure of the BufferCollectionTokenGroup channel without sending fuchsia.sysmem2/Node.Release first will cause buffer collection failure (or subtree failure if using fuchsia.sysmem2/BufferCollectionToken.SetDispensable or fuchsia.sysmem2/BufferCollection.AttachToken and the BufferCollectionTokenGroup is part of a subtree under such a node that doesn't propagate failure to its parent).

Epitaphs are not used in this protocol.

Added: 19

AllChildrenPresent

Indicate that no more children will be created.

After creating all children, the client should send fuchsia.sysmem2/BufferCollectionTokenGroup.AllChildrenPresent to inform sysmem that no more children will be created, so that sysmem can know when it's ok to start aggregating constraints.

Sending CreateChild after AllChildrenPresent is not permitted; this will fail the group's subtree and close the connection.

If fuchsia.sysmem2/Node.Release is to be sent, it should be sent after AllChildrenPresent, else failure of the group's subtree will be triggered. This is intentionally not analogous to how Release without prior fuchsia.sysmem2/BufferCollection.SetConstraints doesn't cause subtree failure.

Request

<EMPTY>

CreateChild

Create a child fuchsia.sysmem2/BufferCollectionToken. Only one child (including its children) will be selected during allocation (or logical allocation).

Before passing the client end of this token to fuchsia.sysmem2/Allocator.BindSharedCollection, completion of fuchsia.sysmem2/Node.Sync after fuchsia.sysmem2/BufferCollectionTokenGroup.CreateChild is required. Or the client can use fuchsia.sysmem2/BufferCollectionTokenGroup.CreateChildrenSync which essentially includes the Sync.

Sending CreateChild after AllChildrenPresent is not permitted; this will fail the group's subtree and close the connection.

After all children have been created, send AllChildrenPresent.

  • request token_request The server end of the new token channel.
  • request rights_attenuation_mask If ZX_RIGHT_SAME_RIGHTS, the created token allows the holder to get the same rights to buffers as the parent token (of the group) had. When the value isn't ZX_RIGHT_SAME_RIGHTS, the value is interpreted as a bitmask with 0 bits ensuring those rights are attenuated, so 0xFFFFFFFF is a synonym for ZX_RIGHT_SAME_RIGHTS. The value 0 is not allowed and intentionally causes subtree failure.
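The mask semantics can be modeled with a short sketch (illustrative only, not Zircon/sysmem code; ZX_RIGHT_SAME_RIGHTS is Zircon's sentinel value, 1 << 31):

```python
# Illustrative model (not Zircon/sysmem code) of rights_attenuation_mask.
ZX_RIGHT_SAME_RIGHTS = 1 << 31  # Zircon's sentinel "don't attenuate" value


def attenuate(parent_rights: int, mask: int) -> int:
    """Each 0 bit in the mask strips that right; 0xFFFFFFFF strips nothing."""
    if mask == 0:
        raise ValueError("a mask of 0 is not allowed; sysmem fails the subtree")
    if mask in (ZX_RIGHT_SAME_RIGHTS, 0xFFFFFFFF):
        return parent_rights
    return parent_rights & mask
```

So a mask of 0b0011 applied to parent rights 0b1011 clears the high bits and leaves 0b0011, while 0xFFFFFFFF and ZX_RIGHT_SAME_RIGHTS both pass the parent's rights through unchanged.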

Request

NameType
payload BufferCollectionTokenGroupCreateChildRequest

CreateChildrenSync

Create 1 or more child tokens at once, synchronously. In contrast to fuchsia.sysmem2/BufferCollectionTokenGroup.CreateChild, no fuchsia.sysmem2/Node.Sync is required before passing the client end of a returned token to fuchsia.sysmem2/Allocator.BindSharedCollection.

The lower-index child tokens are higher priority (attempted sooner) than higher-index child tokens.

As per all child tokens, successful aggregation will choose exactly one child among all created children (across all children created across potentially multiple calls to fuchsia.sysmem2/BufferCollectionTokenGroup.CreateChild and fuchsia.sysmem2/BufferCollectionTokenGroup.CreateChildrenSync).

The maximum permissible total number of children per group, and total number of nodes in an overall tree (from the root) are capped to limits which are not configurable via these protocols.

Sending CreateChildrenSync after AllChildrenPresent is not permitted; this will fail the group's subtree and close the connection.

After all children have been created, send AllChildrenPresent.

  • request rights_attenuation_masks The size of the rights_attenuation_masks determines the number of created child tokens. The value ZX_RIGHT_SAME_RIGHTS doesn't attenuate any rights. The value 0xFFFFFFFF is a synonym for ZX_RIGHT_SAME_RIGHTS. For any other value, each 0 bit in the mask attenuates that right.
  • response tokens The created child tokens.

Request

NameType
payload BufferCollectionTokenGroupCreateChildrenSyncRequest

Response

NameType
payload BufferCollectionTokenGroup_CreateChildrenSync_Result

GetBufferCollectionId

Get the buffer collection ID. This ID is also available from fuchsia.sysmem2/Allocator.GetVmoInfo (along with the buffer_index within the collection).

This call is mainly useful in situations where we can't convey a fuchsia.sysmem2/BufferCollectionToken or fuchsia.sysmem2/BufferCollection directly, but can only convey a VMO handle, which can be joined back up with a BufferCollection client end that was created via a different path. Prefer to convey a BufferCollectionToken or BufferCollection directly when feasible.

Trusting a buffer_collection_id value from a source other than sysmem is analogous to trusting a koid value from a source other than zircon. Both should be avoided unless really necessary, and both require caution. In some situations it may be reasonable to refer to a pre-established BufferCollection by buffer_collection_id via a protocol for efficiency reasons, but an incoming value purporting to be a buffer_collection_id is not sufficient alone to justify granting the sender of the buffer_collection_id any capability. The sender must first prove to a receiver that the sender has/had a VMO or has/had a BufferCollectionToken to the same collection by sending a handle that sysmem confirms is a valid sysmem handle and which sysmem maps to the buffer_collection_id value. The receiver should take care to avoid assuming that a sender had a BufferCollectionToken in cases where the sender has only proven that the sender had a VMO.

  • response buffer_collection_id This ID is unique per buffer collection per boot. Each buffer is uniquely identified by the buffer_collection_id and buffer_index together.

Request

<EMPTY>

Response

NameType
payload Node_GetBufferCollectionId_Result

GetNodeRef

This gets a handle that can be used as a parameter to fuchsia.sysmem2/Node.IsAlternateFor called on any fuchsia.sysmem2/Node. This handle is only for use as proof that the client obtained this handle from this Node.

Because this is a get, not a set, no fuchsia.sysmem2/Node.Sync is needed between the GetNodeRef and the call to IsAlternateFor, despite the two calls typically being on different channels.

See also fuchsia.sysmem2/Node.IsAlternateFor.

All table fields are currently required.

  • response node_ref This handle can be sent via IsAlternateFor on a different Node channel, to prove that the client obtained the handle from this Node.

Request

<EMPTY>

Response

NameType
payload Node_GetNodeRef_Result

IsAlternateFor

Check whether the calling fuchsia.sysmem2/Node is in a subtree rooted at a different child token of a common parent fuchsia.sysmem2/BufferCollectionTokenGroup, in relation to the passed-in node_ref.

This call is for assisting with admission control de-duplication, and with debugging.

The node_ref must be obtained using fuchsia.sysmem2/Node.GetNodeRef.

The node_ref can be a duplicated handle; it's not necessary to call GetNodeRef for every call to fuchsia.sysmem2/Node.IsAlternateFor.

If a calling token may not actually be a valid token at all due to a potentially hostile/untrusted provider of the token, call fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken first instead of potentially getting stuck indefinitely if IsAlternateFor never responds due to a calling token not being a real token (not really talking to sysmem). Another option is to call fuchsia.sysmem2/Allocator.BindSharedCollection with this token first which also validates the token along with converting it to a fuchsia.sysmem2/BufferCollection, then call IsAlternateFor.

All table fields are currently required.

  • response is_alternate
    • true: The first parent node in common between the calling node and the node_ref Node is a BufferCollectionTokenGroup. This means that the calling Node and the node_ref Node will not have both their constraints apply - rather sysmem will choose one or the other of the constraints - never both. This is because only one child of a BufferCollectionTokenGroup is selected during logical allocation, with only that one child's subtree contributing to constraints aggregation.
    • false: The first parent node in common between the calling Node and the node_ref Node is not a BufferCollectionTokenGroup. Currently, this means the first parent node in common is a BufferCollectionToken or BufferCollection (regardless of whether Release has been sent). This means that the calling Node and the node_ref Node may have both their constraints apply during constraints aggregation of the logical allocation, if both Node(s) are selected by any parent BufferCollectionTokenGroup(s) involved. In this case, there is no BufferCollectionTokenGroup that will directly prevent the two Node(s) from both being selected and their constraints both aggregated, but even when false, one or both Node(s) may still be eliminated from consideration if one or both Node(s) has a direct or indirect parent BufferCollectionTokenGroup which selects a child subtree other than the subtree containing the calling Node or node_ref Node.
  • error [fuchsia.sysmem2/Error.NOT_FOUND] The node_ref wasn't associated with the same buffer collection as the calling Node. Another reason for this error is if the node_ref is a zx.Handle:EVENT handle with sufficient rights, but isn't actually a real node_ref obtained from GetNodeRef.
  • error [fuchsia.sysmem2/Error.PROTOCOL_DEVIATION] The caller passed a node_ref that isn't a zx.Handle:EVENT handle, or doesn't have the needed rights expected on a real node_ref.
  • No other failing status codes are returned by this call. However, sysmem may add additional codes in future, so the client should have sensible default handling for any failing status code.
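The is_alternate check can be sketched as a walk to the first common ancestor; this is an illustrative model, not sysmem code:

```python
# Illustrative model (not sysmem code) of is_alternate: find the first parent
# node in common between two nodes and report whether it's a
# BufferCollectionTokenGroup.

class Node:
    def __init__(self, parent=None, is_group=False):
        self.parent = parent
        self.is_group = is_group


def is_alternate_for(a: Node, b: Node) -> bool:
    """True iff the first node in common on the two paths to the root is a group."""
    ancestors_of_a = set()
    n = a
    while n is not None:
        ancestors_of_a.add(id(n))
        n = n.parent
    n = b
    while n is not None:
        if id(n) in ancestors_of_a:
            return n.is_group
        n = n.parent
    # Models the NOT_FOUND error: different buffer collections.
    raise LookupError("no common ancestor (NOT_FOUND)")
```

Two nodes under different children of the same group report true (only one side's constraints can apply); two nodes whose first common parent is an ordinary token/collection report false; nodes from unrelated trees model the NOT_FOUND error.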

Request

NameType
payload NodeIsAlternateForRequest

Response

NameType
payload Node_IsAlternateFor_Result

Release

On a fuchsia.sysmem2/BufferCollectionToken channel:

Normally a participant will convert a BufferCollectionToken into a fuchsia.sysmem2/BufferCollection, but a participant can instead send Release via the token (and then close the channel immediately or shortly later in response to server closing the server end), which avoids causing buffer collection failure. Without a prior Release, closing the BufferCollectionToken client end will cause buffer collection failure.

On a fuchsia.sysmem2/BufferCollection channel:

By default the server handles unexpected closure of a fuchsia.sysmem2/BufferCollection client end (without Release first) by failing the buffer collection. Partly this is to expedite closing VMO handles to reclaim memory when any participant fails. If a participant would like to cleanly close a BufferCollection without causing buffer collection failure, the participant can send Release before closing the BufferCollection client end. The Release can occur before or after SetConstraints. If before SetConstraints, the buffer collection won't require constraints from this node in order to allocate. If after SetConstraints, the constraints are retained and aggregated, despite the lack of BufferCollection connection at the time of constraints aggregation.

On a fuchsia.sysmem2/BufferCollectionTokenGroup channel:

By default, unexpected closure of a BufferCollectionTokenGroup client end (without Release first) will trigger failure of the buffer collection. To close a BufferCollectionTokenGroup channel without failing the buffer collection, ensure that AllChildrenPresent() has been sent, and send Release before closing the BufferCollectionTokenGroup client end.

If Release occurs before fuchsia.sysmem2/BufferCollectionTokenGroup.AllChildrenPresent, the buffer collection will fail (triggered by reception of Release without prior AllChildrenPresent). This is intentionally not analogous to how fuchsia.sysmem2/BufferCollection.Release without fuchsia.sysmem2/BufferCollection.SetConstraints first doesn't cause buffer collection failure. For a BufferCollectionTokenGroup, clean close requires AllChildrenPresent (if not already sent), then Release, then close client end.

If Release occurs after AllChildrenPresent, the children and all their constraints remain intact (just as they would if the BufferCollectionTokenGroup channel had remained open), and the client end close doesn't trigger buffer collection failure.

On all fuchsia.sysmem2/Node channels (any of the above):

For brevity, the per-channel-protocol paragraphs above ignore the separate failure domain created by fuchsia.sysmem2/BufferCollectionToken.SetDispensable or fuchsia.sysmem2/BufferCollection.AttachToken. When a client end unexpectedly closes (without Release first) and that client end is under a failure domain, instead of failing the whole buffer collection, the failure domain is failed, but the buffer collection itself is isolated from failure of the failure domain. Such failure domains can be nested, in which case only the inner-most failure domain in which the Node resides fails.
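The nested-failure-domain rule can be sketched as a walk up to the nearest domain boundary; this is an illustrative model, not sysmem code:

```python
# Illustrative model (not sysmem code): an unexpected close (without Release
# first) fails only the innermost failure domain containing the node.
# SetDispensable/AttachToken start a new failure domain at that node.

class Node:
    def __init__(self, parent=None, starts_failure_domain=False):
        self.parent = parent
        self.starts_failure_domain = starts_failure_domain


def failure_domain_root(node: Node) -> Node:
    """Walk up to the nearest node that starts a failure domain, else the tree
    root: the subtree rooted there is what fails when `node` closes without
    Release."""
    n = node
    while not n.starts_failure_domain and n.parent is not None:
        n = n.parent
    return n
```

A node under nested AttachToken/SetDispensable boundaries fails only its innermost domain, while a plain node with no such boundary above it fails the whole buffer collection.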

Request

<EMPTY>

SetDebugClientInfo

Set information about the current client that can be used by sysmem to help diagnose leaking memory and allocation stalls waiting for a participant to send fuchsia.sysmem2/BufferCollection.SetConstraints.

This sets the debug client info on this fuchsia.sysmem2/Node and all Node(s) derived from this Node, unless overridden by fuchsia.sysmem2/Allocator.SetDebugClientInfo or a later fuchsia.sysmem2/Node.SetDebugClientInfo.

Sending fuchsia.sysmem2/Allocator.SetDebugClientInfo once per Allocator is the most efficient way to ensure that all fuchsia.sysmem2/Node(s) will have at least some debug client info set, and is also more efficient than separately sending the same debug client info via fuchsia.sysmem2/Node.SetDebugClientInfo for each created fuchsia.sysmem2/Node.

Also used when verbose logging is enabled (see SetVerboseLogging) to indicate which client is closing their channel first, leading to subtree failure (which can be normal if the purpose of the subtree is over, but if happening earlier than expected, the client-channel-specific name can help diagnose where the failure is first coming from, from sysmem's point of view).

All table fields are currently required.

  • request name This can be an arbitrary string, but the current process name (see fsl::GetCurrentProcessName) is a good default.
  • request id This can be an arbitrary id, but the current process ID (see fsl::GetCurrentProcessKoid) is a good default.

Request

NameType
payload NodeSetDebugClientInfoRequest

SetDebugTimeoutLogDeadline

Sysmem logs a warning if sysmem hasn't seen fuchsia.sysmem2/BufferCollection.SetConstraints from all clients within 5 seconds after creation of a new collection.

Clients can call this method to change when the log is printed. If multiple clients set the deadline, it's unspecified which deadline will take effect.

In most cases the default works well.

All table fields are currently required.

  • request deadline The time at which sysmem will start trying to log the warning, unless all constraints are with sysmem by then.

Request

NameType
payload NodeSetDebugTimeoutLogDeadlineRequest

SetName

Set a name for VMOs in this buffer collection.

If the name doesn't fit in ZX_MAX_NAME_LEN, the name of the VMO itself will be truncated to fit. The name of the VMO will be suffixed with the buffer index within the collection (if the suffix fits within ZX_MAX_NAME_LEN). The name specified here (without truncation) will be listed in the inspect data.

The name only affects VMOs allocated after the name is set; this call does not rename existing VMOs. If multiple clients set different names then the larger priority value will win. Setting a new name with the same priority as a prior name doesn't change the name.

All table fields are currently required.

  • request priority The name is only set if this is the first SetName or if priority is greater than any previous priority value in prior SetName calls across all Node(s) of this buffer collection.
  • request name The name for VMOs created under this buffer collection.

Request

NameType
payload NodeSetNameRequest

SetVerboseLogging

This enables verbose logging for the buffer collection.

Verbose logging includes constraints set via fuchsia.sysmem2/BufferCollection.SetConstraints from each client along with info set via fuchsia.sysmem2/Node.SetDebugClientInfo (or fuchsia.sysmem2/Allocator.SetDebugClientInfo) and the structure of the tree of Node(s).

Normally sysmem prints only a single line complaint when aggregation fails, with just the specific detailed reason that aggregation failed, with little surrounding context. While this is often enough to diagnose a problem if only a small change was made and everything was working before the small change, it's often not particularly helpful for getting a new buffer collection to work for the first time. Especially with more complex trees of nodes, involving things like fuchsia.sysmem2/BufferCollection.AttachToken, fuchsia.sysmem2/BufferCollectionToken.SetDispensable, fuchsia.sysmem2/BufferCollectionTokenGroup nodes, and associated subtrees of nodes, verbose logging may help in diagnosing what the tree looks like and why it's failing a logical allocation, or why a tree or subtree is failing sooner than expected.

The intent of the extra logging is to be acceptable from a performance point of view, under the assumption that verbose logging is only enabled on a low number of buffer collections. If we're not tracking down a bug, we shouldn't send this message.

Request

<EMPTY>

SetWeak

Sets the current fuchsia.sysmem2/Node and all child Node(s) created after this message to weak, which means that a client's Node client end (or a child created after this message) is not alone sufficient to keep allocated VMOs alive.

All VMOs obtained from weak Node(s) are weak sysmem VMOs. See also close_weak_asap.

This message is only permitted before the Node becomes ready for allocation (else the server closes the channel with ZX_ERR_BAD_STATE):

  • BufferCollectionToken: any time
  • BufferCollection: before SetConstraints
  • BufferCollectionTokenGroup: before AllChildrenPresent

Currently, no conversion from strong Node to weak Node after ready for allocation is provided, but a client can simulate that by creating an additional Node before allocation and setting that additional Node to weak, and then potentially at some point later sending Release and closing the client end of the client's strong Node, but keeping the client's weak Node.

Zero strong Node(s) and zero strong VMO handles will result in buffer collection failure (all Node client end(s) will see ZX_CHANNEL_PEER_CLOSED and all close_weak_asap client_end(s) will see ZX_EVENTPAIR_PEER_CLOSED), but sysmem (intentionally) won't notice this situation until all Node(s) are ready for allocation. For initial allocation to succeed, at least one strong Node is required to exist at allocation time, but once a client has received VMO handles, that client can send BufferCollection.Release and close the client end without causing this type of failure.

This implies fuchsia.sysmem2/Node.SetWeakOk as well, but does not imply SetWeakOk with for_child_nodes_also true, which can be sent separately as appropriate.

Request

<EMPTY>

SetWeakOk

This indicates to sysmem that the client is prepared to pay attention to close_weak_asap.

If sent, this message must be before fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated.

All participants using a weak fuchsia.sysmem2/BufferCollection must send this message before WaitForAllBuffersAllocated, or a parent Node must have sent fuchsia.sysmem2/Node.SetWeakOk with for_child_nodes_also true, else the WaitForAllBuffersAllocated will trigger buffer collection failure.

This message is necessary because weak sysmem VMOs have not always been a thing, so older clients are not aware of the need to pay attention to close_weak_asap ZX_EVENTPAIR_PEER_CLOSED and close all remaining sysmem weak VMO handles asap. By having this message and requiring participants to indicate their acceptance of this aspect of the overall protocol, we avoid situations where an older client is delivered a weak VMO without any way for sysmem to get that VMO to close quickly later (and on a per-buffer basis).

A participant that doesn't handle close_weak_asap and also doesn't retrieve any VMO handles via WaitForAllBuffersAllocated doesn't need to send SetWeakOk (and doesn't need to have a parent Node send SetWeakOk with for_child_nodes_also true either). However, if that same participant has a child/delegate which does retrieve VMOs, that child/delegate will need to send SetWeakOk before WaitForAllBuffersAllocated.

  • request for_child_nodes_also If present and true, this means direct child nodes of this node created after this message plus all descendants of those nodes will behave as if SetWeakOk was sent on those nodes. Any child node of this node that was created before this message is not included. This setting is "sticky" in the sense that a subsequent SetWeakOk without this bool set to true does not reset the server-side bool. If this creates a problem for a participant, a workaround is to SetWeakOk with for_child_nodes_also true on child tokens instead, as appropriate. A participant should only set for_child_nodes_also true if the participant can really promise to obey close_weak_asap both for its own weak VMO handles, and for all weak VMO handles held by participants holding the corresponding child Node(s). When for_child_nodes_also is set, descendant Node(s) which are using sysmem(1) can be weak, despite the clients of those sysmem(1) Node(s) not having any direct way to SetWeakOk or any direct way to find out about close_weak_asap. This only applies to descendants of this Node which are using sysmem(1), not to this Node when converted directly from a sysmem2 token to a sysmem(1) token, which will fail allocation unless an ancestor of this Node specified for_child_nodes_also true.

Request

NameType
payload NodeSetWeakOkRequest

Sync

Ensure that previous messages have been received server side. This is particularly useful after previous messages that created new tokens, because a token must be known to the sysmem server before sending the token to another participant.

Calling fuchsia.sysmem2/BufferCollectionToken.Sync on a token that isn't/wasn't a valid token risks the Sync stalling forever. See fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken for one way to mitigate the possibility of a hostile/fake fuchsia.sysmem2/BufferCollectionToken at the cost of one round trip. Another way is to pass the token to fuchsia.sysmem2/Allocator.BindSharedCollection, which also validates the token as part of exchanging it for a fuchsia.sysmem2/BufferCollection channel, and fuchsia.sysmem2/BufferCollection.Sync can then be used without risk of stalling.

After creating one or more fuchsia.sysmem2/BufferCollectionToken(s) and then starting and completing a Sync, it's then safe to send the BufferCollectionToken client ends to other participants knowing the server will recognize the tokens when they're sent by the other participants to sysmem in a fuchsia.sysmem2/Allocator.BindSharedCollection message. This is an efficient way to create tokens while avoiding unnecessary round trips.

Other options include waiting for each fuchsia.sysmem2/BufferCollectionToken.Duplicate to complete individually (using a separate call to Sync after each), or calling fuchsia.sysmem2/BufferCollection.Sync after a token has been converted to a BufferCollection via fuchsia.sysmem2/Allocator.BindSharedCollection, or using fuchsia.sysmem2/BufferCollectionToken.DuplicateSync which includes the sync step and can create multiple tokens at once.

Request

<EMPTY>

Response

NameType
payload Node_Sync_Result

Node

Defined in fuchsia.sysmem2/collection.fidl

This protocol is the parent protocol for all nodes in the tree established by fuchsia.sysmem2/BufferCollectionToken creation and fuchsia.sysmem2/BufferCollectionTokenGroup creation, including fuchsia.sysmem2/BufferCollectionToken(s) which have since been converted to a fuchsia.sysmem2/BufferCollection channel.

Epitaphs are not used in this protocol.

Added: 19

GetBufferCollectionId

Get the buffer collection ID. This ID is also available from fuchsia.sysmem2/Allocator.GetVmoInfo (along with the buffer_index within the collection).

This call is mainly useful in situations where we can't convey a fuchsia.sysmem2/BufferCollectionToken or fuchsia.sysmem2/BufferCollection directly, but can only convey a VMO handle, which can be joined back up with a BufferCollection client end that was created via a different path. Prefer to convey a BufferCollectionToken or BufferCollection directly when feasible.

Trusting a buffer_collection_id value from a source other than sysmem is analogous to trusting a koid value from a source other than zircon. Both should be avoided unless really necessary, and both require caution. In some situations it may be reasonable to refer to a pre-established BufferCollection by buffer_collection_id via a protocol for efficiency reasons, but an incoming value purporting to be a buffer_collection_id is not sufficient alone to justify granting the sender of the buffer_collection_id any capability. The sender must first prove to a receiver that the sender has/had a VMO or has/had a BufferCollectionToken to the same collection by sending a handle that sysmem confirms is a valid sysmem handle and which sysmem maps to the buffer_collection_id value. The receiver should take care to avoid assuming that a sender had a BufferCollectionToken in cases where the sender has only proven that the sender had a VMO.

  • response buffer_collection_id This ID is unique per buffer collection per boot. Each buffer is uniquely identified by the buffer_collection_id and buffer_index together.

Request

<EMPTY>

Response

NameType
payload Node_GetBufferCollectionId_Result

GetNodeRef

This gets a handle that can be used as a parameter to fuchsia.sysmem2/Node.IsAlternateFor called on any fuchsia.sysmem2/Node. This handle is only for use as proof that the client obtained this handle from this Node.

Because this is a get, not a set, no fuchsia.sysmem2/Node.Sync is needed between the GetNodeRef and the call to IsAlternateFor, despite the two calls typically being on different channels.

See also fuchsia.sysmem2/Node.IsAlternateFor.

All table fields are currently required.

  • response node_ref This handle can be sent via IsAlternateFor on a different Node channel, to prove that the client obtained the handle from this Node.

Request

<EMPTY>

Response

NameType
payload Node_GetNodeRef_Result

IsAlternateFor

Check whether the calling fuchsia.sysmem2/Node is in a subtree rooted at a different child token of a common parent fuchsia.sysmem2/BufferCollectionTokenGroup, in relation to the passed-in node_ref.

This call is for assisting with admission control de-duplication, and with debugging.

The node_ref must be obtained using fuchsia.sysmem2/Node.GetNodeRef.

The node_ref can be a duplicated handle; it's not necessary to call GetNodeRef for every call to fuchsia.sysmem2/Node.IsAlternateFor.

If a calling token may not actually be a valid token at all, due to a potentially hostile/untrusted provider of the token, call fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken first, instead of potentially getting stuck indefinitely if IsAlternateFor never responds (due to the calling token not being a real token, i.e. not really talking to sysmem). Another option is to call fuchsia.sysmem2/Allocator.BindSharedCollection with this token first, which also validates the token while converting it to a fuchsia.sysmem2/BufferCollection, then call IsAlternateFor.

All table fields are currently required.

  • response is_alternate
    • true: The first parent node in common between the calling node and the node_ref Node is a BufferCollectionTokenGroup. This means that the calling Node and the node_ref Node will not have both their constraints apply - rather sysmem will choose one or the other of the constraints - never both. This is because only one child of a BufferCollectionTokenGroup is selected during logical allocation, with only that one child's subtree contributing to constraints aggregation.
    • false: The first parent node in common between the calling Node and the node_ref Node is not a BufferCollectionTokenGroup. Currently, this means the first parent node in common is a BufferCollectionToken or BufferCollection (regardless of whether Release has been sent). This means that the calling Node and the node_ref Node may have both their constraints apply during constraints aggregation of the logical allocation, if both Node(s) are selected by any parent BufferCollectionTokenGroup(s) involved. In this case, there is no BufferCollectionTokenGroup that will directly prevent the two Node(s) from both being selected and their constraints both aggregated, but even when false, one or both Node(s) may still be eliminated from consideration if one or both Node(s) has a direct or indirect parent BufferCollectionTokenGroup which selects a child subtree other than the subtree containing the calling Node or node_ref Node.
  • error [fuchsia.sysmem2/Error.NOT_FOUND] The node_ref wasn't associated with the same buffer collection as the calling Node. Another reason for this error is if the node_ref is a zx.Handle:EVENT handle with sufficient rights, but isn't actually a real node_ref obtained from GetNodeRef.
  • error [fuchsia.sysmem2/Error.PROTOCOL_DEVIATION] The caller passed a node_ref that isn't a zx.Handle:EVENT handle, or doesn't have the needed rights expected on a real node_ref.
  • No other failing status codes are returned by this call. However, sysmem may add additional codes in future, so the client should have sensible default handling for any failing status code.
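The is_alternate semantics above boil down to a lowest-common-ancestor check. The following is a toy sketch (not the sysmem implementation; the `TNode` class and helper names are hypothetical) showing that is_alternate is true exactly when the first parent node in common between the two nodes is a BufferCollectionTokenGroup:

```python
class TNode:
    """Toy tree node; is_group marks a BufferCollectionTokenGroup."""
    def __init__(self, parent=None, is_group=False):
        self.parent, self.is_group = parent, is_group

def ancestors(node):
    # The node itself plus all of its parents, up to the root.
    path = []
    while node is not None:
        path.append(node)
        node = node.parent
    return path

def is_alternate_for(a, b):
    # Walk up from `a` until reaching the first node that is also an
    # ancestor of `b`; that's the first parent node in common. The
    # result is true iff that common ancestor is a group.
    b_ids = {id(n) for n in ancestors(b)}
    n = a
    while id(n) not in b_ids:
        n = n.parent
    return n.is_group

root = TNode()                       # root BufferCollectionToken
group = TNode(root, is_group=True)   # a BufferCollectionTokenGroup
x, y = TNode(group), TNode(group)    # two children of the same group
z = TNode(root)                      # a sibling outside the group

assert is_alternate_for(x, y)        # common parent is the group
assert not is_alternate_for(x, z)    # common parent is the root token
```

Since only one child of a group is selected during logical allocation, a true result means the two nodes' constraints can never both be aggregated.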

Request

NameType
payload NodeIsAlternateForRequest

Response

NameType
payload Node_IsAlternateFor_Result

Release

On a fuchsia.sysmem2/BufferCollectionToken channel:

Normally a participant will convert a BufferCollectionToken into a fuchsia.sysmem2/BufferCollection, but a participant can instead send Release via the token (and then close the channel immediately or shortly later in response to server closing the server end), which avoids causing buffer collection failure. Without a prior Release, closing the BufferCollectionToken client end will cause buffer collection failure.

On a fuchsia.sysmem2/BufferCollection channel:

By default the server handles unexpected closure of a fuchsia.sysmem2/BufferCollection client end (without Release first) by failing the buffer collection. Partly this is to expedite closing VMO handles to reclaim memory when any participant fails. If a participant would like to cleanly close a BufferCollection without causing buffer collection failure, the participant can send Release before closing the BufferCollection client end. The Release can occur before or after SetConstraints. If before SetConstraints, the buffer collection won't require constraints from this node in order to allocate. If after SetConstraints, the constraints are retained and aggregated, despite the lack of BufferCollection connection at the time of constraints aggregation.

On a fuchsia.sysmem2/BufferCollectionTokenGroup channel:

By default, unexpected closure of a BufferCollectionTokenGroup client end (without Release first) will trigger failure of the buffer collection. To close a BufferCollectionTokenGroup channel without failing the buffer collection, ensure that AllChildrenPresent() has been sent, and send Release before closing the BufferCollectionTokenGroup client end.

If Release occurs before fuchsia.sysmem2/BufferCollectionTokenGroup.AllChildrenPresent, the buffer collection will fail (triggered by reception of Release without prior AllChildrenPresent). This is intentionally not analogous to how fuchsia.sysmem2/BufferCollection.Release without fuchsia.sysmem2/BufferCollection.SetConstraints first doesn't cause buffer collection failure. For a BufferCollectionTokenGroup, clean close requires AllChildrenPresent (if not already sent), then Release, then closing the client end.

If Release occurs after AllChildrenPresent, the children and all their constraints remain intact (just as they would if the BufferCollectionTokenGroup channel had remained open), and the client end close doesn't trigger buffer collection failure.

On all fuchsia.sysmem2/Node channels (any of the above):

For brevity, the per-channel-protocol paragraphs above ignore the separate failure domain created by fuchsia.sysmem2/BufferCollectionToken.SetDispensable or fuchsia.sysmem2/BufferCollection.AttachToken. When a client end unexpectedly closes (without Release first) and that client end is under a failure domain, instead of failing the whole buffer collection, the failure domain is failed, but the buffer collection itself is isolated from failure of the failure domain. Such failure domains can be nested, in which case only the inner-most failure domain in which the Node resides fails.

Request

<EMPTY>

SetDebugClientInfo

Set information about the current client that can be used by sysmem to help diagnose leaking memory and allocation stalls waiting for a participant to send fuchsia.sysmem2/BufferCollection.SetConstraints.

This sets the debug client info on this fuchsia.sysmem2/Node and all Node(s) derived from this Node, unless overridden by fuchsia.sysmem2/Allocator.SetDebugClientInfo or a later fuchsia.sysmem2/Node.SetDebugClientInfo.

Sending fuchsia.sysmem2/Allocator.SetDebugClientInfo once per Allocator is the most efficient way to ensure that all fuchsia.sysmem2/Node(s) will have at least some debug client info set, and is also more efficient than separately sending the same debug client info via fuchsia.sysmem2/Node.SetDebugClientInfo for each created fuchsia.sysmem2/Node.

Also used when verbose logging is enabled (see SetVerboseLogging) to indicate which client is closing their channel first, leading to subtree failure (which can be normal if the purpose of the subtree is over, but if happening earlier than expected, the client-channel-specific name can help diagnose where the failure is first coming from, from sysmem's point of view).

All table fields are currently required.

  • request name This can be an arbitrary string, but the current process name (see fsl::GetCurrentProcessName) is a good default.
  • request id This can be an arbitrary id, but the current process ID (see fsl::GetCurrentProcessKoid) is a good default.

Request

NameType
payload NodeSetDebugClientInfoRequest

SetDebugTimeoutLogDeadline

Sysmem logs a warning if sysmem hasn't seen fuchsia.sysmem2/BufferCollection.SetConstraints from all clients within 5 seconds after creation of a new collection.

Clients can call this method to change when the log is printed. If multiple clients set the deadline, it's unspecified which deadline will take effect.

In most cases the default works well.

All table fields are currently required.

  • request deadline The time at which sysmem will start trying to log the warning, unless all constraints are with sysmem by then.

Request

NameType
payload NodeSetDebugTimeoutLogDeadlineRequest

SetName

Set a name for VMOs in this buffer collection.

If the name doesn't fit in ZX_MAX_NAME_LEN, the name of the VMO itself will be truncated to fit. The VMO name will be suffixed with the buffer index within the collection (if the suffix fits within ZX_MAX_NAME_LEN). The name specified here (without truncation) will be listed in the inspect data.

The name only affects VMOs allocated after the name is set; this call does not rename existing VMOs. If multiple clients set different names, the name set with the larger priority value wins. Setting a new name with the same priority as a prior name doesn't change the name.

All table fields are currently required.

  • request priority The name is only set if this is the first SetName or if priority is greater than any previous priority value in prior SetName calls across all Node(s) of this buffer collection.
  • request name The name for VMOs created under this buffer collection.
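The truncation and priority rules can be sketched as follows (a toy model; ZX_MAX_NAME_LEN is 32 bytes including the NUL terminator in Zircon, but the exact suffix format used by sysmem is an assumption here):

```python
ZX_MAX_NAME_LEN = 32  # Zircon object-name limit, including the NUL terminator

def vmo_name(collection_name, buffer_index):
    # Hypothetical ":<index>" suffix format; the docs only say the buffer
    # index is appended when it fits within ZX_MAX_NAME_LEN.
    suffix = f":{buffer_index}"
    max_chars = ZX_MAX_NAME_LEN - 1  # leave room for the NUL
    if len(collection_name) + len(suffix) <= max_chars:
        return collection_name + suffix
    return collection_name[:max_chars]

class CollectionName:
    """Toy model of the priority rule across SetName calls."""
    def __init__(self):
        self.priority = None
        self.name = None

    def set_name(self, priority, name):
        # First SetName wins; later calls win only with a strictly
        # greater priority. Equal priority does not change the name.
        if self.priority is None or priority > self.priority:
            self.priority, self.name = priority, name

cn = CollectionName()
cn.set_name(10, "camera-stream")
cn.set_name(10, "other")    # same priority: ignored
cn.set_name(20, "display")  # strictly higher priority wins
assert cn.name == "display"
assert vmo_name("display", 3) == "display:3"
```

A name longer than 31 characters is simply cut to fit, with no index suffix.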

Request

NameType
payload NodeSetNameRequest

SetVerboseLogging

This enables verbose logging for the buffer collection.

Verbose logging includes constraints set via fuchsia.sysmem2/BufferCollection.SetConstraints from each client along with info set via fuchsia.sysmem2/Node.SetDebugClientInfo (or fuchsia.sysmem2/Allocator.SetDebugClientInfo) and the structure of the tree of Node(s).

Normally sysmem prints only a single line complaint when aggregation fails, with just the specific detailed reason that aggregation failed, with little surrounding context. While this is often enough to diagnose a problem if only a small change was made and everything was working before the small change, it's often not particularly helpful for getting a new buffer collection to work for the first time. Especially with more complex trees of nodes, involving things like fuchsia.sysmem2/BufferCollection.AttachToken, fuchsia.sysmem2/BufferCollectionToken.SetDispensable, fuchsia.sysmem2/BufferCollectionTokenGroup nodes, and associated subtrees of nodes, verbose logging may help in diagnosing what the tree looks like and why it's failing a logical allocation, or why a tree or subtree is failing sooner than expected.

The intent of the extra logging is to be acceptable from a performance point of view, under the assumption that verbose logging is only enabled on a low number of buffer collections. If we're not tracking down a bug, we shouldn't send this message.

Request

<EMPTY>

SetWeak

Sets the current fuchsia.sysmem2/Node and all child Node(s) created after this message to weak, which means that a client's Node client end (or a child created after this message) is not alone sufficient to keep allocated VMOs alive.

All VMOs obtained from weak Node(s) are weak sysmem VMOs. See also close_weak_asap.

This message is only permitted before the Node becomes ready for allocation (else the server closes the channel with ZX_ERR_BAD_STATE):

  • BufferCollectionToken: any time
  • BufferCollection: before SetConstraints
  • BufferCollectionTokenGroup: before AllChildrenPresent

Currently, no conversion from strong Node to weak Node after ready for allocation is provided, but a client can simulate that by creating an additional Node before allocation and setting that additional Node to weak, and then potentially at some point later sending Release and closing the client end of the client's strong Node, but keeping the client's weak Node.

Zero strong Node(s) and zero strong VMO handles will result in buffer collection failure (all Node client end(s) will see ZX_CHANNEL_PEER_CLOSED and all close_weak_asap client_end(s) will see ZX_EVENTPAIR_PEER_CLOSED), but sysmem (intentionally) won't notice this situation until all Node(s) are ready for allocation. For initial allocation to succeed, at least one strong Node is required to exist at allocation time, but once a client has received VMO handles, that client can send BufferCollection.Release and close the client end without causing this type of failure.

This implies fuchsia.sysmem2/Node.SetWeakOk as well, but does not imply SetWeakOk with for_child_nodes_also true, which can be sent separately as appropriate.
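The strong/weak lifetime rule above resembles strong vs. weak reference counting. Here is a toy sketch of that analogy (hypothetical model only; it deliberately ignores the "sysmem won't notice until all Nodes are ready for allocation" subtlety):

```python
class Collection:
    """Toy model: buffers live while any strong node or strong VMO exists."""
    def __init__(self):
        self.strong_nodes = 0
        self.strong_vmos = 0
        self.failed = False
        self.close_weak_asap_signaled = False

    def add_strong_node(self):
        self.strong_nodes += 1

    def add_strong_vmo(self):
        self.strong_vmos += 1

    def release_strong_node(self):
        self.strong_nodes -= 1
        self._check()

    def close_strong_vmo(self):
        self.strong_vmos -= 1
        self._check()

    def _check(self):
        # Zero strong Nodes and zero strong VMO handles fails the
        # collection; weak holders see close_weak_asap peer-closed.
        if self.strong_nodes == 0 and self.strong_vmos == 0:
            self.failed = True
            self.close_weak_asap_signaled = True

c = Collection()
c.add_strong_node()      # one strong participant exists at allocation time
c.add_strong_vmo()       # that participant receives a strong VMO handle
c.release_strong_node()  # Release + close of the strong Node client end
assert not c.failed      # the strong VMO alone still keeps buffers alive
c.close_strong_vmo()
assert c.failed and c.close_weak_asap_signaled
```

Weak nodes and weak VMOs never appear in the counts, which is exactly why a weak client end "is not alone sufficient to keep allocated VMOs alive."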

Request

<EMPTY>

SetWeakOk

This indicates to sysmem that the client is prepared to pay attention to close_weak_asap.

If sent, this message must be before fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated.

All participants using a weak fuchsia.sysmem2/BufferCollection must send this message before WaitForAllBuffersAllocated, or a parent Node must have sent fuchsia.sysmem2/Node.SetWeakOk with for_child_nodes_also true, else the WaitForAllBuffersAllocated will trigger buffer collection failure.

This message is necessary because weak sysmem VMOs have not always been a thing, so older clients are not aware of the need to pay attention to close_weak_asap ZX_EVENTPAIR_PEER_CLOSED and close all remaining sysmem weak VMO handles asap. By having this message and requiring participants to indicate their acceptance of this aspect of the overall protocol, we avoid situations where an older client is delivered a weak VMO without any way for sysmem to get that VMO to close quickly later (and on a per-buffer basis).

A participant that doesn't handle close_weak_asap and also doesn't retrieve any VMO handles via WaitForAllBuffersAllocated doesn't need to send SetWeakOk (and doesn't need to have a parent Node send SetWeakOk with for_child_nodes_also true either). However, if that same participant has a child/delegate which does retrieve VMOs, that child/delegate will need to send SetWeakOk before WaitForAllBuffersAllocated.

  • request for_child_nodes_also If present and true, this means direct child nodes of this node created after this message plus all descendants of those nodes will behave as if SetWeakOk was sent on those nodes. Any child node of this node that was created before this message is not included. This setting is "sticky" in the sense that a subsequent SetWeakOk without this bool set to true does not reset the server-side bool. If this creates a problem for a participant, a workaround is to SetWeakOk with for_child_nodes_also true on child tokens instead, as appropriate. A participant should only set for_child_nodes_also true if the participant can really promise to obey close_weak_asap both for its own weak VMO handles, and for all weak VMO handles held by participants holding the corresponding child Node(s). When for_child_nodes_also is set, descendant Node(s) which are using sysmem(1) can be weak, despite the clients of those sysmem(1) Node(s) not having any direct way to SetWeakOk or any direct way to find out about close_weak_asap. This only applies to descendants of this Node which are using sysmem(1), not to this Node when converted directly from a sysmem2 token to a sysmem(1) token, which will fail allocation unless an ancestor of this Node specified for_child_nodes_also true.

Request

NameType
payload NodeSetWeakOkRequest

Sync

Ensure that previous messages have been received server side. This is particularly useful after previous messages that created new tokens, because a token must be known to the sysmem server before sending the token to another participant.

Calling fuchsia.sysmem2/BufferCollectionToken.Sync on a token that isn't/wasn't a valid token risks the Sync stalling forever. See fuchsia.sysmem2/Allocator.ValidateBufferCollectionToken for one way to mitigate the possibility of a hostile/fake fuchsia.sysmem2/BufferCollectionToken at the cost of one round trip. Another way is to pass the token to fuchsia.sysmem2/Allocator.BindSharedCollection, which also validates the token as part of exchanging it for a fuchsia.sysmem2/BufferCollection channel, and fuchsia.sysmem2/BufferCollection.Sync can then be used without risk of stalling.

After creating one or more fuchsia.sysmem2/BufferCollectionToken(s) and then starting and completing a Sync, it's then safe to send the BufferCollectionToken client ends to other participants knowing the server will recognize the tokens when they're sent by the other participants to sysmem in a fuchsia.sysmem2/Allocator.BindSharedCollection message. This is an efficient way to create tokens while avoiding unnecessary round trips.

Other options include waiting for each fuchsia.sysmem2/BufferCollectionToken.Duplicate to complete individually (using a separate call to Sync after each), or calling fuchsia.sysmem2/BufferCollection.Sync after a token has been converted to a BufferCollection via fuchsia.sysmem2/Allocator.BindSharedCollection, or using fuchsia.sysmem2/BufferCollectionToken.DuplicateSync which includes the sync step and can create multiple tokens at once.
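Why the Sync matters can be shown with a toy message-queue model (hypothetical; in the real system the hazard is a race between a one-way Duplicate on one channel and another participant's BindSharedCollection arriving first, not a message that is never processed):

```python
from collections import deque

class SysmemServer:
    """Toy model: one-way messages queue up; a round trip drains the queue."""
    def __init__(self):
        self.inbox = deque()       # channel messages not yet processed
        self.known_tokens = set()

    def send_duplicate(self, token):
        # One-way message: queued, not yet seen by the server.
        self.inbox.append(token)

    def sync(self):
        # A Sync round trip guarantees all earlier messages on this
        # channel have been received and processed server-side.
        while self.inbox:
            self.known_tokens.add(self.inbox.popleft())

    def bind_shared_collection(self, token):
        # Another participant redeeming the token; it must be known.
        return token in self.known_tokens

server = SysmemServer()
server.send_duplicate("token-A")
# Handing "token-A" to another participant now is the race: the server
# may not have processed the Duplicate yet.
assert not server.bind_shared_collection("token-A")
server.sync()
assert server.bind_shared_collection("token-A")  # safe after Sync
```

This is why one Sync after a batch of Duplicate calls is cheaper than a Sync per token: a single round trip flushes every queued message at once.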

Request

<EMPTY>

Response

NameType
payload Node_Sync_Result

STRUCTS

BufferCollection_CheckAllBuffersAllocated_Response

Defined in fuchsia.sysmem2/collection.fidl

<EMPTY>

Node_Sync_Response

Defined in fuchsia.sysmem2/collection.fidl

<EMPTY>

PixelFormatAndModifier

Defined in fuchsia.sysmem2/constraints.fidl

FieldTypeDescriptionDefault
pixel_format fuchsia.images2/PixelFormat

When specified by a participant in a message to sysmem, this can be any PixelFormat value that's acceptable to the participant. Specifying kInvalid is not permitted.

The participant can specify fuchsia.images2/PixelFormat.DO_NOT_CARE if the participant needs to specify ImageFormatConstraints without constraining the pixel_format.

No default
pixel_format_modifier fuchsia.images2/PixelFormatModifier

Specific modifier (not just flags), or FORMAT_MODIFIER_DO_NOT_CARE, from the set of values defined in fuchsia.images2 format_modifier.fidl.

No default

ENUMS

CoherencyDomain flexible

Type: uint32

Defined in fuchsia.sysmem2/constraints.fidl

INACCESSIBLE is only for cases where there is no CPU access to the buffers.

Device-local memory that isn't reachable from the CPU is CoherencyDomain INACCESSIBLE, even if it's possible to cause a device (physical or virtual) to copy the data from the INACCESSIBLE buffers to buffers that are visible to the CPU. In other words, INACCESSIBLE does not imply secure, but secure implies INACCESSIBLE.

CPU means producers must ensure that a consumer can read the produced data with the CPU without the consumer needing to do additional cache ops not already performed (as needed) by the producer.

RAM means producers must ensure that the produced data is entirely present in RAM, without any dirty CPU cache lines, and a consumer must invalidate (or flush and invalidate, typically) the CPU cache before reading data with the CPU. The RAM domain can be faster than the CPU domain when all access is via HW DMA, since in that case no CPU cache ops are required, since no participant is actually reading/writing using the CPU.

Added: 19

NameValueDescription
CPU 0
RAM 1
INACCESSIBLE 2

Error flexible

Type: uint32

Defined in fuchsia.sysmem2/error.fidl

Regardless of the error code, any client retries should be very limited in number, if any.

An Error value should never be stored in a zx_status_t, since positive values in zx_status_t are deprecated.

Added: 19

NameValueDescription
INVALID 0

This is not a valid error value in this error enum. The server will never send this value as a failure code. This value is not treated as "success". In some languages, a locally default-initialized Error instance will have this value until it is initialized with a valid positive error code.

UNSPECIFIED 1

Unspecified error.

This error code is used when no other error code applies, and the error is probably not due to problematic messages sent to the server via the channel delivering this error.

This error should be handled by the client as a generic error.

As one example, this error is used when a different client channel has closed from the client end unexpectedly (without sending fuchsia.sysmem2/Node.Release first), thereby causing failure of any nodes in the same tree or sub-tree. In this usage, the main thing that's relevant is it isn't the receiving client's "fault" - no reason to be more specific since there's probably nothing the receiving client could do about the error, at least not directly.

As another example, this error can be used if a syscall that is normally expected to succeed fails unexpectedly, and there's no identified reason to "blame" the client.

A client should never require / depend on a particular cause of error continuing to result in UNSPECIFIED, as any particular error cause can potentially start resulting in a more specific error code in future.

PROTOCOL_DEVIATION 2

A required field wasn't set or a specified value was invalid. See the log for more info.

This is also used when a message is received from the client in the wrong order or in some way inconsistent with protocol rules.

NOT_FOUND 3

A client-specified object or ID was not found.

HANDLE_ACCESS_DENIED 4

The object handle doesn't have sufficient rights to perform the request.

NO_MEMORY 5

The allocation could not be satisfied due to lack of available memory.

The memory exhaustion can be specific to the heap that was selected during constraints aggregation, so in some cases, this error can happen despite normal system RAM not being near exhaustion, depending on configured and selected heap(s).

CONSTRAINTS_INTERSECTION_EMPTY 6

The request is valid but cannot be satisfied, perhaps due to hardware limitations. This happens if participants involved in this allocation have incompatible constraints (empty intersection, roughly speaking). See the log for more info. In cases where a participant could potentially be treated as optional, see BufferCollectionTokenGroup.

This can also happen if there aren't enough buffers in a pre-existing collection to satisfy an additional token (including sub-tree of derived tokens) created with fuchsia.sysmem2/BufferCollection.AttachToken.

This can also happen if a client's node is under a group and a different group child is selected instead.

PENDING 7

Allocation hasn't been attempted yet. Calling fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated would (likely) block.

TOO_MANY_GROUP_CHILD_COMBINATIONS 8

Too many BufferCollectionTokenGroup child token selection combinations exist and were considered, causing sysmem to give up on allocating rather than enumerate the rest.
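The combinatorial pressure behind this error can be illustrated with a short sketch (toy arithmetic, assuming independent groups): each BufferCollectionTokenGroup selects exactly one child, so the number of candidate selection combinations multiplies across groups.

```python
from math import prod

def candidate_combinations(group_child_counts):
    # One child is selected per group, so the total number of
    # selection combinations is the product over all groups.
    return prod(group_child_counts)

# Three groups with 4, 5, and 6 children already yield 120 combinations;
# trees with many groups grow multiplicatively, which is why sysmem caps
# how many combinations it will consider before giving up.
assert candidate_combinations([4, 5, 6]) == 120
```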

TABLES

AllocatorAllocateNonSharedCollectionRequest resource

Defined in fuchsia.sysmem2/allocator.fidl

OrdinalFieldTypeDescription
collection_request server_end<BufferCollection>

AllocatorAllocateSharedCollectionRequest resource

Defined in fuchsia.sysmem2/allocator.fidl

OrdinalFieldTypeDescription
token_request server_end<BufferCollectionToken>

AllocatorBindSharedCollectionRequest resource

Defined in fuchsia.sysmem2/allocator.fidl

OrdinalFieldTypeDescription
token BufferCollectionToken
buffer_collection_request server_end<BufferCollection>

AllocatorGetVmoInfoRequest resource

Defined in fuchsia.sysmem2/allocator.fidl

OrdinalFieldTypeDescription
vmo handle<vmo>

vmo is required to be set; ownership is transferred to the server so in most cases a client will duplicate a handle and transfer the duplicate via this field.

AllocatorSetDebugClientInfoRequest

Defined in fuchsia.sysmem2/allocator.fidl

OrdinalFieldTypeDescription
name string[256]
id uint64

AllocatorValidateBufferCollectionTokenRequest

Defined in fuchsia.sysmem2/allocator.fidl

OrdinalFieldTypeDescription
token_server_koid zx/Koid

Allocator_GetVmoInfo_Response resource

Defined in fuchsia.sysmem2/allocator.fidl

OrdinalFieldTypeDescription
buffer_collection_id uint64
buffer_index uint64
close_weak_asap handle<eventpair>

Allocator_ValidateBufferCollectionToken_Response

Defined in fuchsia.sysmem2/allocator.fidl

OrdinalFieldTypeDescription
is_known bool

BufferCollectionAttachLifetimeTrackingRequest resource

Defined in fuchsia.sysmem2/collection.fidl

OrdinalFieldTypeDescription
server_end handle<eventpair>
buffers_remaining uint32

BufferCollectionAttachTokenRequest resource

Defined in fuchsia.sysmem2/collection.fidl

OrdinalFieldTypeDescription
rights_attenuation_mask zx/Rights
token_request server_end<BufferCollectionToken>

BufferCollectionConstraints

Defined in fuchsia.sysmem2/constraints.fidl

Constraints on allocated buffers and, optionally, constraints on images stored in the buffers. These constraints can be specified per-participant. The sysmem service implements aggregation of constraints from multiple participants.

Added: 19

OrdinalFieldTypeDescription
usage BufferUsage

The usage is a hint to sysmem to potentially help choose a more optimal fuchsia.images2/PixelFormat and/or pixel_format_modifier when multiple compatible options exist.

When aggregating fuchsia.sysmem2/BufferCollectionConstraints, these values bitwise-OR.

At least one usage bit must be specified (however, it's permitted for a fuchsia.sysmem2/BufferCollection.SetConstraints request to have the request constraints field not set, in which case kNoneUsage is the default, along with no constraints from the participant).

When kNoneUsage is specified it must be the only set bit, and no VMOs will be sent in response to fuchsia.sysmem2/BufferCollection.WaitForAllBuffersAllocated.
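The bitwise-OR aggregation and the kNoneUsage rule above can be sketched as follows. This is a minimal illustrative sketch in Python, not sysmem's implementation; the numeric value used for `NONE_USAGE` is a placeholder, not the constant from usages.fidl.

```python
NONE_USAGE = 1  # placeholder bit value, not the real constant

def aggregate_usage(per_participant_usages):
    """Aggregate usage bits across participants by bitwise-OR."""
    aggregated = 0
    for usage in per_participant_usages:
        aggregated |= usage
    return aggregated

def validate_usage(usage):
    """When NONE_USAGE is specified it must be the only set bit;
    otherwise at least one usage bit must be set."""
    if usage & NONE_USAGE:
        return usage == NONE_USAGE
    return usage != 0
```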

min_buffer_count_for_camping uint32

Per-participant number of buffers that the participant may concurrently hold for its exclusive use for more than a transient duration (camp on). In this context, a "transient" duration is the time it takes to finish running a small amount of non-blocking code that finishes transferring away ownership of the buffer. Things like reading from storage, waiting on hardware that isn't already known to be done, or doing things like frame encode or decode are not considered transient durations, even if they might sometimes complete quickly.

For example, a video decoder would specify (at least) the maximum number of reference frames + 1 frame currently being decoded into. But not 1 more for the code that runs async and quickly to deliver a previously decoded frame, even though that frame can potentially be owned for a transient duration concurrent with decode of the next frame.

A participant must not camp on more buffers than specified here (except for a transient duration); otherwise processing may get stuck.

When aggregating BufferCollectionConstraints, these values add.

In testing scenarios, camping on more buffers than this for any significant duration (one screen refresh period is "significant" in this context) may (ideally will) be flagged as a failure. In testing scenarios, the participant may not be provided with more buffers than this concurrently.

min_buffer_count_for_dedicated_slack uint32

Per-participant minimum number of buffers that are needed for slack reasons, for better overlap of processing / better performance.

When aggregating BufferCollectionConstraints, these values add.

A participant should typically specify 0 or 1 here - typically 0 is appropriate if min_buffer_count_for_camping is already enough to keep the participant busy 100% of the time when the participant is slightly behind, while 1 can be appropriate if 1 more buffer than strictly needed for min-camping reasons gives enough slack to stay busy 100% of the time (when slightly behind, vs. lower % without the extra buffer).

In testing scenarios, this field may be forced to 0, and all participants are expected to continue to work without getting stuck. If a buffer is needed for forward progress reasons, that buffer should be accounted for in min_buffer_count_for_camping.

min_buffer_count_for_shared_slack uint32

Similar to min_buffer_count_for_dedicated_slack, except when aggregating these values max (instead of add). The value here is not shared with any participant's min_buffer_count_for_dedicated_slack.

A participant can specify > 0 here if a participant would like to ensure there's some slack overall, but doesn't need that slack to be dedicated.

The choice whether to use min_buffer_count_for_dedicated_slack or min_buffer_count_for_shared_slack (or both) will typically be about the degree to which the extra slack improves performance.

In testing scenarios, this field may be forced to 0, and all participants are expected to continue to work without getting stuck. If a buffer is needed for forward progress reasons, that buffer should be accounted for in min_buffer_count_for_camping.

min_buffer_count uint32

A particularly-picky participant may unfortunately need to demand a tight range of buffer_count, or even a specific buffer_count. This field should remain 0 unless a participant really must set this field to constrain the overall BufferCollectionInfo.buffer_count. Any such participant should still fill out the min_buffer_count_for_* fields as appropriate.

If this field is un-set, the logical min_buffer_count is 0.

max_buffer_count uint32

A particularly-picky participant may unfortunately need to demand a tight range of buffer_count, or even a specific buffer_count. This field should remain 0 unless a participant really must set this field to constrain the overall BufferCollectionInfo.buffer_count. Any such participant should still fill out the min_buffer_count_for_* fields.

If this field is un-set, the logical max_buffer_count is 0xFFFFFFFF.
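The buffer-count aggregation rules stated above (camping and dedicated slack add, shared slack takes the max, min/max_buffer_count intersect) can be sketched as a small illustrative function. This is a simplification for exposition, not sysmem's actual allocation algorithm, and the dictionary representation of participant constraints is hypothetical.

```python
def aggregate_buffer_counts(participants):
    """participants: list of dicts with the min_buffer_count_* fields;
    un-set fields use the documented defaults."""
    camping = sum(p.get("min_buffer_count_for_camping", 0) for p in participants)
    dedicated = sum(p.get("min_buffer_count_for_dedicated_slack", 0) for p in participants)
    # Shared slack aggregates by max, not add.
    shared = max((p.get("min_buffer_count_for_shared_slack", 0) for p in participants),
                 default=0)
    # Un-set min_buffer_count is 0; un-set max_buffer_count is 0xFFFFFFFF.
    min_count = max((p.get("min_buffer_count", 0) for p in participants), default=0)
    max_count = min((p.get("max_buffer_count", 0xFFFFFFFF) for p in participants),
                    default=0xFFFFFFFF)
    needed = max(camping + dedicated + shared, min_count)
    if needed > max_count:
        raise ValueError("buffer count constraints cannot be satisfied")
    return needed
```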

buffer_memory_constraints BufferMemoryConstraints

Optional constraints on BufferCollectionSettings.buffer_settings.

A participant that intends to set image_format_constraints will typically specify the minimum buffer size implicitly via image_format_constraints, and possibly specify only the max buffer size via buffer_memory_constraints.

If un-set, the client is specifying "don't care" regarding any buffer memory constraints.

image_format_constraints vector<ImageFormatConstraints>[64]

Optional constraints on the image format parameters of an image stored in a buffer of the collection. This includes fuchsia.images2/PixelFormat and pixel_format_modifier (for tiling and the like). These constraints can be specified separately per pixel_format / pixel_format_modifier pair. Duplicate pixel_format / pixel_format_modifier pairs aren't permitted.

When aggregating, only pixel_format / pixel_format_modifier pairs that are specified by all participants with a non-empty image_format_constraints (and non-null BufferCollectionConstraints) are retained.

A participant can specify pixel_format fuchsia.images2/PixelFormat.DO_NOT_CARE and/or pixel_format_modifier fuchsia.images2/FORMAT_MODIFIER_DO_NOT_CARE to permit any value to be selected, but at least one participant must specify a specific format for overall allocation to succeed.

In a SetConstraints message, un-set or zero length means no image format constraints; a raw buffer can be allocated if no other participants specify any image_format_constraints entries.

BufferCollectionInfo resource

Defined in fuchsia.sysmem2/results.fidl

Information about a buffer collection and its buffers.

Added: 19

OrdinalFieldTypeDescription
settings SingleBufferSettings

These settings apply to all the buffers in the initial buffer allocation.

This field will always be set by sysmem.

buffers vector<VmoBuffer>[128]

VMO handles (and vmo_usable_start offset) for each buffer in the collection.

The size of this vector is the buffer_count (buffer_count is not sent separately).

All buffer VMO handles have identical size and access rights. The size is in settings.buffer_settings.size_bytes.

The VMO access rights are determined based on the usages which the client specified when allocating the buffer collection. For example, a client which expressed a read-only usage will receive VMOs without write rights. In addition, the rights can be attenuated by the parameter to BufferCollectionToken.Duplicate() calls.

This field will always have VmoBuffer(s) in it, even if the participant specifies usage which does not require VMO handles. This permits such a participant to know the vmo_usable_start values, in case that's of any use to the participant.

This field will always be set by sysmem, even if the participant doesn't specify any buffer usage (but the fuchsia.sysmem2/VmoBuffer.vmo sub-field within this field won't be set in that case).

buffer_collection_id uint64

This number is unique among all logical buffer collections per boot.

This ID number will be the same for all BufferCollectionToken(s), BufferCollection(s), and BufferCollectionTokenGroup(s) associated with the same logical buffer collection (derived from the same root token created with fuchsia.sysmem2.Allocator.CreateSharedCollection, or with CreateNonSharedCollection).

The same ID can be retrieved from a BufferCollectionToken, BufferCollection, or BufferCollectionTokenGroup using GetBufferCollectionId (at the cost of a round-trip to sysmem and back).

This field will always be set by sysmem.

BufferCollectionSetConstraintsRequest resource

Defined in fuchsia.sysmem2/collection.fidl

OrdinalFieldTypeDescription
constraints BufferCollectionConstraints

BufferCollectionTokenCreateBufferCollectionTokenGroupRequest resource

Defined in fuchsia.sysmem2/collection.fidl

OrdinalFieldTypeDescription
group_request server_end<BufferCollectionTokenGroup>

BufferCollectionTokenDuplicateRequest resource

Defined in fuchsia.sysmem2/collection.fidl

OrdinalFieldTypeDescription
rights_attenuation_mask zx/Rights
token_request server_end<BufferCollectionToken>

BufferCollectionTokenDuplicateSyncRequest

Defined in fuchsia.sysmem2/collection.fidl

OrdinalFieldTypeDescription
rights_attenuation_masks vector<zx/Rights>[64]

BufferCollectionTokenGroupCreateChildRequest resource

Defined in fuchsia.sysmem2/collection.fidl

OrdinalFieldTypeDescription
token_request server_end<BufferCollectionToken>

Must be set.

rights_attenuation_mask zx/Rights

If not set, the default is ZX_RIGHT_SAME_RIGHTS.

BufferCollectionTokenGroupCreateChildrenSyncRequest

Defined in fuchsia.sysmem2/collection.fidl

OrdinalFieldTypeDescription
rights_attenuation_masks vector<zx/Rights>[64]

BufferCollectionTokenGroup_CreateChildrenSync_Response resource

Defined in fuchsia.sysmem2/collection.fidl

OrdinalFieldTypeDescription
tokens vector<BufferCollectionToken>[64]

BufferCollectionToken_DuplicateSync_Response resource

Defined in fuchsia.sysmem2/collection.fidl

OrdinalFieldTypeDescription
tokens vector<BufferCollectionToken>[64]

BufferCollection_WaitForAllBuffersAllocated_Response resource

Defined in fuchsia.sysmem2/collection.fidl

OrdinalFieldTypeDescription
buffer_collection_info BufferCollectionInfo

BufferMemoryConstraints

Defined in fuchsia.sysmem2/constraints.fidl

Added: 19

OrdinalFieldTypeDescription
min_size_bytes uint64

Un-set is treated as 1.

max_size_bytes uint64

un-set is treated as 0xFFFFFFFFFFFFFFFF.

physically_contiguous_required bool

When false, physical pages of a buffer VMO can be non-contiguous. When true, physical pages of a buffer VMO must be sequentially contiguous. A client that doesn't require physically contiguous VMOs must still accept physically contiguous VMOs or "physical" VMOs.

secure_required bool

If true, the participant requires secure memory.

When aggregating BufferCollectionConstraints, these values boolean-OR.

cpu_domain_supported bool

When true (or when BufferMemoryConstraints is not present), the participant is ok with sysmem selecting the CPU domain.

If the CPU domain is selected, participants must ensure the CPU can read or write data to the buffer without cache operations outside of the participant.

ram_domain_supported bool

When true, the participant is ok with sysmem selecting the RAM domain.

If the RAM domain is selected, producer data must be available in RAM (with CPU cache state such that the RAM data won't get corrupted by a dirty CPU cache line writing incorrect data to RAM), and a consumer reading using the CPU must invalidate the CPU cache before reading (the producer doesn't guarantee zero stale "clean" cache lines).

inaccessible_domain_supported bool

When true, the participant is ok with sysmem selecting the INACCESSIBLE domain.

If the INACCESSIBLE domain is selected, CPU reads and writes of the data are prevented. Attempts to read/write the data with the CPU may result in UB and/or process termination.

If the INACCESSIBLE domain is selected, participants must only operate on the data using DMAs performed by HW, or platform-specific DMA-like requests to a secure environment.

Secure heaps only support INACCESSIBLE domain, and will fail allocation if any participant with BufferUsage other than NONE_USAGE does not set inaccessible_domain_supported to true.
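The domain-support fields above imply that a coherency domain is only a candidate if every participant supports it. A minimal sketch of that intersection follows; how sysmem chooses among multiple surviving candidates is not modeled here, and the false defaults assumed for the RAM and INACCESSIBLE fields are assumptions (the doc only states the CPU-domain default).

```python
def candidate_domains(participants):
    """Intersect each participant's supported coherency domains."""
    candidates = {"CPU", "RAM", "INACCESSIBLE"}
    for p in participants:
        supported = set()
        # cpu_domain_supported defaults to true per the description above.
        if p.get("cpu_domain_supported", True):
            supported.add("CPU")
        if p.get("ram_domain_supported", False):
            supported.add("RAM")
        if p.get("inaccessible_domain_supported", False):
            supported.add("INACCESSIBLE")
        candidates &= supported
    return candidates
```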

permitted_heaps vector<Heap>[64]

Which heaps are acceptable to the participant. Participants that don't care which heap memory is allocated on should leave this field un-set. A secure heap is only selected if all participants explicitly indicate that the secure heap is acceptable via permitted_heaps, or specify NONE_USAGE.

BufferMemorySettings

Defined in fuchsia.sysmem2/results.fidl

These are memory-related settings for all buffers of a buffer collection.

Added: 19

OrdinalFieldTypeDescription
size_bytes uint64

This field will always be set by sysmem.

is_physically_contiguous bool

This field will always be set by sysmem.

is_secure bool

This field will always be set by sysmem.

coherency_domain CoherencyDomain

This field will always be set by sysmem.

heap Heap

The specific heap from which buffers are allocated.

This field will always be set by sysmem.

BufferUsage

Defined in fuchsia.sysmem2/usages.fidl

Describes how a client will access the contents of a buffer.

Added: 19

OrdinalFieldTypeDescription
none uint32

If the client sets this field, the client should not set any other fields in the same table instance. The only valid bit in this field is NONE_USAGE, which must be set if this field is set. The point of this field (and of the one bit set in it) is to prove that the client really means it isn't going to use the buffers and therefore doesn't need any VMOs, rather than having simply failed to fill out the table.

cpu uint32

If set, holds CPU usage bits. See CPU_USAGE_* flags in usages.fidl.

vulkan uint32

If set, holds vulkan usage bits. See VULKAN_IMAGE_* and VULKAN_BUFFER_* bits in usages.fidl. The VULKAN_USAGE_* bit definitions/names are deprecated.

display uint32

If set, holds display usage bits. See DISPLAY_USAGE_* bits in usages.fidl.

video uint32

If set, holds video usage bits. See VIDEO_USAGE_* bits in usages.fidl.

Heap

Defined in fuchsia.sysmem2/constraints.fidl

A reference to a heap instance.

A given heap instance can have more than one Heap which can be used to refer to the heap instance. Comparing Heap tables without knowledge of these Heap aliases is not a reliable way to determine if two Heap tables refer to the same heap (matching means yes, but not matching means maybe). Allowing heap aliases makes renaming Heap.type(s) easier.

Added: 19

OrdinalFieldTypeDescription
heap_type string[128]

The type of the heap, specified using a bind string defined per the schema and mechanism described in comments in the fuchsia.sysmem.heap.bind file.

Examples:

  • "fuchsia.sysmem.heap.HEAP_TYPE.SYSTEM_RAM"
  • "fuchsia.goldfish.platform.sysmem.heap.HEAP_TYPE.HOST_VISIBLE"
id uint64

The uint64 id of the heap. This is only required to be unique per (type, boot) tuple. In other words, a given heap id is only meaningful within the current boot of the machine (not across boots), and only within the Heap.type.

For Heap.type(s) that refer to a singleton heap, a participant specifying the singleton heap in fuchsia.sysmem2.BufferMemoryConstraints.permitted_heaps can leave this field un-set, or set it to zero. Sysmem will always fill out this field for the heap indicated in fuchsia.sysmem2.BufferMemorySettings.heap (for a singleton heap the id field will be set to 0 by sysmem).

ImageFormatConstraints

Defined in fuchsia.sysmem2/constraints.fidl

Describes constraints on layout of image data in buffers.

OrdinalFieldTypeDescription
pixel_format fuchsia.images2/PixelFormat

The fuchsia.images2/PixelFormat for which the following constraints apply.

The pixel_format and pixel_format_modifier fields together are treated by the server as one additional pixel_format_and_modifiers entry.

A participant may have more than one fuchsia.sysmem2/PixelFormatAndModifier that's supported.

  • If image constraints are the same for different PixelFormatAndModifiers, the participant may list additional PixelFormatAndModifiers for which the constraints apply in the pixel_format_and_modifiers field. This reduces the overall number of ImageFormatConstraints that need to be sent, without changing the meaning (vs for example sending a bunch of separate ImageFormatConstraints that only differ by the pixel_format and pixel_format_modifier which overall specify the same list of PixelFormatAndModifiers).
  • If image constraints differ for different PixelFormatAndModifiers, the participant can convey this using a separate ImageFormatConstraints entry in image_format_constraints for each set of PixelFormatAndModifiers that have different image constraints.
  • It's ok for a participant to have two image_format_constraints entries that only differ in their pixel_format_and_modifiers, but this isn't the most compact way to represent that situation, since the two entries could be combined by specifying two PixelFormatAndModifiers within a single ImageFormatConstraints.

It's not uncommon for the other fields of ImageFormatConstraints to vary by pixel_format or by pixel_format_modifier - for example, a linear format may support a smaller max size than a tiled format.

See also fuchsia.sysmem2/ImageFormatConstraints.pixel_format_and_modifiers.

This field must be set to a value other than fuchsia.images2/PixelFormat.INVALID unless pixel_format_and_modifiers is non-empty. In other words, there must be at least one PixelFormatAndModifier per ImageFormatConstraints. If pixel_format_modifier is set, this field must also be set.

The participant can specify fuchsia.images2/PixelFormat.DO_NOT_CARE if the participant needs to specify ImageFormatConstraints without constraining the pixel_format.

pixel_format_modifier fuchsia.images2/PixelFormatModifier

The pixel format modifier for which the following constraints apply.

The pixel_format and pixel_format_modifier fields together are treated by the server as one additional pixel_format_and_modifiers entry.

This is a value from fuchsia.images2 format_modifier.fidl that's acceptable to the participant in combination with the pixel_format.

See also pixel_format_and_modifiers.

If pixel_format is set but pixel_format_modifier is un-set, the default depends on other fields:

  • If pixel_format is DO_NOT_CARE, the pixel format modifier is implicitly FORMAT_MODIFIER_DO_NOT_CARE.
  • else if BufferCollectionConstraints.usage isn't NONE, the pixel format modifier is implicitly FORMAT_MODIFIER_LINEAR.
  • else the pixel format modifier is implicitly FORMAT_MODIFIER_DO_NOT_CARE.

When set, this value is a specific modifier (not just flags), or FORMAT_MODIFIER_DO_NOT_CARE, from the set of values defined in fuchsia.images2 format_modifier.fidl.

color_spaces vector<fuchsia.images2/ColorSpace>[32]

Empty is an error. Duplicate entries are an error. Arbitrary ordering is not an error.

The client can specify a single entry [fuchsia.images2/ColorSpace.DO_NOT_CARE] if the client doesn't want to constrain which ColorSpace is chosen. At least one participant must specify at least one ColorSpace value other than ColorSpace.DO_NOT_CARE, or allocation will fail.

min_size fuchsia.math/SizeU

Minimum permitted size in pixels.

For example a video decoder participant may set this field to the minimum size that might potentially be specified by a stream. In contrast, required_min_size would be set to the current size specified by the stream. While min_size aggregates by taking the max, required_min_size aggregates by taking the min.

When sending to sysmem, this field can be un-set if the participant is prepared to deal with the smallest possible non-zero image layout limited only by the constraints implicitly imposed by the pixel_format and pixel_format_modifier. Or this field can be set to the actual minimum size the participant can handle.

Producers should set min_size and set both width and height to the actual non-zero smallest width and height that the producer might generate. For example, a video decoder can set the size of a single macroblock here.

When receiving from sysmem, this field will always be set, and neither width nor height will be 0, because at least one participant must specify a non-zero minimum size (where both width and height aren't zero).

See also required_min_size.

max_size fuchsia.math/SizeU

Maximum size in pixels. For example Scenic may set this field (directly or via sub-participants) to the maximum size that can be composited.

Sending to sysmem, un-set is treated as 0xFFFFFFFF, 0xFFFFFFFF.

Receiving from sysmem, this field will always be set. For width and height separately, if there is no enforced max, that sub-field will be 0xFFFFFFFF.

See also required_max_size.

min_bytes_per_row uint32

The minimum number of bytes per row, including any padding beyond the last image data in a row.

This is sometimes called the "stride in bytes" or the "line to line offset". For single-plane formats, this is the number of bytes per row of pixels. For multi-plane formats, this is the number of bytes per row of samples in plane 0 (for example, the number of bytes per row of luma samples in the case of a multi-plane YUV format). For multi-plane formats, the bytes per row in planes other than plane 0 is format specific, but always a specific relationship to the plane 0 bytes per row.

When sending ImageFormatConstraints to sysmem, setting this field is optional. Not setting this field is recommended unless the participant needs to force the bytes_per_row to be larger than the minimum value implied by min_size.width, the "stride bytes per width pixel" of the pixel_format plus pixel_format_modifier (see also ImageFormatStrideBytesPerWidthPixel), and bytes_per_row_divisor.

When this structure is received from sysmem, this field will always be set (when the parent structure is present), and will always be at least the value implied by min_size.width, the "stride bytes per width pixel" of the pixel_format plus pixel_format_modifier, and bytes_per_row_divisor.

Some producer participants may prefer to simply set ImageFormat.bytes_per_row to ImageFormatConstraints.min_bytes_per_row since sysmem is guaranteeing that min_bytes_per_row is compatible with an image of width min_size.width. However, producer participants that need to have size.width > min_size.width can get a corresponding min_bytes_per_row from ImageFormatMinimumRowBytes (in C++), or can just calculate the bytes_per_row directly.
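The relationship described above can be sketched numerically: the effective minimum bytes-per-row is at least `min_size.width` times the format's "stride bytes per width pixel", at least any requested `min_bytes_per_row`, and rounded up to `bytes_per_row_divisor`. This is an illustrative sketch, not the `ImageFormatMinimumRowBytes` implementation; `bytes_per_pixel` here stands in for the format-derived stride-bytes-per-width-pixel value.

```python
def effective_min_bytes_per_row(min_width, bytes_per_pixel,
                                bytes_per_row_divisor=1,
                                requested_min_bytes_per_row=0):
    """Lower bound on bytes_per_row implied by width, pixel size,
    divisor, and any explicitly requested minimum."""
    implied = min_width * bytes_per_pixel
    lower = max(implied, requested_min_bytes_per_row)
    # Round up to a multiple of bytes_per_row_divisor.
    return -(-lower // bytes_per_row_divisor) * bytes_per_row_divisor
```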

max_bytes_per_row uint32

The maximum number of bytes per row, including any padding beyond the last image data in a row.

When sent to sysmem, must be >= the value implied by max_size.width, "stride bytes per width pixel", and bytes_per_row_divisor, or constraints aggregation will fail. Un-set means the participant doesn't need/want to set a strict max.

Sending to sysmem, un-set is treated as 0xFFFFFFFF.

When received from sysmem, this field will always be set. If the max is effectively infinite, the value will be 0xFFFFFFFF (not zero).

max_width_times_height uint64

The maximum number of pixels.

The max image area in pixels is limited indirectly via fuchsia.sysmem2/BufferMemoryConstraints.max_size_bytes and the resulting fuchsia.sysmem2/BufferSettings.size_bytes, and can also be enforced directly via this field.

In contrast to the fuchsia.sysmem2/ImageFormatConstraints.max_size field which limits width and height separately, this field limits the total number of pixels.

In contrast to fuchsia.sysmem2/BufferMemoryConstraints.max_size_bytes, this field doesn't limit the number of non-pixel padding bytes after each row of pixels, and doesn't limit the number of non-pixel bytes in the case of tiled pixel_format_modifier.

Very narrow or very short image aspect ratios can have worse performance per pixel in comparison to more typical aspect ratios. Padding and/or memory bandwidth overheads tend to increase for extreme aspect ratios. Participants can indicate lack of support for very narrow or very short dimensions using fuchsia.sysmem2/ImageFormatConstraints.min_size.

Sending to sysmem, un-set is treated as 0xFFFFFFFF.

Receiving from sysmem, this field will always be set, and can be set to 0xFFFFFFFF.

size_alignment fuchsia.math/SizeU

Alignment requirements on the image size.

  • size.width % size_alignment.width must be 0.
  • size.height % size_alignment.height must be 0.

Un-set is treated as 1, 1.

display_rect_alignment fuchsia.math/SizeU

Alignment requirements on display_rect.

  • display_rect.x % display_rect_alignment.width must be 0.
  • display_rect.y % display_rect_alignment.height must be 0.
  • display_rect.width % display_rect_alignment.width must be 0.
  • display_rect.height % display_rect_alignment.height must be 0.

Un-set is treated as 1, 1.
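The modulo rules listed above for size_alignment and display_rect_alignment can be checked with a short sketch (illustrative only; un-set alignment is treated as 1, 1 as documented):

```python
def size_aligned(size, size_alignment=(1, 1)):
    """size.width % alignment.width == 0 and size.height % alignment.height == 0."""
    width, height = size
    a_w, a_h = size_alignment
    return width % a_w == 0 and height % a_h == 0

def display_rect_aligned(rect, display_rect_alignment=(1, 1)):
    """rect is (x, y, width, height); x and width align to alignment.width,
    y and height align to alignment.height."""
    x, y, w, h = rect
    a_w, a_h = display_rect_alignment
    return x % a_w == 0 and w % a_w == 0 and y % a_h == 0 and h % a_h == 0
```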

required_min_size fuchsia.math/SizeU

These fields can be used to ensure the aggregated constraints have min_size and max_size such that both required_min_size and required_max_size (and anything in between that satisfies alignment requirements) are permitted values of ImageFormat.size.

For example, a producer video decoder doesn't want to constrain the allowed ImageFormat.size, as a compressed stream can change dimensions mid-stream, but the producer video decoder needs to ensure that the aggregated constraints allow for at least the current dimensions of uncompressed frames at the current position in the stream.

As another example, an initiator that's intending to decode video may know the maximum expected size of frames in the stream(s), so by setting required_max_size it can ensure that the allocated buffers are large enough to support that max size. In addition, on successful allocation the initiator also knows that the consumer participants are ok with receiving up to that max size.

It's much more common for a producer or initiator to set these fields than for a consumer to set these fields.

While min_size and max_size aggregate by effectively taking the intersection, the required_min_size and required_max_size aggregate by effectively taking the union.

This field aggregates by taking the min per component, and required_max_size aggregates by taking the max per component.

Un-set is treated as 0xFFFFFFFF, 0xFFFFFFFF.

required_max_size fuchsia.math/SizeU

See also required_min_size. Un-set is treated as 0, 0.

bytes_per_row_divisor uint32

fuchsia_images2.ImageFormat.bytes_per_row % bytes_per_row_divisor must be 0. Un-set is treated as 1.

Prefer to use require_bytes_per_row_at_pixel_boundary when the intent is to ensure that bytes_per_row will be a multiple of the pixel size in bytes.

Prefer to use size_alignment.width when the intent is to ensure that the width in pixels is aligned. In contrast, this field can specify that the "stride in bytes" (byte offset from start of image to start of row n minus byte offset from start of image to start of row n-1, with result in bytes) needs to be aligned to the specified number of bytes. For example, when PixelFormat.BGR24 (24 bit color; 3 bytes per pixel) is used, it's not uncommon for a participant to need each row of pixels to start at a 4 byte aligned offset from the start of the image, which can imply some padding bytes at the end of each row of pixels, before the start of the next row of pixels.

While any value of bytes_per_row_divisor could instead be enforced by setting size_alignment.width to the least-common-multiple of the "stride bytes per width pixel" and the stride alignment requirement, enforcing the stride alignment requirement that way can lead to more padding than necessary (implying larger buffer than necessary), and can also result in a "fake" size.width; this field exists to avoid that situation. Instead, the stride alignment requirement in bytes is specified directly here.

start_offset_divisor uint32

vmo_usable_start % start_offset_divisor must be 0. Un-set is treated as 1.

Producer participants are discouraged from setting non-zero image start offset (from the buffer base) unless actually required, as not all participants correctly handle non-zero image start offset.

pixel_format_and_modifiers vector<PixelFormatAndModifier>[64]

The (additional) fuchsia.sysmem2/PixelFormatAndModifiers for which the following constraints apply.

As a non-limiting example, if a participant only wants to set a single PixelFormatAndModifier for this fuchsia.sysmem2/ImageFormatConstraints, the participant can either (a) use pixel_format and pixel_format_modifier fields to specify the fields of the one PixelFormatAndModifier and leave pixel_format_and_modifiers un-set, or (b) leave pixel_format and pixel_format_modifier fields un-set and put the one PixelFormatAndModifier in pixel_format_and_modifiers.

If pixel_format is set, the server will take pixel_format and pixel_format_modifier fields (un-setting them in the process), pack them into a PixelFormatAndModifier, and move it into this vector as one additional entry, with an overall size limit of MAX_COUNT_PIXEL_FORMAT_AND_MODIFIERS + 1.

After the server moves pixel_format, pixel_format_modifier into one additional entry in this vector, this vector must not be empty. When the resulting list has more than 1 item, the entries in this vector are equivalent to (shorthand for) listing (size) separate ImageFormatConstraints entries, one per pixel_format_and_modifiers entry, each with one PixelFormatAndModifier, where all the separate ImageFormatConstraints entries have the same constraints (compared field by field, not including pixel_format, pixel_format_modifier, or pixel_format_and_modifiers fields).

In SetConstraints message, each entry specifies a PixelFormatAndModifier which is acceptable to the participant (assuming the following constraints fields are also satisfied).

In the response to WaitForAllBuffersAllocated, this field will be un-set and the one chosen PixelFormatAndModifier will be indicated using the pixel_format and pixel_format_modifier fields.

All the PixelFormatAndModifiers in a SetConstraints message from a participant must be unique across all the entries under image_format_constraints. If fuchsia.images2.PixelFormat.DO_NOT_CARE is used in an entry, there must not be any other entry (considering all the entries under image_format_constraints) with matching pixel_format_modifier. If FORMAT_MODIFIER_DO_NOT_CARE is used, there must not be any other entry (considering all the entries under image_format_constraints) with matching pixel_format.

A PixelFormatAndModifier value with either DO_NOT_CARE or FORMAT_MODIFIER_DO_NOT_CARE (but not both, for purposes of this example) can be combined with a PixelFormatAndModifier from a separate participant with the other field indicating "do not care", resulting in a complete PixelFormatAndModifier that can succeed allocation. However, at least for now, it's not permitted for a single participant to specify two separate PixelFormatAndModifier values which have "do not care" in different fields. This does not prohibit a single PixelFormatAndModifier with both DO_NOT_CARE and FORMAT_MODIFIER_DO_NOT_CARE (which is only a single PixelFormatAndModifier value). If a client really needs to specify some constraints relevant to pixel_format(s) with pixel_format_modifier FORMAT_MODIFIER_DO_NOT_CARE, and other constraints relevant to pixel_format_modifier(s) with pixel_format DO_NOT_CARE, the client can do so by duplicating the token and using/driving two separate participants.

See also pixel_format for more comments relevant to multiple PixelFormatAndModifiers in a single ImageFormatConstraints.
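The uniqueness and "do not care" rules above can be sketched as a validity check. This is an illustrative model only, not the sysmem implementation or its FIDL bindings; the constant names are placeholders for fuchsia.images2.PixelFormat.DO_NOT_CARE and the format-modifier wildcard.

```python
# Placeholders for the real wildcard values; illustrative only.
DO_NOT_CARE = "DO_NOT_CARE"
FORMAT_MODIFIER_DO_NOT_CARE = "FORMAT_MODIFIER_DO_NOT_CARE"

def validate_entries(entries):
    """entries: list of (pixel_format, pixel_format_modifier) tuples from one
    participant's SetConstraints, across all image_format_constraints."""
    # Rule 1: every (pixel_format, pixel_format_modifier) pair must be unique.
    if len(set(entries)) != len(entries):
        return False
    for fmt, mod in entries:
        # Rule 2: a DO_NOT_CARE pixel_format forbids any other entry with a
        # matching pixel_format_modifier.
        if fmt == DO_NOT_CARE and sum(1 for _, m in entries if m == mod) > 1:
            return False
        # Rule 3: a FORMAT_MODIFIER_DO_NOT_CARE forbids any other entry with a
        # matching pixel_format.
        if mod == FORMAT_MODIFIER_DO_NOT_CARE and sum(1 for f, _ in entries if f == fmt) > 1:
            return False
    # Rule 4: a single participant may not mix entries that wildcard different
    # fields (use two participants via token duplication instead).
    has_fmt_wildcard = any(f == DO_NOT_CARE and m != FORMAT_MODIFIER_DO_NOT_CARE
                           for f, m in entries)
    has_mod_wildcard = any(m == FORMAT_MODIFIER_DO_NOT_CARE and f != DO_NOT_CARE
                           for f, m in entries)
    return not (has_fmt_wildcard and has_mod_wildcard)
```

Note that a single entry with both wildcards set passes rule 4, matching the text: it is only one PixelFormatAndModifier value.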

require_bytes_per_row_at_pixel_boundary bool

Iff set and true, bytes_per_row_divisor in the resulting ImageFormatConstraints is guaranteed to be a value which requires bytes_per_row to be an integral number of pixels. This can result in more padding at the end of each row than when this field is not set to true, but ensures that the stride can be expressed as an integral number of pixels.

For example, if the chosen PixelFormat is B8G8R8, if this field is set to true, the resulting bytes_per_row_divisor will be a multiple of 3. In this example, if another participant sets bytes_per_row_divisor to 4, the resulting bytes_per_row_divisor will be a multiple of 12.
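The arithmetic in the example above amounts to taking a least common multiple: a pixel boundary forces bytes_per_row to be a multiple of the bytes-per-pixel, and combining that with another participant's divisor yields the lcm of the two. A minimal sketch (the helper name is illustrative, not a sysmem API):

```python
import math

def combined_bytes_per_row_divisor(bytes_per_pixel, other_divisor):
    # Requiring bytes_per_row to land on a pixel boundary means it must be a
    # multiple of bytes_per_pixel; aggregating with another participant's
    # bytes_per_row_divisor takes the least common multiple of the two.
    return math.lcm(bytes_per_pixel, other_divisor)

# B8G8R8 is 3 bytes per pixel; another participant requires a divisor of 4,
# so the aggregated divisor is lcm(3, 4) = 12.
```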

NodeIsAlternateForRequest resource

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Field Type Description
node_ref handle<event>

NodeSetDebugClientInfoRequest

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Field Type Description
name string[256]
id uint64

NodeSetDebugTimeoutLogDeadlineRequest

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Field Type Description
deadline zx/Time

NodeSetNameRequest

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Field Type Description
priority uint32
name string[64]

NodeSetWeakOkRequest resource

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Field Type Description
for_child_nodes_also bool

Node_GetBufferCollectionId_Response

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Field Type Description
buffer_collection_id uint64

Node_GetNodeRef_Response resource

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Field Type Description
node_ref handle<event>

Node_IsAlternateFor_Response

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Field Type Description
is_alternate bool

SingleBufferSettings

Defined in fuchsia.sysmem2/results.fidl

These settings and constraints apply to all the buffers in the collection.

Added: 19

Ordinal Field Type Description
buffer_settings BufferMemorySettings

This field will always be set by sysmem.

image_format_constraints ImageFormatConstraints

Buffers holding data that is not uncompressed image data will not have this field set. Buffers holding data that is uncompressed image data may have this field set.

At least for now, changing the PixelFormat requires re-allocating buffers.

If un-set, there are no image format constraints.

VmoBuffer resource

Defined in fuchsia.sysmem2/results.fidl

Added: 19

Ordinal Field Type Description
vmo handle<vmo>

vmo can be un-set if a participant has only fuchsia.sysmem2/BufferUsage.none set to NONE_USAGE (explicitly or implicitly by fuchsia.sysmem2/BufferCollection.SetConstraints without constraints set).

vmo_usable_start uint64

Offset within the VMO of the first usable byte. Must be < the VMO's size in bytes, and leave sufficient room for BufferMemorySettings.size_bytes before the end of the VMO.

Currently sysmem will always set this field to 0, and in future, sysmem won't set this field to a non-zero value unless all participants have explicitly indicated support for non-zero vmo_usable_start (this mechanism does not exist as of this comment). A participant that hasn't explicitly indicated support for non-zero vmo_usable_start (all current clients) should implicitly assume this field is set to 0 without actually checking this field.
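The invariant described for vmo_usable_start can be written as a simple check. This is an illustrative sketch of the rule as stated above, not sysmem's code; the function name and parameters are placeholders.

```python
def vmo_usable_start_is_valid(vmo_size, vmo_usable_start, size_bytes):
    # The offset of the first usable byte must be strictly less than the VMO's
    # size, and must leave room for size_bytes usable bytes before the end of
    # the VMO.
    return (vmo_usable_start < vmo_size
            and vmo_usable_start + size_bytes <= vmo_size)

# Current clients can assume vmo_usable_start == 0, which trivially satisfies
# the invariant for any size_bytes <= vmo_size.
```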

close_weak_asap handle<eventpair>

This field is set iff vmo is a sysmem weak VMO handle. The client must keep close_weak_asap around for as long as vmo, and must notice ZX_EVENTPAIR_PEER_CLOSED. If that signal occurs, the client must close vmo asap. Not doing so is considered a VMO leak by the client and in that case sysmem will eventually complain loudly via syslog (currently 5s later).

UNIONS

Allocator_GetVmoInfo_Result strict resource

Defined in fuchsia.sysmem2/allocator.fidl

Ordinal Variant Type Description
response Allocator_GetVmoInfo_Response
err Error
framework_err internal

Allocator_ValidateBufferCollectionToken_Result strict

Defined in fuchsia.sysmem2/allocator.fidl

Ordinal Variant Type Description
response Allocator_ValidateBufferCollectionToken_Response
framework_err internal

BufferCollectionTokenGroup_CreateChildrenSync_Result strict resource

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Variant Type Description
response BufferCollectionTokenGroup_CreateChildrenSync_Response
framework_err internal

BufferCollectionToken_DuplicateSync_Result strict resource

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Variant Type Description
response BufferCollectionToken_DuplicateSync_Response
framework_err internal

BufferCollection_CheckAllBuffersAllocated_Result strict

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Variant Type Description
response BufferCollection_CheckAllBuffersAllocated_Response
err Error
framework_err internal

BufferCollection_WaitForAllBuffersAllocated_Result strict resource

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Variant Type Description
response BufferCollection_WaitForAllBuffersAllocated_Response
err Error
framework_err internal

Node_GetBufferCollectionId_Result strict

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Variant Type Description
response Node_GetBufferCollectionId_Response
framework_err internal

Node_GetNodeRef_Result strict resource

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Variant Type Description
response Node_GetNodeRef_Response
framework_err internal

Node_IsAlternateFor_Result strict

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Variant Type Description
response Node_IsAlternateFor_Response
err Error
framework_err internal

Node_Sync_Result strict

Defined in fuchsia.sysmem2/collection.fidl

Ordinal Variant Type Description
response Node_Sync_Response
framework_err internal

CONSTANTS

Name Value Type Description
CPU_USAGE_READ 1 uint32
Added: 19
CPU_USAGE_READ_OFTEN 2 uint32
Added: 19
CPU_USAGE_WRITE 4 uint32
Added: 19
CPU_USAGE_WRITE_OFTEN 8 uint32
Added: 19
DISPLAY_USAGE_CURSOR 2 uint32
Added: 19
DISPLAY_USAGE_LAYER 1 uint32
Added: 19
MAX_CLIENT_NAME_LENGTH 256 int32

The max length in bytes of the name request field in fuchsia.sysmem2/Allocator.SetDebugClientInfo and fuchsia.sysmem2/Node.SetDebugClientInfo.

Added: 19
MAX_COUNT_BUFFER_COLLECTION_CONSTRAINTS_IMAGE_FORMAT_CONSTRAINTS 64 uint32

The maximum size of fuchsia.sysmem2/BufferCollectionConstraints.image_format_constraints.

Added: 19
MAX_COUNT_BUFFER_COLLECTION_INFO_BUFFERS 128 uint32

The maximum number of entries allowed in the fuchsia.sysmem2/BufferCollectionInfo.buffers field.

Added: 19
MAX_COUNT_BUFFER_MEMORY_CONSTRAINTS_PERMITTED_HEAPS 64 uint32

The maximum size of fuchsia.sysmem2/BufferMemoryConstraints.permitted_heaps.

Added: 19
MAX_COUNT_CREATE_CHILDREN 64 int32

The maximum number of token children of an OR group that can be created per call to fuchsia.sysmem2/BufferCollectionTokenGroup.CreateChildrenSync.

Actually creating this many children isn't recommended in most typical scenarios, but isn't prevented, for testing reasons, and just in case an unusual scenario needs it. Mitigation of potentially high time complexity in sysmem will limit the actual number of group child combinations considered in aggregation attempts to a separate maximum that is not settable via sysmem protocols. The maximum number of total nodes in a sysmem token tree is limited to a separate maximum that is not settable via these protocols.

Added: 19
MAX_COUNT_DUPLICATES 64 uint32
Added: 19
MAX_COUNT_IMAGE_FORMAT_CONSTRAINTS_COLOR_SPACES 32 uint32

The maximum size of fuchsia.sysmem2/ImageFormatConstraints.color_spaces.

MAX_COUNT_PIXEL_FORMAT_AND_MODIFIERS 64 uint32

The maximum size of fuchsia.sysmem2/ImageFormatConstraints.pixel_format_and_modifiers.

NONE_USAGE 1 uint32
Added: 19
VIDEO_USAGE_CAPTURE 8 uint32
Added: 19
VIDEO_USAGE_DECRYPTOR_OUTPUT 16 uint32
Added: 19
VIDEO_USAGE_HW_DECODER 1 uint32
Added: 19
VIDEO_USAGE_HW_DECODER_INTERNAL 32 uint32
Added: 19
VIDEO_USAGE_HW_ENCODER 2 uint32
Added: 19
VULKAN_BUFFER_USAGE_INDEX_BUFFER 4194304 uint32
Added: 19
VULKAN_BUFFER_USAGE_INDIRECT_BUFFER 16777216 uint32
Added: 19
VULKAN_BUFFER_USAGE_STORAGE_BUFFER 2097152 uint32
Added: 19
VULKAN_BUFFER_USAGE_STORAGE_TEXEL_BUFFER 524288 uint32
Added: 19
VULKAN_BUFFER_USAGE_TRANSFER_DST 131072 uint32
Added: 19
VULKAN_BUFFER_USAGE_TRANSFER_SRC 65536 uint32
Added: 19
VULKAN_BUFFER_USAGE_UNIFORM_BUFFER 1048576 uint32
Added: 19
VULKAN_BUFFER_USAGE_UNIFORM_TEXEL_BUFFER 262144 uint32
Added: 19
VULKAN_BUFFER_USAGE_VERTEX_BUFFER 8388608 uint32
Added: 19
VULKAN_IMAGE_USAGE_COLOR_ATTACHMENT 16 uint32
Added: 19
VULKAN_IMAGE_USAGE_INPUT_ATTACHMENT 128 uint32
Added: 19
VULKAN_IMAGE_USAGE_SAMPLED 4 uint32
Added: 19
VULKAN_IMAGE_USAGE_STENCIL_ATTACHMENT 32 uint32
Added: 19
VULKAN_IMAGE_USAGE_STORAGE 8 uint32
Added: 19
VULKAN_IMAGE_USAGE_TRANSFER_DST 2 uint32
Added: 19
VULKAN_IMAGE_USAGE_TRANSFER_SRC 1 uint32
Added: 19
VULKAN_IMAGE_USAGE_TRANSIENT_ATTACHMENT 64 uint32
Added: 19
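The CPU, display, video, and Vulkan usage constants above are single-bit flags intended to be combined with bitwise OR into the corresponding BufferUsage fields. A minimal sketch of how flag values combine and are tested (values copied from the table above; the helper is illustrative, not a sysmem API):

```python
# Values from the constants table above.
CPU_USAGE_READ = 1
CPU_USAGE_WRITE = 4
VULKAN_IMAGE_USAGE_TRANSFER_SRC = 1
VULKAN_IMAGE_USAGE_SAMPLED = 4

def has_usage(usage_bits, flag):
    # Each constant occupies a distinct bit, so a flag is present iff its bit
    # survives a bitwise AND.
    return (usage_bits & flag) != 0

# A participant that reads and writes via the CPU, but not "often":
cpu_usage = CPU_USAGE_READ | CPU_USAGE_WRITE  # 1 | 4 == 5
```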