fuchsia.images

Added: 7

PROTOCOLS

ImagePipe2

Defined in fuchsia.images/image_pipe2.fidl

ImagePipe2 is a mechanism for streaming shared images between a producer and a consumer, which may be running in different processes.

Conceptually, the image pipe maintains a table of image resources supplied by the producer, into which graphical content may be stored, as well as a presentation queue containing the sequence of images that the producer has asked the consumer to present.

The presentation queue is initially empty.

Each entry in the presentation queue consists of an image together with a pair of optional synchronization fences:

  • Acquire fence: signaled by the producer when the image is ready to be consumed.
  • Release fence: signaled by the consumer when the image is no longer in use and may be freed or modified by the producer.

The producer performs the following sequence of steps to present content (a code sketch of the presentation loop follows the list):

  • Allocate and add some number of BufferCollections to the image pipe to allow the consumer to set constraints.
  • Allocate and add some number of images (often 2 or 3) to the image pipe to establish a pool using AddImage().
  • Obtain the next available image from the pool.
  • Ask the consumer to enqueue an image for presentation and provide fences using PresentImage().
  • Start rendering the image.
  • Signal the image's acquire fence when rendering is complete.
  • Loop to present more images, listening for signals on release fences to recycle images back into the pool.
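
A minimal sketch of the per-frame part of this loop, assuming the HLCPP C++ bindings (fuchsia::images::ImagePipe2Ptr) and an already-populated image pool; Render() and the fence bookkeeping are hypothetical placeholders, not part of this API:

```cpp
#include <fuchsia/images/cpp/fidl.h>
#include <lib/zx/event.h>
#include <lib/zx/time.h>
#include <zircon/rights.h>
#include <zircon/syscalls.h>

#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical helper: renders a frame into the buffer backing `image_id`.
void Render(uint32_t image_id);

// Presents one frame using `image_id` from the pool. Assumes `pipe` is bound
// and the image was previously added with AddImage().
void PresentNextFrame(fuchsia::images::ImagePipe2Ptr& pipe, uint32_t image_id,
                      zx::event& previous_release_fence) {
  // Recycle: wait until the consumer has released this image from its last use.
  if (previous_release_fence.is_valid()) {
    previous_release_fence.wait_one(ZX_EVENT_SIGNALED, zx::time::infinite(),
                                    nullptr);
  }

  // Create fresh fences for this presentation and duplicate them for the pipe.
  zx::event acquire, release;
  zx::event::create(0, &acquire);
  zx::event::create(0, &release);
  std::vector<zx::event> acquire_fences, release_fences;
  acquire_fences.emplace_back();
  release_fences.emplace_back();
  acquire.duplicate(ZX_RIGHT_SAME_RIGHTS, &acquire_fences[0]);
  release.duplicate(ZX_RIGHT_SAME_RIGHTS, &release_fences[0]);

  // Enqueue the image; the consumer will not show it until the acquire fence
  // is signaled.
  pipe->PresentImage(image_id, zx_clock_get_monotonic(),
                     std::move(acquire_fences), std::move(release_fences),
                     [](fuchsia::images::PresentationInfo info) {});

  // Render, then signal that the image content is ready.
  Render(image_id);
  acquire.signal(0u, ZX_EVENT_SIGNALED);

  // Keep the release fence so the next use of this image can wait on it.
  previous_release_fence = std::move(release);
}
```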

The consumer performs the following sequence of steps for each image that is enqueued in the presentation queue (see the sketch after the list):

  • Await signals on the image's acquire fence.
  • If the fence wait cannot be satisfied or if some other error is detected, close the image pipe. Otherwise, begin presenting the image's content.
  • Retire the previously presented image (if any) from the presentation queue and signal its release fence when no longer needed.
  • Continue presenting the same image until the next one is ready. Loop.
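
The consumer's fence handling can be sketched with plain zx::event waits and signals. This is illustrative only (in practice the consumer is usually Scenic, and it waits asynchronously rather than blocking); PresentEntry is a hypothetical bookkeeping struct:

```cpp
#include <lib/zx/event.h>
#include <lib/zx/time.h>
#include <zircon/types.h>

#include <cstdint>
#include <optional>
#include <utility>
#include <vector>

// Hypothetical bookkeeping for one presentation-queue entry.
struct PresentEntry {
  uint32_t image_id;
  std::vector<zx::event> acquire_fences;
  std::vector<zx::event> release_fences;
};

// Blocks until `entry` may be shown, then retires the previously shown entry.
std::optional<PresentEntry> ShowNext(
    PresentEntry entry, std::optional<PresentEntry> currently_shown) {
  // 1. Await signals on the image's acquire fences.
  for (auto& fence : entry.acquire_fences) {
    zx_status_t status =
        fence.wait_one(ZX_EVENT_SIGNALED, zx::time::infinite(), nullptr);
    if (status != ZX_OK) {
      // Fence wait failed: a real consumer would close the image pipe here.
      return currently_shown;
    }
  }

  // 2. Begin presenting entry.image_id (display integration omitted).

  // 3. Retire the previously presented image and signal its release fences.
  if (currently_shown) {
    for (auto& fence : currently_shown->release_fences) {
      fence.signal(0u, ZX_EVENT_SIGNALED);
    }
  }

  // 4. Keep presenting this image until the next one is ready.
  return entry;
}
```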

If the producer wants to close the image pipe, it should:

  • Close its side of the connection.
  • Wait on all release fences for buffers that it has submitted with PresentImage().
  • Proceed with resource cleanup.

When the consumer detects the image pipe has closed, it should:

  • Stop using/presenting any images from the pipe.
  • Unmap all memory objects associated with the images in the pipe.
  • Close all BufferCollection resources.
  • Signal all release fences for presented and queued buffers.
  • Close all handles to fences.
  • Close its side of the connection.

When either party detects that a fence has been abandoned (remotely closed without being signaled) it should assume that the associated image is in an indeterminate state. This will typically happen when the other party (or one of its delegates) has crashed. The safest course of action is to close the image pipe, release all resources which were shared with the other party, and re-establish the connection to recover.

Deprecated: 13

AddBufferCollection

Adds a BufferCollection resource to the image pipe.

The producer is expected to set constraints on this resource for images added via AddImage(). The consumer can set its constraints on buffer_collection_token before or after. Note that the buffers won't be allocated until all BufferCollectionToken instances are used to set constraints, on both the producer and consumer side. See collection.fidl for details. A sketch of this handshake follows the request table.

The following errors will cause the connection to be closed:

  • buffer_collection_id is already registered

Request

Name                     Type
buffer_collection_id     uint32
buffer_collection_token  fuchsia.sysmem/BufferCollectionToken
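
A sketch of this registration handshake, assuming the HLCPP bindings for fuchsia.sysmem (Allocator, BufferCollectionToken, BufferCollection) and an already-bound ImagePipe2Ptr; the constraints are elided and the helper name is hypothetical:

```cpp
#include <fuchsia/images/cpp/fidl.h>
#include <fuchsia/sysmem/cpp/fidl.h>
#include <zircon/rights.h>

#include <cstdint>
#include <utility>

// Registers a BufferCollection with the image pipe and returns the producer's
// BufferCollection channel so the producer can set its own constraints.
fuchsia::sysmem::BufferCollectionSyncPtr RegisterBufferCollection(
    fuchsia::sysmem::AllocatorSyncPtr& sysmem_allocator,
    fuchsia::images::ImagePipe2Ptr& pipe, uint32_t buffer_collection_id) {
  // Allocate a shared collection and duplicate a token for the consumer.
  fuchsia::sysmem::BufferCollectionTokenSyncPtr token;
  sysmem_allocator->AllocateSharedCollection(token.NewRequest());

  fidl::InterfaceHandle<fuchsia::sysmem::BufferCollectionToken> consumer_token;
  token->Duplicate(ZX_RIGHT_SAME_RIGHTS, consumer_token.NewRequest());
  token->Sync();  // Ensure sysmem knows about the duplicate before handing it off.

  // The consumer sets its constraints through this token.
  pipe->AddBufferCollection(buffer_collection_id, std::move(consumer_token));

  // Bind the producer's token to a BufferCollection and set constraints on it.
  // Buffers are not allocated until both sides have set constraints.
  fuchsia::sysmem::BufferCollectionSyncPtr collection;
  sysmem_allocator->BindSharedCollection(token.Unbind(), collection.NewRequest());
  fuchsia::sysmem::BufferCollectionConstraints constraints = {};
  // ... fill in the producer's image and memory constraints here ...
  collection->SetConstraints(true, constraints);
  return collection;
}
```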

AddImage

Adds an image resource to the image pipe.

buffer_collection_id refers to the BufferCollectionToken instance registered via AddBufferCollection(). The memory objects allocated for that collection are used to hold the image data; buffer_collection_index is the index of the memory object allocated within the BufferCollection.

image_format specifies the image properties; coded_width and coded_height set the image dimensions. (A sketch follows the request table.)

It is valid to create multiple images backed by the same memory object; they may even overlap. Consumers must detect this and handle it accordingly.

The following errors will cause the connection to be closed:

  • image_id is already registered
  • buffer_collection_id refers to an unregistered BufferCollection.
  • buffer_collection_index points to a resource index outside the bounds of the initialized BufferCollection
  • No resource is allocated in the registered BufferCollection.

Request

Name                     Type
image_id                 uint32
buffer_collection_id     uint32
buffer_collection_index  uint32
image_format             fuchsia.sysmem/ImageFormat_2
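
A sketch of adding one image per allocated buffer, assuming the HLCPP bindings and a collection registered as above; the BGRA32 format, image IDs, and 4-bytes-per-pixel stride are illustrative choices, not requirements:

```cpp
#include <fuchsia/images/cpp/fidl.h>
#include <fuchsia/sysmem/cpp/fidl.h>

#include <cstdint>
#include <utility>

// Adds one image for each buffer that sysmem allocated in the collection.
// `buffer_count` would typically come from BufferCollectionInfo_2.
void AddImages(fuchsia::images::ImagePipe2Ptr& pipe,
               uint32_t buffer_collection_id, uint32_t buffer_count,
               uint32_t width, uint32_t height) {
  for (uint32_t index = 0; index < buffer_count; ++index) {
    fuchsia::sysmem::ImageFormat_2 format = {};
    format.pixel_format.type = fuchsia::sysmem::PixelFormatType::BGRA32;
    format.coded_width = width;    // Image dimensions come from coded_width
    format.coded_height = height;  // and coded_height.
    format.bytes_per_row = width * 4;  // Assumes a linear 4-byte-per-pixel layout.

    const uint32_t image_id = index + 1;  // Image IDs are chosen by the producer.
    pipe->AddImage(image_id, buffer_collection_id, index, std::move(format));
  }
}
```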

PresentImage

Enqueues the specified image for presentation by the consumer.

The acquire_fences are a set of fences which must all be signaled by the producer before the consumer presents the image. The release_fences are a set of fences which inform the producer that it's safe to free or modify the image_id image, and it's safe to re-use the fences in acquire_fences. The consumer must signal all the fences in release_fences after image_id is no longer being presented. The producer may reuse resources after any of the release_fences is signaled.

This design allows a producer to distribute image processing across multiple threads or processes without unnecessary coordination delay. Each thread or process signals its own fence in acquire_fences when it's done rendering its piece of image_id, and waits on its own fence in release_fences before rendering new content into image_id. (A sketch of this pattern follows the response table.)

presentation_time specifies the time on or after which the client would like the enqueued operations to take visible effect (light up pixels on the screen), expressed in nanoseconds in the CLOCK_MONOTONIC timebase. Desired presentation times must be monotonically non-decreasing.

presentation_info returns timing information about the submitted frame and future frames (see presentation_info.fidl).

The producer may decide not to signal acquire_fences for an image. In that case, if a later image is enqueued and that later image's presentation_time is reached, the consumer presents the later image once its acquire_fences are signaled. The consumer also signals the earlier image's release_fences and removes it from the presentation queue. This sequence works as a cancellation mechanism.

The following errors will cause the connection to be closed:

  • image_id does not reference a currently registered image resource

Request

Name               Type
image_id           uint32
presentation_time  uint64
acquire_fences     vector<event>[16]
release_fences     vector<event>[16]

Response

Name               Type
presentation_info  PresentationInfo
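
To illustrate the multi-fence case described above, a sketch that splits rendering of one image across two threads, each signaling its own acquire fence; RenderHalf() is a hypothetical placeholder and the HLCPP bindings are assumed:

```cpp
#include <fuchsia/images/cpp/fidl.h>
#include <lib/zx/event.h>
#include <zircon/rights.h>
#include <zircon/syscalls.h>

#include <cstdint>
#include <thread>
#include <utility>
#include <vector>

// Hypothetical helper: renders one half of the image backing `image_id`.
void RenderHalf(uint32_t image_id, int half);

// Splits rendering of one image across two threads, each with its own
// acquire fence; the consumer presents only after both are signaled.
void PresentSplitFrame(fuchsia::images::ImagePipe2Ptr& pipe, uint32_t image_id,
                       std::vector<zx::event> release_fences) {
  std::vector<zx::event> acquire_fences(2);
  std::vector<zx::event> worker_fences(2);
  for (int i = 0; i < 2; ++i) {
    zx::event::create(0, &worker_fences[i]);
    worker_fences[i].duplicate(ZX_RIGHT_SAME_RIGHTS, &acquire_fences[i]);
  }

  // Enqueue first; presentation waits until all acquire fences are signaled.
  pipe->PresentImage(image_id, zx_clock_get_monotonic(),
                     std::move(acquire_fences), std::move(release_fences),
                     [](fuchsia::images::PresentationInfo info) {});

  // Each worker signals its own fence when its piece is rendered.
  std::vector<std::thread> workers;
  for (int i = 0; i < 2; ++i) {
    workers.emplace_back([&, i] {
      RenderHalf(image_id, i);
      worker_fences[i].signal(0u, ZX_EVENT_SIGNALED);
    });
  }
  for (auto& w : workers) w.join();
}
```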

RemoveBufferCollection

Removes a BufferCollection resource from the pipe.

The buffer_collection_id resource is detached, as are all Images associated with that BufferCollection. This has the same result as calling RemoveImage() on every Image for buffer_collection_id.

The producer must wait for all release fences associated with the Images to be signaled before freeing or modifying the underlying memory object since the image may still be in use in the presentation queue.

The following errors will cause the connection to be closed:

  • buffer_collection_id does not reference a currently registered BufferCollection

Request

Name                  Type
buffer_collection_id  uint32

RemoveImage

Removes an image resource from the pipe.

The image_id is detached from the image resource and is free to be reused to add a new image resource.

Removing an image from the image pipe does not affect the presentation queue or the currently presented image.

The producer must wait for all release fences associated with the image to be signaled before freeing or modifying the underlying memory object since the image may still be in use in the presentation queue.

The following errors will cause the connection to be closed:

  • image_id does not reference a currently registered image resource

Request

Name      Type
image_id  uint32
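
A small sketch of the safe removal pattern, assuming the HLCPP bindings and that the producer kept the release fences it passed to PresentImage() for this image:

```cpp
#include <fuchsia/images/cpp/fidl.h>
#include <lib/zx/event.h>
#include <lib/zx/time.h>

#include <cstdint>
#include <vector>

// Removes an image only after the consumer has signaled every release fence
// the producer holds for it, so the backing memory is safe to free or reuse.
void RemoveImageWhenReleased(fuchsia::images::ImagePipe2Ptr& pipe,
                             uint32_t image_id,
                             std::vector<zx::event>& release_fences) {
  for (auto& fence : release_fences) {
    fence.wait_one(ZX_EVENT_SIGNALED, zx::time::infinite(), nullptr);
  }
  pipe->RemoveImage(image_id);
  // The underlying memory object may now be freed or modified.
}
```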

STRUCTS

EncodedImage resource

Defined in fuchsia.images/encoded_image.fidl

Removed: 14 Deprecated: 13

vmo handle<vmo> (no default)
    The vmo.

size uint64 (no default)
    The size of the image in the vmo, in bytes.

ImageInfo

Defined in fuchsia.images/image_info.fidl

Information about a graphical image (texture) including its format and size.

transform Transform (default: Transform.NORMAL)
    Specifies if the image should be mirrored before displaying.

width uint32 (no default)
height uint32 (no default)
    The width and height of the image in pixels.

stride uint32 (no default)
    The number of bytes per row in the image buffer.

pixel_format PixelFormat (default: PixelFormat.BGRA_8)
    The pixel format of the image.

color_space ColorSpace (default: ColorSpace.SRGB)
    The pixel color space.

tiling Tiling (default: Tiling.LINEAR)
    The pixel arrangement in memory.

alpha_format AlphaFormat (default: AlphaFormat.OPAQUE)
    Specifies the interpretation of the alpha channel, if one exists.
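
For illustration, filling in an ImageInfo for a linear, opaque BGRA_8 image under the HLCPP bindings (a sketch), where the stride is simply the width times the four bytes per BGRA_8 pixel:

```cpp
#include <fuchsia/images/cpp/fidl.h>

#include <cstdint>

// Describes a linear, opaque BGRA_8 image of the given size.
fuchsia::images::ImageInfo MakeBgraImageInfo(uint32_t width, uint32_t height) {
  fuchsia::images::ImageInfo info = {};
  info.transform = fuchsia::images::Transform::NORMAL;
  info.width = width;
  info.height = height;
  info.stride = width * 4;  // 4 bytes per BGRA_8 pixel, no row padding.
  info.pixel_format = fuchsia::images::PixelFormat::BGRA_8;
  info.color_space = fuchsia::images::ColorSpace::SRGB;
  info.tiling = fuchsia::images::Tiling::LINEAR;
  info.alpha_format = fuchsia::images::AlphaFormat::OPAQUE;
  return info;
}
```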

PresentationInfo

Defined in fuchsia.images/presentation_info.fidl

Information returned by methods such as ImagePipe.PresentImage() and Session.Present(), when the consumer begins preparing the first frame which includes the presented content.

Deprecated: 13

presentation_time uint64 (no default)
    The actual time at which the enqueued operations are anticipated to take visible effect, expressed in nanoseconds in the CLOCK_MONOTONIC timebase.

    This value increases monotonically with each new frame, typically in increments of the presentation_interval.

presentation_interval uint64 (no default)
    The nominal amount of time which is anticipated to elapse between successively presented frames, expressed in nanoseconds. When rendering to a display, the interval will typically be derived from the display refresh rate.

    This value is non-zero. It may vary from time to time, such as when changing display modes.
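
As an example of how these fields are commonly combined, a producer targeting the frame after the one just reported can add presentation_interval to presentation_time (a sketch, assuming the HLCPP bindings):

```cpp
#include <fuchsia/images/cpp/fidl.h>

#include <cstdint>

// Computes a target time for the next frame: one interval past the time the
// consumer reported for the frame that was just scheduled.
uint64_t NextTargetPresentationTime(
    const fuchsia::images::PresentationInfo& info) {
  return info.presentation_time + info.presentation_interval;
}
```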

ENUMS

AlphaFormat strict

Type: uint32

Defined in fuchsia.images/image_info.fidl

Specifies how alpha information should be interpreted.

OPAQUE (0)
    Image is considered to be opaque. The alpha channel is ignored. Blend function: src.RGB

PREMULTIPLIED (1)
    Color channels have been premultiplied by alpha. Blend function: src.RGB + (dest.RGB * (1 - src.A))

NON_PREMULTIPLIED (2)
    Color channels have not been premultiplied by alpha. Blend function: (src.RGB * src.A) + (dest.RGB * (1 - src.A))
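
The three blend functions above, written out as a small sketch over normalized color values in [0.0, 1.0]; the struct and helper names are illustrative:

```cpp
// Normalized color values in [0.0, 1.0].
struct Rgb {
  float r, g, b;
};

Rgb Scale(Rgb c, float s) { return {c.r * s, c.g * s, c.b * s}; }
Rgb Add(Rgb a, Rgb b) { return {a.r + b.r, a.g + b.g, a.b + b.b}; }

// OPAQUE: alpha is ignored, the source replaces the destination.
Rgb BlendOpaque(Rgb src, float /*src_a*/, Rgb /*dest*/) { return src; }

// PREMULTIPLIED: src.RGB + (dest.RGB * (1 - src.A))
Rgb BlendPremultiplied(Rgb src, float src_a, Rgb dest) {
  return Add(src, Scale(dest, 1.0f - src_a));
}

// NON_PREMULTIPLIED: (src.RGB * src.A) + (dest.RGB * (1 - src.A))
Rgb BlendNonPremultiplied(Rgb src, float src_a, Rgb dest) {
  return Add(Scale(src, src_a), Scale(dest, 1.0f - src_a));
}
```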

ColorSpace strict

Type: uint32

Defined in fuchsia.images/image_info.fidl

Specifies how pixel color information should be interpreted.

SRGB (0)

MemoryType strict

Type: uint32

Defined in fuchsia.images/memory_type.fidl

Specifies the type of VMO's memory.

Deprecated: 13

HOST_MEMORY (0)
    VMO is regular host CPU memory.

VK_DEVICE_MEMORY (1)
    VMO can be imported as a VkDeviceMemory by calling vkAllocateMemory with a VkImportMemoryFuchsiaHandleInfoKHR wrapped in a VkMemoryAllocateInfo.

PixelFormat strict

Type: uint32

Defined in fuchsia.images/image_info.fidl

Specifies how pixels are represented in the image buffer.

BGRA_8 (0)
    A 32-bit four-component unsigned integer format. Byte order: B, G, R, A (little-endian ARGB packed 32-bit word). Equivalent to Skia kBGRA_8888_SkColorType color type. Equivalent to Zircon ARGB_8888 pixel format on little-endian arch.

YUY2 (1)
    4:2:2 (2x down-sampled UV horizontally; full-res UV vertically).

    A 32-bit component that contains information for 2 pixels. Byte order: Y1, U, Y2, V. Unpacks to 2 RGB pixels, where RGB1 = func(Y1, U, V) and RGB2 = func(Y2, U, V). Equivalent to YUV422.

NV12 (2)
    4:2:0 (2x down-sampled UV in both directions).

    Offset 0: 8-bit-per-pixel Y plane with bytes YYY. Offset height * stride: 8-bit UV data interleaved as bytes UVUVUV.

    The Y plane has line stride >= width.

    In this context, both width and height are required to be even.

    The UV data is separated into "lines", with each "line" having the same byte width as a line of Y data and the same "line" stride as the Y data's line stride. The UV data has height / 2 "lines".

    In converting to RGB, the UV data gets up-scaled by 2x in both directions overall. This comment is intentionally silent on exactly how UV up-scaling phase/filtering/signal processing works, as it's a complicated topic that can vary by implementation, typically trading off speed and quality of the up-scaling. See comments in relevant conversion code for the approach taken by any given convert path. The precise relative phase of the UV data is not presently conveyed.

YV12 (3)
    Like I420, except with V and U swapped.

    4:2:0 (2x down-sampled UV in both directions).

    Offset 0: 8-bit-per-pixel Y plane with bytes YYY. Offset height * stride: 8-bit V data with uv_stride = stride / 2. Offset height * stride + uv_stride * height / 2: 8-bit U data with uv_stride = stride / 2.

    The Y plane has line stride >= width.

    Both width and height are required to be even.

R8G8B8A8 (4)
    A 32-bit four-component unsigned integer format. Byte order: R, G, B, A (little-endian ABGR packed 32-bit word). Equivalent to Skia kRGBA_8888_SkColorType color type. Equivalent to Zircon ABGR_8888 pixel format on little-endian arch.

    This format can only be used with VK_DEVICE_MEMORY.
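
The NV12 and YV12 plane layouts above reduce to simple offset arithmetic; a sketch, assuming an even width and height and a Y line stride of at least width (the struct and function names are illustrative):

```cpp
#include <cstdint>

struct PlaneLayout {
  uint64_t y_offset;       // 8-bit Y plane, `stride` bytes per line.
  uint64_t second_offset;  // NV12: interleaved UV plane. YV12: V plane.
  uint64_t third_offset;   // YV12 only: U plane. Unused for NV12.
  uint64_t total_size;
};

// NV12: Y plane at offset 0, interleaved UV bytes at offset height * stride,
// with height / 2 UV "lines" of the same stride as the Y plane.
PlaneLayout Nv12Layout(uint32_t height, uint32_t stride) {
  PlaneLayout layout = {};
  layout.y_offset = 0;
  layout.second_offset = uint64_t{height} * stride;
  layout.total_size = layout.second_offset + uint64_t{height} / 2 * stride;
  return layout;
}

// YV12: Y plane at offset 0, V plane at height * stride with
// uv_stride = stride / 2, then the U plane after height / 2 V lines.
PlaneLayout Yv12Layout(uint32_t height, uint32_t stride) {
  const uint64_t uv_stride = stride / 2;
  PlaneLayout layout = {};
  layout.y_offset = 0;
  layout.second_offset = uint64_t{height} * stride;                       // V plane
  layout.third_offset = layout.second_offset + uv_stride * (height / 2);  // U plane
  layout.total_size = layout.third_offset + uv_stride * (height / 2);
  return layout;
}
```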

Tiling strict

Type: uint32

Defined in fuchsia.images/image_info.fidl

Specifies how pixels are arranged in memory.

LINEAR (0)
    Pixels are packed linearly. Equivalent to VK_IMAGE_TILING_LINEAR.

GPU_OPTIMAL (1)
    Pixels are packed in a GPU-dependent optimal format. Equivalent to VK_IMAGE_TILING_OPTIMAL.

Transform strict

Type: uint32

Defined in fuchsia.images/image_info.fidl

NORMAL (0)
    Pixels are displayed normally.

FLIP_HORIZONTAL (1)
    Pixels are mirrored left-right.

FLIP_VERTICAL (2)
    Pixels are flipped vertically.

FLIP_VERTICAL_AND_HORIZONTAL (3)
    Pixels are flipped vertically and mirrored left-right.

CONSTANTS

Name                             Value  Type
MAX_ACQUIRE_RELEASE_FENCE_COUNT  16     int32

Deprecated: 13