
What does a display controller do?

Display controllers (DCs) are responsible for compositing one or more source images into a display output signal. Some DCs are capable of producing multiple output signals in parallel, e.g. to drive multiple monitors. This document briefly explains common hardware components to help you understand the often brief and uncommented datasheets available for driver development.

The subset of hardware used to composite one output image is often called a pipe. A pipe is made up of input [planes](#plane), color correction, and blending/compositing stages.

Each plane has a source image with its own dimensions, position, pixel format, color space, gamma correction and blending behavior. The pipe must be programmed to correctly decode, transform, color correct, and blend these planes into a final display image.
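
The per-plane parameters above can be pictured as a configuration record. The sketch below is illustrative only; the field names are hypothetical and do not correspond to any particular driver's API.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PlaneConfig:
    """One plane's input parameters (hypothetical names, for illustration)."""
    src_width: int
    src_height: int
    dst_x: int                          # position on the final display image
    dst_y: int
    pixel_format: str                   # memory layout, e.g. "RGB_888_24"
    color_space: str                    # interpretation, e.g. "sRGB"
    degamma_lut: Optional[List[float]]  # lookup table to linearize, if needed
    alpha: float                        # blending factor; 1.0 = opaque
```

The pipe must be programmed with one such set of parameters per enabled plane before it can decode, transform, color correct, and blend correctly.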

Encoders consume final display images and combine the raw pixel data with timing-generator output to encode the correct electrical signal for a [connector port](#port).

Plane types

- A primary plane is "screen-sized" and blended normally.
- An overlay is a plane that uses chroma-keyed transparency. A typical use case for this plane type is rendering media controls on top of video playback: software punches a hole in the (largely) static content, allowing a hardware-accelerated video decoder to write its output into a second plane that appears beneath the overlay.
- A sprite, as many driver developers use the term today, is any plane that is not intended as the main content. Historically, sprites compactly encoded repeated uses of a static image; this type was very popular in the memory-constrained environments of 1980s personal computers and 1980s-90s video game consoles.
- Cursor planes are typically smaller and offer limited color space options. They are used to overlay cursors on static content. Most software manipulates this plane by adjusting its position and only very occasionally changing its source image.
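
The chroma-keyed overlay behavior described above can be sketched per pixel: wherever the UI plane shows the designated key color, the plane beneath it shows through. This is a minimal software model of what the blending hardware does, not driver code.

```python
def composite_chroma_key(ui, video, key):
    """Blend two equal-length lists of (r, g, b) pixels.

    Wherever the UI plane's pixel matches the key color, the video
    plane underneath shows through; elsewhere the UI pixel wins.
    """
    return [v if u == key else u for u, v in zip(ui, video)]
```

For example, with magenta `(255, 0, 255)` as the key, a UI frame's magenta "hole" is filled by the corresponding video pixels.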

Color spaces

Color spaces consist of a color model, e.g. RGB or CMYK, and a mapping function to convert values in that color space to a reference space such as CIELAB or CIEXYZ.

The color spaces used in digital image processing and display can be roughly divided into two types: physical/linear and perceptual/gamma-encoded. Colors in a linear space can be combined and transformed in a physically accurate manner, e.g. doubling the values means "twice as much light". Perceptual color spaces are meant to mimic non-linear human visual perception, so doubling the values would instead mean "twice as bright".
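
The distinction matters arithmetically. Using the standard sRGB transfer function, "twice as much light" as an sRGB-encoded value of 0.5 is not 1.0 — the doubling must happen in the linear space:

```python
def srgb_to_linear(c):
    """Decode one sRGB channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Encode one linear-light channel value in [0, 1] as sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

# Twice the light of sRGB 0.5: decode, double, re-encode.
doubled = linear_to_srgb(2 * srgb_to_linear(0.5))  # about 0.686, not 1.0
```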

Pixel formats

Pixel formats and color spaces are often confused, and this leads to color accuracy bugs. Take the RGB_888_24 pixel format for example: each pixel is formatted as a triple of (R, G, B) bytes, but you do not know whether that triple is a point in the sRGB, Adobe RGB, or linear RGB color space. If two values from different color spaces are combined (e.g. adding them), the resulting pixel value may be physically incorrect.
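
A short sketch of the distinction: unpacking RGB_888_24 recovers the memory layout, but the color space of the resulting triples must be tracked separately.

```python
def unpack_rgb888(buf: bytes):
    """Unpack RGB_888_24: three bytes per pixel, R then G then B.

    This recovers the memory layout only; whether each (r, g, b) triple
    is a point in sRGB, Adobe RGB, or linear RGB is separate metadata.
    """
    assert len(buf) % 3 == 0
    return [(buf[i], buf[i + 1], buf[i + 2]) for i in range(0, len(buf), 3)]
```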

A DC must be programmed to convert each plane's image into a linear color space, composite them all, and convert the resulting image into a color space that is appropriate for the output device. The HDMI spec mandates that monitors support at least sRGB.

Gamma correction

For a detailed explanation, see this excellent blog post by John Novak.

Most still digital images are in the sRGB color space, which is also the most widely supported format for computer monitors. However, in order to correctly composite such images for display on a monitor, the controller must convert every plane image into a linear color space ("degamma"), blend them, and then convert back ("regamma") to sRGB and a pixel format supported by the monitor.
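
The degamma/blend/regamma sequence can be modeled per channel. This is a software sketch of the math the pipe's color-correction and blending stages perform in hardware, using the standard sRGB transfer function:

```python
def srgb_to_linear(c):
    """Degamma: decode one sRGB channel value in [0, 1] to linear light."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    """Regamma: encode one linear-light channel value in [0, 1] as sRGB."""
    return c * 12.92 if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def blend_channel(src, dst, alpha):
    """Alpha-blend two sRGB channel values the physically correct way:
    linearize both, mix in linear light, then re-encode."""
    lin = alpha * srgb_to_linear(src) + (1 - alpha) * srgb_to_linear(dst)
    return linear_to_srgb(lin)
```

Blending sRGB white (1.0) and black (0.0) at 50% alpha gives about 0.735 after regamma; the naive answer of 0.5, produced by blending the gamma-encoded values directly, would look too dark.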


EDID

Extended Display Identification Data (EDID) is a VESA metadata format that display devices use to describe their capabilities to a video source. Many displays expose their color format & output capabilities via [EDID][edid]{:.external}. The //zircon/system/ulib/edid library can be used to parse this information and adjust gamma & color-correction programming for higher-fidelity color.

Not all fields of EDID will apply to all display devices, e.g. projectors have no physical dimensions.
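
A minimal sketch of reading such fields from an EDID base block, per the VESA E-EDID layout: bytes 0-7 hold a fixed header, bytes 21-22 hold the maximum horizontal and vertical image size in centimeters (zero when not applicable, as for a projector), and the 128 bytes must sum to zero modulo 256. This is an illustration, not the //zircon/system/ulib/edid API.

```python
EDID_HEADER = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def physical_size_cm(edid: bytes):
    """Return (width_cm, height_cm) from a 128-byte EDID base block,
    or None when the display reports no physical size (e.g. a projector)."""
    assert len(edid) == 128 and edid[:8] == EDID_HEADER
    assert sum(edid) % 256 == 0  # block checksum
    w, h = edid[21], edid[22]
    return (w, h) if w and h else None

# Build a synthetic block for demonstration: 60 cm x 34 cm.
sample = bytearray(128)
sample[:8] = EDID_HEADER
sample[21], sample[22] = 60, 34
sample[127] = (256 - sum(sample)) % 256  # fix up the checksum byte
```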


Pipe

Pipes are a common abstraction in the theory of operation for display controllers. Each pipe is dedicated to producing a single output image to be encoded for transmission on a port. The final output of a pipe is pixel data in an appropriate format and gamma-encoded for the target displays. This separation of responsibilities allows drivers to support display "mirroring" by using the same final image as the input to multiple encoder/port pairs.


Port

Ports are physical connectors that can be used to attach one or more displays. Example port types are DisplayPort, HDMI, DVI, and VGA. Some port specs allow multiple displays to be connected through daisy-chaining, e.g. DisplayPort's Multi-Stream Transport (MST).


Encoder

Encoders transform the output image of a pipe into signals to be transmitted by one or more ports. For example, HDMI connectors use TMDS, an 8b/10b encoding.

Display timing

Display timing is a collection of intervals that together describe how to encode a signal for a specific panel. While these intervals are related to the display mode (resolution, color depth, refresh rate), they also include portions of the signal that do not communicate pixel contents, e.g. the vertical blanking interval that has been retained since the early days of broadcast television. This article explains the details of timing.
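
Because the blanking intervals carry no pixel contents yet still occupy signal time, the pixel clock must cover the *total* frame, not just the active area. A small sketch, using the standard CTA-861 timing for 1080p at 60 Hz (1920+280 total horizontal, 1080+45 total vertical):

```python
def pixel_clock_hz(h_active, h_blank, v_active, v_blank, refresh_hz):
    """Pixel clock = total pixels per line * total lines * refresh rate.

    Blanking intervals contain no image data but still consume
    signal time, so they count toward the clock.
    """
    return (h_active + h_blank) * (v_active + v_blank) * refresh_hz

# 1080p60 with CTA-861 blanking: 2200 * 1125 * 60 = 148.5 MHz.
clock = pixel_clock_hz(1920, 280, 1080, 45, 60)
```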