FIDL API Rubric

General Advice

This section presents techniques, best practices, and general advice about defining protocols in the Fuchsia Interface Definition Language.

See also the FIDL Style Guide.

Protocols not objects

FIDL is a language for defining interprocess communication protocols. Although the syntax resembles a definition of an object-oriented interface, the design considerations are more akin to network protocols than to object systems. For example, to design a high-quality protocol, you need to consider bandwidth, latency, and flow control. You should also consider that a protocol is more than just a logical grouping of operations: a protocol also imposes a FIFO ordering on requests, and breaking a protocol into two smaller protocols means that requests made on the two different protocols can be reordered with respect to each other.

Focus on the types

A good starting point for designing your FIDL protocol is to design the data structures your protocol will use. For example, a FIDL protocol about networking would likely contain data structures for various types of IP addresses and a FIDL protocol about graphics would likely contain data structures for various geometric concepts. You should be able to look at the type names and have some intuition about the concepts the protocol manipulates and how those concepts might be structured.
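For illustration, here is a minimal sketch (with hypothetical library and type names) showing how type names alone can convey what a protocol manipulates:

library fuchsia.examples.shapes; // hypothetical

type Point = struct {
    x float32;
    y float32;
};

type Rect = struct {
    origin Point;
    width float32;
    height float32;
};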

Language neutrality

There are FIDL back ends for many different languages. You should avoid over-specializing your FIDL definitions for any particular target language. Over time, your FIDL protocol is likely to be used by many different languages, perhaps even some languages that are not even supported today. FIDL is the glue that holds the system together and lets Fuchsia support a wide variety of languages and runtimes. If you over-specialize for your favorite language, you undermine that core value proposition.

Ordinals

Protocols contain a number of methods. Each method is automatically assigned a unique 64-bit identifier, called an ordinal. Servers use the ordinal value to determine which protocol method should be dispatched.

The compiler determines the ordinal value by hashing the library, protocol, and method name. In rare cases, ordinals in the same protocol may collide. If this happens, you can use the Selector attribute to change the name of the method the compiler uses for hashing. The following example will use the method name "C" instead of the method name "B" for calculating the hash:

protocol A {
    @selector("C")
    B(struct {
        s string;
        b bool;
    });
};

Selectors can also be used to maintain backwards compatibility with the wire format in cases where developers wish to change the name of a method.
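For example, a sketch of a hypothetical rename that preserves wire compatibility might look like the following, where the selector keeps the original method name used for hashing:

protocol Logger {
    // Renamed from `LogMessage`; the selector preserves the original name so
    // existing clients and servers continue to agree on the ordinal.
    @selector("LogMessage")
    Log(struct {
        message string:1024;
    });
};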

Diagnostics

Sometimes there is a need to expose information that is useful for debugging or diagnosing a program. This data can take the form of statistics and metrics (such as the number of errors, calls, or sizes), information useful during development, the health state of a component, or similar.

It's tempting to expose this information in test protocols or debug methods in a production protocol. Fuchsia, however, provides a separate mechanism for exposing this type of information: Inspect. Take Inspect into account when deciding how best to expose this type of data. Use Inspect instead of a FIDL method or protocol when you need to expose diagnostics information about a program that is useful for debugging in tests, used by developer tools, or retrieved in the field through crash reports or metrics, as long as no other program uses that information to make runtime decisions.

Use FIDL when other programs will make runtime decisions based on the diagnostics information. Inspect must never be used for communication between programs: it is a best-effort system that must not be relied on to make decisions or alter behavior at runtime in production.

A heuristic to decide whether to use Inspect or FIDL could be:

  1. Is the data used by other programs in production?

    • Yes: Use FIDL.
  2. Is the data used by crash reports or in metrics?

    • Yes: Use Inspect.
  3. Is the data used by tests or developer tools? Any chance that it will ever be used in production?

    • Yes: Use FIDL.
    • No: Use either.

Library structure

Grouping of FIDL declarations into FIDL libraries has two specific goals:

  • Help FIDL developers (those using the FIDL libraries) navigate the API surface.
  • Provide structure to hierarchically scope FIDL declarations within FIDL libraries.

Carefully consider how you divide your type and protocol definitions into libraries. How you decompose these definitions into libraries has a large effect on the consumers of these definitions because a FIDL library is the unit of dependency and distribution for your types and protocols.

The FIDL compiler requires that the dependency graph between libraries is a DAG, which means you cannot create a circular dependency across library boundaries. However, you can create (some) circular dependencies within a library.

To decide whether to decompose a library into smaller libraries, consider the following questions:

  • Do the customers for the library break down into separate roles that would want to use a subset of the functionality or declarations in the library? If so, consider breaking the library into separate libraries that target each role.

  • Does the library correspond to an industry concept that has a generally understood structure? If so, consider structuring your library to match the industry-standard structure. For example, Bluetooth is organized into fuchsia.bluetooth.le and fuchsia.bluetooth.gatt to match how these concepts are generally understood in the industry. Similarly, fuchsia.net.http corresponds to the industry-standard HTTP network protocol.

  • Do many other libraries depend upon the library? If so, check whether those incoming dependencies really need to depend on the whole library or whether there is a "core" set of definitions that could be factored out of the library to receive the bulk of the incoming dependencies.

Ideally, we would produce a FIDL library structure for Fuchsia as a whole that is a global optimum. However, Conway's law states that "organizations which design systems [...] are constrained to produce designs which are copies of the communication structures of these organizations." We should spend a moderate amount of time fighting Conway's law.

Access control is at protocol granularity

When deciding in which library to define a protocol, do not take access control considerations into account. Generally, access control is expressed at protocol granularity. The library in which a protocol is defined has no bearing on access control, and cannot be used to determine whether the protocol can or cannot be accessed.

As an example, a process may access fuchsia.logger.LogSink, or a process may be given a client end of the fuchsia.media.StreamSource protocol. However, FIDL is not designed to, and cannot, express access to the fuchsia.logger library, or prevent access to the fuchsia.ldsvc library.

The fuchsia namespace

FIDL libraries defined in the Platform Source Tree (i.e., defined in fuchsia.googlesource.com) must be in the fuchsia top-level namespace (e.g., fuchsia.ui) unless one of the following is true:

  • The library defines portions of the FIDL language itself or its conformance test suite, in which case the top-level namespace must be fidl.
  • The library is only used for internal testing and is not included in the SDK or in production builds, in which case the top-level namespace must be test.

FIDL libraries in the fuchsia top-level namespace are strongly encouraged to have no more than four components, i.e. fuchsia.<api-namespace>, fuchsia.<api-namespace>.<name>, or fuchsia.<api-namespace>.<name>.<subname>. Choose an appropriate api-namespace, possibly with the help of an API Council member.

For instance, FIDL libraries defined in the Platform Source Tree for the purpose of exposing hardware functionality to applications must be in the fuchsia.hardware namespace. For example, a protocol for exposing an ethernet device might be named fuchsia.hardware.ethernet.Device. Higher-level functionality built on top of these FIDL protocols does not belong in the fuchsia.hardware namespace. For example, it is more appropriate for network protocols to be under fuchsia.net than fuchsia.hardware.

Avoid nesting too deeply

Prefer library names with three components (e.g. fuchsia.hardware.network), and avoid library names with more than four components (e.g. fuchsia.apps.foo.bar.baz). If you use more than four components, you should have a specific reason for that choice.

Library dependencies

It is preferable to introduce dependencies from libraries with more specific names to libraries with less specific names. For example, fuchsia.foo.bar might depend on fuchsia.foo, but fuchsia.foo should not depend on fuchsia.foo.bar. This pattern is better for extensibility because over time we can add more libraries with more specific names but there are only a finite number of libraries with less specific names.
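As a sketch (with hypothetical library and type names), the more specific library imports the less specific one, and not the other way around:

library fuchsia.foo.bar; // hypothetical

using fuchsia.foo; // the dependency points toward the less specific library

type Widget = struct {
    config fuchsia.foo.Config; // hypothetical type defined in fuchsia.foo
};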

Visibility to importing libraries

Expanding on the second goal of grouping FIDL declarations into FIDL libraries, we expect to evolve FIDL to provide visibility rules that alter whether elements may be used by importing libraries ("child libraries"), e.g. public or private modifiers.

The internal library component name is intended to be treated specially, and indicates a local restriction of visibility rules. For instance, a public declaration in the fuchsia.net.dhcp.internal.foo library might only be visible to its parent fuchsia.net.dhcp, or its siblings e.g. fuchsia.net.dhcp.internal.bar.

Using multi-word library components

While library names with components that join multiple words (e.g. fuchsia.modular.storymodel) are allowed, their use should be exceptional. Library authors can resort to joining multiple words together if the library name would violate nesting rules, or if neither word should take precedence over the other when thinking hierarchically about placement of the library.

Version strings

Should a library need to be versioned, a single version number should be suffixed e.g. fuchsia.io2 or fuchsia.something.something4. Version numbers should not be multi-part, e.g. fuchsia.io2.1 is not acceptable, and should instead be fuchsia.io3. Any library component may be versioned, though it is strongly discouraged to have multiple versioned components, e.g. fuchsia.hardware.cpu2.ctrl but not fuchsia.hardware.cpu2.ctrl4.

Version numbers should only indicate a more recent version of a library, rather than a materially different domain. As a counterexample, the fuchsia.input library is used for lower-level device handling, while fuchsia.ui.input{2,3} is used for input that interacts with scenic and with software components that render UIs. Focusing solely on versioning, it would have been clearer to use fuchsia.ui.scenic.input and fuchsia.ui.scenic.input2 to distinguish them from the other domain that fuchsia.input serves.

Types

As mentioned under "general advice," you should pay particular attention to the types you use in your protocol definition.

Be consistent

Use consistent types for the same concept. For example, use a uint32 or an int32 for a particular concept consistently throughout your library. If you create a struct for a concept, be consistent about using that struct to represent the concept.

Ideally, types would be used consistently across library boundaries as well. Check related libraries for similar concepts and be consistent with those libraries. If there are many concepts shared between libraries, consider factoring the type definitions for those concepts into a common library. For example, fuchsia.mem and fuchsia.math contain many commonly used types for representing memory and mathematical concepts, respectively.

Prefer semantic types

Create structs to name commonly used concepts, even if those concepts could be represented using primitives. For example, an IPv4 address is an important concept in the networking library and should be named using a struct even though the data can be represented using a primitive:

type Ipv4Address = struct {
    octets array<uint8, 4>;
};

In performance-critical target languages, structs are represented in line, which reduces the cost of using structs to name important concepts.

zx.Time has a well-defined timebase

The zx.Time type monotonically measures the number of nanoseconds from a device-specific timebase. Uses of zx.Time can assume this timebase, and it does not need to be spelled out.

Use anonymous types judiciously

Anonymous types are very useful for describing an API more fluently. In particular, anonymous types are well suited for cases where you have a priori knowledge that a sub-element of a named type is inherently tied to that named type, and will not be useful or meaningful outside the context of its named container.

Consider for instance a union variant, which aggregates together a few things. It is exceedingly rare for the union variant to be used by itself, i.e. we know a priori that the union variant has meaning only in the context of its specific use. As a result, using anonymous types for union variants is appropriate and recommended.
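A sketch with hypothetical names, where each variant's payload is an anonymous struct because it is only meaningful inside this union:

type Command = flexible union {
    1: create struct {
        name string:64;
    };
    2: remove struct {
        id uint64;
    };
};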

Ideally, types should map one-to-one to key concepts of an API, and no two types should have the same definition. Achieving both is not always possible, however, especially in cases where the naming of a type, which introduces a distinct concept, is meaningful beyond its use as an API surface element. Consider for instance the named identifiers type EntityId = struct { id uint64; }; and type OtherEntityId = struct { id uint64; };, which represent different concepts yet have the same definition except for their names.

Using anonymous types creates multiple types, all incompatible with each other. As such, if multiple anonymous types are used to represent the same concept, this will lead to an overly complex API, and prevent generic handling in most target languages.

When using anonymous types, you must therefore avoid multiple anonymous types representing the same concept. For instance, you must not use anonymous types in cases where evolution of the API will likely lead to multiple anonymous types representing the same concept.

Consider using Virtual Memory Objects (VMOs)

A Virtual Memory Object (VMO) is a kernel object that represents a contiguous region of virtual memory along with a logical size. Use this type to transfer memory in a FIDL message and use the ZX_PROP_VMO_CONTENT_SIZE property to track the amount of data contained in the object.
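For example, a hypothetical type that transfers bulk data via a VMO might look like the following (this assumes `using zx;` in the library):

type TextureData = resource struct {
    // The logical amount of data is read from the VMO's
    // ZX_PROP_VMO_CONTENT_SIZE property rather than a separate size field.
    buffer zx.Handle:VMO;
};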

Specify bounds for vector and string

All vector and string declarations should specify a length bound. Declarations generally fall into one of two categories:

  • There is a constraint inherent to the data. For example, a string containing a filesystem name component must not be longer than fuchsia.io.MAX_FILENAME.
  • There is no constraint other than "as much as possible." In these cases, you should use the built-in constant MAX.

Whenever you use MAX, consider whether the receiver of the message would really want to process arbitrarily long sequences or whether extremely long sequences represent abuse.
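For illustration, a hypothetical type showing both categories of bound (the first field assumes `using fuchsia.io;`):

type LogEntry = struct {
    // A constraint inherent to the data: file name components are bounded.
    file_name string:fuchsia.io.MAX_FILENAME;

    // No constraint other than "as much as possible".
    message string:MAX;
};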

Bear in mind that all declarations are implicitly bounded by the maximum message length when sent over a zx::channel. If there really are use cases for arbitrarily long sequences, simply using MAX might not address those use cases because clients that attempt to provide extremely long sequences might hit the maximum message length.

To address use cases with arbitrarily large sequences, consider breaking the sequence up into multiple messages using one of the pagination patterns discussed below or consider moving the data out of the message itself, for example into a VMO.

FIDL recipe: Size constraint

FIDL vectors and strings may carry a size constraint, which specifies a limit to how many members the type can contain. In the case of vectors this refers to the number of elements stored in the vector, while for strings it refers to the number of bytes the string contains.

The use of size constraints is strongly encouraged, since it sets an upper bound on what would otherwise be an unboundedly large type.

A useful operation for key-value stores is in-order iteration: that is, when given a key, to return a (usually paginated) list of elements that appear after it, in order.

Reasoning

In FIDL, this is best done using an iterator, which is generally implemented as a separate protocol over which this iteration can occur. Using a separate protocol, and therefore a separate channel, has a number of benefits, including de-interleaving the iteration pull requests from other operations done over the main protocol.

The client and server side of the channel connection for protocol P can be represented as FIDL data types, as a client_end:P and server_end:P, respectively. These types are collectively known as protocol ends, and represent the other (non-@discoverable) way of connecting a FIDL client to its corresponding server: over an existing FIDL connection!

Protocol ends are specific instances of a general FIDL concept: the resource type. A resource type is intended to contain FIDL handles, which necessitates extra restrictions on how the type can be used. The type must always be unique, as the underlying resource is mediated by some other capability manager (usually the Zircon kernel). Duplicating such a resource via a simple in-memory copy, without involving the manager, is impossible. To prevent such duplication, all resource types in FIDL are always move-only.

Finally, the Get() method of the Iterator protocol itself makes use of a size constraint on the return payload. This limits the amount of data that may be transmitted in a single pull, allowing for some measure of resource use control. It also creates a natural pagination boundary: rather than a giant dump of all of the results at once, the server only needs to prepare small batches at a time.

Implementation

The FIDL, CML, and realm interface definitions are as follows:

FIDL

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
library examples.keyvaluestore.additerator;

/// An item in the store. The key must match the regex `^[A-z][A-z0-9_\.\/]{2,62}[A-z0-9]$`. That
/// is, it must start with a letter, end with a letter or number, contain only letters, numbers,
/// periods, and slashes, and be between 4 and 64 characters long.
type Item = struct {
    key string:128;
    value vector<byte>:64000;
};

/// An enumeration of things that may go wrong when trying to write a value to our store.
type WriteError = flexible enum {
    UNKNOWN = 0;
    INVALID_KEY = 1;
    INVALID_VALUE = 2;
    ALREADY_EXISTS = 3;
};

/// An enumeration of things that may go wrong when trying to create an iterator.
type IterateConnectionError = flexible enum {
    /// The starting key was not found.
    UNKNOWN_START_AT = 1;
};

/// A key-value store which supports insertion and iteration.
@discoverable
open protocol Store {
    /// Writes an item to the store.
    flexible WriteItem(struct {
        attempt Item;
    }) -> () error WriteError;

    /// Iterates over the items in the store, using lexicographic ordering over the keys.
    ///
    /// The [`iterator`] is [pipelined][pipelining] to the server, such that the client can
    /// immediately send requests over the new connection.
    ///
    /// [pipelining]: https://fuchsia.dev/fuchsia-src/development/api/fidl?hl=en#request-pipelining
    flexible Iterate(resource struct {
        /// If present, requests to start the iteration at this item.
        starting_at string:<128, optional>;

        /// The [`Iterator`] server endpoint. The client creates both ends of the channel and
        /// retains the `client_end` locally to use for pulling iteration pages, while sending the
        /// `server_end` off to be fulfilled by the server.
        iterator server_end:Iterator;
    }) -> () error IterateConnectionError;
};

/// An iterator for the key-value store. Note that this protocol makes no guarantee of atomicity -
/// the values may change between pulls from the iterator. Unlike the `Store` protocol above, this
/// protocol is not `@discoverable`: it is not independently published by the component that
/// implements it, but rather must have one of its two protocol ends transmitted over an existing
/// FIDL connection.
///
/// As is often the case with iterators, the client indicates that they are done with an instance of
/// the iterator by simply closing their end of the connection.
///
/// Since the iterator is associated only with the Iterate method, it is declared as closed rather
/// than open. This is because changes to how iteration works are more likely to require replacing
/// the Iterate method completely (which is fine because that method is flexible) rather than
/// evolving the Iterator protocol.
closed protocol Iterator {
    /// Gets the next batch of keys.
    ///
    /// The client pulls keys rather than having the server proactively push them, to implement
    /// [flow control][flow-control] over the messages.
    ///
    /// [flow-control]:
    ///     https://fuchsia.dev/fuchsia-src/development/api/fidl?hl=en#prefer_pull_to_push
    strict Get() -> (struct {
        /// A list of keys. If the iterator has reached the end of iteration, the list will be
        /// empty. The client is expected to then close the connection.
        entries vector<string:128>:10;
    });
};

CML

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
{
    include: [ "syslog/client.shard.cml" ],
    program: {
        runner: "elf",
        binary: "bin/client_bin",
    },
    use: [
        { protocol: "examples.keyvaluestore.additerator.Store" },
    ],
    config: {
        write_items: {
            type: "vector",
            max_count: 16,
            element: {
                type: "string",
                max_size: 64,
            },
        },

        // A key to iterate from, after all items in `write_items` have been written.
        iterate_from: {
            type: "string",
            max_size: 64,
        },

    },
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
{
    include: [ "syslog/client.shard.cml" ],
    program: {
        runner: "elf",
        binary: "bin/server_bin",
    },
    capabilities: [
        { protocol: "examples.keyvaluestore.additerator.Store" },
    ],
    expose: [
        {
            protocol: "examples.keyvaluestore.additerator.Store",
            from: "self",
        },
    ],
}

Realm

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
{
    children: [
        {
            name: "client",
            url: "#meta/client.cm",
        },
        {
            name: "server",
            url: "#meta/server.cm",
        },
    ],
    offer: [
        // Route the protocol under test from the server to the client.
        {
            protocol: "examples.keyvaluestore.additerator.Store",
            from: "#server",
            to: "#client",
        },

        // Route diagnostics support to all children.
        {
            protocol: [
                "fuchsia.inspect.InspectSink",
                "fuchsia.logger.LogSink",
            ],
            from: "parent",
            to: [
                "#client",
                "#server",
            ],
        },
    ],
}

Client and server implementations can then be written in any supported language:

Rust

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

use {
    anyhow::{Context as _, Error},
    config::Config,
    fuchsia_component::client::connect_to_protocol,
    std::{thread, time},
};

use {
    fidl::endpoints::create_proxy,
    fidl_examples_keyvaluestore_additerator::{Item, IteratorMarker, StoreMarker},
    futures::join,
};

#[fuchsia::main]
async fn main() -> Result<(), Error> {
    println!("Started");

    // Load the structured config values passed to this component at startup.
    let config = Config::take_from_startup_handle();

    // Use the Component Framework runtime to connect to the newly spun up server component. We wrap
    // our retained client end in a proxy object that lets us asynchronously send `Store` requests
    // across the channel.
    let store = connect_to_protocol::<StoreMarker>()?;
    println!("Outgoing connection enabled");

    // This client's structured config has one parameter, a vector of strings. Each string is the
    // path to a resource file whose filename is a key and whose contents are a value. We iterate
    // over them and try to write each key-value pair to the remote store.
    for key in config.write_items.into_iter() {
        let path = format!("/pkg/data/{}.txt", key);
        let value = std::fs::read_to_string(path.clone())
            .with_context(|| format!("Failed to load {path}"))?;
        match store.write_item(&Item { key: key, value: value.into_bytes() }).await? {
            Ok(_) => println!("WriteItem Success"),
            Err(err) => println!("WriteItem Error: {}", err.into_primitive()),
        }
    }

    if !config.iterate_from.is_empty() {
        // This helper creates a channel, and returns two protocol ends: the `client_end` is already
        // conveniently bound to the correct FIDL protocol, `Iterator`, while the `server_end` is
        // unbound and ready to be sent over the wire.
        let (iterator, server_end) = create_proxy::<IteratorMarker>()?;

        // There is no need to wait for the iterator to connect before sending the first `Get()`
        // request - since we already hold the `client_end` of the connection, we can start queuing
        // requests on it immediately.
        let connect_to_iterator = store.iterate(Some(config.iterate_from.as_str()), server_end);
        let first_get = iterator.get();

        // Wait until both the connection and the first request resolve - an error in either case
        // triggers an immediate resolution of the combined future.
        let (connection, first_page) = join!(connect_to_iterator, first_get);

        // Handle any connection error. If this has occurred, it is impossible for the first `Get()`
        // call to have resolved successfully, so check this error first.
        if let Err(err) = connection.context("Could not connect to Iterator")? {
            println!("Iterator Connection Error: {}", err.into_primitive());
        } else {
            println!("Iterator Connection Success");

            // Consecutively repeat the `Get()` request if the previous response was not empty.
            let mut entries = first_page.context("Could not get page from Iterator")?;
            while !&entries.is_empty() {
                for entry in entries.iter() {
                    println!("Iterator Entry: {}", entry);
                }
                entries = iterator.get().await.context("Could not get page from Iterator")?;
            }
        }
    }

    // TODO(https://fxbug.dev/42156498): We need to sleep here to make sure all logs get drained. Once the
    // referenced bug has been resolved, we can remove the sleep.
    thread::sleep(time::Duration::from_secs(2));
    Ok(())
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

use {
    anyhow::{Context as _, Error},
    fuchsia_component::server::ServiceFs,
    futures::prelude::*,
    lazy_static::lazy_static,
    regex::Regex,
};

use {
    fidl_examples_keyvaluestore_additerator::{
        Item, IterateConnectionError, IteratorRequest, IteratorRequestStream, StoreRequest,
        StoreRequestStream, WriteError,
    },
    fuchsia_async as fasync,
    std::collections::btree_map::Entry,
    std::collections::BTreeMap,
    std::ops::Bound::*,
    std::sync::Arc,
    std::sync::Mutex,
};

lazy_static! {
    static ref KEY_VALIDATION_REGEX: Regex =
        Regex::new(r"^[A-Za-z]\w+[A-Za-z0-9]$").expect("Key validation regex failed to compile");
}

/// Handler for the `WriteItem` method.
fn write_item(store: &mut BTreeMap<String, Vec<u8>>, attempt: Item) -> Result<(), WriteError> {
    // Validate the key.
    if !KEY_VALIDATION_REGEX.is_match(attempt.key.as_str()) {
        println!("Write error: INVALID_KEY, For key: {}", attempt.key);
        return Err(WriteError::InvalidKey);
    }

    // Validate the value.
    if attempt.value.is_empty() {
        println!("Write error: INVALID_VALUE, For key: {}", attempt.key);
        return Err(WriteError::InvalidValue);
    }

    // Write to the store, validating that the key did not already exist.
    match store.entry(attempt.key) {
        Entry::Occupied(entry) => {
            println!("Write error: ALREADY_EXISTS, For key: {}", entry.key());
            Err(WriteError::AlreadyExists)
        }
        Entry::Vacant(entry) => {
            println!("Wrote value at key: {}", entry.key());
            entry.insert(attempt.value);
            Ok(())
        }
    }
}

/// Handler for the `Iterate` method, which deals with validating that the requested start position
/// exists, and then sets up the asynchronous side channel for the actual iteration to occur over.
fn iterate(
    store: Arc<Mutex<BTreeMap<String, Vec<u8>>>>,
    starting_at: Option<String>,
    stream: IteratorRequestStream,
) -> Result<(), IterateConnectionError> {
    // Validate that the starting key, if supplied, actually exists.
    if let Some(start_key) = starting_at.clone() {
        if !store.lock().unwrap().contains_key(&start_key) {
            return Err(IterateConnectionError::UnknownStartAt);
        }
    }

    // Spawn a detached task. This allows the method call to return while the iteration continues in
    // a separate, unawaited task.
    fasync::Task::spawn(async move {
        // Serve the iteration requests. Note that access to the underlying store is behind a
        // contended `Mutex`, meaning that the iteration is not atomic: page contents could shift,
        // change, or disappear entirely between `Get()` requests.
        stream
            .map(|result| result.context("failed request"))
            .try_fold(
                match starting_at {
                    Some(start_key) => Included(start_key),
                    None => Unbounded,
                },
                |mut lower_bound, request| async {
                    match request {
                        IteratorRequest::Get { responder } => {
                            println!("Iterator page request received");

                            // The `page_size` should be kept in sync with the size constraint on
                            // the iterator's response, as defined in the FIDL protocol.
                            static PAGE_SIZE: usize = 10;

                            // An iterator, beginning at `lower_bound` and tracking the pagination's
                            // progress through iteration as each page is pulled by a client-sent
                            // `Get()` request.
                            let held_store = store.lock().unwrap();
                            let mut entries = held_store.range((lower_bound.clone(), Unbounded));
                            let mut current_page = vec![];
                            for _ in 0..PAGE_SIZE {
                                match entries.next() {
                                    Some(entry) => {
                                        current_page.push(entry.0.clone());
                                    }
                                    None => break,
                                }
                            }

                            // Update the `lower_bound` - either inclusive of the next item in the
                            // iteration, or exclusive of the last seen item if the iteration has
                            // finished. This `lower_bound` will be passed to the next request
                            // handler as its starting point.
                            lower_bound = match entries.next() {
                                Some(next) => Included(next.0.clone()),
                                None => match current_page.last() {
                                    Some(tail) => Excluded(tail.clone()),
                                    None => lower_bound,
                                },
                            };

                            // Send the page. At the end of this scope, the `held_store` lock gets
                            // dropped, and therefore released.
                            responder.send(&current_page).context("error sending reply")?;
                            println!("Iterator page sent");
                        }
                    }
                    Ok(lower_bound)
                },
            )
            .await
            .ok();
    })
    .detach();

    Ok(())
}

/// Creates a new instance of the server. Each server has its own bespoke, per-connection instance
/// of the key-value store.
async fn run_server(stream: StoreRequestStream) -> Result<(), Error> {
    // Create a new in-memory key-value store. The store will live for the lifetime of the
    // connection between the server and this particular client.
    //
    // Note that we now use an `Arc<Mutex<BTreeMap>>`, replacing the previous `RefCell<HashMap>`.
    // The `BTreeMap` is used because we want an ordered map, to better facilitate iteration. The
    // `Arc<Mutex<...>>` is used because there are now multiple async tasks accessing the store: one main
    // task which handles communication over the protocol, and one additional task per iterator
    // protocol. `Arc<Mutex<...>>` is the simplest way to synchronize concurrent access between
    // these racing tasks.
    let store = &Arc::new(Mutex::new(BTreeMap::<String, Vec<u8>>::new()));

    // Serve all requests on the protocol sequentially - a new request is not handled until its
    // predecessor has been processed.
    stream
        .map(|result| result.context("failed request"))
        .try_for_each(|request| async {
            // Match based on the method being invoked.
            match request {
                StoreRequest::WriteItem { attempt, responder } => {
                    println!("WriteItem request received");

                    // The `responder` parameter is a special struct that manages the outgoing reply
                    // to this method call. Calling `send` on the responder exactly once will send
                    // the reply.
                    responder
                        .send(write_item(&mut store.clone().lock().unwrap(), attempt))
                        .context("error sending reply")?;
                    println!("WriteItem response sent");
                }
                StoreRequest::Iterate { starting_at, iterator, responder } => {
                    println!("Iterate request received");

                    // The `iterate` handler does a quick check to see that the request is valid,
                    // then spins up a separate worker task to serve the newly minted `Iterator`
                    // protocol instance, allowing this call to return immediately and continue the
                    // request stream with other work.
                    responder
                        .send(iterate(store.clone(), starting_at, iterator.into_stream()?))
                        .context("error sending reply")?;
                    println!("Iterate response sent");
                } //
                StoreRequest::_UnknownMethod { ordinal, .. } => {
                    println!("Received an unknown method with ordinal {ordinal}");
                }
            }
            Ok(())
        })
        .await
}

// A helper enum that allows us to treat a `Store` service instance as a value.
enum IncomingService {
    Store(StoreRequestStream),
}

#[fuchsia::main]
async fn main() -> Result<(), Error> {
    println!("Started");

    // Add a discoverable instance of our `Store` protocol - this will allow the client to see the
    // server and connect to it.
    let mut fs = ServiceFs::new_local();
    fs.dir("svc").add_fidl_service(IncomingService::Store);
    fs.take_and_serve_directory_handle()?;
    println!("Listening for incoming connections");

    // The maximum number of concurrent clients that may be served by this process.
    const MAX_CONCURRENT: usize = 10;

    // Serve each connection simultaneously, up to the `MAX_CONCURRENT` limit.
    fs.for_each_concurrent(MAX_CONCURRENT, |IncomingService::Store(stream)| {
        run_server(stream).unwrap_or_else(|e| println!("{:?}", e))
    })
    .await;

    Ok(())
}

C++ (Natural)

Client

// TODO(https://fxbug.dev/42060656): C++ (Natural) implementation.

Server

// TODO(https://fxbug.dev/42060656): C++ (Natural) implementation.

C++ (Wire)

Client

// TODO(https://fxbug.dev/42060656): C++ (Wire) implementation.

Server

// TODO(https://fxbug.dev/42060656): C++ (Wire) implementation.

HLCPP

Client

// TODO(https://fxbug.dev/42060656): HLCPP implementation.

Server

// TODO(https://fxbug.dev/42060656): HLCPP implementation.

String encoding, string contents, and length bounds

FIDL strings are encoded in UTF-8, a variable-width encoding that uses 1, 2, 3, or 4 bytes per Unicode code point.

Bindings enforce valid UTF-8 for strings, and strings are therefore not appropriate for arbitrary binary data. See Should I use string or vector?.

Because the purpose of length bound declarations is to provide an easily calculable upper bound on the total byte size of a FIDL message, string bounds specify the maximum number of bytes in the field. To be on the safe side, you will generally want to budget (4 bytes · code points in string). (If you know for certain that the text only uses code points in the single-byte ASCII range, as in the case of phone numbers or credit card numbers, 1 byte per code point will be sufficient.)

How many code points are in a string? This question can be complicated to answer, particularly for user-generated string contents, because there is not necessarily a one-to-one correspondence between a Unicode code point and what users might think of as "characters".

For example, the string á is rendered as a single user-perceived "character", but actually consists of two code points:

1. LATIN SMALL LETTER A (U+0061)
2. COMBINING ACUTE ACCENT (U+0301)

In Unicode terminology, this kind of user-perceived "character" is known as a grapheme cluster.

A single grapheme cluster can consist of arbitrarily many code points. Consider this longer example:

á🇨🇦b👮🏽‍♀️

If your system and fonts support it, you should see four grapheme clusters above:

1. 'a' with acute accent
2. emoji of Canadian flag
3. 'b'
4. emoji of a female police officer with a medium skin tone

These four grapheme clusters are encoded as ten code points:

 1. LATIN SMALL LETTER A (U+0061)
 2. COMBINING ACUTE ACCENT (U+0301)
 3. REGIONAL INDICATOR SYMBOL LETTER C (U+1F1E8)
 4. REGIONAL INDICATOR SYMBOL LETTER A (U+1F1E6)
 5. LATIN SMALL LETTER B (U+0062)
 6. POLICE OFFICER (U+1F46E)
 7. EMOJI MODIFIER FITZPATRICK TYPE-4 (U+1F3FD)
 8. ZERO WIDTH JOINER (U+200D)
 9. FEMALE SIGN (U+2640)
10. VARIATION SELECTOR-16 (U+FE0F)

In UTF-8, this string takes up 29 bytes.

From this example, it should be clear that if your application's UI displays a text input box that allows N arbitrary grapheme clusters (what users think of as "characters"), and you plan to transport those user-entered strings over FIDL, you will have to budget some multiple of N in your FIDL string field.

What should that multiple be? It depends on your data. If you're dealing with a fairly constrained use case (e.g. human names, postal addresses, credit card numbers), you might be able to assume 1-2 code points per grapheme cluster. If you're building a chat client where emoji use is rampant, 4-5 code points per grapheme cluster might be safer. In any case, your input validation UI should show clear visual feedback so that users aren't surprised if they run out of room.
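As a sketch with hypothetical names and numbers, a display name limited to 32 grapheme clusters in the UI, budgeted at 2 code points per cluster and 4 bytes per code point, could be bounded like this:

const MAX_DISPLAY_NAME uint32 = 256; // 32 clusters * 2 code points * 4 bytes

type Profile = table {
    1: display_name string:MAX_DISPLAY_NAME;
};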

Integer types

Select an integer type appropriate for your use case and be consistent about how you use them. If your value is best thought of as a byte of data, use byte. If a negative value has no meaning, use an unsigned type. As a rule of thumb if you're unsure, use 32-bit values for small quantities and 64-bit values for large ones.

Avoid booleans if more states are possible

When adding a boolean field, consider using an enum instead if the field could be extended to represent additional states in the future. For example, a boolean is_gif field might be better represented by

type FileType = strict enum {
    UNKNOWN = 0;
    GIF = 1;
};

The enum may then be extended with JPEG = 2 if required.

How should I represent errors?

Select the appropriate error type for your use case and be consistent about how you report errors.

Use the error syntax to clearly document and convey a possible erroneous return, and take advantage of tailored target language bindings.

(The use of the optional value with error enum pattern is deprecated.)

Using the error syntax

Methods can take an optional error <type> specifier to indicate that they return a value, or error out and produce <type>. Here is an example:

// Only erroneous statuses are listed
type MyErrorCode = strict enum {
    MISSING_FOO = 1; // avoid using 0
    NO_BAR = 2;
};

protocol Frobinator {
    Frobinate() -> (struct {
        value SuccessValue;
    }) error MyErrorCode;
};

When using this pattern, you can use an int32, a uint32, or an enum thereof to represent the kind of error returned. In most cases, returning an enum is the preferred approach.

It is preferred to have a single error type across all methods of a protocol.

Prefer domain-specific enum

Use a purpose built enum error type when you define and control the domain. For example, define an enum when the protocol is purpose built, and conveying the semantics of the error is the only design constraint. As noted in the enum section, it is best to avoid using the value 0.

In some cases, it may be appropriate to start by using an empty flexible enum:

type MyEmptyErrorCode = flexible enum {};

protocol Frobinator2 {
    Frobinate() -> (struct {
        value SuccessValue;
    }) error MyEmptyErrorCode;

    BadFrobinate() -> (struct {
        value SuccessValue;
    }) error flexible enum {}; // avoid anonymous enum
};

Flexible enums have a default unknown member. An empty flexible enum is therefore a typed placeholder providing affordances for evolvability. When using this pattern, it is recommended to define a standalone type to be reused by multiple methods within a protocol (or a library) rather than using an anonymous enum. Using an anonymous enum creates multiple types, all incompatible with each other, which will lead to an overly complex API, and prevent generic handling of errors in most target languages.

Use a domain-specific enum error type when you are following a well defined specification (say HTTP error codes), and the enum is meant to be an ergonomic way to represent the raw value dictated by the specification.

In particular, use the zx.Status type for errors related to kernel objects or IO. For example, fuchsia.process uses zx.Status because the library is largely concerned with manipulating kernel objects. As another example, fuchsia.io uses zx.Status extensively because the library is concerned with IO.
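For instance, a hypothetical IO-flavored method using zx.Status might look like this (assuming `using zx;` in the library):

protocol File {
    Read(struct {
        count uint64;
    }) -> (struct {
        data vector<byte>:MAX;
    }) error zx.Status;
};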

Using optional value with error enum

In the past, there was a slight performance benefit to defining a method with two returns, an optional value and an error code. See for instance:

type MyStatusCode = strict enum {
    OK = 0; // The success value should be 0,
    MISSING_FOO = 1; // with erroneous status next.
    NO_BAR = 2;
};

protocol Frobinator3 {
    Frobinate() -> (struct {
        value box<SuccessValue>;
        err MyStatusCode;
    });
};

However, this pattern is now deprecated in favor of the error syntax: the performance benefits which existed have been obsoleted by inlining small values in envelopes, and low-level support for unions is now prevalent.

Avoid messages and descriptions in errors

In some unusual situations, protocols may include a string description of the error in addition to a status or enum value, if the range of possible error conditions is large and descriptive error messages are likely to be useful to clients. However, including a string invites difficulties. For example, clients might try to parse the string to understand what happened, which means the exact format of the string becomes part of the protocol; this is especially problematic when the strings are localized.

Security note: Similarly, reporting stack traces or exception messages to the client can unintentionally leak privileged information.

Localizing strings and error messages

If you are building a service that acts as a backend for a UI, use structured, typed messages, and leave the rendering to the UI layer.

If all your messages are simple and unparameterized, use enums for error reporting and general UI strings. For more detailed messages, with parameters such as names, numbers, and locations, use tables or flexible unions, and pass the parameters as string or numeric fields.
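A sketch with hypothetical names: the service reports structured, typed data, and the UI chooses the wording and locale when rendering it:

type DownloadErrorKind = flexible enum {
    NETWORK_UNREACHABLE = 1;
    OUT_OF_SPACE = 2;
};

type DownloadError = table {
    1: kind DownloadErrorKind;
    2: file_name string:255;
    3: bytes_required uint64;
};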

It may be tempting to generate messages (in English) in the service and provide them to the UI as strings—the UI just receives a string and pops up a notification or error dialog box.

However, this simpler approach has some serious drawbacks:

  • Does your service know what locale (language and region) is being used in the UI? You would either have to pass the locale with each request (see example), or keep track of state for each connected client, in order to provide messages in the right language.
  • Does your service's development environment have good support for localization? If you're writing in C++, you have easy access to the ICU library and MessageFormat, but if you're using Rust, library support is currently much more limited.
  • Do any of your error messages need to include parameters that are known to the UI but not to the service?
  • Does your service only serve a single UI implementation? Does the service know how much space the UI has to display a message?
  • Are errors only displayed as text? You might also need error-specific alert icons, sound effects, or text-to-speech hints.
  • Could the user change the display locale while the UI is still running? If this happens, pre-localized strings might be difficult to update to the new locale, particularly if they were the result of some non-idempotent operation.

Unless you are building a highly specialized service that is tightly coupled to a single UI implementation, you probably shouldn't expose user-visible UI strings in your FIDL service.

Should I define a struct to encapsulate method parameters (or responses)?

Whenever you define a method, you need to decide whether to pass parameters individually or to encapsulate the parameters in a struct. Making the best choice involves balancing several factors. Consider the questions below to help guide your decision making:

  • Is there a meaningful encapsulation boundary? If a group of parameters makes sense to pass around as a unit because they have some cohesion beyond this method, you might want to encapsulate those parameters in a struct. (Hopefully, you have already identified these cohesive groups when you started designing your protocol because you followed the "general advice" above and focused on the types early on.)

  • Would the struct be useful for anything beyond the method being called? If not, consider passing the parameters separately.

  • Are you repeating the same groups of parameters in many methods? If so, consider grouping those parameters into one or more structures. You might also consider whether the repetition indicates that these parameters are cohesive because they represent some important concept in your protocol.

  • Are there a large number of parameters that are optional or otherwise are commonly given a default value? If so, consider using a struct to reduce boilerplate for callers.

  • Are there groups of parameters that are always null or non-null at the same time? If so, consider grouping those parameters into a nullable struct to enforce that invariant in the protocol itself. For example, in the Frobinator3.Frobinate response defined above, value is always null exactly when err is not MyStatusCode.OK.

Should I use string or vector<byte>?

In FIDL, string data must be valid UTF-8, which means strings can represent sequences of Unicode code points but cannot represent arbitrary binary data. In contrast, vector<byte> or array<byte, N> can represent arbitrary binary data and do not imply Unicode.

Use string for text data:

  • Use string to represent package names because package names are required to be valid UTF-8 strings (with certain excluded characters).

  • Use string to represent file names within packages because file names within packages are required to be valid UTF-8 strings (with certain excluded characters).

  • Use string to represent media codec names because media codec names are selected from a fixed vocabulary of valid UTF-8 strings.

  • Use string to represent HTTP methods because HTTP methods are comprised of a fixed selection of characters that are always valid UTF-8.

Use vector<byte> or array<byte, N> for small non-text data:

  • Use vector<byte> for HTTP header fields because HTTP header fields do not specify an encoding and therefore cannot necessarily be represented in UTF-8.

  • Use array<byte, 6> for MAC addresses because MAC addresses are binary data.

  • Use array<byte, 16> for UUIDs because UUIDs are (almost!) arbitrary binary data.

Use shared-memory primitives for blobs:

  • Use zx.Handle:VMO for images and (large) protobufs, when it makes sense to buffer the data completely.
  • Use zx.Handle:SOCKET for audio and video streams because data may arrive over time, or when it makes sense to process data before it is completely written or available.

Should I use vector or array?

A vector is a variable-length sequence that is represented out-of-line in the wire format. An array is a fixed-length sequence that is represented in-line in the wire format.

Use vector for variable-length data:

  • Use vector for tags in log messages because log messages can have between zero and five tags.

Use array for fixed-length data:

  • Use array for MAC addresses because a MAC address is always six bytes long.

Should I use a struct or a table?

Both structs and tables represent an object with multiple named fields. The difference is that structs have a fixed layout in the wire format, which means they cannot be modified without breaking binary compatibility. By contrast, tables have a flexible layout in the wire format, which means fields can be added to a table over time without breaking binary compatibility.

Use structs for performance-critical protocol elements or for protocol elements that are very unlikely to change in the future. For example, use a struct to represent a MAC address because the structure of a MAC address is very unlikely to change in the future.

Use tables for protocol elements that are likely to change in the future. For example, use a table to represent metadata information about camera devices because the fields in the metadata are likely to evolve over time.
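As a sketch of the two examples above (field names are hypothetical):

// A MAC address has a fixed, stable layout: use a struct.
type MacAddress = struct {
    octets array<uint8, 6>;
};

// Camera metadata is likely to gain fields over time: use a table.
type CameraMetadata = table {
    1: vendor_name string:MAX;
    2: product_name string:MAX;
    3: supports_autofocus bool;
};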

How should I represent constants?

There are three ways to represent constants, depending on the flavor of constant you have:

  1. Use const for special values, like PI, or MAX_NAME_LEN.
  2. Use enum when the values are elements of a set, like the repeat mode of a media player: OFF, SINGLE_TRACK, or ALL_TRACKS.
  3. Use bits for constants forming a group of flags, such as the capabilities of an interface: WLAN, SYNTH, and LOOPBACK.

const

Use a const when there is a value that you wish to use symbolically rather than typing the value every time. The classical example is PI — it's often coded as a const because it's convenient to not have to type 3.141592653589 every time you want to use this value.

Alternatively, you may use a const when the value may change, but needs to otherwise be used consistently throughout. A maximum number of characters that can be supplied in a given field is a good example (e.g., MAX_NAME_LEN). By using a const, you centralize the definition of that number, and thus don't end up with different values throughout your code.

Another reason to choose const is that you can use it both to constrain a message, and then later on in code. For example:

const MAX_BATCH_SIZE int32 = 128;

protocol Sender {
    Emit(struct {
        batch vector<uint8>:MAX_BATCH_SIZE;
    });
};

You can then use the constant MAX_BATCH_SIZE in your code to assemble batches.

enum

Use an enum if the set of enumerated values is bounded and controlled by the Fuchsia project. For example, the Fuchsia project defines the pointer event input model and therefore controls the values enumerated by PointerEventPhase.

In some scenarios, you should use an enum even if the Fuchsia project itself does not control the set of enumerated values if we can reasonably expect that people who will want to register new values will submit a patch to the Fuchsia source tree to register their values. For example, texture formats need to be understood by the Fuchsia graphics drivers, which means new texture formats can be added by developers working on those drivers even if the set of texture formats is controlled by the graphics hardware vendors. As a counterexample, do not use an enum to represent HTTP methods because we cannot reasonably expect people who use novel HTTP methods to submit a patch to the Platform Source Tree.

For a priori unbounded sets, a string might be a more appropriate choice if you foresee wanting to extend the set dynamically. For example, use a string to represent media codec names because intermediaries might be able to do something reasonable with a novel media codec name.

If the set of enumerated values is controlled by an external entity, use an integer (of an appropriate size) or a string. For example, use an integer (of some size) to represent USB HID identifiers because the set of USB HID identifiers is controlled by an industry consortium. Similarly, use a string to represent a MIME type because MIME types are controlled (at least in theory) by an IANA registry.

We recommend that, where possible, developers avoid using 0 as an enum value. Because many target languages use 0 as the default value for integers, it can be difficult to distinguish whether a 0 value was set intentionally or was simply left at the default. For instance, fuchsia.modular.StoryState defines three values: RUNNING with value 1, STOPPING with value 2, and STOPPED with value 3.
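As a sketch, such an enum deliberately starts its values at 1:

type StoryState = strict enum {
    RUNNING = 1;
    STOPPING = 2;
    STOPPED = 3;
};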

There are two cases where using the value 0 is appropriate:

bits

If your protocol has a bitfield, represent its values using bits values (for details, see RFC-0025: "Bit Flags").

For example:

type InfoFeaturesHex = strict bits : uint32 {
    WLAN = 0x00000001; // If present, this device represents WLAN hardware
    SYNTH = 0x00000002; // If present, this device is synthetic (not backed by h/w)
    LOOPBACK = 0x00000004; // If present, this device receives all messages it sends
};

This indicates that the InfoFeaturesHex bits type is backed by an unsigned 32-bit integer, and then goes on to define the three bits that are used.

You can also express the values in binary (as opposed to hex) using the 0b notation:

type InfoFeaturesBits = strict bits : uint32 {
    WLAN = 0b00000001; // If present, this device represents WLAN hardware
    SYNTH = 0b00000010; // If present, this device is synthetic (not backed by h/w)
    LOOPBACK = 0b00000100; // If present, this device receives all messages it sends
};

This is the same as the previous example.

Should I use resource?

The FIDL compiler will enforce that any types that already contain a resource are marked as such.

If a flexible type does not contain resources but is likely to in the future, the resource modifier should be added pre-emptively to avoid a difficult transition later on. This situation is rare: experience has shown that most messages do not contain resources, and passing resources in protocols requires care and upfront planning.
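As a sketch of this pre-emptive marking (the type and field below are hypothetical):

type StreamConfig = resource table {
    1: name string:64;
    // No handle members yet, but marking the type `resource` now means one
    // (for example, a vmo) can be added later without a painful transition.
};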

Should I use strict or flexible on types?

Marking a type as flexible makes it possible to handle data that is unknown to the current FIDL schema, and is recommended for types that may add or remove members in the future (e.g., configs, metadata, or errors). It is always possible to soft transition between strict and flexible for an existing type.
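For example, a sketch of how that choice might look in practice (both types here are hypothetical):

// A closed set whose members must never be reinterpreted: strict.
type PowerState = strict enum {
    OFF = 1;
    ON = 2;
};

// An error set that is expected to grow over time: flexible.
type FetchError = flexible enum {
    TIMEOUT = 1;
    INVALID_RESPONSE = 2;
};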

As a matter of style, always specify this modifier when the type permits one. The Fuchsia project enforces this style with a linter check.

Using strict or flexible does not have any significant performance impact.

Handle rights

This section describes best practices for assigning rights constraints on handles in FIDL.

See the FIDL bindings spec or RFC-0028 for more details on how rights are used in bindings.

For the zircon rights definitions, see kernel rights. FIDL uses rights.fidl to resolve rights constraints.

Always specify rights on handles

All handles should specify rights to favor being explicit about the intended use. This requirement forces an upfront decision of which rights are to be passed, rather than basing them on observed behavior. Having explicit rights also contributes to the auditability of API surfaces.

Use the minimum rights the recipient needs

When determining which rights to provide, prefer being minimal, i.e. the least amount of rights needed to achieve the functionality sought. For instance, if it is known that only zx.Rights.READ and zx.Rights.WRITE will be needed, then only these two rights should be specified.
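A sketch of what that looks like in a declaration, assuming only read and write access is needed (the protocol, method, and field names are hypothetical):

using zx;

protocol BufferConsumer {
    SetBuffer(resource struct {
        buffer zx.Handle:<VMO, zx.Rights.READ | zx.Rights.WRITE>;
    });
};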

Do not add rights based on speculative needs. If a right needs to be added at a future time, it can be added by starting with the source and adding it to each location along the call path up until the final point of use.

Use zx.Rights.SAME_RIGHTS sparingly

zx.Rights.SAME_RIGHTS is well suited to a protocol that forwards a handle of unknown rights, but in most cases a specific set of rights should be used instead. Part of the motivation for this is that zx.Rights.SAME_RIGHTS tells bindings to skip rights checks, so it disables the security protection that handle rights may offer. Additionally, zx.Rights.SAME_RIGHTS makes the rights set dynamic, meaning that a process might receive fewer or more rights than it actually needs.

It is worth noting that zx.Rights.SAME_RIGHTS is not the same as the default rights set for a type, e.g. zx.DEFAULT_CHANNEL_RIGHTS. While the former skips rights checks, the latter requires all normal rights for a given object type to be present.

Good Design Patterns

This section describes several good design patterns that recur in many FIDL protocols.

Should I use strict or flexible on methods and events?

Marking a method or event as flexible makes it easier to deal with removing the method or event when different components might have been built at different versions, such that some components think the method or event exists while others that they communicate with do not. Because flexibility for evolving protocols is generally desirable, it is recommended to choose flexible for methods and events unless there is a good reason to choose strict.

Marking a one-way method or event as flexible adds no overhead. For two-way methods, choosing flexible adds a tiny amount of overhead (16 bytes or fewer) to the message and possibly a small amount of additional decoding time. Overall, the cost of making a two-way method flexible is small enough that it should not be a consideration for almost all use cases.

Methods and events should be made strict only when they are so critical to the correct behavior of the protocol that the absence of that method or event on the receiving side is serious enough that all communication between the two ends should be aborted and the connection closed.

This can be particularly useful when designing for feed-forward dataflow. Consider this logger protocol which supports a mode for safely handling logs with personally identifiable information (PII). It uses the feed-forward pattern to add records so that the client can initiate many operations sequentially without waiting for the round-trip time, and just flush the pending operations at the end.

open protocol Logger {
    flexible AddRecord(struct {
        message string;
    });
    strict EnablePIIMode();
    flexible DisablePIIMode();
    flexible Flush() -> ();
};

All the methods here are flexible except EnablePIIMode; consider what happens if the server doesn't recognize any one of the methods:

  • AddRecord: the server just fails to add the data to the log output. The sending application behaves normally, though its log records become less useful. This is inconvenient, but safe.
  • EnablePIIMode: the server fails to enable PII mode, meaning it might fail to take security precautions and leak PII. This is a serious issue, so it is preferable to close the channel if the server doesn't recognize this method.
  • DisablePIIMode: the server takes unnecessary security precautions for messages that don't need PII logging. This might be inconvenient for someone trying to read the logs, but is safe for the system.
  • Flush: the server fails to flush the records as requested, which is potentially inconvenient, but still safe.

An alternative way of designing this protocol to be fully flexible would be to make EnablePIIMode a two-way method (flexible EnablePIIMode() -> ();) so that the client can find out whether the server recognizes the method. Notice how this creates additional flexibility for the client: with this setup, the client can choose whether to respond to the server not recognizing EnablePIIMode by closing the connection or by simply not logging PII, whereas with strict the channel is always closed automatically. However, this does interrupt the feed-forward flow.
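That alternative might look like the following sketch (the Logger2 name is only for illustration):

open protocol Logger2 {
    flexible AddRecord(struct {
        message string;
    });
    flexible EnablePIIMode() -> ();
    flexible DisablePIIMode();
    flexible Flush() -> ();
};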

Keep in mind that strictness is based on the sender. Suppose you have some method strict A(); in version 1, then change it to flexible A(); in version 2, and then delete it in version 3. If a client built at version 1 tries to call A() on a server built at version 3, the method will be treated as strict, because the client at version 1 thinks the method is strict, and the server at version 3 takes the client's word for it because it doesn't recognize the method at all.

As a matter of style, always specify this modifier. The Fuchsia project enforces this style with a linter check.

Should I use open, ajar, or closed?

Marking a protocol as open makes it easier to deal with removing methods or events when different components might have been built at different versions, such that each component has a different view of which methods and events exist. Because flexibility for evolving protocols is generally desirable, it is recommended to choose open for protocols unless there is a reason to choose a more closed protocol.

Deciding to use ajar or closed should be based on expected constraints on the evolution of a protocol. Using closed or ajar does not prevent protocol evolution; however, it does require a longer roll-out period, during which methods and events exist but aren't used, to ensure that all clients and servers agree about which methods exist. The flexibility of using flexible applies to both adding and removing methods and events, depending on whether the client or server is updated first.

ajar could be useful for a protocol that uses feed-forward dataflow but whose evolution is expected to be limited to one-way methods. For example, this could apply to a tear-off protocol representing a transaction, where the only two-way method is a commit operation that must be strict, while other operations on the transaction may evolve.

ajar protocol BazTransaction {
    flexible Add(resource struct {
        bars vector<client_end:Bar>;
    });
    strict Commit() -> (struct {
        args Args;
    });
};

open protocol Foo3 {
    flexible StartBazTransaction(resource struct {
        transaction server_end:BazTransaction;
    });
};

closed is useful for critical protocols where any unknown method is a serious issue which should result in closing the channel rather than continuing in a potentially bad state. It is also reasonable to use it for protocols which are very unlikely to change, or at least where any change is likely to involve an extremely long rollout cycle anyway, such that the extra cost involved in changing strict methods is already expected in the rollout cycle.
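For example, a minimal sketch of such a critical protocol (the protocol and its methods are hypothetical):

closed protocol Watchdog {
    strict Ping() -> ();
    strict -> OnTimeout();
};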

Protocol request pipelining

One of the best and most widely used design patterns is protocol request pipelining. Rather than returning a channel that supports a protocol, the client sends the channel and requests the server to bind an implementation of the protocol to that channel:

// GOOD:
protocol GoodFoo {
    GetBar(resource struct {
        name string;
        bar server_end:Bar;
    });
};

// BAD:
protocol BadFoo {
    GetBar(struct {
        name string;
    }) -> (resource struct {
        bar client_end:Bar;
    });
};

This pattern is useful because the client does not need to wait for a round-trip before starting to use the Bar protocol. Instead, the client can queue messages for Bar immediately. Those messages will be buffered by the kernel and processed eventually once an implementation of Bar binds to the protocol request. By contrast, if the server returns an instance of the Bar protocol, the client needs to wait for the whole round-trip before queuing messages for Bar.

If the request is likely to fail, consider extending this pattern with a reply that describes whether the operation succeeded:

protocol CodecProvider {
    TryToCreateCodec(resource struct {
        params CodecParams;
        codec server_end:Codec;
    }) -> (struct {
        succeed bool;
    });
};

To handle the failure case, the client waits for the reply and takes some other action if the request failed. Another approach is for the protocol to have an event that the server sends at the start of the protocol:

protocol Codec2 {
    -> OnReady();
};

protocol CodecProvider2 {
    TryToCreateCodec(resource struct {
        params CodecParams;
        codec server_end:Codec2;
    });
};

To handle the failure case, the client waits for the OnReady event and takes some other action if the Codec2 channel is closed before the event arrives.

However, if the request is likely to succeed, having either kind of success signal can be harmful because the signal allows the client to distinguish between different failure modes that often should be handled in the same way. For example, the client should treat a service that fails immediately after establishing a connection in the same way as a service that cannot be reached in the first place. In both situations, the service is unavailable and the client should either generate an error or find another way to accomplish its task.

Flow control

FIDL messages are buffered by the kernel. If one endpoint produces more messages than the other endpoint consumes, the messages will accumulate in the kernel, taking up memory and making it more difficult for the system to recover. Instead, well-designed protocols should throttle the production of messages to match the rate at which those messages are consumed, a property known as flow control.

Flow control is a broad, complex topic, and there are a number of effective design patterns. This section discusses some of the more popular flow control patterns but is not exhaustive. The patterns are listed in descending order of preference. If one of these patterns works well for a particular use case it should be used. But if not, protocols are free to use alternative flow control mechanisms that are not listed below.

Prefer pull to push

Without careful design, protocols in which the server pushes data to the client often have poor flow control. One approach to providing better flow control is to have the client pull one item, or a range of items, from the server. Pull models have built-in flow control since the client naturally limits the rate at which the server produces data and avoids being overwhelmed by messages pushed from the server.

Delay responses using hanging gets

A simple way to implement a pull-based protocol is to "park a callback" with the server using the hanging get pattern:

protocol FooProvider {
    WatchFoo(struct {
        args Args;
    }) -> (resource struct {
        foo client_end:Foo;
    });
};

In this pattern, the client sends a WatchFoo message but the server does not reply until it has new information to send to the client. The client consumes foo and immediately sends another hanging get. The client and server each do one unit of work per data item, which means neither gets ahead of the other.

The hanging get pattern works well when the set of data items being transferred is bounded in size and the server-side state is simple, but does not work well in situations where the client and server need to synchronize their work.

For example, a server might implement the hanging get pattern for some mutable state foo using a "dirty" bit for each client. It would initialize this bit to true, clear it on each WatchFoo response, and set it on each change of foo. The server would only respond to a WatchFoo message when the dirty bit is set.

Throttle push using acknowledgements

One approach to providing flow control in protocols that use push is the acknowledgement pattern, in which the callee acknowledges each message with a response that the caller uses for flow control. For example, consider this generic listener protocol:

protocol Listener {
    OnBar(struct {
        args Args;
    }) -> ();
};

The listener is expected to send an empty response message immediately upon receiving the OnBar message. The response does not convey any data to the caller. Instead, the response lets the caller observe the rate at which the callee is consuming messages. The caller should throttle the rate at which it produces messages to match the rate at which the callee consumes them. For example, the caller might arrange for only one (or a fixed number) of messages to be in flight (i.e., waiting for acknowledgement).

FIDL recipe: Acknowledgement pattern

The acknowledgement pattern is a simple method of flow control for methods that would otherwise be one way calls. Rather than leaving the method as a one way call, it is instead turned into a two way call with an empty response, colloquially known as an ack. The ack's only purpose is to inform the sender that the message has been received, which the sender can use to decide how to proceed.

The cost of this acknowledgement is added chatter over the channel. This pattern can also result in degraded performance if the client waits for the acknowledgement before proceeding with the next call.

Sending unmetered one way calls back and forth produces a simple design, but there are potential pitfalls: what if the server processes updates much more slowly than the client sends them? For example, the client may load a drawing consisting of many thousands of lines from some text file, and try to send them all sequentially. How can we apply back pressure to the client to prevent the server from being overwhelmed by this wave of updates?

By using the acknowledgement pattern and making the one way call AddLine(...); into a two way AddLine(...) -> ();, we can provide feedback to the client. This will allow the client to throttle its output as appropriate. In this example, we'll simply have the client wait for the ack before sending the next message it has waiting, though more complex designs could send messages optimistically, and only throttle when they receive async acks less frequently than expected.

First, we need to define our interface definitions and test harness. The FIDL, CML, and realm interface definitions set up a scaffold that arbitrary implementations can use:

FIDL

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
library examples.canvas.addlinemetered;

/// A point in 2D space.
type Point = struct {
    x int64;
    y int64;
};

/// A line in 2D space.
alias Line = array<Point, 2>;

/// A bounding box in 2D space. This is the result of "drawing" operations on our canvas, and what
/// the server reports back to the client. These bounds are sufficient to contain all of the
/// lines (inclusive) on a canvas at a given time.
type BoundingBox = struct {
    top_left Point;
    bottom_right Point;
};

/// Manages a single instance of a canvas. Each session of this protocol is responsible for a new
/// canvas.
@discoverable
open protocol Instance {
    /// Add a line to the canvas.
    ///
    /// This method can be considered an improvement over the one-way case from a flow control
    /// perspective, as it is now much more difficult for a well-behaved client to "get ahead" of
    /// the server and overwhelm. This is because the client now waits for each request to be acked
    /// by the server before proceeding. This change represents a trade-off: we get much greater
    /// synchronization of message flow between the client and the server, at the cost of worse
    /// performance at the limit due to the extra wait imposed by each ack.
    flexible AddLine(struct {
        line Line;
    }) -> ();

    /// Update the client with the latest drawing state. The server makes no guarantees about how
    /// often this event occurs - it could occur multiple times per board state, for example.
    flexible -> OnDrawn(BoundingBox);
};

CML

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
{
    include: [ "syslog/client.shard.cml" ],
    program: {
        runner: "elf",
        binary: "bin/client_bin",
    },
    use: [
        { protocol: "examples.canvas.addlinemetered.Instance" },
    ],
    config: {
        // A script for the client to follow. Entries in the script may take one of two forms: a
        // pair of signed-integer coordinates like "-2,15:4,5", or the string "WAIT". The former
        // calls `AddLine(...)`, while the latter pauses execution until the next `->OnDrawn(...)`
        // event is received.
        //
        // TODO(https://fxbug.dev/42178362): It would absolve individual language implementations of a great
        //   deal of string parsing if we were able to use a vector of `union { Point; WaitEnum}`
        //   here.
        script: {
            type: "vector",
            max_count: 100,
            element: {
                type: "string",
                max_size: 64,
            },
        },
    },
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
{
    include: [ "syslog/client.shard.cml" ],
    program: {
        runner: "elf",
        binary: "bin/server_bin",
    },
    capabilities: [
        { protocol: "examples.canvas.addlinemetered.Instance" },
    ],
    expose: [
        {
            protocol: "examples.canvas.addlinemetered.Instance",
            from: "self",
        },
    ],
}

Realm

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
{
    children: [
        {
            name: "client",
            url: "#meta/client.cm",
        },
        {
            name: "server",
            url: "#meta/server.cm",
        },
    ],
    offer: [
        // Route the protocol under test from the server to the client.
        {
            protocol: "examples.canvas.addlinemetered.Instance",
            from: "#server",
            to: "#client",
        },

        // Route diagnostics support to all children.
        {
            protocol: [
                "fuchsia.inspect.InspectSink",
                "fuchsia.logger.LogSink",
            ],
            from: "parent",
            to: [
                "#client",
                "#server",
            ],
        },
    ],
}

Client and server implementations can then be written in any supported language:

Rust

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

use {
    anyhow::{format_err, Context as _, Error},
    config::Config,
    fidl_examples_canvas_addlinemetered::{InstanceEvent, InstanceMarker, Point},
    fuchsia_component::client::connect_to_protocol,
    futures::TryStreamExt,
    std::{thread, time},
};

#[fuchsia::main]
async fn main() -> Result<(), Error> {
    println!("Started");

    // Load the structured config values passed to this component at startup.
    let config = Config::take_from_startup_handle();

    // Use the Component Framework runtime to connect to the newly spun up server component. We wrap
    // our retained client end in a proxy object that lets us asynchronously send Instance requests
    // across the channel.
    let instance = connect_to_protocol::<InstanceMarker>()?;
    println!("Outgoing connection enabled");

    for action in config.script.into_iter() {
        // If the next action in the script is to "WAIT", block until an OnDrawn event is received
        // from the server.
        if action == "WAIT" {
            let mut event_stream = instance.take_event_stream();
            loop {
                match event_stream
                    .try_next()
                    .await
                    .context("Error getting event response from proxy")?
                    .ok_or_else(|| format_err!("Proxy sent no events"))?
                {
                    InstanceEvent::OnDrawn { top_left, bottom_right } => {
                        println!(
                            "OnDrawn event received: top_left: {:?}, bottom_right: {:?}",
                            top_left, bottom_right
                        );
                        break;
                    }
                    InstanceEvent::_UnknownEvent { ordinal, .. } => {
                        println!("Received an unknown event with ordinal {ordinal}");
                    }
                }
            }
            continue;
        }

        // If the action is not a "WAIT", we need to draw a line instead. Parse the string input,
        // making two points out of it.
        let mut points = action
            .split(":")
            .map(|point| {
                let integers = point
                    .split(",")
                    .map(|integer| integer.parse::<i64>().unwrap())
                    .collect::<Vec<i64>>();
                Point { x: integers[0], y: integers[1] }
            })
            .collect::<Vec<Point>>();

        // Assemble a line from the two points.
        let from = points.pop().ok_or(format_err!("line requires 2 points, but has 0"))?;
        let to = points.pop().ok_or(format_err!("line requires 2 points, but has 1"))?;
        let line = [from, to];

        // Draw a line to the canvas by calling the server, using the two points we just parsed
        // above as arguments.
        println!("AddLine request sent: {:?}", line);

        // By awaiting on the reply, we prevent the client from sending another request before the
        // server is ready to handle, thereby syncing the flow rate between the two parties over
        // this method.
        instance.add_line(&line).await.context("Error sending request")?;
        println!("AddLine response received");
    }

    // TODO(https://fxbug.dev/42156498): We need to sleep here to make sure all logs get drained. Once the
    // referenced bug has been resolved, we can remove the sleep.
    thread::sleep(time::Duration::from_secs(2));
    Ok(())
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

use {
    anyhow::{Context as _, Error},
    fidl::endpoints::RequestStream as _,
    fidl_examples_canvas_addlinemetered::{
        BoundingBox, InstanceRequest, InstanceRequestStream, Point,
    },
    fuchsia_async::{Time, Timer},
    fuchsia_component::server::ServiceFs,
    fuchsia_zircon::{self as zx},
    futures::future::join,
    futures::prelude::*,
    std::sync::{Arc, Mutex},
};

// A struct that stores the two things we care about for this example: the bounding box of the
// lines that have been added thus far, and a bit to track whether or not there have been changes
// since the last `OnDrawn` event.
#[derive(Debug)]
struct CanvasState {
    // Tracks whether there has been a change since the last send, to prevent redundant updates.
    changed: bool,
    bounding_box: BoundingBox,
}

impl CanvasState {
    /// Handler for the `AddLine` method.
    fn add_line(&mut self, line: [Point; 2]) {
        // Update the bounding box to account for the new lines we've just "added" to the canvas.
        let bounds = &mut self.bounding_box;
        for point in line {
            if point.x < bounds.top_left.x {
                bounds.top_left.x = point.x;
            }
            if point.y > bounds.top_left.y {
                bounds.top_left.y = point.y;
            }
            if point.x > bounds.bottom_right.x {
                bounds.bottom_right.x = point.x;
            }
            if point.y < bounds.bottom_right.y {
                bounds.bottom_right.y = point.y;
            }
        }

        // Mark the state as "dirty", so that an update is sent back to the client on the next tick.
        self.changed = true
    }
}

/// Creates a new instance of the server, paired to a single client across a zircon channel.
async fn run_server(stream: InstanceRequestStream) -> Result<(), Error> {
    // Create a new in-memory state store for the state of the canvas. The store will live for the
    // lifetime of the connection between the server and this particular client.
    let state = Arc::new(Mutex::new(CanvasState {
        changed: true,
        bounding_box: BoundingBox {
            top_left: Point { x: 0, y: 0 },
            bottom_right: Point { x: 0, y: 0 },
        },
    }));

    // Take ownership of the control_handle from the stream, which will allow us to push events from
    // a different async task.
    let control_handle = stream.control_handle();

    // A separate watcher task periodically "draws" the canvas, and notifies the client of the new
    // state. We'll need a cloned reference to the canvas state to be accessible from the new
    // task.
    let state_ref = state.clone();
    let update_sender = || async move {
        loop {
            // Our server sends one update per second.
            Timer::new(Time::after(zx::Duration::from_seconds(1))).await;
            let mut state = state_ref.lock().unwrap();
            if !state.changed {
                continue;
            }

            // After acquiring the lock, this is where we would draw the actual lines. Since this is
            // just an example, we'll avoid doing the actual rendering, and simply send the bounding
            // box to the client instead.
            let bounds = state.bounding_box;
            match control_handle.send_on_drawn(&bounds.top_left, &bounds.bottom_right) {
                Ok(_) => println!(
                    "OnDrawn event sent: top_left: {:?}, bottom_right: {:?}",
                    bounds.top_left, bounds.bottom_right
                ),
                Err(_) => return,
            }

            // Reset the change tracker.
            state.changed = false
        }
    };

    // Handle requests on the protocol sequentially - a new request is not handled until its
    // predecessor has been processed.
    let state_ref = &state;
    let request_handler =
        stream.map(|result| result.context("failed request")).try_for_each(|request| async move {
            // Match based on the method being invoked.
            match request {
                InstanceRequest::AddLine { line, responder } => {
                    println!("AddLine request received: {:?}", line);
                    state_ref.lock().unwrap().add_line(line);

                    // Because this is now a two-way method, we must use the generated `responder`
                    // to send a reply (in this case, an empty one) back to the client. This is the
                    // mechanism that syncs the flow rate between the client and server on this
                    // method, thereby preventing the client from "flooding" the server with
                    // unacknowledged work.
                    responder.send().context("Error responding")?;
                    println!("AddLine response sent");
                } //
                InstanceRequest::_UnknownMethod { ordinal, .. } => {
                    println!("Received an unknown method with ordinal {ordinal}");
                }
            }
            Ok(())
        });

    // This await does not complete, and thus the function does not return, unless the server errors
    // out. The stream will await indefinitely, thereby creating a long-lived server. Here, we first
    // wait for the updater task to realize the connection has died, then bubble up the error.
    join(request_handler, update_sender()).await.0
}

// A helper enum that allows us to treat an `Instance` service instance as a value.
enum IncomingService {
    Instance(InstanceRequestStream),
}

#[fuchsia::main]
async fn main() -> Result<(), Error> {
    println!("Started");

    // Add a discoverable instance of our `Instance` protocol - this will allow the client to see
    // the server and connect to it.
    let mut fs = ServiceFs::new_local();
    fs.dir("svc").add_fidl_service(IncomingService::Instance);
    fs.take_and_serve_directory_handle()?;
    println!("Listening for incoming connections");

    // The maximum number of concurrent clients that may be served by this process.
    const MAX_CONCURRENT: usize = 10;

    // Serve each connection simultaneously, up to the `MAX_CONCURRENT` limit.
    fs.for_each_concurrent(MAX_CONCURRENT, |IncomingService::Instance(stream)| {
        run_server(stream).unwrap_or_else(|e| println!("{:?}", e))
    })
    .await;

    Ok(())
}

C++ (Natural)

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <fidl/examples.canvas.addlinemetered/cpp/fidl.h>
#include <lib/async-loop/cpp/loop.h>
#include <lib/component/incoming/cpp/protocol.h>
#include <lib/syslog/cpp/macros.h>
#include <unistd.h>

#include <charconv>

#include <examples/fidl/new/canvas/add_line_metered/cpp_natural/client/config.h>

// The |EventHandler| is a derived class that we pass into the |fidl::Client| to handle incoming
// events asynchronously.
class EventHandler : public fidl::AsyncEventHandler<examples_canvas_addlinemetered::Instance> {
 public:
  // Handler for |OnDrawn| events sent from the server.
  void OnDrawn(fidl::Event<examples_canvas_addlinemetered::Instance::OnDrawn>& event) override {
    auto top_left = event.top_left();
    auto bottom_right = event.bottom_right();
    FX_LOGS(INFO) << "OnDrawn event received: top_left: Point { x: " << top_left.x()
                  << ", y: " << top_left.y() << " }, bottom_right: Point { x: " << bottom_right.x()
                  << ", y: " << bottom_right.y() << " }";
    loop_.Quit();
  }

  void on_fidl_error(fidl::UnbindInfo error) override { FX_LOGS(ERROR) << error; }

  void handle_unknown_event(
      fidl::UnknownEventMetadata<examples_canvas_addlinemetered::Instance> metadata) override {
    FX_LOGS(WARNING) << "Received an unknown event with ordinal " << metadata.event_ordinal;
  }

  explicit EventHandler(async::Loop& loop) : loop_(loop) {}

 private:
  async::Loop& loop_;
};

// A helper function that takes a coordinate in string form, like "123,-456", and parses it into
// a struct of the form |{ int64 x; int64 y; }|.
::examples_canvas_addlinemetered::Point ParsePoint(std::string_view input) {
  int64_t x = 0;
  int64_t y = 0;
  size_t index = input.find(',');
  if (index != std::string::npos) {
    std::from_chars(input.data(), input.data() + index, x);
    std::from_chars(input.data() + index + 1, input.data() + input.length(), y);
  }
  return ::examples_canvas_addlinemetered::Point(x, y);
}

// A helper function that takes a coordinate pair in string form, like "1,2:-3,-4", and parses it
// into an array of 2 |Point| structs.
::std::array<::examples_canvas_addlinemetered::Point, 2> ParseLine(const std::string& action) {
  auto input = std::string_view(action);
  size_t index = input.find(':');
  if (index != std::string::npos) {
    return {ParsePoint(input.substr(0, index)), ParsePoint(input.substr(index + 1))};
  }
  return {};
}

int main(int argc, const char** argv) {
  FX_LOGS(INFO) << "Started";

  // Retrieve component configuration.
  auto conf = config::Config::TakeFromStartupHandle();

  // Start up an async loop and dispatcher.
  async::Loop loop(&kAsyncLoopConfigNeverAttachToThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();

  // Connect to the protocol inside the component's namespace. This can fail so it's wrapped in a
  // |zx::result| and it must be checked for errors.
  zx::result client_end = component::Connect<examples_canvas_addlinemetered::Instance>();
  if (!client_end.is_ok()) {
    FX_LOGS(ERROR) << "Synchronous error when connecting to the |Instance| protocol: "
                   << client_end.status_string();
    return -1;
  }

  // Create an instance of the event handler.
  EventHandler event_handler(loop);

  // Create an asynchronous client using the newly-established connection.
  fidl::Client client(std::move(*client_end), dispatcher, &event_handler);
  FX_LOGS(INFO) << "Outgoing connection enabled";

  for (const auto& action : conf.script()) {
    // If the next action in the script is to "WAIT", block until an |OnDrawn| event is received
    // from the server.
    if (action == "WAIT") {
      loop.Run();
      loop.ResetQuit();
      continue;
    }

    // Draw a line to the canvas by calling the server, using the two points we just parsed
    // above as arguments.
    auto line = ParseLine(action);
    FX_LOGS(INFO) << "AddLine request sent: [Point { x: " << line[1].x() << ", y: " << line[1].y()
                  << " }, Point { x: " << line[0].x() << ", y: " << line[0].y() << " }]";

    client->AddLine(line).ThenExactlyOnce(
        [&](fidl::Result<examples_canvas_addlinemetered::Instance::AddLine>& result) {
          // Check if the FIDL call succeeded or not.
          if (!result.is_ok()) {
            // Check that our two-way call succeeded, and handle the error appropriately. In the
            // case of this example, there is nothing we can do to recover here, except to log an
            // error and exit the program.
            FX_LOGS(ERROR) << "Could not send AddLine request: "
                           << result.error_value().FormatDescription();
          }
          FX_LOGS(INFO) << "AddLine response received";

          // Quit the loop, thereby handing control back to the outer loop of actions being iterated
          // over.
          loop.Quit();
        });

    // Run the loop until the callback is resolved, at which point we can continue from here.
    loop.Run();
    loop.ResetQuit();
  }

  // TODO(https://fxbug.dev/42156498): We need to sleep here to make sure all logs get drained. Once the
  // referenced bug has been resolved, we can remove the sleep.
  sleep(2);
  return 0;
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <fidl/examples.canvas.addlinemetered/cpp/fidl.h>
#include <lib/async-loop/cpp/loop.h>
#include <lib/async/cpp/task.h>
#include <lib/component/outgoing/cpp/outgoing_directory.h>
#include <lib/fidl/cpp/wire/channel.h>
#include <lib/syslog/cpp/macros.h>
#include <unistd.h>

#include <src/lib/fxl/macros.h>
#include <src/lib/fxl/memory/weak_ptr.h>

// A struct that stores the two things we care about for this example: the bounding box of the
// lines added thus far, and a bit to track whether there have been changes since the last send.
struct CanvasState {
  // Tracks whether there has been a change since the last send, to prevent redundant updates.
  bool changed = true;
  examples_canvas_addlinemetered::BoundingBox bounding_box;
};

// An implementation of the |Instance| protocol.
class InstanceImpl final : public fidl::Server<examples_canvas_addlinemetered::Instance> {
 public:
  // Bind this implementation to a channel.
  InstanceImpl(async_dispatcher_t* dispatcher,
               fidl::ServerEnd<examples_canvas_addlinemetered::Instance> server_end)
      : binding_(fidl::BindServer(
            dispatcher, std::move(server_end), this,
            [this](InstanceImpl* impl, fidl::UnbindInfo info,
                   fidl::ServerEnd<examples_canvas_addlinemetered::Instance> server_end) {
              if (info.reason() != ::fidl::Reason::kPeerClosedWhileReading) {
                FX_LOGS(ERROR) << "Shutdown unexpectedly";
              }
              delete this;
            })),
        weak_factory_(this) {
    // Start the update timer on startup. Our server sends one update per second
    ScheduleOnDrawnEvent(dispatcher, zx::sec(1));
  }

  void AddLine(AddLineRequest& request, AddLineCompleter::Sync& completer) override {
    auto points = request.line();
    FX_LOGS(INFO) << "AddLine request received: [Point { x: " << points[1].x()
                  << ", y: " << points[1].y() << " }, Point { x: " << points[0].x()
                  << ", y: " << points[0].y() << " }]";

    // Update the bounding box to account for the new line we've just "added" to the canvas.
    auto& bounds = state_.bounding_box;
    for (const auto& point : request.line()) {
      if (point.x() < bounds.top_left().x()) {
        bounds.top_left().x() = point.x();
      }
      if (point.y() > bounds.top_left().y()) {
        bounds.top_left().y() = point.y();
      }
      if (point.x() > bounds.bottom_right().x()) {
        bounds.bottom_right().x() = point.x();
      }
      if (point.y() < bounds.bottom_right().y()) {
        bounds.bottom_right().y() = point.y();
      }
    }

    // Mark the state as "dirty", so that an update is sent back to the client on the next |OnDrawn|
    // event.
    state_.changed = true;

    // Because this is now a two-way method, we must use the generated |completer| to send a reply
    // (in this case, an empty one) back to the client. This is the mechanism that syncs the flow
    // rate between the client and server on this method, thereby preventing the client from
    // "flooding" the server with unacknowledged work.
    completer.Reply();
    FX_LOGS(INFO) << "AddLine response sent";
  }

  void handle_unknown_method(
      fidl::UnknownMethodMetadata<examples_canvas_addlinemetered::Instance> metadata,
      fidl::UnknownMethodCompleter::Sync& completer) override {
    FX_LOGS(WARNING) << "Received an unknown method with ordinal " << metadata.method_ordinal;
  }

 private:
  // Each scheduled update waits for the allotted amount of time, sends an update if something has
  // changed, and schedules the next update.
  void ScheduleOnDrawnEvent(async_dispatcher_t* dispatcher, zx::duration after) {
    async::PostDelayedTask(
        dispatcher,
        [&, dispatcher, after, weak = weak_factory_.GetWeakPtr()] {
          // Halt execution if the binding has been deallocated already.
          if (!weak) {
            return;
          }

          // Schedule the next update if the binding still exists.
          weak->ScheduleOnDrawnEvent(dispatcher, after);

          // No need to send an update if nothing has changed since the last one.
          if (!weak->state_.changed) {
            return;
          }

          // This is where we would draw the actual lines. Since this is just an example, we'll
          // avoid doing the actual rendering, and simply send the bounding box to the client
          // instead.
          auto result = fidl::SendEvent(binding_)->OnDrawn(state_.bounding_box);
          if (!result.is_ok()) {
            return;
          }

          auto top_left = state_.bounding_box.top_left();
          auto bottom_right = state_.bounding_box.bottom_right();
          FX_LOGS(INFO) << "OnDrawn event sent: top_left: Point { x: " << top_left.x()
                        << ", y: " << top_left.y()
                        << " }, bottom_right: Point { x: " << bottom_right.x()
                        << ", y: " << bottom_right.y() << " }";

          // Reset the change tracker.
          state_.changed = false;
        },
        after);
  }

  fidl::ServerBindingRef<examples_canvas_addlinemetered::Instance> binding_;
  CanvasState state_ = CanvasState{};

  // Generates weak references to this object, which are appropriate to pass into asynchronous
  // callbacks that need to access this object. The references are automatically invalidated
  // if this object is destroyed.
  fxl::WeakPtrFactory<InstanceImpl> weak_factory_;
};

int main(int argc, char** argv) {
  FX_LOGS(INFO) << "Started";

  // The event loop is used to asynchronously listen for incoming connections and requests from the
  // client. The following initializes the loop, and obtains the dispatcher, which will be used when
  // binding the server implementation to a channel.
  async::Loop loop(&kAsyncLoopConfigNeverAttachToThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();

  // Create an |OutgoingDirectory| instance.
  //
  // The |component::OutgoingDirectory| class serves the outgoing directory for our component. This
  // directory is where the outgoing FIDL protocols are installed so that they can be provided to
  // other components.
  component::OutgoingDirectory outgoing = component::OutgoingDirectory(dispatcher);

  // The `ServeFromStartupInfo()` function sets up the outgoing directory with the startup handle.
  // The startup handle is a handle provided to every component by the system, so that they can
  // serve capabilities (e.g. FIDL protocols) to other components.
  zx::result result = outgoing.ServeFromStartupInfo();
  if (result.is_error()) {
    FX_LOGS(ERROR) << "Failed to serve outgoing directory: " << result.status_string();
    return -1;
  }

  // Register a handler for components trying to connect to
  // |examples.canvas.addlinemetered.Instance|.
  result = outgoing.AddUnmanagedProtocol<examples_canvas_addlinemetered::Instance>(
      [dispatcher](fidl::ServerEnd<examples_canvas_addlinemetered::Instance> server_end) {
        // Create an instance of our InstanceImpl that destroys itself when the connection closes.
        new InstanceImpl(dispatcher, std::move(server_end));
      });
  if (result.is_error()) {
    FX_LOGS(ERROR) << "Failed to add Instance protocol: " << result.status_string();
    return -1;
  }

  // Everything is wired up. Sit back and run the loop until an incoming connection wakes us up.
  FX_LOGS(INFO) << "Listening for incoming connections";
  loop.Run();
  return 0;
}

C++ (Wire)

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <fidl/examples.canvas.addlinemetered/cpp/wire.h>
#include <lib/async-loop/cpp/loop.h>
#include <lib/component/incoming/cpp/protocol.h>
#include <lib/syslog/cpp/macros.h>
#include <unistd.h>

#include <charconv>

#include <examples/fidl/new/canvas/add_line_metered/cpp_wire/client/config.h>

// The |EventHandler| is a derived class that we pass into the |fidl::WireClient| to handle incoming
// events asynchronously.
class EventHandler : public fidl::WireAsyncEventHandler<examples_canvas_addlinemetered::Instance> {
 public:
  // Handler for |OnDrawn| events sent from the server.
  void OnDrawn(fidl::WireEvent<examples_canvas_addlinemetered::Instance::OnDrawn>* event) override {
    auto top_left = event->top_left;
    auto bottom_right = event->bottom_right;
    FX_LOGS(INFO) << "OnDrawn event received: top_left: Point { x: " << top_left.x
                  << ", y: " << top_left.y << " }, bottom_right: Point { x: " << bottom_right.x
                  << ", y: " << bottom_right.y << " }";
    loop_.Quit();
  }

  void on_fidl_error(fidl::UnbindInfo error) override { FX_LOGS(ERROR) << error; }

  void handle_unknown_event(
      fidl::UnknownEventMetadata<examples_canvas_addlinemetered::Instance> metadata) override {
    FX_LOGS(WARNING) << "Received an unknown event with ordinal " << metadata.event_ordinal;
  }

  explicit EventHandler(async::Loop& loop) : loop_(loop) {}

 private:
  async::Loop& loop_;
};

// A helper function that takes a coordinate in string form, like "123,-456", and parses it into
// a struct of the form |{ int64 x; int64 y; }|.
::examples_canvas_addlinemetered::wire::Point ParsePoint(std::string_view input) {
  int64_t x = 0;
  int64_t y = 0;
  size_t index = input.find(',');
  if (index != std::string::npos) {
    std::from_chars(input.data(), input.data() + index, x);
    std::from_chars(input.data() + index + 1, input.data() + input.length(), y);
  }
  return ::examples_canvas_addlinemetered::wire::Point{.x = x, .y = y};
}

// A helper function that takes a coordinate pair in string form, like "1,2:-3,-4", and parses it
// into an array of 2 |Point| structs.
::fidl::Array<::examples_canvas_addlinemetered::wire::Point, 2> ParseLine(
    const std::string& action) {
  auto input = std::string_view(action);
  size_t index = input.find(':');
  if (index != std::string::npos) {
    return {ParsePoint(input.substr(0, index)), ParsePoint(input.substr(index + 1))};
  }
  return {};
}

int main(int argc, const char** argv) {
  FX_LOGS(INFO) << "Started";

  // Retrieve component configuration.
  auto conf = config::Config::TakeFromStartupHandle();

  // Start up an async loop and dispatcher.
  async::Loop loop(&kAsyncLoopConfigNeverAttachToThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();

  // Connect to the protocol inside the component's namespace. This can fail so it's wrapped in a
  // |zx::result| and it must be checked for errors.
  zx::result client_end = component::Connect<examples_canvas_addlinemetered::Instance>();
  if (!client_end.is_ok()) {
    FX_LOGS(ERROR) << "Synchronous error when connecting to the |Instance| protocol: "
                   << client_end.status_string();
    return -1;
  }

  // Create an instance of the event handler.
  EventHandler event_handler(loop);

  // Create an asynchronous client using the newly-established connection.
  fidl::WireClient client(std::move(*client_end), dispatcher, &event_handler);
  FX_LOGS(INFO) << "Outgoing connection enabled";

  for (const auto& action : conf.script()) {
    // If the next action in the script is to "WAIT", block until an |OnDrawn| event is received
    // from the server.
    if (action == "WAIT") {
      loop.Run();
      loop.ResetQuit();
      continue;
    }

    // Draw a line to the canvas by calling the server, using the two points we just parsed
    // above as arguments.
    auto line = ParseLine(action);
    FX_LOGS(INFO) << "AddLine request sent: [Point { x: " << line[1].x << ", y: " << line[1].y
                  << " }, Point { x: " << line[0].x << ", y: " << line[0].y << " }]";

    client->AddLine(line).ThenExactlyOnce(
        [&](fidl::WireUnownedResult<examples_canvas_addlinemetered::Instance::AddLine>& result) {
          // Check if the FIDL call succeeded or not.
          if (!result.ok()) {
            // Check that our two-way call succeeded, and handle the error appropriately. In the
            // case of this example, there is nothing we can do to recover here, except to log an
            // error and exit the program.
            FX_LOGS(ERROR) << "Could not send AddLine request: " << result.status_string();
          }
          FX_LOGS(INFO) << "AddLine response received";

          // Quit the loop, thereby handing control back to the outer loop of actions being iterated
          // over.
          loop.Quit();
        });

    // Run the loop until the callback is resolved, at which point we can continue from here.
    loop.Run();
    loop.ResetQuit();
  }

  // TODO(https://fxbug.dev/42156498): We need to sleep here to make sure all logs get drained. Once the
  // referenced bug has been resolved, we can remove the sleep.
  sleep(2);
  return 0;
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <fidl/examples.canvas.addlinemetered/cpp/wire.h>
#include <lib/async-loop/cpp/loop.h>
#include <lib/async/cpp/task.h>
#include <lib/component/outgoing/cpp/outgoing_directory.h>
#include <lib/fidl/cpp/wire/channel.h>
#include <lib/syslog/cpp/macros.h>
#include <unistd.h>

#include <src/lib/fxl/macros.h>
#include <src/lib/fxl/memory/weak_ptr.h>

// A struct that stores the two things we care about for this example: the bounding box of the
// lines added thus far, and a bit to track whether there have been changes since the last send.
struct CanvasState {
  // Tracks whether there has been a change since the last send, to prevent redundant updates.
  bool changed = true;
  examples_canvas_addlinemetered::wire::BoundingBox bounding_box;
};

// An implementation of the |Instance| protocol.
class InstanceImpl final : public fidl::WireServer<examples_canvas_addlinemetered::Instance> {
 public:
  // Bind this implementation to a channel.
  InstanceImpl(async_dispatcher_t* dispatcher,
               fidl::ServerEnd<examples_canvas_addlinemetered::Instance> server_end)
      : binding_(fidl::BindServer(
            dispatcher, std::move(server_end), this,
            [this](InstanceImpl* impl, fidl::UnbindInfo info,
                   fidl::ServerEnd<examples_canvas_addlinemetered::Instance> server_end) {
              if (info.reason() != ::fidl::Reason::kPeerClosedWhileReading) {
                FX_LOGS(ERROR) << "Shutdown unexpectedly";
              }
              delete this;
            })),
        weak_factory_(this) {
    // Start the update timer on startup. Our server sends one update per second
    ScheduleOnDrawnEvent(dispatcher, zx::sec(1));
  }

  void AddLine(AddLineRequestView request, AddLineCompleter::Sync& completer) override {
    auto points = request->line;
    FX_LOGS(INFO) << "AddLine request received: [Point { x: " << points[1].x
                  << ", y: " << points[1].y << " }, Point { x: " << points[0].x
                  << ", y: " << points[0].y << " }]";

    // Update the bounding box to account for the new line we've just "added" to the canvas.
    auto& bounds = state_.bounding_box;
    for (const auto& point : request->line) {
      if (point.x < bounds.top_left.x) {
        bounds.top_left.x = point.x;
      }
      if (point.y > bounds.top_left.y) {
        bounds.top_left.y = point.y;
      }
      if (point.x > bounds.bottom_right.x) {
        bounds.bottom_right.x = point.x;
      }
      if (point.y < bounds.bottom_right.y) {
        bounds.bottom_right.y = point.y;
      }
    }

    // Mark the state as "dirty", so that an update is sent back to the client on the next |OnDrawn|
    // event.
    state_.changed = true;

    // Because this is now a two-way method, we must use the generated |completer| to send a reply
    // (in this case, an empty one) back to the client. This is the mechanism that syncs the flow
    // rate between the client and server on this method, thereby preventing the client from
    // "flooding" the server with unacknowledged work.
    completer.Reply();
    FX_LOGS(INFO) << "AddLine response sent";
  }

  void handle_unknown_method(
      fidl::UnknownMethodMetadata<examples_canvas_addlinemetered::Instance> metadata,
      fidl::UnknownMethodCompleter::Sync& completer) override {
    FX_LOGS(WARNING) << "Received an unknown method with ordinal " << metadata.method_ordinal;
  }

 private:
  // Each scheduled update waits for the allotted amount of time, sends an update if something has
  // changed, and schedules the next update.
  void ScheduleOnDrawnEvent(async_dispatcher_t* dispatcher, zx::duration after) {
    async::PostDelayedTask(
        dispatcher,
        [&, dispatcher, after, weak = weak_factory_.GetWeakPtr()] {
          // Halt execution if the binding has been deallocated already.
          if (!weak) {
            return;
          }

          // Schedule the next update if the binding still exists.
          weak->ScheduleOnDrawnEvent(dispatcher, after);

          // No need to send an update if nothing has changed since the last one.
          if (!weak->state_.changed) {
            return;
          }

          // This is where we would draw the actual lines. Since this is just an example, we'll
          // avoid doing the actual rendering, and simply send the bounding box to the client
          // instead.
          auto top_left = weak->state_.bounding_box.top_left;
          auto bottom_right = weak->state_.bounding_box.bottom_right;
          fidl::Status status =
              fidl::WireSendEvent(weak->binding_)->OnDrawn(top_left, bottom_right);
          if (!status.ok()) {
            return;
          }
          FX_LOGS(INFO) << "OnDrawn event sent: top_left: Point { x: " << top_left.x
                        << ", y: " << top_left.y
                        << " }, bottom_right: Point { x: " << bottom_right.x
                        << ", y: " << bottom_right.y << " }";

          // Reset the change tracker.
          weak->state_.changed = false;
        },
        after);
  }

  fidl::ServerBindingRef<examples_canvas_addlinemetered::Instance> binding_;
  CanvasState state_ = CanvasState{};

  // Generates weak references to this object, which are appropriate to pass into asynchronous
  // callbacks that need to access this object. The references are automatically invalidated
  // if this object is destroyed.
  fxl::WeakPtrFactory<InstanceImpl> weak_factory_;
};

int main(int argc, char** argv) {
  FX_LOGS(INFO) << "Started";

  // The event loop is used to asynchronously listen for incoming connections and requests from the
  // client. The following initializes the loop, and obtains the dispatcher, which will be used when
  // binding the server implementation to a channel.
  async::Loop loop(&kAsyncLoopConfigNeverAttachToThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();

  // Create an |OutgoingDirectory| instance.
  //
  // The |component::OutgoingDirectory| class serves the outgoing directory for our component. This
  // directory is where the outgoing FIDL protocols are installed so that they can be provided to
  // other components.
  component::OutgoingDirectory outgoing = component::OutgoingDirectory(dispatcher);

  // The `ServeFromStartupInfo()` function sets up the outgoing directory with the startup handle.
  // The startup handle is a handle provided to every component by the system, so that they can
  // serve capabilities (e.g. FIDL protocols) to other components.
  zx::result result = outgoing.ServeFromStartupInfo();
  if (result.is_error()) {
    FX_LOGS(ERROR) << "Failed to serve outgoing directory: " << result.status_string();
    return -1;
  }

  // Register a handler for components trying to connect to
  // |examples.canvas.addlinemetered.Instance|.
  result = outgoing.AddUnmanagedProtocol<examples_canvas_addlinemetered::Instance>(
      [dispatcher](fidl::ServerEnd<examples_canvas_addlinemetered::Instance> server_end) {
        // Create an instance of our InstanceImpl that destroys itself when the connection closes.
        new InstanceImpl(dispatcher, std::move(server_end));
      });
  if (result.is_error()) {
    FX_LOGS(ERROR) << "Failed to add Instance protocol: " << result.status_string();
    return -1;
  }

  // Everything is wired up. Sit back and run the loop until an incoming connection wakes us up.
  FX_LOGS(INFO) << "Listening for incoming connections";
  loop.Run();
  return 0;
}

HLCPP

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <lib/async-loop/cpp/loop.h>
#include <lib/sys/cpp/component_context.h>
#include <lib/syslog/cpp/macros.h>
#include <unistd.h>

#include <charconv>

#include <examples/canvas/addlinemetered/cpp/fidl.h>
#include <examples/fidl/new/canvas/add_line_metered/hlcpp/client/config.h>

#include "lib/fpromise/result.h"

// A helper function that takes a coordinate in string form, like "123,-456", and parses it into a
// struct of the form |{ int64 x; int64 y; }|.
::examples::canvas::addlinemetered::Point ParsePoint(std::string_view input) {
  int64_t x = 0;
  int64_t y = 0;
  size_t index = input.find(',');
  if (index != std::string::npos) {
    std::from_chars(input.data(), input.data() + index, x);
    std::from_chars(input.data() + index + 1, input.data() + input.length(), y);
  }
  return ::examples::canvas::addlinemetered::Point{.x = x, .y = y};
}

// A helper function that takes a coordinate pair in string form, like "1,2:-3,-4", and parses it
// into an array of 2 |Point| structs.
::std::array<::examples::canvas::addlinemetered::Point, 2> ParseLine(const std::string& action) {
  auto input = std::string_view(action);
  size_t index = input.find(':');
  if (index != std::string::npos) {
    return {ParsePoint(input.substr(0, index)), ParsePoint(input.substr(index + 1))};
  }
  return {};
}

int main(int argc, const char** argv) {
  FX_LOGS(INFO) << "Started";

  // Retrieve component configuration.
  auto conf = config::Config::TakeFromStartupHandle();

  // Start up an async loop.
  async::Loop loop(&kAsyncLoopConfigNeverAttachToThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();

  // Connect to the protocol inside the component's namespace, then create an asynchronous client
  // using the newly-established connection.
  examples::canvas::addlinemetered::InstancePtr instance_proxy;
  auto context = sys::ComponentContext::Create();
  context->svc()->Connect(instance_proxy.NewRequest(dispatcher));
  FX_LOGS(INFO) << "Outgoing connection enabled";

  instance_proxy.set_error_handler([&loop](zx_status_t status) {
    FX_LOGS(ERROR) << "Shutdown unexpectedly";
    loop.Quit();
  });

  // Provide a lambda to handle incoming |OnDrawn| events asynchronously.
  instance_proxy.events().OnDrawn = [&loop](
                                        ::examples::canvas::addlinemetered::Point top_left,
                                        ::examples::canvas::addlinemetered::Point bottom_right) {
    FX_LOGS(INFO) << "OnDrawn event received: top_left: Point { x: " << top_left.x
                  << ", y: " << top_left.y << " }, bottom_right: Point { x: " << bottom_right.x
                  << ", y: " << bottom_right.y << " }";
    loop.Quit();
  };

  instance_proxy.events().handle_unknown_event = [](uint64_t ordinal) {
    FX_LOGS(WARNING) << "Received an unknown event with ordinal " << ordinal;
  };

  for (const auto& action : conf.script()) {
    // If the next action in the script is to "WAIT", block until an |OnDrawn| event is received
    // from the server.
    if (action == "WAIT") {
      loop.Run();
      loop.ResetQuit();
      continue;
    }

    // Draw a line to the canvas by calling the server, using the two points we just parsed
    // above as arguments.
    auto line = ParseLine(action);
    FX_LOGS(INFO) << "AddLine request sent: [Point { x: " << line[1].x << ", y: " << line[1].y
                  << " }, Point { x: " << line[0].x << ", y: " << line[0].y << " }]";

    instance_proxy->AddLine(line, [&](fpromise::result<void, fidl::FrameworkErr> result) {
      if (result.is_error()) {
        // Check that our flexible two-way call was known to the server and handle the case of an
        // unknown method appropriately. In the case of this example, there is nothing we can do to
        // recover here, except to log an error and exit the program.
        FX_LOGS(ERROR) << "Server does not implement AddLine";
      }
      FX_LOGS(INFO) << "AddLine response received";

      // Quit the loop, thereby handing control back to the outer loop of actions being iterated
      // over.
      loop.Quit();
    });

    // Run the loop until the callback is resolved, at which point we can continue from here.
    loop.Run();
    loop.ResetQuit();
  }

  // TODO(https://fxbug.dev/42156498): We need to sleep here to make sure all logs get drained. Once the
  // referenced bug has been resolved, we can remove the sleep.
  sleep(2);
  return 0;
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <lib/async-loop/cpp/loop.h>
#include <lib/async-loop/default.h>
#include <lib/async/cpp/task.h>
#include <lib/fidl/cpp/binding.h>
#include <lib/sys/cpp/component_context.h>
#include <lib/syslog/cpp/macros.h>
#include <unistd.h>

#include <examples/canvas/addlinemetered/cpp/fidl.h>
#include <src/lib/fxl/macros.h>
#include <src/lib/fxl/memory/weak_ptr.h>

// A struct that stores the two things we care about for this example: the set of lines, and the
// bounding box that contains them.
struct CanvasState {
  // Tracks whether there has been a change since the last send, to prevent redundant updates.
  bool changed = true;
  examples::canvas::addlinemetered::BoundingBox bounding_box;
};

// An implementation of the |Instance| protocol.
class InstanceImpl final : public examples::canvas::addlinemetered::Instance {
 public:
  // Bind this implementation to an |InterfaceRequest|.
  InstanceImpl(async_dispatcher_t* dispatcher,
               fidl::InterfaceRequest<examples::canvas::addlinemetered::Instance> request)
      : binding_(fidl::Binding<examples::canvas::addlinemetered::Instance>(this)),
        weak_factory_(this) {
    binding_.Bind(std::move(request), dispatcher);

    // Gracefully handle abrupt shutdowns.
    binding_.set_error_handler([this](zx_status_t status) mutable {
      if (status != ZX_ERR_PEER_CLOSED) {
        FX_LOGS(ERROR) << "Shutdown unexpectedly";
      }
      delete this;
    });

    // Start the update timer on startup. Our server sends one update per second.
    ScheduleOnDrawnEvent(dispatcher, zx::sec(1));
  }

  void AddLine(::std::array<::examples::canvas::addlinemetered::Point, 2> line,
               AddLineCallback callback) override {
    FX_LOGS(INFO) << "AddLine request received: [Point { x: " << line[1].x << ", y: " << line[1].y
                  << " }, Point { x: " << line[0].x << ", y: " << line[0].y << " }]";

    // Update the bounding box to account for the new line we've just "added" to the canvas.
    auto& bounds = state_.bounding_box;
    for (const auto& point : line) {
      if (point.x < bounds.top_left.x) {
        bounds.top_left.x = point.x;
      }
      if (point.y > bounds.top_left.y) {
        bounds.top_left.y = point.y;
      }
      if (point.x > bounds.bottom_right.x) {
        bounds.bottom_right.x = point.x;
      }
      if (point.y < bounds.bottom_right.y) {
        bounds.bottom_right.y = point.y;
      }
    }

    // Mark the state as "dirty", so that an update is sent back to the client on the next |OnDrawn|
    // event.
    state_.changed = true;

    // Because this is now a two-way method, we must use the generated |callback| to send a reply
    // (in this case, an empty one) back to the client. This is the mechanism that syncs the flow
    // rate between the client and server on this method, thereby preventing the client from
    // "flooding" the server with unacknowledged work.
    callback(fpromise::ok());
    FX_LOGS(INFO) << "AddLine response sent";
  }

  void handle_unknown_method(uint64_t ordinal, bool method_has_response) override {
    FX_LOGS(WARNING) << "Received an unknown method with ordinal " << ordinal;
  }

 private:
  // Each scheduled update waits for the allotted amount of time, sends an update if something has
  // changed, and schedules the next update.
  void ScheduleOnDrawnEvent(async_dispatcher_t* dispatcher, zx::duration after) {
    async::PostDelayedTask(
        dispatcher,
        [&, dispatcher, after, weak = weak_factory_.GetWeakPtr()] {
          // Halt execution if the binding has been deallocated already.
          if (!weak) {
            return;
          }

          // Schedule the next update if the binding still exists.
          weak->ScheduleOnDrawnEvent(dispatcher, after);

          // No need to send an update if nothing has changed since the last one.
          if (!weak->state_.changed) {
            return;
          }

          // This is where we would draw the actual lines. Since this is just an example, we'll
          // avoid doing the actual rendering, and simply send the bounding box to the client
          // instead.
          auto top_left = state_.bounding_box.top_left;
          auto bottom_right = state_.bounding_box.bottom_right;
          binding_.events().OnDrawn(top_left, bottom_right);
          FX_LOGS(INFO) << "OnDrawn event sent: top_left: Point { x: " << top_left.x
                        << ", y: " << top_left.y
                        << " }, bottom_right: Point { x: " << bottom_right.x
                        << ", y: " << bottom_right.y << " }";

          // Reset the change tracker.
          weak->state_.changed = false;
        },
        after);
  }

  fidl::Binding<examples::canvas::addlinemetered::Instance> binding_;
  CanvasState state_ = CanvasState{};

  // Generates weak references to this object, which are appropriate to pass into asynchronous
  // callbacks that need to access this object. The references are automatically invalidated
  // if this object is destroyed.
  fxl::WeakPtrFactory<InstanceImpl> weak_factory_;
};

int main(int argc, char** argv) {
  FX_LOGS(INFO) << "Started";

  // The event loop is used to asynchronously listen for incoming connections and requests from the
  // client. The following initializes the loop, and obtains the dispatcher, which will be used when
  // binding the server implementation to a channel.
  //
  // Note that unlike the new C++ bindings, HLCPP bindings rely on the async loop being attached to
  // the current thread via the |kAsyncLoopConfigAttachToCurrentThread| configuration.
  async::Loop loop(&kAsyncLoopConfigAttachToCurrentThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();

  // Create an |OutgoingDirectory| instance.
  //
  // The |component::OutgoingDirectory| class serves the outgoing directory for our component.
  // This directory is where the outgoing FIDL protocols are installed so that they can be
  // provided to other components.
  auto context = sys::ComponentContext::CreateAndServeOutgoingDirectory();

  // Register a handler for components trying to connect to
  // |examples.canvas.addlinemetered.Instance|.
  context->outgoing()->AddPublicService(
      fidl::InterfaceRequestHandler<examples::canvas::addlinemetered::Instance>(
          [dispatcher](fidl::InterfaceRequest<examples::canvas::addlinemetered::Instance> request) {
            // Create an instance of our |InstanceImpl| that destroys itself when the connection
            // closes.
            new InstanceImpl(dispatcher, std::move(request));
          }));

  // Everything is wired up. Sit back and run the loop until an incoming connection wakes us up.
  FX_LOGS(INFO) << "Listening for incoming connections";
  loop.Run();
  return 0;
}

Push bounded data using events

In FIDL, servers can send clients unsolicited messages called events. Protocols that use events need to pay particular attention to flow control because the event mechanism itself does not provide any flow control.

A good use case for events is when at most one instance of the event will be sent for the lifetime of the channel. In this pattern, the protocol does not need any flow control for the event:

protocol DeathWish {
    -> OnFatalError(struct {
        error_code zx.Status;
    });
};

Another good use case for events is when the client requests that the server produce events and when the overall number of events produced by the server is bounded. This pattern is a more sophisticated version of the hanging get pattern in which the server can respond to the "get" request a bounded number of times (rather than just once):

protocol NetworkScanner {
    ScanForNetworks();
    -> OnNetworkDiscovered(struct {
        network string;
    });
    -> OnScanFinished();
};

Throttle events using acknowledgements

If there is no known bound on the number of events ahead of time, consider having the client acknowledge the events by sending a message. This pattern is a more awkward version of the acknowledgement pattern in which the roles of client and server are switched. As in the other pattern, the server should throttle event production to match the rate at which the client consumes the events:

protocol View1 {
    -> OnInputEvent(struct {
        event InputEvent;
    });
    NotifyInputEventHandled();
};

One advantage to this pattern over the normal acknowledgement pattern is that the client can more easily acknowledge multiple events with a single message because the acknowledgement is disassociated from the event being acknowledged. This pattern allows for more efficient batch processing by reducing the volume of acknowledgement messages and works well for in-order processing of multiple event types:

protocol View2 {
    -> OnInputEvent(struct {
        event InputEvent;
        seq uint64;
    });
    -> OnFocusChangedEvent(struct {
        event FocusChangedEvent;
        seq uint64;
    });
    NotifyEventsHandled(struct {
        last_seq uint64;
    });
};

Unlike throttle push using acknowledgements, this pattern does not express the relationship between the request and the response in FIDL syntax and therefore it is prone to misuse. Flow control will only work when clients correctly implement sending of the notification message.
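
To make that expectation concrete, the following is a minimal sketch of the client-side bookkeeping this pattern relies on, written without any generated FIDL bindings. The |notify| callback and the |ReceivedEvent| record are assumptions of this sketch; |notify| stands in for the generated NotifyEventsHandled(last_seq) proxy call in the View2 protocol above.

#include <cstddef>
#include <cstdint>
#include <functional>
#include <utility>

// Hypothetical per-event record: only the sequence number (|seq| above) matters here.
struct ReceivedEvent {
  uint64_t seq;
};

// Tracks how many events have been handled since the last acknowledgement, and sends a single
// NotifyEventsHandled(last_seq) message per batch via the |notify| callback.
class EventAcker {
 public:
  EventAcker(std::function<void(uint64_t)> notify, size_t batch_size)
      : notify_(std::move(notify)), batch_size_(batch_size) {}

  // Call once per handled event, of any type, in the order received.
  void OnEventHandled(const ReceivedEvent& event) {
    last_seq_ = event.seq;
    if (++pending_ >= batch_size_) {
      Flush();
    }
  }

  // Acknowledge everything handled so far with one message.
  void Flush() {
    if (pending_ == 0) {
      return;
    }
    notify_(last_seq_);
    pending_ = 0;
  }

 private:
  std::function<void(uint64_t)> notify_;
  size_t batch_size_;
  size_t pending_ = 0;
  uint64_t last_seq_ = 0;
};

If the client never sends the acknowledgement, a correctly throttling server will stall, and if it acknowledges events it has not actually processed, flow control is defeated entirely; both are forms of the misuse described above.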

FIDL recipe: Throttled event pattern

Events are FIDL calls initiated by the server. Because these calls have no built-in client-side response, they are not flow controlled: a server could queue up a very large number of such calls and flood the client. One solution to this problem is the throttled event pattern, which adds a client-called FIDL method that serves as an acknowledgement point for one or more events.

Servers should refrain from sending more of the throttled events (the exact semantics here are specific to the implementing protocol) until they have received the next acknowledgement call from the client. Similarly, clients should close the connection if a server sends more throttled events than allowed before the client has acknowledged them. These restrictions are not built into the FIDL runtime, and require some manual implementation on the part of client and server implementers to ensure correct behavior, as sketched below.
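
Before turning to the canvas example below, here is a minimal sketch of the server-side half of that manual bookkeeping, independent of any generated bindings. It generalizes the single boolean used by the implementations below into a counter: |allowance_per_ack| is an assumption of this sketch and represents how many throttled events each acknowledgement permits, while |CanSendEvent()| is the gate the server consults before emitting the next event.

#include <cstddef>

// Server-side gate for the throttled event pattern: events may only be sent while the client's
// acknowledgements leave room for them.
class ThrottledEventGate {
 public:
  explicit ThrottledEventGate(size_t allowance_per_ack = 1)
      : allowance_per_ack_(allowance_per_ack) {}

  // Called when the client's acknowledgement method (e.g. |Ready() -> ()|) arrives.
  void OnAcknowledged() { ++acknowledged_; }

  // True while the server is within the window permitted by the acknowledgements received so far.
  bool CanSendEvent() const { return sent_ < acknowledged_ * allowance_per_ack_; }

  // Record that a throttled event was actually sent.
  void OnEventSent() { ++sent_; }

 private:
  size_t allowance_per_ack_;
  size_t acknowledged_ = 0;
  size_t sent_ = 0;
};

The canvas servers below collapse this into a single ready boolean and grant one implicit allowance at startup, so the very first -> OnDrawn(...) event can be sent before any Ready() call arrives.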

One way to improve the performance of the Instance protocol is to allow the batching of lines: rather than sending a single AddLine(...); every time we have a new line we'd like to add to the canvas, waiting for the reply, then doing it again for the next line, we can instead batch many lines into a single invocation of the new AddLines(...); call. The client can now decide how to best segment a large collection of lines to be drawn.

Naively implemented, we would find ourselves in a situation where the server and the client are completely unsynchronized: the client can flood the server with unbounded AddLines(...); calls, and the server can similarly flood the client with more -> OnDrawn(...); events than it can handle. The solution to both of these problems is to add a simple Ready() -> (); method for synchronization purposes. This method is called by the client whenever it is prepared to receive the next draw update, with the response from the server indicating that the client can proceed with more requests.

We now have some flow control in both directions. The protocol now implements the feed forward pattern, allowing many uncontrolled calls before some synchronizing "commit" call triggers the actual work on the server. This prevents the client from overwhelming the server with work. Similarly, the server is no longer allowed to send unbounded -> OnDrawn(...); events: each event must follow a signal from the client, the Ready() -> (); call, indicating that it is ready to receive more updates. This is known as the throttled event pattern.

The concrete implementations must apply some of these rules manually: the client must close the connection if it receives an -> OnDrawn(...); event it did not request via the Ready() -> (); method.
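
The example clients below keep things simple and do not implement that check. The following is a minimal sketch of what it could look like, independent of any generated bindings; the class and its |CloseChannel()| hook are hypothetical, standing in for whatever mechanism the binding in use provides to tear down the connection.

#include <cstdio>

// Client-side validation for the throttled event pattern: an |-> OnDrawn(...)| event that was not
// preceded by a |Ready() -> ()| call (beyond the single implicit allowance granted at startup) is a
// protocol violation, and the connection should be closed.
class DrawnEventValidator {
 public:
  // Call just before sending a |Ready() -> ()| request.
  void OnReadySent() { allowance_ = true; }

  // Call when an |-> OnDrawn(...)| event is received. Returns false, after closing the channel, if
  // the event was unprompted.
  bool OnDrawnReceived() {
    if (!allowance_) {
      fprintf(stderr, "Unprompted OnDrawn event received; closing channel\n");
      CloseChannel();
      return false;
    }
    allowance_ = false;
    return true;
  }

 private:
  // Hypothetical hook: tear down the FIDL connection here (e.g. unbind or drop the client object).
  void CloseChannel() {}

  // One implicit allowance at startup, mirroring the servers below, which initialize their ready
  // flag to true so that the first |-> OnDrawn(...)| may arrive before any |Ready()| call.
  bool allowance_ = true;
};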

The FIDL, CML, and realm interface definitions are as follows:

FIDL

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
library examples.canvas.clientrequesteddraw;

/// A point in 2D space.
type Point = struct {
    x int64;
    y int64;
};

/// A line in 2D space.
alias Line = array<Point, 2>;

/// A bounding box in 2D space. This is the result of "drawing" operations on our canvas, and what
/// the server reports back to the client. These bounds are sufficient to contain all of the
/// lines (inclusive) on a canvas at a given time.
type BoundingBox = struct {
    top_left Point;
    bottom_right Point;
};

/// Manages a single instance of a canvas. Each session of this protocol is responsible for a new
/// canvas.
@discoverable
open protocol Instance {
    /// Add multiple lines to the canvas. We are able to reduce protocol chatter and the number of
    /// requests needed by batching instead of calling the simpler `AddLine(...)` one line at a
    /// time.
    flexible AddLines(struct {
        lines vector<Line>;
    });

    /// Rather than the server randomly performing draws, or trying to guess when to do so, the
    /// client must explicitly ask for them. This creates a bit of extra chatter with the additional
    /// method invocation, but allows much greater client-side control of when the canvas is "ready"
    /// for a view update, thereby eliminating unnecessary draws.
    ///
    /// This method also has the benefit of "throttling" the `-> OnDrawn(...)` event - rather than
    /// allowing a potentially unlimited flood of `-> OnDrawn(...)` calls, we now have the runtime
    /// enforced semantic that each `-> OnDrawn(...)` call must follow a unique `Ready() -> ()` call
    /// from the client. An unprompted `-> OnDrawn(...)` is invalid, and should cause the channel to
    /// immediately close.
    flexible Ready() -> ();

    /// Update the client with the latest drawing state. The server makes no guarantees about how
    /// often this event occurs - it could occur multiple times per canvas state, for example.
    flexible -> OnDrawn(BoundingBox);
};

CML

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
{
    include: [ "syslog/client.shard.cml" ],
    program: {
        runner: "elf",
        binary: "bin/client_bin",
    },
    use: [
        { protocol: "examples.canvas.clientrequesteddraw.Instance" },
    ],
    config: {
        // A script for the client to follow. Entries in the script may take one of three forms: a
        // pair of signed-integer coordinates like "-2,15:4,5", the string "PUSH", or the string
        // "WAIT". Coordinate entries batch a line locally, "PUSH" sends the batch via a single
        // `AddLines(...)` call, and "WAIT" pauses execution until the next `-> OnDrawn(...)` event
        // is received, after which a `Ready() -> ()` call is sent.
        //
        // TODO(https://fxbug.dev/42178362): It would absolve individual language implementations of a great
        //   deal of string parsing if we were able to use a vector of `union { Point; Ready}` here.
        script: {
            type: "vector",
            max_count: 100,
            element: {
                type: "string",
                max_size: 64,
            },
        },

    },
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
{
    include: [ "syslog/client.shard.cml" ],
    program: {
        runner: "elf",
        binary: "bin/server_bin",
    },
    capabilities: [
        { protocol: "examples.canvas.clientrequesteddraw.Instance" },
    ],
    expose: [
        {
            protocol: "examples.canvas.clientrequesteddraw.Instance",
            from: "self",
        },
    ],
}

Realm

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
{
    children: [
        {
            name: "client",
            url: "#meta/client.cm",
        },
        {
            name: "server",
            url: "#meta/server.cm",
        },
    ],
    offer: [
        // Route the protocol under test from the server to the client.
        {
            protocol: "examples.canvas.clientrequesteddraw.Instance",
            from: "#server",
            to: "#client",
        },

        // Route diagnostics support to all children.
        {
            protocol: [
                "fuchsia.inspect.InspectSink",
                "fuchsia.logger.LogSink",
            ],
            from: "parent",
            to: [
                "#client",
                "#server",
            ],
        },
    ],
}

Client and server implementations can then be written in any supported language:

Rust

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

use {
    anyhow::{format_err, Context as _, Error},
    config::Config,
    fidl_examples_canvas_clientrequesteddraw::{InstanceEvent, InstanceMarker, Point},
    fuchsia_component::client::connect_to_protocol,
    futures::TryStreamExt,
    std::{thread, time},
};

#[fuchsia::main]
async fn main() -> Result<(), Error> {
    println!("Started");

    // Load the structured config values passed to this component at startup.
    let config = Config::take_from_startup_handle();

    // Use the Component Framework runtime to connect to the newly spun up server component. We wrap
    // our retained client end in a proxy object that lets us asynchronously send Instance requests
    // across the channel.
    let instance = connect_to_protocol::<InstanceMarker>()?;
    println!("Outgoing connection enabled");

    let mut batched_lines = Vec::<[Point; 2]>::new();
    for action in config.script.into_iter() {
        // If the next action in the script is to "PUSH", send a batch of lines to the server.
        if action == "PUSH" {
            instance.add_lines(&batched_lines).context("Could not send lines")?;
            println!("AddLines request sent");
            batched_lines.clear();
            continue;
        }

        // If the next action in the script is to "WAIT", block until an OnDrawn event is received
        // from the server.
        if action == "WAIT" {
            let mut event_stream = instance.take_event_stream();
            loop {
                match event_stream
                    .try_next()
                    .await
                    .context("Error getting event response from proxy")?
                    .ok_or_else(|| format_err!("Proxy sent no events"))?
                {
                    InstanceEvent::OnDrawn { top_left, bottom_right } => {
                        println!(
                            "OnDrawn event received: top_left: {:?}, bottom_right: {:?}",
                            top_left, bottom_right
                        );
                        break;
                    }
                    InstanceEvent::_UnknownEvent { ordinal, .. } => {
                        println!("Received an unknown event with ordinal {ordinal}");
                    }
                }
            }

            // Now, inform the server that we are ready to receive more updates whenever they are
            // ready for us.
            println!("Ready request sent");
            instance.ready().await.context("Could not send ready call")?;
            println!("Ready success");
            continue;
        }

        // Add a line to the next batch. Parse the string input, making two points out of it.
        let mut points = action
            .split(":")
            .map(|point| {
                let integers = point
                    .split(",")
                    .map(|integer| integer.parse::<i64>().unwrap())
                    .collect::<Vec<i64>>();
                Point { x: integers[0], y: integers[1] }
            })
            .collect::<Vec<Point>>();

        // Assemble a line from the two points.
        let from = points.pop().ok_or(format_err!("line requires 2 points, but has 0"))?;
        let to = points.pop().ok_or(format_err!("line requires 2 points, but has 1"))?;
        let mut line: [Point; 2] = [from, to];

        // Batch a line for drawing to the canvas using the two points provided.
        println!("AddLines batching line: {:?}", &mut line);
        batched_lines.push(line);
    }

    // TODO(https://fxbug.dev/42156498): We need to sleep here to make sure all logs get drained. Once the
    // referenced bug has been resolved, we can remove the sleep.
    thread::sleep(time::Duration::from_secs(2));
    Ok(())
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

use {
    anyhow::{anyhow, Context as _, Error},
    fidl::endpoints::RequestStream as _,
    fidl_examples_canvas_clientrequesteddraw::{
        BoundingBox, InstanceRequest, InstanceRequestStream, Point,
    },
    fuchsia_async::{Time, Timer},
    fuchsia_component::server::ServiceFs,
    fuchsia_zircon::{self as zx},
    futures::future::join,
    futures::prelude::*,
    std::sync::{Arc, Mutex},
};

// A struct that stores the things we care about for this example: the bounding box of the lines
// that have been added thus far, plus bits tracking whether there have been changes since the last
// `OnDrawn` event and whether the client is ready to receive one.
#[derive(Debug)]
struct CanvasState {
    // Tracks whether there has been a change since the last send, to prevent redundant updates.
    changed: bool,
    // Tracks whether or not the client has declared itself ready to receive more updates.
    ready: bool,
    bounding_box: BoundingBox,
}

/// Handler for the `AddLines` method.
fn add_lines(state: &mut CanvasState, lines: Vec<[Point; 2]>) {
    // Update the bounding box to account for the new lines we've just "added" to the canvas.
    let bounds = &mut state.bounding_box;
    for line in lines {
        println!("AddLines printing line: {:?}", line);
        for point in line {
            if point.x < bounds.top_left.x {
                bounds.top_left.x = point.x;
            }
            if point.y > bounds.top_left.y {
                bounds.top_left.y = point.y;
            }
            if point.x > bounds.bottom_right.x {
                bounds.bottom_right.x = point.x;
            }
            if point.y < bounds.bottom_right.y {
                bounds.bottom_right.y = point.y;
            }
        }
    }

    // Mark the state as "dirty", so that an update is sent back to the client on the next tick.
    state.changed = true
}

/// Creates a new instance of the server, paired to a single client across a zircon channel.
async fn run_server(stream: InstanceRequestStream) -> Result<(), Error> {
    // Create a new in-memory state store for the state of the canvas. The store will live for the
    // lifetime of the connection between the server and this particular client.
    let state = Arc::new(Mutex::new(CanvasState {
        changed: true,
        ready: true,
        bounding_box: BoundingBox {
            top_left: Point { x: 0, y: 0 },
            bottom_right: Point { x: 0, y: 0 },
        },
    }));

    // Take ownership of the control_handle from the stream, which will allow us to push events from
    // a different async task.
    let control_handle = stream.control_handle();

    // A separate watcher task periodically "draws" the canvas, and notifies the client of the new
    // state. We'll need a cloned reference to the canvas state to be accessible from the new
    // task.
    let state_ref = state.clone();
    let update_sender = || async move {
        loop {
            // Our server sends one update per second, but only if the client has declared that it
            // is ready to receive one.
            Timer::new(Time::after(zx::Duration::from_seconds(1))).await;
            let mut state = state_ref.lock().unwrap();
            if !state.changed || !state.ready {
                continue;
            }

            // After acquiring the lock, this is where we would draw the actual lines. Since this is
            // just an example, we'll avoid doing the actual rendering, and simply send the bounding
            // box to the client instead.
            let bounds = state.bounding_box;
            match control_handle.send_on_drawn(&bounds.top_left, &bounds.bottom_right) {
                Ok(_) => println!(
                    "OnDrawn event sent: top_left: {:?}, bottom_right: {:?}",
                    bounds.top_left, bounds.bottom_right
                ),
                Err(_) => return,
            }

            // Reset the change and ready trackers.
            state.ready = false;
            state.changed = false;
        }
    };

    // Handle requests on the protocol sequentially - a new request is not handled until its
    // predecessor has been processed.
    let state_ref = &state;
    let request_handler =
        stream.map(|result| result.context("failed request")).try_for_each(|request| async move {
            // Match based on the method being invoked.
            match request {
                InstanceRequest::AddLines { lines, .. } => {
                    println!("AddLines request received");
                    add_lines(&mut state_ref.lock().unwrap(), lines);
                }
                InstanceRequest::Ready { responder, .. } => {
                    println!("Ready request received");
                    // The client must only call `Ready() -> ();` after receiving an `-> OnDrawn();`
                    // event; if two "consecutive" `Ready() -> ();` calls are received, this
                    // interaction has entered an invalid state, and should be aborted immediately.
                    let mut state = state_ref.lock().unwrap();
                    if state.ready == true {
                        return Err(anyhow!("Invalid back-to-back `Ready` requests received"));
                    }

                    state.ready = true;
                    responder.send().context("Error responding")?;
                } //
                InstanceRequest::_UnknownMethod { ordinal, .. } => {
                    println!("Received an unknown method with ordinal {ordinal}");
                }
            }
            Ok(())
        });

    // This line will only be reached if the server errors out. The stream will await indefinitely,
    // thereby creating a long-lived server. Here, we first wait for the updater task to realize the
    // connection has died, then bubble up the error.
    join(request_handler, update_sender()).await.0
}

// A helper enum that allows us to treat an `Instance` service instance as a value.
enum IncomingService {
    Instance(InstanceRequestStream),
}

#[fuchsia::main]
async fn main() -> Result<(), Error> {
    println!("Started");

    // Add a discoverable instance of our `Instance` protocol - this will allow the client to see
    // the server and connect to it.
    let mut fs = ServiceFs::new_local();
    fs.dir("svc").add_fidl_service(IncomingService::Instance);
    fs.take_and_serve_directory_handle()?;
    println!("Listening for incoming connections");

    // The maximum number of concurrent clients that may be served by this process.
    const MAX_CONCURRENT: usize = 10;

    // Serve each connection simultaneously, up to the `MAX_CONCURRENT` limit.
    fs.for_each_concurrent(MAX_CONCURRENT, |IncomingService::Instance(stream)| {
        run_server(stream).unwrap_or_else(|e| println!("{:?}", e))
    })
    .await;

    Ok(())
}

C++ (Natural)

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <fidl/examples.canvas.clientrequesteddraw/cpp/fidl.h>
#include <lib/async-loop/cpp/loop.h>
#include <lib/component/incoming/cpp/protocol.h>
#include <lib/syslog/cpp/macros.h>
#include <unistd.h>

#include <charconv>

#include <examples/fidl/new/canvas/client_requested_draw/cpp_natural/client/config.h>

// The |EventHandler| is a derived class that we pass into the |fidl::Client| to handle incoming
// events asynchronously.
class EventHandler : public fidl::AsyncEventHandler<examples_canvas_clientrequesteddraw::Instance> {
 public:
  // Handler for |OnDrawn| events sent from the server.
  void OnDrawn(
      fidl::Event<examples_canvas_clientrequesteddraw::Instance::OnDrawn>& event) override {
    ::examples_canvas_clientrequesteddraw::Point top_left = event.top_left();
    ::examples_canvas_clientrequesteddraw::Point bottom_right = event.bottom_right();
    FX_LOGS(INFO) << "OnDrawn event received: top_left: Point { x: " << top_left.x()
                  << ", y: " << top_left.y() << " }, bottom_right: Point { x: " << bottom_right.x()
                  << ", y: " << bottom_right.y() << " }";
    loop_.Quit();
  }

  void on_fidl_error(fidl::UnbindInfo error) override { FX_LOGS(ERROR) << error; }

  void handle_unknown_event(
      fidl::UnknownEventMetadata<examples_canvas_clientrequesteddraw::Instance> metadata) override {
    FX_LOGS(WARNING) << "Received an unknown event with ordinal " << metadata.event_ordinal;
  }

  explicit EventHandler(async::Loop& loop) : loop_(loop) {}

 private:
  async::Loop& loop_;
};

// A helper function that takes a coordinate in string form, like "123,-456", and parses it into a
// struct of the form |{ int64 x; int64 y; }|.
::examples_canvas_clientrequesteddraw::Point ParsePoint(std::string_view input) {
  int64_t x = 0;
  int64_t y = 0;
  size_t index = input.find(',');
  if (index != std::string::npos) {
    std::from_chars(input.data(), input.data() + index, x);
    std::from_chars(input.data() + index + 1, input.data() + input.length(), y);
  }
  return ::examples_canvas_clientrequesteddraw::Point(x, y);
}

using Line = ::std::array<::examples_canvas_clientrequesteddraw::Point, 2>;

// A helper function that takes a coordinate pair in string form, like "1,2:-3,-4", and parses it
// into an array of 2 |Point| structs.
Line ParseLine(const std::string& action) {
  auto input = std::string_view(action);
  size_t index = input.find(':');
  if (index != std::string::npos) {
    return {ParsePoint(input.substr(0, index)), ParsePoint(input.substr(index + 1))};
  }
  return {};
}

int main(int argc, const char** argv) {
  FX_LOGS(INFO) << "Started";

  // Retrieve component configuration.
  auto conf = config::Config::TakeFromStartupHandle();

  // Start up an async loop and dispatcher.
  async::Loop loop(&kAsyncLoopConfigNeverAttachToThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();

  // Connect to the protocol inside the component's namespace. This can fail so it's wrapped in a
  // |zx::result| and it must be checked for errors.
  zx::result client_end = component::Connect<examples_canvas_clientrequesteddraw::Instance>();
  if (!client_end.is_ok()) {
    FX_LOGS(ERROR) << "Synchronous error when connecting to the |Instance| protocol: "
                   << client_end.status_string();
    return -1;
  }

  // Create an instance of the event handler.
  EventHandler event_handler(loop);

  // Create an asynchronous client using the newly-established connection.
  fidl::Client client(std::move(*client_end), dispatcher, &event_handler);
  FX_LOGS(INFO) << "Outgoing connection enabled";

  std::vector<Line> batched_lines;
  for (const auto& action : conf.script()) {
    // If the next action in the script is to "PUSH", send a batch of lines to the server.
    if (action == "PUSH") {
      fit::result<fidl::Error> result = client->AddLines(batched_lines);
      if (!result.is_ok()) {
        // Check that our one-way call was enqueued successfully, and handle the error
        // appropriately. In the case of this example, there is nothing we can do to recover here,
        // except to log an error and exit the program.
        FX_LOGS(ERROR) << "Could not send AddLines request: " << result.error_value();
        return -1;
      }

      batched_lines.clear();
      FX_LOGS(INFO) << "AddLines request sent";
      continue;
    }

    // If the next action in the script is to "WAIT", block until an |OnDrawn| event is received
    // from the server.
    if (action == "WAIT") {
      loop.Run();
      loop.ResetQuit();

      // Now, inform the server that we are ready to receive more updates whenever they are
      // ready for us.
      FX_LOGS(INFO) << "Ready request sent";
      client->Ready().ThenExactlyOnce(
          [&](fidl::Result<examples_canvas_clientrequesteddraw::Instance::Ready> result) {
            // Check if the FIDL call succeeded or not.
            if (result.is_ok()) {
              FX_LOGS(INFO) << "Ready success";
            } else {
              FX_LOGS(ERROR) << "Could not send Ready request: " << result.error_value();
            }

            // Quit the loop, thereby handing control back to the outer loop of actions being
            // iterated over.
            loop.Quit();
          });

      // Run the loop until the callback is resolved, at which point we can continue from here.
      loop.Run();
      loop.ResetQuit();

      continue;
    }

    // Batch a line for drawing to the canvas using the two points provided.
    Line line = ParseLine(action);
    batched_lines.push_back(line);
    FX_LOGS(INFO) << "AddLines batching line: [Point { x: " << line[1].x() << ", y: " << line[1].y()
                  << " }, Point { x: " << line[0].x() << ", y: " << line[0].y() << " }]";
  }

  // TODO(https://fxbug.dev/42156498): We need to sleep here to make sure all logs get drained. Once the
  // referenced bug has been resolved, we can remove the sleep.
  sleep(2);
  return 0;
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <fidl/examples.canvas.clientrequesteddraw/cpp/fidl.h>
#include <lib/async-loop/cpp/loop.h>
#include <lib/async/cpp/task.h>
#include <lib/component/outgoing/cpp/outgoing_directory.h>
#include <lib/fidl/cpp/wire/channel.h>
#include <lib/syslog/cpp/macros.h>
#include <unistd.h>

#include <src/lib/fxl/macros.h>
#include <src/lib/fxl/memory/weak_ptr.h>

// A struct that stores the things we care about for this example: the bounding box containing the
// lines added so far, plus flags tracking changes and client readiness.
struct CanvasState {
  // Tracks whether there has been a change since the last send, to prevent redundant updates.
  bool changed = true;
  // Tracks whether or not the client has declared itself ready to receive more updates.
  bool ready = true;
  examples_canvas_clientrequesteddraw::BoundingBox bounding_box;
};

// An implementation of the |Instance| protocol.
class InstanceImpl final : public fidl::Server<examples_canvas_clientrequesteddraw::Instance> {
 public:
  // Bind this implementation to a channel.
  InstanceImpl(async_dispatcher_t* dispatcher,
               fidl::ServerEnd<examples_canvas_clientrequesteddraw::Instance> server_end)
      : binding_(dispatcher, std::move(server_end), this, std::mem_fn(&InstanceImpl::OnFidlClosed)),
        weak_factory_(this) {
    // Start the update timer on startup. Our server sends one update per second
    ScheduleOnDrawnEvent(dispatcher, zx::sec(1));
  }

  void OnFidlClosed(fidl::UnbindInfo info) {
    if (info.reason() != ::fidl::Reason::kPeerClosedWhileReading) {
      FX_LOGS(ERROR) << "Shutdown unexpectedly";
    }
    delete this;
  }

  void AddLines(AddLinesRequest& request, AddLinesCompleter::Sync& completer) override {
    FX_LOGS(INFO) << "AddLines request received";
    for (const auto& points : request.lines()) {
      FX_LOGS(INFO) << "AddLines printing line: [Point { x: " << points[1].x()
                    << ", y: " << points[1].y() << " }, Point { x: " << points[0].x()
                    << ", y: " << points[0].y() << " }]";

      // Update the bounding box to account for the new line we've just "added" to the canvas.
      auto& bounds = state_.bounding_box;
      for (const auto& point : points) {
        if (point.x() < bounds.top_left().x()) {
          bounds.top_left().x() = point.x();
        }
        if (point.y() > bounds.top_left().y()) {
          bounds.top_left().y() = point.y();
        }
        if (point.x() > bounds.bottom_right().x()) {
          bounds.bottom_right().x() = point.x();
        }
        if (point.y() < bounds.bottom_right().y()) {
          bounds.bottom_right().y() = point.y();
        }
      }
    }

    // Mark the state as "dirty", so that an update is sent back to the client on the next |OnDrawn|
    // event.
    state_.changed = true;
  }

  void Ready(ReadyCompleter::Sync& completer) override {
    FX_LOGS(INFO) << "Ready request received";

    // The client must only call `Ready() -> ();` after receiving an `-> OnDrawn();` event; if two
    // "consecutive" `Ready() -> ();` calls are received, this interaction has entered an invalid
    // state, and should be aborted immediately.
    if (state_.ready == true) {
      FX_LOGS(ERROR) << "Invalid back-to-back `Ready` requests received";
    }

    state_.ready = true;
    completer.Reply();
  }

  void handle_unknown_method(
      fidl::UnknownMethodMetadata<examples_canvas_clientrequesteddraw::Instance> metadata,
      fidl::UnknownMethodCompleter::Sync& completer) override {
    FX_LOGS(WARNING) << "Received an unknown method with ordinal " << metadata.method_ordinal;
  }

 private:
  // Each scheduled update waits for the allotted amount of time, sends an update if something has
  // changed, and schedules the next update.
  void ScheduleOnDrawnEvent(async_dispatcher_t* dispatcher, zx::duration after) {
    async::PostDelayedTask(
        dispatcher,
        [&, dispatcher, after, weak = weak_factory_.GetWeakPtr()] {
          // Halt execution if the binding has been deallocated already.
          if (!weak) {
            return;
          }

          // Schedule the next update if the binding still exists.
          weak->ScheduleOnDrawnEvent(dispatcher, after);

          // No need to send an update if nothing has changed since the last one, or the client has
          // not yet informed us that it is ready for more updates.
          if (!weak->state_.changed || !weak->state_.ready) {
            return;
          }

          // This is where we would draw the actual lines. Since this is just an example, we'll
          // avoid doing the actual rendering, and simply send the bounding box to the client
          // instead.
          auto result = fidl::SendEvent(weak->binding_)->OnDrawn(weak->state_.bounding_box);
          if (!result.is_ok()) {
            return;
          }

          auto top_left = weak->state_.bounding_box.top_left();
          auto bottom_right = weak->state_.bounding_box.bottom_right();
          FX_LOGS(INFO) << "OnDrawn event sent: top_left: Point { x: " << top_left.x()
                        << ", y: " << top_left.y()
                        << " }, bottom_right: Point { x: " << bottom_right.x()
                        << ", y: " << bottom_right.y() << " }";

          // Reset the change and ready trackers.
          weak->state_.ready = false;
          weak->state_.changed = false;
        },
        after);
  }

  fidl::ServerBinding<examples_canvas_clientrequesteddraw::Instance> binding_;
  CanvasState state_ = CanvasState{};

  // Generates weak references to this object, which are appropriate to pass into asynchronous
  // callbacks that need to access this object. The references are automatically invalidated
  // if this object is destroyed.
  fxl::WeakPtrFactory<InstanceImpl> weak_factory_;
};

int main(int argc, char** argv) {
  FX_LOGS(INFO) << "Started";

  // The event loop is used to asynchronously listen for incoming connections and requests from the
  // client. The following initializes the loop, and obtains the dispatcher, which will be used when
  // binding the server implementation to a channel.
  async::Loop loop(&kAsyncLoopConfigNeverAttachToThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();

  // Create an |OutgoingDirectory| instance.
  //
  // The |component::OutgoingDirectory| class serves the outgoing directory for our component. This
  // directory is where the outgoing FIDL protocols are installed so that they can be provided to
  // other components.
  component::OutgoingDirectory outgoing = component::OutgoingDirectory(dispatcher);

  // The `ServeFromStartupInfo()` function sets up the outgoing directory with the startup handle.
  // The startup handle is a handle provided to every component by the system, so that they can
  // serve capabilities (e.g. FIDL protocols) to other components.
  zx::result result = outgoing.ServeFromStartupInfo();
  if (result.is_error()) {
    FX_LOGS(ERROR) << "Failed to serve outgoing directory: " << result.status_string();
    return -1;
  }

  // Register a handler for components trying to connect to
  // |examples.canvas.clientrequesteddraw.Instance|.
  result = outgoing.AddUnmanagedProtocol<examples_canvas_clientrequesteddraw::Instance>(
      [dispatcher](fidl::ServerEnd<examples_canvas_clientrequesteddraw::Instance> server_end) {
        // Create an instance of our InstanceImpl that destroys itself when the connection closes.
        new InstanceImpl(dispatcher, std::move(server_end));
      });
  if (result.is_error()) {
    FX_LOGS(ERROR) << "Failed to add Instance protocol: " << result.status_string();
    return -1;
  }

  // Everything is wired up. Sit back and run the loop until an incoming connection wakes us up.
  FX_LOGS(INFO) << "Listening for incoming connections";
  loop.Run();
  return 0;
}

C++ (Wire)

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <fidl/examples.canvas.clientrequesteddraw/cpp/wire.h>
#include <lib/async-loop/cpp/loop.h>
#include <lib/component/incoming/cpp/protocol.h>
#include <lib/syslog/cpp/macros.h>
#include <unistd.h>

#include <charconv>

#include <examples/fidl/new/canvas/client_requested_draw/cpp_wire/client/config.h>

// The |EventHandler| is a derived class that we pass into the |fidl::WireClient| to handle incoming
// events asynchronously.
class EventHandler
    : public fidl::WireAsyncEventHandler<examples_canvas_clientrequesteddraw::Instance> {
 public:
  // Handler for |OnDrawn| events sent from the server.
  void OnDrawn(
      fidl::WireEvent<examples_canvas_clientrequesteddraw::Instance::OnDrawn>* event) override {
    ::examples_canvas_clientrequesteddraw::wire::Point top_left = event->top_left;
    ::examples_canvas_clientrequesteddraw::wire::Point bottom_right = event->bottom_right;
    FX_LOGS(INFO) << "OnDrawn event received: top_left: Point { x: " << top_left.x
                  << ", y: " << top_left.y << " }, bottom_right: Point { x: " << bottom_right.x
                  << ", y: " << bottom_right.y << " }";
    loop_.Quit();
  }

  void on_fidl_error(fidl::UnbindInfo error) override { FX_LOGS(ERROR) << error; }

  void handle_unknown_event(
      fidl::UnknownEventMetadata<examples_canvas_clientrequesteddraw::Instance> metadata) override {
    FX_LOGS(WARNING) << "Received an unknown event with ordinal " << metadata.event_ordinal;
  }

  explicit EventHandler(async::Loop& loop) : loop_(loop) {}

 private:
  async::Loop& loop_;
};

// A helper function that takes a coordinate in string form, like "123,-456", and parses it into a
// struct of the form |{ int64 x; int64 y; }|.
::examples_canvas_clientrequesteddraw::wire::Point ParsePoint(std::string_view input) {
  int64_t x = 0;
  int64_t y = 0;
  size_t index = input.find(',');
  if (index != std::string::npos) {
    std::from_chars(input.data(), input.data() + index, x);
    std::from_chars(input.data() + index + 1, input.data() + input.length(), y);
  }
  return ::examples_canvas_clientrequesteddraw::wire::Point{.x = x, .y = y};
}

using Line = ::fidl::Array<::examples_canvas_clientrequesteddraw::wire::Point, 2>;

// A helper function that takes a coordinate pair in string form, like "1,2:-3,-4", and parses it
// into an array of 2 |Point| structs.
Line ParseLine(const std::string& action) {
  auto input = std::string_view(action);
  size_t index = input.find(':');
  if (index != std::string::npos) {
    return {ParsePoint(input.substr(0, index)), ParsePoint(input.substr(index + 1))};
  }
  return {};
}

int main(int argc, const char** argv) {
  FX_LOGS(INFO) << "Started";

  // Retrieve component configuration.
  auto conf = config::Config::TakeFromStartupHandle();

  // Start up an async loop and dispatcher.
  async::Loop loop(&kAsyncLoopConfigNeverAttachToThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();

  // Connect to the protocol inside the component's namespace. This can fail so it's wrapped in a
  // |zx::result| and it must be checked for errors.
  zx::result client_end = component::Connect<examples_canvas_clientrequesteddraw::Instance>();
  if (!client_end.is_ok()) {
    FX_LOGS(ERROR) << "Synchronous error when connecting to the |Instance| protocol: "
                   << client_end.status_string();
    return -1;
  }

  // Create an instance of the event handler.
  EventHandler event_handler(loop);

  // Create an asynchronous client using the newly-established connection.
  fidl::WireClient client(std::move(*client_end), dispatcher, &event_handler);
  FX_LOGS(INFO) << "Outgoing connection enabled";

  std::vector<Line> batched_lines;
  for (const auto& action : conf.script()) {
    // If the next action in the script is to "PUSH", send a batch of lines to the server.
    if (action == "PUSH") {
      fidl::Status status = client->AddLines(fidl::VectorView<Line>::FromExternal(batched_lines));
      if (!status.ok()) {
        // Check that our one-way call was enqueued successfully, and handle the error
        // appropriately. In the case of this example, there is nothing we can do to recover here,
        // except to log an error and exit the program.
        FX_LOGS(ERROR) << "Could not send AddLines request: " << status.error();
        return -1;
      }

      batched_lines.clear();
      FX_LOGS(INFO) << "AddLines request sent";
      continue;
    }

    // If the next action in the script is to "WAIT", block until an |OnDrawn| event is received
    // from the server.
    if (action == "WAIT") {
      loop.Run();
      loop.ResetQuit();

      // Now, inform the server that we are ready to receive more updates whenever they are
      // ready for us.
      FX_LOGS(INFO) << "Ready request sent";
      client->Ready().ThenExactlyOnce(
          [&](fidl::WireUnownedResult<examples_canvas_clientrequesteddraw::Instance::Ready>&
                  result) {
            // Check if the FIDL call succeeded or not.
            if (result.ok()) {
              FX_LOGS(INFO) << "Ready success";
            } else {
              FX_LOGS(ERROR) << "Could not send Ready request: " << result.error();
            }

            // Quit the loop, thereby handing control back to the outer loop of actions being
            // iterated over.
            loop.Quit();
          });

      // Run the loop until the callback is resolved, at which point we can continue from here.
      loop.Run();
      loop.ResetQuit();

      continue;
    }

    // Batch a line for drawing to the canvas using the two points provided.
    Line line = ParseLine(action);
    batched_lines.push_back(line);
    FX_LOGS(INFO) << "AddLines batching line: [Point { x: " << line[1].x << ", y: " << line[1].y
                  << " }, Point { x: " << line[0].x << ", y: " << line[0].y << " }]";
  }

  // TODO(https://fxbug.dev/42156498): We need to sleep here to make sure all logs get drained. Once the
  // referenced bug has been resolved, we can remove the sleep.
  sleep(2);
  return 0;
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <fidl/examples.canvas.clientrequesteddraw/cpp/wire.h>
#include <lib/async-loop/cpp/loop.h>
#include <lib/async/cpp/task.h>
#include <lib/component/outgoing/cpp/outgoing_directory.h>
#include <lib/fidl/cpp/wire/channel.h>
#include <lib/syslog/cpp/macros.h>
#include <unistd.h>

#include <src/lib/fxl/macros.h>
#include <src/lib/fxl/memory/weak_ptr.h>

// A struct that stores the things we care about for this example: the bounding box containing the
// lines added so far, plus flags tracking changes and client readiness.
struct CanvasState {
  // Tracks whether there has been a change since the last send, to prevent redundant updates.
  bool changed = true;
  // Tracks whether or not the client has declared itself ready to receive more updates.
  bool ready = true;
  examples_canvas_clientrequesteddraw::wire::BoundingBox bounding_box;
};

// An implementation of the |Instance| protocol.
class InstanceImpl final : public fidl::WireServer<examples_canvas_clientrequesteddraw::Instance> {
 public:
  // Bind this implementation to a channel.
  InstanceImpl(async_dispatcher_t* dispatcher,
               fidl::ServerEnd<examples_canvas_clientrequesteddraw::Instance> server_end)
      : binding_(dispatcher, std::move(server_end), this, std::mem_fn(&InstanceImpl::OnFidlClosed)),
        weak_factory_(this) {
    // Start the update timer on startup. Our server sends one update per second
    ScheduleOnDrawnEvent(dispatcher, zx::sec(1));
  }

  void OnFidlClosed(fidl::UnbindInfo info) {
    if (info.reason() != ::fidl::Reason::kPeerClosedWhileReading) {
      FX_LOGS(ERROR) << "Shutdown unexpectedly";
    }
    delete this;
  }

  void AddLines(AddLinesRequestView request, AddLinesCompleter::Sync& completer) override {
    FX_LOGS(INFO) << "AddLines request received";
    for (const auto& points : request->lines) {
      FX_LOGS(INFO) << "AddLines printing line: [Point { x: " << points[1].x
                    << ", y: " << points[1].y << " }, Point { x: " << points[0].x
                    << ", y: " << points[0].y << " }]";

      // Update the bounding box to account for the new line we've just "added" to the canvas.
      auto& bounds = state_.bounding_box;
      for (const auto& point : points) {
        if (point.x < bounds.top_left.x) {
          bounds.top_left.x = point.x;
        }
        if (point.y > bounds.top_left.y) {
          bounds.top_left.y = point.y;
        }
        if (point.x > bounds.bottom_right.x) {
          bounds.bottom_right.x = point.x;
        }
        if (point.y < bounds.bottom_right.y) {
          bounds.bottom_right.y = point.y;
        }
      }
    }

    // Mark the state as "dirty", so that an update is sent back to the client on the next |OnDrawn|
    // event.
    state_.changed = true;
  }

  void Ready(ReadyCompleter::Sync& completer) override {
    FX_LOGS(INFO) << "Ready request received";

    // The client must only call `Ready() -> ();` after receiving an `-> OnDrawn();` event; if two
    // "consecutive" `Ready() -> ();` calls are received, this interaction has entered an invalid
    // state, and should be aborted immediately.
    if (state_.ready == true) {
      FX_LOGS(ERROR) << "Invalid back-to-back `Ready` requests received";
    }

    state_.ready = true;
    completer.Reply();
  }

  void handle_unknown_method(
      fidl::UnknownMethodMetadata<examples_canvas_clientrequesteddraw::Instance> metadata,
      fidl::UnknownMethodCompleter::Sync& completer) override {
    FX_LOGS(WARNING) << "Received an unknown method with ordinal " << metadata.method_ordinal;
  }

 private:
  // Each scheduled update waits for the allotted amount of time, sends an update if something has
  // changed, and schedules the next update.
  void ScheduleOnDrawnEvent(async_dispatcher_t* dispatcher, zx::duration after) {
    async::PostDelayedTask(
        dispatcher,
        [&, dispatcher, after, weak = weak_factory_.GetWeakPtr()] {
          // Halt execution if the binding has been deallocated already.
          if (!weak) {
            return;
          }

          // Schedule the next update if the binding still exists.
          weak->ScheduleOnDrawnEvent(dispatcher, after);

          // No need to send an update if nothing has changed since the last one, or the client has
          // not yet informed us that it is ready for more updates.
          if (!weak->state_.changed || !weak->state_.ready) {
            return;
          }

          // This is where we would draw the actual lines. Since this is just an example, we'll
          // avoid doing the actual rendering, and simply send the bounding box to the client
          // instead.
          auto top_left = weak->state_.bounding_box.top_left;
          auto bottom_right = weak->state_.bounding_box.bottom_right;
          fidl::Status status =
              fidl::WireSendEvent(weak->binding_)->OnDrawn(top_left, bottom_right);
          if (!status.ok()) {
            return;
          }
          FX_LOGS(INFO) << "OnDrawn event sent: top_left: Point { x: " << top_left.x
                        << ", y: " << top_left.y
                        << " }, bottom_right: Point { x: " << bottom_right.x
                        << ", y: " << bottom_right.y << " }";

          // Reset the change and ready trackers.
          weak->state_.ready = false;
          weak->state_.changed = false;
        },
        after);
  }

  fidl::ServerBinding<examples_canvas_clientrequesteddraw::Instance> binding_;
  CanvasState state_ = CanvasState{};

  // Generates weak references to this object, which are appropriate to pass into asynchronous
  // callbacks that need to access this object. The references are automatically invalidated
  // if this object is destroyed.
  fxl::WeakPtrFactory<InstanceImpl> weak_factory_;
};

int main(int argc, char** argv) {
  FX_LOGS(INFO) << "Started";

  // The event loop is used to asynchronously listen for incoming connections and requests from the
  // client. The following initializes the loop, and obtains the dispatcher, which will be used when
  // binding the server implementation to a channel.
  async::Loop loop(&kAsyncLoopConfigNeverAttachToThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();

  // Create an |OutgoingDirectory| instance.
  //
  // The |component::OutgoingDirectory| class serves the outgoing directory for our component. This
  // directory is where the outgoing FIDL protocols are installed so that they can be provided to
  // other components.
  component::OutgoingDirectory outgoing = component::OutgoingDirectory(dispatcher);

  // The `ServeFromStartupInfo()` function sets up the outgoing directory with the startup handle.
  // The startup handle is a handle provided to every component by the system, so that they can
  // serve capabilities (e.g. FIDL protocols) to other components.
  zx::result result = outgoing.ServeFromStartupInfo();
  if (result.is_error()) {
    FX_LOGS(ERROR) << "Failed to serve outgoing directory: " << result.status_string();
    return -1;
  }

  // Register a handler for components trying to connect to
  // |examples.canvas.clientrequesteddraw.Instance|.
  result = outgoing.AddUnmanagedProtocol<examples_canvas_clientrequesteddraw::Instance>(
      [dispatcher](fidl::ServerEnd<examples_canvas_clientrequesteddraw::Instance> server_end) {
        // Create an instance of our InstanceImpl that destroys itself when the connection closes.
        new InstanceImpl(dispatcher, std::move(server_end));
      });
  if (result.is_error()) {
    FX_LOGS(ERROR) << "Failed to add Instance protocol: " << result.status_string();
    return -1;
  }

  // Everything is wired up. Sit back and run the loop until an incoming connection wakes us up.
  FX_LOGS(INFO) << "Listening for incoming connections";
  loop.Run();
  return 0;
}

HLCPP

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <lib/async-loop/cpp/loop.h>
#include <lib/sys/cpp/component_context.h>
#include <lib/syslog/cpp/macros.h>
#include <unistd.h>

#include <charconv>

#include <examples/canvas/clientrequesteddraw/cpp/fidl.h>
#include <examples/fidl/new/canvas/client_requested_draw/hlcpp/client/config.h>

// A helper function that takes a coordinate in string form, like "123,-456", and parses it into a
// struct of the form |{ int64 x; int64 y; }|.
::examples::canvas::clientrequesteddraw::Point ParsePoint(std::string_view input) {
  int64_t x = 0;
  int64_t y = 0;
  size_t index = input.find(',');
  if (index != std::string::npos) {
    std::from_chars(input.data(), input.data() + index, x);
    std::from_chars(input.data() + index + 1, input.data() + input.length(), y);
  }
  return ::examples::canvas::clientrequesteddraw::Point{.x = x, .y = y};
}

using Line = ::std::array<::examples::canvas::clientrequesteddraw::Point, 2>;

// A helper function that takes a coordinate pair in string form, like "1,2:-3,-4", and parses it
// into an array of 2 |Point| structs.
Line ParseLine(const std::string& action) {
  auto input = std::string_view(action);
  size_t index = input.find(':');
  if (index != std::string::npos) {
    return {ParsePoint(input.substr(0, index)), ParsePoint(input.substr(index + 1))};
  }
  return {};
}

int main(int argc, const char** argv) {
  FX_LOGS(INFO) << "Started";

  // Retrieve component configuration.
  auto conf = config::Config::TakeFromStartupHandle();

  // Start up an async loop.
  async::Loop loop(&kAsyncLoopConfigNeverAttachToThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();

  // Connect to the protocol inside the component's namespace, then create an asynchronous client
  // using the newly-established connection.
  examples::canvas::clientrequesteddraw::InstancePtr instance_proxy;
  auto context = sys::ComponentContext::Create();
  context->svc()->Connect(instance_proxy.NewRequest(dispatcher));
  FX_LOGS(INFO) << "Outgoing connection enabled";

  instance_proxy.set_error_handler([&loop](zx_status_t status) {
    FX_LOGS(ERROR) << "Shutdown unexpectedly";
    loop.Quit();
  });

  // Provide a lambda to handle incoming |OnDrawn| events asynchronously.
  instance_proxy.events().OnDrawn =
      [&loop](::examples::canvas::clientrequesteddraw::Point top_left,
              ::examples::canvas::clientrequesteddraw::Point bottom_right) {
        FX_LOGS(INFO) << "OnDrawn event received: top_left: Point { x: " << top_left.x
                      << ", y: " << top_left.y << " }, bottom_right: Point { x: " << bottom_right.x
                      << ", y: " << bottom_right.y << " }";
        loop.Quit();
      };

  instance_proxy.events().handle_unknown_event = [](uint64_t ordinal) {
    FX_LOGS(WARNING) << "Received an unknown event with ordinal " << ordinal;
  };

  std::vector<Line> batched_lines;
  for (const auto& action : conf.script()) {
    // If the next action in the script is to "PUSH", send a batch of lines to the server.
    if (action == "PUSH") {
      instance_proxy->AddLines(batched_lines);
      batched_lines.clear();
      FX_LOGS(INFO) << "AddLines request sent";
      continue;
    }

    // If the next action in the script is to "WAIT", block until an |OnDrawn| event is received
    // from the server.
    if (action == "WAIT") {
      loop.Run();
      loop.ResetQuit();

      // Now, inform the server that we are ready to receive more updates whenever they are ready
      // for us.
      FX_LOGS(INFO) << "Ready request sent";
      instance_proxy->Ready([&](fpromise::result<void, fidl::FrameworkErr> result) {
        if (result.is_error()) {
          // Check that our flexible two-way call was known to the server and handle the case of an
          // unknown method appropriately. In the case of this example, there is nothing we can do
          // to recover here, except to log an error and exit the program.
          FX_LOGS(ERROR) << "Server does not implement AddLine";
        }

        FX_LOGS(INFO) << "Ready success";

        // Quit the loop, thereby handing control back to the outer loop of actions being iterated
        // over.
        loop.Quit();
      });

      // Run the loop until the callback is resolved, at which point we can continue from here.
      loop.Run();
      loop.ResetQuit();

      continue;
    }

    // Batch a line for drawing to the canvas using the two points provided.
    Line line = ParseLine(action);
    batched_lines.push_back(line);
    FX_LOGS(INFO) << "AddLines batching line: [Point { x: " << line[1].x << ", y: " << line[1].y
                  << " }, Point { x: " << line[0].x << ", y: " << line[0].y << " }]";
  }

  // TODO(https://fxbug.dev/42156498): We need to sleep here to make sure all logs get drained. Once the
  // referenced bug has been resolved, we can remove the sleep.
  sleep(2);
  return 0;
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include <lib/async-loop/cpp/loop.h>
#include <lib/async-loop/default.h>
#include <lib/async/cpp/task.h>
#include <lib/fidl/cpp/binding.h>
#include <lib/sys/cpp/component_context.h>
#include <lib/syslog/cpp/macros.h>
#include <unistd.h>

#include <examples/canvas/clientrequesteddraw/cpp/fidl.h>
#include <src/lib/fxl/macros.h>
#include <src/lib/fxl/memory/weak_ptr.h>

// A struct that stores the state we care about for this example: flags tracking whether an update
// needs to be sent and whether the client is ready to receive one, plus the bounding box that
// contains all of the lines added so far.
struct CanvasState {
  // Tracks whether there has been a change since the last send, to prevent redundant updates.
  bool changed = true;
  // Tracks whether or not the client has declared itself ready to receive more updates.
  bool ready = true;
  examples::canvas::clientrequesteddraw::BoundingBox bounding_box;
};

using Line = ::std::array<::examples::canvas::clientrequesteddraw::Point, 2>;

// An implementation of the |Instance| protocol.
class InstanceImpl final : public examples::canvas::clientrequesteddraw::Instance {
 public:
  // Bind this implementation to an |InterfaceRequest|.
  InstanceImpl(async_dispatcher_t* dispatcher,
               fidl::InterfaceRequest<examples::canvas::clientrequesteddraw::Instance> request)
      : binding_(fidl::Binding<examples::canvas::clientrequesteddraw::Instance>(this)),
        weak_factory_(this) {
    binding_.Bind(std::move(request), dispatcher);

    // Gracefully handle abrupt shutdowns.
    binding_.set_error_handler([this](zx_status_t status) mutable {
      if (status != ZX_ERR_PEER_CLOSED) {
        FX_LOGS(ERROR) << "Shutdown unexpectedly";
      }
      delete this;
    });

    // Start the update timer on startup. Our server sends one update per second.
    ScheduleOnDrawnEvent(dispatcher, zx::sec(1));
  }

  void AddLines(std::vector<Line> lines) override {
    FX_LOGS(INFO) << "AddLines request received";
    for (const auto& points : lines) {
      FX_LOGS(INFO) << "AddLines printing line: [Point { x: " << points[1].x
                    << ", y: " << points[1].y << " }, Point { x: " << points[0].x
                    << ", y: " << points[0].y << " }]";

      // Update the bounding box to account for the new line we've just "added" to the canvas.
      auto& bounds = state_.bounding_box;
      for (const auto& point : points) {
        if (point.x < bounds.top_left.x) {
          bounds.top_left.x = point.x;
        }
        if (point.y > bounds.top_left.y) {
          bounds.top_left.y = point.y;
        }
        if (point.x > bounds.bottom_right.x) {
          bounds.bottom_right.x = point.x;
        }
        if (point.y < bounds.bottom_right.y) {
          bounds.bottom_right.y = point.y;
        }
      }
    }

    // Mark the state as "dirty", so that an update is sent back to the client on the next
    // |OnDrawn| event.
    state_.changed = true;
  }

  void Ready(ReadyCallback callback) override {
    FX_LOGS(INFO) << "Ready request received";

    // The client must only call `Ready() -> ();` after receiving an `-> OnDrawn();` event; if
    // two "consecutive" `Ready() -> ();` calls are received, this interaction has entered an
    // invalid state, and should be aborted immediately.
    if (state_.ready == true) {
      FX_LOGS(ERROR) << "Invalid back-to-back `Ready` requests received";
    }

    state_.ready = true;
    callback(fpromise::ok());
  }

  void handle_unknown_method(uint64_t ordinal, bool method_has_response) override {
    FX_LOGS(WARNING) << "Received an unknown method with ordinal " << ordinal;
  }

 private:
  // Each scheduled update waits for the allotted amount of time, sends an update if something
  // has changed, and schedules the next update.
  void ScheduleOnDrawnEvent(async_dispatcher_t* dispatcher, zx::duration after) {
    async::PostDelayedTask(
        dispatcher,
        [&, dispatcher, after, weak = weak_factory_.GetWeakPtr()] {
          // Halt execution if the binding has been deallocated already.
          if (!weak) {
            return;
          }

          // Schedule the next update if the binding still exists.
          weak->ScheduleOnDrawnEvent(dispatcher, after);

          // No need to send an update if nothing has changed since the last one, or the client
          // has not yet informed us that it is ready for more updates.
          if (!weak->state_.changed || !weak->state_.ready) {
            return;
          }

          // This is where we would draw the actual lines. Since this is just an example, we'll
          // avoid doing the actual rendering, and simply send the bounding box to the client
          // instead.
          auto top_left = state_.bounding_box.top_left;
          auto bottom_right = state_.bounding_box.bottom_right;
          binding_.events().OnDrawn(top_left, bottom_right);
          FX_LOGS(INFO) << "OnDrawn event sent: top_left: Point { x: " << top_left.x
                        << ", y: " << top_left.y
                        << " }, bottom_right: Point { x: " << bottom_right.x
                        << ", y: " << bottom_right.y << " }";

          // Reset the change and ready trackers.
          state_.ready = false;
          state_.changed = false;
        },
        after);
  }

  fidl::Binding<examples::canvas::clientrequesteddraw::Instance> binding_;
  CanvasState state_ = CanvasState{};

  // Generates weak references to this object, which are appropriate to pass into asynchronous
  // callbacks that need to access this object. The references are automatically invalidated
  // if this object is destroyed.
  fxl::WeakPtrFactory<InstanceImpl> weak_factory_;
};

int main(int argc, char** argv) {
  FX_LOGS(INFO) << "Started";

  // The event loop is used to asynchronously listen for incoming connections and requests from
  // the client. The following initializes the loop, and obtains the dispatcher, which will be
  // used when binding the server implementation to a channel.
  //
  // Note that unlike the new C++ bindings, HLCPP bindings rely on the async loop being attached
  // to the current thread via the |kAsyncLoopConfigAttachToCurrentThread| configuration.
  async::Loop loop(&kAsyncLoopConfigAttachToCurrentThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();

  // Create an |OutgoingDirectory| instance.
  //
  // The |component::OutgoingDirectory| class serves the outgoing directory for our component.
  // This directory is where the outgoing FIDL protocols are installed so that they can be
  // provided to other components.
  auto context = sys::ComponentContext::CreateAndServeOutgoingDirectory();

  // Register a handler for components trying to connect to
  // |examples.canvas.clientrequesteddraw.Instance|.
  context->outgoing()->AddPublicService(
      fidl::InterfaceRequestHandler<examples::canvas::clientrequesteddraw::Instance>(
          [dispatcher](
              fidl::InterfaceRequest<examples::canvas::clientrequesteddraw::Instance> request) {
            // Create an instance of our |InstanceImpl| that destroys itself when the connection
            // closes.
            new InstanceImpl(dispatcher, std::move(request));
          }));

  // Everything is wired up. Sit back and run the loop until an incoming connection wakes us up.
  FX_LOGS(INFO) << "Listening for incoming connections";
  loop.Run();
  return 0;
}

Feed-forward dataflow

Some protocols have feed-forward dataflow, which avoids round-trip latency by having data flow primarily in one direction, typically from client to server. The protocol only synchronizes the two endpoints when necessary. Feed-forward dataflow also increases throughput because fewer total context switches are required to perform a given task.

The key to feed-forward dataflow is to remove the need for clients to wait for results from prior method calls before sending subsequent messages. For example, protocol request pipelining removes the need for the client to wait for the server to reply with a protocol before the client can use the protocol. Similarly, client-assigned identifiers (see below) remove the need for the client to wait for the server to assign identifiers for state held by the server.

Typically, a feed-forward protocol will involve the client submitting a sequence of one-way method calls without waiting for a response from the server. After submitting these messages, the client explicitly synchronizes with the server by calling a method such as Commit or Flush that has a reply. The reply might be an empty message or might contain information about whether the submitted sequence succeeded. In more sophisticated protocols, the one-way messages are represented as a union of command objects rather than individual method calls; see the command union pattern below.

Protocols that use feed-forward dataflow work well with optimistic error handling strategies. Rather than having the server reply to every method with a status value, which encourages the client to wait for a round trip between each message, include a status reply only if the method can fail for reasons that are not under the client's control. If the client sends a message that it should have known was invalid (e.g., referencing an invalid client-assigned identifier), signal the error by closing the connection. If the client sends a message that it could not have known was invalid, either provide a response that signals success or failure (which requires the client to synchronize) or remember the error and ignore subsequent dependent requests until the client synchronizes and recovers from the error in some way.

Example:

protocol Canvas {
    Flush() -> (struct {
        code zx.Status;
    });
    Clear();
    UploadImage(struct {
        image_id uint32;
        image Image;
    });
    PaintImage(struct {
        image_id uint32;
        x float32;
        y float32;
    });
    DiscardImage(struct {
        image_id uint32;
    });
    PaintSmileyFace(struct {
        x float32;
        y float32;
    });
    PaintMoustache(struct {
        x float32;
        y float32;
    });
};

FIDL recipe: Size constraint

FIDL vectors and strings may carry a size constraint, which specifies a limit on how many members the type can contain. In the case of vectors, this refers to the number of elements stored in the vector, while for strings it refers to the number of bytes the string contains.

The use of size constraints is strongly encouraged, since it sets an upper bound on what would otherwise be an unboundedly large type.

A useful operation for key-value stores is in-order iteration: that is, when given a key, to return a (usually paginated) list of elements that appear after it, in order.

Reasoning

In FIDL, this is best done using an iterator, which is generally implemented as a separate protocol over which this iteration can occur. Using a separate protocol, and therefore a separate channel, has a number of benefits, including de-interleaving the iteration pull requests from other operations done over the main protocol.

The client and server side of the channel connection for protocol P can be represented as FIDL data types, as a client_end:P and server_end:P, respectively. These types are collectively known as protocol ends, and represent the other (non-@discoverable) way of connecting a FIDL client to its corresponding server: over an existing FIDL connection!

Protocol ends are specific instances of a general FIDL concept: the resource type. A resource type is intended to contain FIDL handles, which necessitates extra restrictions on how the type can be used. The type must always be unique, as the underlying resource is mediated by some other capability manager (usually the Zircon kernel). Duplicating such a resource via a simple in-memory copy, without involving the manager, is impossible. To prevent such duplication, all resource types in FIDL are always move-only.

Finally, the Get() method of the Iterator protocol itself makes use of a size constraint on the return payload. This limits the amount of data that may be transmitted in a single pull, allowing for some measure of resource use control. It also creates a natural pagination boundary: rather than a giant dump of all of the results at once, the server only needs to prepare small batches at a time.

Implementation

The FIDL, CML, and realm interface definitions are as follows:

FIDL

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
library examples.keyvaluestore.additerator;

/// An item in the store. The key must match the regex `^[A-Za-z][A-Za-z0-9_\.\/]{2,62}[A-Za-z0-9]$`. That
/// is, it must start with a letter, end with a letter or number, contain only letters, numbers,
/// periods, and slashes, and be between 4 and 64 characters long.
type Item = struct {
    key string:128;
    value vector<byte>:64000;
};

/// An enumeration of things that may go wrong when trying to write a value to our store.
type WriteError = flexible enum {
    UNKNOWN = 0;
    INVALID_KEY = 1;
    INVALID_VALUE = 2;
    ALREADY_EXISTS = 3;
};

/// An enumeration of things that may go wrong when trying to create an iterator.
type IterateConnectionError = flexible enum {
    /// The starting key was not found.
    UNKNOWN_START_AT = 1;
};

/// A key-value store which supports insertion and iteration.
@discoverable
open protocol Store {
    /// Writes an item to the store.
    flexible WriteItem(struct {
        attempt Item;
    }) -> () error WriteError;

    /// Iterates over the items in the store, using lexicographic ordering over the keys.
    ///
    /// The [`iterator`] is [pipelined][pipelining] to the server, such that the client can
    /// immediately send requests over the new connection.
    ///
    /// [pipelining]: https://fuchsia.dev/fuchsia-src/development/api/fidl?hl=en#request-pipelining
    flexible Iterate(resource struct {
        /// If present, requests to start the iteration at this item.
        starting_at string:<128, optional>;

        /// The [`Iterator`] server endpoint. The client creates both ends of the channel and
        /// retains the `client_end` locally to use for pulling iteration pages, while sending the
        /// `server_end` off to be fulfilled by the server.
        iterator server_end:Iterator;
    }) -> () error IterateConnectionError;
};

/// An iterator for the key-value store. Note that this protocol makes no guarantee of atomicity -
/// the values may change between pulls from the iterator. Unlike the `Store` protocol above, this
/// protocol is not `@discoverable`: it is not independently published by the component that
/// implements it, but rather must have one of its two protocol ends transmitted over an existing
/// FIDL connection.
///
/// As is often the case with iterators, the client indicates that they are done with an instance of
/// the iterator by simply closing their end of the connection.
///
/// Since the iterator is associated only with the Iterate method, it is declared as closed rather
/// than open. This is because changes to how iteration works are more likely to require replacing
/// the Iterate method completely (which is fine because that method is flexible) rather than
/// evolving the Iterator protocol.
closed protocol Iterator {
    /// Gets the next batch of keys.
    ///
    /// The client pulls keys rather than having the server proactively push them, to implement
    /// [flow control][flow-control] over the messages.
    ///
    /// [flow-control]:
    ///     https://fuchsia.dev/fuchsia-src/development/api/fidl?hl=en#prefer_pull_to_push
    strict Get() -> (struct {
        /// A list of keys. If the iterator has reached the end of iteration, the list will be
        /// empty. The client is expected to then close the connection.
        entries vector<string:128>:10;
    });
};

CML

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
{
    include: [ "syslog/client.shard.cml" ],
    program: {
        runner: "elf",
        binary: "bin/client_bin",
    },
    use: [
        { protocol: "examples.keyvaluestore.additerator.Store" },
    ],
    config: {
        write_items: {
            type: "vector",
            max_count: 16,
            element: {
                type: "string",
                max_size: 64,
            },
        },

        // A key to iterate from, after all items in `write_items` have been written.
        iterate_from: {
            type: "string",
            max_size: 64,
        },

    },
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
{
    include: [ "syslog/client.shard.cml" ],
    program: {
        runner: "elf",
        binary: "bin/server_bin",
    },
    capabilities: [
        { protocol: "examples.keyvaluestore.additerator.Store" },
    ],
    expose: [
        {
            protocol: "examples.keyvaluestore.additerator.Store",
            from: "self",
        },
    ],
}

Realm

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
{
    children: [
        {
            name: "client",
            url: "#meta/client.cm",
        },
        {
            name: "server",
            url: "#meta/server.cm",
        },
    ],
    offer: [
        // Route the protocol under test from the server to the client.
        {
            protocol: "examples.keyvaluestore.additerator.Store",
            from: "#server",
            to: "#client",
        },

        // Route diagnostics support to all children.
        {
            protocol: [
                "fuchsia.inspect.InspectSink",
                "fuchsia.logger.LogSink",
            ],
            from: "parent",
            to: [
                "#client",
                "#server",
            ],
        },
    ],
}

Client and server implementations can then be written in any supported language:

Rust

Client

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

use {
    anyhow::{Context as _, Error},
    config::Config,
    fuchsia_component::client::connect_to_protocol,
    std::{thread, time},
};

use {
    fidl::endpoints::create_proxy,
    fidl_examples_keyvaluestore_additerator::{Item, IteratorMarker, StoreMarker},
    futures::join,
};

#[fuchsia::main]
async fn main() -> Result<(), Error> {
    println!("Started");

    // Load the structured config values passed to this component at startup.
    let config = Config::take_from_startup_handle();

    // Use the Component Framework runtime to connect to the newly spun up server component. We wrap
    // our retained client end in a proxy object that lets us asynchronously send `Store` requests
    // across the channel.
    let store = connect_to_protocol::<StoreMarker>()?;
    println!("Outgoing connection enabled");

    // This client's structured config has one parameter, a vector of strings. Each string is the
    // path to a resource file whose filename is a key and whose contents are a value. We iterate
    // over them and try to write each key-value pair to the remote store.
    for key in config.write_items.into_iter() {
        let path = format!("/pkg/data/{}.txt", key);
        let value = std::fs::read_to_string(path.clone())
            .with_context(|| format!("Failed to load {path}"))?;
        match store.write_item(&Item { key: key, value: value.into_bytes() }).await? {
            Ok(_) => println!("WriteItem Success"),
            Err(err) => println!("WriteItem Error: {}", err.into_primitive()),
        }
    }

    if !config.iterate_from.is_empty() {
        // This helper creates a channel, and returns two protocol ends: the `client_end` is already
        // conveniently bound to the correct FIDL protocol, `Iterator`, while the `server_end` is
        // unbound and ready to be sent over the wire.
        let (iterator, server_end) = create_proxy::<IteratorMarker>()?;

        // There is no need to wait for the iterator to connect before sending the first `Get()`
        // request - since we already hold the `client_end` of the connection, we can start queuing
        // requests on it immediately.
        let connect_to_iterator = store.iterate(Some(config.iterate_from.as_str()), server_end);
        let first_get = iterator.get();

        // Wait until both the connection and the first request resolve - an error in either case
        // triggers an immediate resolution of the combined future.
        let (connection, first_page) = join!(connect_to_iterator, first_get);

        // Handle any connection error. If this has occurred, it is impossible for the first `Get()`
        // call to have resolved successfully, so check this error first.
        if let Err(err) = connection.context("Could not connect to Iterator")? {
            println!("Iterator Connection Error: {}", err.into_primitive());
        } else {
            println!("Iterator Connection Success");

            // Consecutively repeat the `Get()` request if the previous response was not empty.
            let mut entries = first_page.context("Could not get page from Iterator")?;
            while !&entries.is_empty() {
                for entry in entries.iter() {
                    println!("Iterator Entry: {}", entry);
                }
                entries = iterator.get().await.context("Could not get page from Iterator")?;
            }
        }
    }

    // TODO(https://fxbug.dev/42156498): We need to sleep here to make sure all logs get drained. Once the
    // referenced bug has been resolved, we can remove the sleep.
    thread::sleep(time::Duration::from_secs(2));
    Ok(())
}

Server

// Copyright 2022 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

use {
    anyhow::{Context as _, Error},
    fuchsia_component::server::ServiceFs,
    futures::prelude::*,
    lazy_static::lazy_static,
    regex::Regex,
};

use {
    fidl_examples_keyvaluestore_additerator::{
        Item, IterateConnectionError, IteratorRequest, IteratorRequestStream, StoreRequest,
        StoreRequestStream, WriteError,
    },
    fuchsia_async as fasync,
    std::collections::btree_map::Entry,
    std::collections::BTreeMap,
    std::ops::Bound::*,
    std::sync::Arc,
    std::sync::Mutex,
};

lazy_static! {
    static ref KEY_VALIDATION_REGEX: Regex =
        Regex::new(r"^[A-Za-z]\w+[A-Za-z0-9]$").expect("Key validation regex failed to compile");
}

/// Handler for the `WriteItem` method.
fn write_item(store: &mut BTreeMap<String, Vec<u8>>, attempt: Item) -> Result<(), WriteError> {
    // Validate the key.
    if !KEY_VALIDATION_REGEX.is_match(attempt.key.as_str()) {
        println!("Write error: INVALID_KEY, For key: {}", attempt.key);
        return Err(WriteError::InvalidKey);
    }

    // Validate the value.
    if attempt.value.is_empty() {
        println!("Write error: INVALID_VALUE, For key: {}", attempt.key);
        return Err(WriteError::InvalidValue);
    }

    // Write to the store, validating that the key did not already exist.
    match store.entry(attempt.key) {
        Entry::Occupied(entry) => {
            println!("Write error: ALREADY_EXISTS, For key: {}", entry.key());
            Err(WriteError::AlreadyExists)
        }
        Entry::Vacant(entry) => {
            println!("Wrote value at key: {}", entry.key());
            entry.insert(attempt.value);
            Ok(())
        }
    }
}

/// Handler for the `Iterate` method, which deals with validating that the requested start position
/// exists, and then sets up the asynchronous side channel for the actual iteration to occur over.
fn iterate(
    store: Arc<Mutex<BTreeMap<String, Vec<u8>>>>,
    starting_at: Option<String>,
    stream: IteratorRequestStream,
) -> Result<(), IterateConnectionError> {
    // Validate that the starting key, if supplied, actually exists.
    if let Some(start_key) = starting_at.clone() {
        if !store.lock().unwrap().contains_key(&start_key) {
            return Err(IterateConnectionError::UnknownStartAt);
        }
    }

    // Spawn a detached task. This allows the method call to return while the iteration continues in
    // a separate, unawaited task.
    fasync::Task::spawn(async move {
        // Serve the iteration requests. Note that access to the underlying store is behind a
        // contended `Mutex`, meaning that the iteration is not atomic: page contents could shift,
        // change, or disappear entirely between `Get()` requests.
        stream
            .map(|result| result.context("failed request"))
            .try_fold(
                match starting_at {
                    Some(start_key) => Included(start_key),
                    None => Unbounded,
                },
                |mut lower_bound, request| async {
                    match request {
                        IteratorRequest::Get { responder } => {
                            println!("Iterator page request received");

                            // The `page_size` should be kept in sync with the size constraint on
                            // the iterator's response, as defined in the FIDL protocol.
                            static PAGE_SIZE: usize = 10;

                            // An iterator, beginning at `lower_bound` and tracking the pagination's
                            // progress through iteration as each page is pulled by a client-sent
                            // `Get()` request.
                            let held_store = store.lock().unwrap();
                            let mut entries = held_store.range((lower_bound.clone(), Unbounded));
                            let mut current_page = vec![];
                            for _ in 0..PAGE_SIZE {
                                match entries.next() {
                                    Some(entry) => {
                                        current_page.push(entry.0.clone());
                                    }
                                    None => break,
                                }
                            }

                            // Update the `lower_bound` - either inclusive of the next item in the
                            // iteration, or exclusive of the last seen item if the iteration has
                            // finished. This `lower_bound` will be passed to the next request
                            // handler as its starting point.
                            lower_bound = match entries.next() {
                                Some(next) => Included(next.0.clone()),
                                None => match current_page.last() {
                                    Some(tail) => Excluded(tail.clone()),
                                    None => lower_bound,
                                },
                            };

                            // Send the page. At the end of this scope, the `held_store` lock gets
                            // dropped, and therefore released.
                            responder.send(&current_page).context("error sending reply")?;
                            println!("Iterator page sent");
                        }
                    }
                    Ok(lower_bound)
                },
            )
            .await
            .ok();
    })
    .detach();

    Ok(())
}

/// Creates a new instance of the server. Each server has its own bespoke, per-connection instance
/// of the key-value store.
async fn run_server(stream: StoreRequestStream) -> Result<(), Error> {
    // Create a new in-memory key-value store. The store will live for the lifetime of the
    // connection between the server and this particular client.
    //
    // Note that we now use an `Arc<Mutex<BTreeMap>>`, replacing the previous `RefCell<HashMap>`.
    // The `BTreeMap` is used because we want an ordered map, to better facilitate iteration. The
    // `Arc<Mutex<...>>` is used because there are now multiple async tasks accessing the store: one main
    // task which handles communication over the protocol, and one additional task per iterator
    // protocol. `Arc<Mutex<...>>` is the simplest way to synchronize concurrent access between
    // these racing tasks.
    let store = &Arc::new(Mutex::new(BTreeMap::<String, Vec<u8>>::new()));

    // Serve all requests on the protocol sequentially - a new request is not handled until its
    // predecessor has been processed.
    stream
        .map(|result| result.context("failed request"))
        .try_for_each(|request| async {
            // Match based on the method being invoked.
            match request {
                StoreRequest::WriteItem { attempt, responder } => {
                    println!("WriteItem request received");

                    // The `responder` parameter is a special struct that manages the outgoing reply
                    // to this method call. Calling `send` on the responder exactly once will send
                    // the reply.
                    responder
                        .send(write_item(&mut store.clone().lock().unwrap(), attempt))
                        .context("error sending reply")?;
                    println!("WriteItem response sent");
                }
                StoreRequest::Iterate { starting_at, iterator, responder } => {
                    println!("Iterate request received");

                    // The `iterate` handler does a quick check to see that the request is valid,
                    // then spins up a separate worker task to serve the newly minted `Iterator`
                    // protocol instance, allowing this call to return immediately and continue the
                    // request stream with other work.
                    responder
                        .send(iterate(store.clone(), starting_at, iterator.into_stream()?))
                        .context("error sending reply")?;
                    println!("Iterate response sent");
                } //
                StoreRequest::_UnknownMethod { ordinal, .. } => {
                    println!("Received an unknown method with ordinal {ordinal}");
                }
            }
            Ok(())
        })
        .await
}

// A helper enum that allows us to treat a `Store` service instance as a value.
enum IncomingService {
    Store(StoreRequestStream),
}

#[fuchsia::main]
async fn main() -> Result<(), Error> {
    println!("Started");

    // Add a discoverable instance of our `Store` protocol - this will allow the client to see the
    // server and connect to it.
    let mut fs = ServiceFs::new_local();
    fs.dir("svc").add_fidl_service(IncomingService::Store);
    fs.take_and_serve_directory_handle()?;
    println!("Listening for incoming connections");

    // The maximum number of concurrent clients that may be served by this process.
    const MAX_CONCURRENT: usize = 10;

    // Serve each connection simultaneously, up to the `MAX_CONCURRENT` limit.
    fs.for_each_concurrent(MAX_CONCURRENT, |IncomingService::Store(stream)| {
        run_server(stream).unwrap_or_else(|e| println!("{:?}", e))
    })
    .await;

    Ok(())
}

C++ (Natural)

Client

// TODO(https://fxbug.dev/42060656): C++ (Natural) implementation.

Server

// TODO(https://fxbug.dev/42060656): C++ (Natural) implementation.

C++ (Wire)

Client

// TODO(https://fxbug.dev/42060656): C++ (Wire) implementation.

Server

// TODO(https://fxbug.dev/42060656): C++ (Wire) implementation.

HLCPP

Client

// TODO(https://fxbug.dev/42060656): HLCPP implementation.

Server

// TODO(https://fxbug.dev/42060656): HLCPP implementation.

Privacy by design

The client and server in a protocol frequently have access to different sets of sensitive data. Privacy or security problems can be caused by unintentionally leaking more data than necessary over the protocol.

When designing a protocol pay particular attention to fields in your protocol that:

  • Contain personally identifiable information such as names, email addresses, or payment details.
  • Are supplied by the user so potentially contain personal information. Examples include device names and comment fields.
  • Act as a unique identifier that can be correlated across vendors, users, devices, or resets. Examples include serial numbers, MAC addresses, IP addresses, and global account IDs.

These types of fields are reviewed thoroughly and the availability of protocols that include them may be restricted. Make sure that your protocols don't contain more information than is needed.

If a use case for an API requires personal or linkable data and other use cases do not, consider using two different protocols so that access to the more sensitive use case may be controlled separately.

Consider two hypothetical examples that illustrate privacy violations caused by API design choices:

Example 1 - Serial numbers in a peripheral control API

Consider a peripheral control API that includes the serial numbers of USB peripherals. A serial number does not contain personal data but it is a very stable identifier that is easy to correlate. Including the serial number in this API leads to many privacy concerns:

  • Any client with access to the API could correlate the different accounts using the same Fuchsia device.
  • Any client with access to the API could correlate the different personae within an account.
  • Different software vendors could collude to learn whether they are being used by the same users or on the same device.
  • If a peripheral is moved between devices, any client with access to the API could correlate the set of devices and users the peripheral is shared between.
  • If a peripheral is sold, clients with access to the API could correlate the old and new owner of the peripheral.
  • Some manufacturers encode information in their serial numbers. This may let clients with access to the API deduce where or when the user purchased the peripheral.

In this example, the intent of the serial number is to allow clients to detect when the same USB peripheral is reconnected. Meeting this intent does require a stable identifier but it does not require a global identifier. Different clients do not need to receive the same identifier, the same client does not need to receive the same identifier across different Fuchsia devices, and the identifier does not need to remain constant across factory reset events.

In this example, a good alternative is to send an identifier that is only guaranteed to be stable for a single client on a single device. This identifier could potentially be a hash of the peripheral's serial number, the Fuchsia device identifier, and the moniker of the connection.
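
For illustration only, a peripheral API shaped this way might expose an opaque, per-client identifier rather than the raw serial number. The protocol and type names below are invented for this sketch and are not an existing Fuchsia API:

type PeripheralInfo = struct {
    // An identifier that is stable only for this client on this device, for
    // example a hash of the peripheral's serial number, the Fuchsia device
    // identifier, and the client's connection moniker. The raw serial number
    // is never exposed over the protocol.
    local_id array<uint8, 16>;
    vendor_id uint16;
    product_id uint16;
};

protocol PeripheralControl {
    ListPeripherals() -> (struct {
        peripherals vector<PeripheralInfo>:64;
    });
};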

Example 2 - Device names in a device setup API

Consider a device setup API that includes the model of the phone that is used to assist in the setup of a device. In most cases a phone's model string is set by the OEM, but some phones report a user-supplied device name as their model. This leads to many model strings containing the real names or pseudonyms of their users. Therefore, this API risks associating a user across identities or across devices. A rare or pre-release model string might reveal sensitive information even when it isn't supplied by the user.

In some cases, it might be appropriate to use the model string but restrict which clients can access the API. Alternatively, the API could use fields that are never controlled by the user such as the manufacturer string. Another alternative is to sanitize the model string by comparing it to an allowlist of popular phone models and replacing rare model strings with a generic string.

Client-assigned identifiers

Often a protocol will let a client manipulate multiple pieces of state held by the server. When designing an object system, the typical approach to this problem is to create separate objects for each coherent piece of state held by the server. However, when designing a protocol, using separate objects for each piece of state has several disadvantages.

Creating separate protocol instances for each logical object consumes kernel resources because each instance requires a separate channel object. Each instance maintains a separate FIFO queue of messages. Using separate instances for each logical object means that messages sent to different objects can be reordered with respect to each other, leading to out-of-order interactions between the client and the server.

The client-assigned identifier pattern avoids these problems by having the client assign uint32 or uint64 identifiers to objects retained by the server. All the messages exchanged between the client and the server are funnelled through a single protocol instance, which provides a consistent FIFO ordering for the whole interaction.

Having the client (rather than the server) assign the identifiers allows for feed-forward dataflow because the client can assign an identifier to an object and then operate on that object immediately without waiting for the server to reply with the object's identifier. In this pattern, the identifiers are valid only within the scope of the current connection, and typically the zero identifier is reserved as a sentinel. Security note: Clients should not use addresses in their address space as their identifiers because these addresses can leak the layout of their address space.
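
The Canvas example shown earlier under feed-forward dataflow already uses this pattern for its image_id values. As a minimal standalone sketch (the names here are illustrative, not an existing API), a protocol built around client-assigned identifiers might look like:

protocol BufferManager {
    // `buffer_id` is allocated by the client; the zero identifier is reserved
    // as a sentinel meaning "no buffer". Because the client owns the
    // identifier, it can reference the buffer in subsequent requests
    // immediately, without waiting for a reply from the server.
    CreateBuffer(struct {
        buffer_id uint64;
        size_bytes uint32;
    });
    FillBuffer(struct {
        buffer_id uint64;
        color uint32;
    });
    ReleaseBuffer(struct {
        buffer_id uint64;
    });
    Flush() -> (struct {
        code zx.Status;
    });
};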

The client-assigned identifier pattern has some disadvantages. For example, clients are more difficult to author because clients need to manage their own identifiers. Developers commonly want to create a client library that provides an object-oriented facade for the service to hide the complexity of managing identifiers, which itself is an antipattern (see client libraries below).

A strong signal that you should create a separate protocol instance to represent an object rather than using a client-assigned identifier is when you want to use the kernel's object capability system to protect access to that object. For example, if you want a client to be able to interact with an object but you do not want the client to be able to interact with other objects, creating a separate protocol instance means you can use the underlying channel as a capability that controls access to that object.

Command union

In protocols that use feed-forward dataflow, the client often sends many one-way messages to the server before sending a two-way synchronization message. If the protocol involves a particularly high volume of messages, the overhead for sending a message can become noticeable. In those situations, consider using the command union pattern to batch multiple commands into a single message.

In this pattern, the client sends a vector of commands rather than sending an individual message for each command. The vector contains a union of all the possible commands, and the server uses the union tag as the selector for command dispatch in addition to using the method ordinal number:

type PokeCmd = struct {
    x int32;
    y int32;
};

type ProdCmd = struct {
    message string:64;
};

type MyCommand = strict union {
    1: poke PokeCmd;
    2: prod ProdCmd;
};

protocol HighVolumeSink {
    Enqueue(struct {
        commands vector<MyCommand>;
    });
    Commit() -> (struct {
        result MyStatus;
    });
};

Typically, the client buffers the commands locally in its address space and sends them to the server in a batch. The client should flush the batch to the server before hitting the channel's capacity limits in either bytes or handles.

For protocols with even higher message volumes, consider using a ring buffer in a zx.Handle:VMO for the data plane and an associated zx.Handle:FIFO for the control plane. Such protocols place a higher implementation burden on the client and the server but are appropriate when you need maximal performance. For example, the block device protocol uses this approach to optimize performance.
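
As an illustrative sketch of how such a transport might be established (the protocol name is invented here, not an existing API), the shared buffers are typically transferred once, up front, and the per-command traffic then bypasses FIDL messages entirely:

protocol HighestVolumeSink {
    // The client allocates the ring buffer VMO (data plane) and the FIFO
    // (control plane), then transfers them to the server a single time.
    // Subsequent commands are written into the shared memory and signaled
    // over the FIFO rather than being sent as FIDL messages.
    SetUpBuffers(resource struct {
        data_plane zx.Handle:VMO;
        control_plane zx.Handle:FIFO;
    }) -> (struct {
        result MyStatus;
    });
};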

Pagination

FIDL messages are typically sent over channels, which have a maximum message size. In many cases, the maximum message size is sufficient to transmit reasonable amounts of data, but there are use cases for transmitting large (or even unbounded) amounts of data. One way to transmit a large or unbounded amount of information is to use a pagination pattern.

Paginating writes

A simple approach to paginating writes to the server is to let the client send data in multiple messages and then have a "finalize" method that causes the server to process the sent data:

protocol Foo1 {
    AddBars(resource struct {
        bars vector<client_end:Bar>;
    });
    UseTheBars() -> (struct {
        args Args;
    });
};

For example, this pattern is used by fuchsia.process.Launcher to let the client send an arbitrary number of environment variables.

A more sophisticated version of this pattern creates a protocol that represents the transaction, often called a tear-off protocol:

protocol BarTransaction {
    Add(resource struct {
        bars vector<client_end:Bar>;
    });
    Commit() -> (struct {
        args Args;
    });
};

protocol Foo2 {
    StartBarTransaction(resource struct {
        transaction server_end:BarTransaction;
    });
};

This approach is useful when the client might be performing many operations concurrently and breaking the writes into separate messages loses atomicity. Notice that BarTransaction does not need an Abort method. The better approach to aborting the transaction is for the client to close the BarTransaction protocol.

Paginating reads

A simple approach to paginating reads from the server is to let the server send multiple responses to a single request using events:

protocol EventBasedGetter {
    GetBars();
    -> OnBars(resource struct {
        bars vector<client_end:Bar>;
    });
    -> OnBarsDone();
};

Depending on the domain-specific semantics, this pattern might also require a second event that signals when the server is done sending data. This approach works well for simple cases but has a number of scaling problems. For example, the protocol lacks flow control and the client has no way to stop the server if the client no longer needs additional data (short of closing the whole protocol).

A more robust approach uses a tear-off protocol to create an iterator:

protocol BarIterator {
    GetNext() -> (resource struct {
        bars vector<client_end:Bar>;
    });
};

protocol ChannelBasedGetter {
    GetBars(resource struct {
        iterator server_end:BarIterator;
    });
};

After calling GetBars, the client uses protocol request pipelining to queue the first GetNext call immediately. Thereafter, the client repeatedly calls GetNext to read additional data from the server, bounding the number of outstanding GetNext messages to provide flow control. Notice that the iterator need not require a "done" response because the server can reply with an empty vector and then close the iterator when done.

Another approach to paginating reads is to use a token. In this approach, the server stores the iterator state on the client in the form of an opaque token, and the client returns the token to the server with each partial read:

type Token = struct {
    opaque array<uint8, 16>;
};

protocol TokenBasedGetter {
    // If token is null, fetch the first N entries. If token is not null, return
    // the N items starting at token. Returns as many entries as it can in
    // results and populates next_token if more entries are available.
    GetEntries(struct {
        token box<Token>;
    }) -> (struct {
        entries vector<Entry>;
        next_token box<Token>;
    });
};

This pattern is especially attractive when the server can escrow all of its pagination state to the client and therefore no longer need to maintain pagination state at all. The server should document whether the client can persist the token and reuse it across instances of the protocol. Security note: In either case, the server must validate the token supplied by the client to ensure that the client's access is limited to its own paginated results and does not include results intended for another client.

Eventpair correlation

When using client-assigned identifiers, clients identify objects held by the server using identifiers that are meaningful only in the context of their own connection to the server. However, some use cases require correlating objects across clients. For example, in fuchsia.ui.scenic, clients largely interact with nodes in the scene graph using client-assigned identifiers. However, importing a node from another process requires correlating the reference to that node across process boundaries.

The eventpair correlation pattern solves this problem using a feed-forward dataflow by relying on the kernel to provide the necessary security. First, the client that wishes to export an object creates a zx::eventpair and sends one of the entangled events to the server along with its client-assigned identifier of the object. The client then sends the other entangled event to the other client, which forwards the event to the server with its own client-assigned identifier for the now-shared object:

protocol Exporter {
    ExportThing(resource struct {
        client_assigned_id uint32;
        export_token zx.Handle:EVENTPAIR;
    });
};

protocol Importer {
    ImportThing(resource struct {
        some_other_client_assigned_id uint32;
        import_token zx.Handle:EVENTPAIR;
    });
};

To correlate the objects, the server calls zx_object_get_info with ZX_INFO_HANDLE_BASIC and matches the koid and related_koid properties from the entangled event objects.

Eventpair cancellation

When using tear-off protocol transactions, the client can cancel long-running operations by closing the client end of the protocol. The server should listen for ZX_CHANNEL_PEER_CLOSED and abort the transaction to avoid wasting resources.

There is a similar use case for operations that do not have a dedicated channel. For example, the fuchsia.net.http.Loader protocol has a Fetch method that initiates an HTTP request. The server replies to the request with the HTTP response once the HTTP transaction is complete, which might take a significant amount of time. The client has no obvious way to cancel the request short of closing the entire Loader protocol, which might cancel many other outstanding requests.

The eventpair cancellation pattern solves this problem by having the client include one of the entangled events from a zx::eventpair as a parameter to the method. The server then listens for ZX_EVENTPAIR_PEER_CLOSED and cancels the operation when that signal is asserted. Using a zx::eventpair is better than using a zx::event or some other signal, because the zx::eventpair approach implicitly handles the case where the client crashes or otherwise tears down. The kernel generates ZX_EVENTPAIR_PEER_CLOSED when the entangled event retained by the client is destroyed.
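
A minimal sketch of the request shape for this pattern follows; it uses an invented Downloader protocol rather than the real fuchsia.net.http.Loader API, whose actual request type differs:

protocol Downloader {
    // The client keeps the other end of the eventpair. If the client decides
    // to cancel, or crashes, its end is destroyed, the server observes
    // ZX_EVENTPAIR_PEER_CLOSED on `canceler`, and the fetch is aborted.
    Fetch(resource struct {
        url string:4096;
        canceler zx.Handle:EVENTPAIR;
    }) -> (struct {
        response Response;
    });
};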

Empty protocols

Sometimes an empty protocol can provide value. For example, a method that creates an object might also receive a server_end:FooController parameter. The caller provides an implementation of this empty protocol:

protocol FooController {};

The FooController does not contain any methods for controlling the created object, but the server can use the ZX_CHANNEL_PEER_CLOSED signal on the protocol to trigger destruction of the object. In the future, the protocol could potentially be extended with methods for controlling the created object.
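
To sketch how such a parameter might appear in a creation method (the FooFactory protocol and CreateFoo method below are hypothetical, shown only for illustration):

protocol FooFactory {
    CreateFoo(resource struct {
        // The caller keeps the FooController client end. When that end is
        // closed, or the caller crashes, the server observes
        // ZX_CHANNEL_PEER_CLOSED on the server end and destroys the
        // created object.
        controller server_end:FooController;
    });
};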

Controlling settings-like data

Often, servers will expose settings that the client can modify. Prefer using a table to represent such settings. For instance, the fuchsia.accessibility library defines:

type Settings = table {
    1: magnification_enabled bool;
    2: magnification_zoom_factor float32;
    3: screen_reader_enabled bool;
    4: color_inversion_enabled bool;
    5: color_correction ColorCorrection;
    6: color_adjustment_matrix array<float32, 9>;
};

(Comments are omitted for readability.)

There are various ways to provide clients the ability to change these settings.

The partial update approach exposes an Update method taking a partial settings value, and changes fields only if they are present in the partial value.

protocol TheManagerOfSomeSorts {
    /// Description of how the update modifies the behavior.
    ///
    /// Only fields present in the settings value will be changed.
    Update(struct {
        settings Settings;
    }) -> (struct {
        args Args;
    });
};

The replace approach exposes a Replace method taking a complete settings value, and changes the settings to the newly provided one.

protocol TheManagerOfOtherSorts {
    /// Description of how the override modifies the behavior.
    ///
    /// This replaces the setting.
    Replace(struct {
        settings Settings;
    }) -> (struct {
        args Args;
    });
};

Things to avoid:

  • Avoid using the verbs Set or Override for either the partial update or the replace approach, since it is ambiguous which of the two semantics is being offered.

  • Avoid individual methods for updating single settings fields, such as SetMagnificationEnabled. Such individual methods are more burdensome to maintain, and callers rarely want to update a single value.

  • Avoid removing a setting with a magic value, like -1. Instead, indicate removal through the absence of that settings field.

Referring to union variants and table fields

It is often useful to refer to the fields of a type, such as one or more fields of a table, or a specific variant of a union.

Consider an API that provides metadata as a table with many fields. If this metadata can grow quite large, it is often useful to have a mechanism for the recipient to indicate to the sender which fields it will read, so that the sender can avoid sending superfluous fields the recipient will not consider. In such cases, having a parallel bits whose members match one-to-one with the fields of the table can be a strong foundation for your API:

type MetadataFields = flexible bits {
    VERSION = 0b1;
    COUNT = 0b10;
};

type Metadata = table {
    1: version string;
    2: count uint32;
    // ...
};

protocol MetadataGetter {
    Get(struct {
        fields MetadataFields;
    }) -> (struct {
        metadata Metadata;
    });
};

Now, consider the command union. In a complex scenario, the server may want the ability to describe the commands it supports. In such cases, having a parallel enum whose members match one-to-one with the variants of the union can be a strong foundation to build your API:

type MyCommandVariant = strict enum {
    POKE = 1;
    PROD = 2;
    // ...
};

protocol HighVolumeSinkContinued {
    SupportedCommands() -> (struct {
        supported_commands vector<MyCommandVariant>;
    });
};

Note that while it might be tempting to use a bits value to represent the set of commands, this leads to more difficult choices down the line. If your API evolves such that you need to refer to a specific command, an enum fits naturally. Had you started with a bits value instead, you would be faced with one of two bad choices:

  1. Introduce an enum, which means that there are now two ways to refer to fields, and possibly conversion issues in client code (to go from one representation to the other); or

  2. Continue to use the bits with the restriction that only one bit may be set at any given time, which makes mapping back to the specific bit that is set cumbersome.

In summary, for table:

  • Name the bits by the name of the table along with the suffix Fields (plural). Each member value should be the bit at the ordinal index, i.e. 1 << (ordinal - 1).

  • Similarly to the advice for union, you want to match the flexibility between the bits and the table, i.e. since FIDL only supports flexible tables today, the bits must be flexible.

For union:

  • Name the enum listing all variants by the name of the union along with the suffix Variant (singular). Each member value should be the ordinal of the variant it describes, as sketched after this list.

  • Match the flexibility between the union and the enum, i.e. if the union is strict then the enum must also be strict.
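
The following sketch illustrates the union advice; the MyCommand union and its Poke and Prod payload types are assumptions made for illustration, mirroring the MyCommandVariant enum shown earlier:

// Hypothetical command union whose ordinals line up with MyCommandVariant.
// Both declarations are strict, and each enum member value equals the
// ordinal of the corresponding union variant.
type MyCommand = strict union {
    1: poke Poke; // MyCommandVariant.POKE = 1
    2: prod Prod; // MyCommandVariant.PROD = 2
    // ...
};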

Antipatterns

This section describes several antipatterns: design patterns that often provide negative value. Learning to recognize these patterns is the first step towards avoiding their misuse.

Pushed settings: avoid when possible

The Fuchsia platform generally prefers pull semantics. The semantics of components provide an important example; capabilities are assumed to be pulled from components, allowing component startup to be lazy and component shutdown ordering to be inferred from the directed graph of capability routes.

Designs in which a FIDL protocol is used to push configuration from one component to another are tempting because of their apparent simplicity. This arises in cases where component A pushes policy to component B to govern B's business logic. Such a design causes the platform to misunderstand the dependency relationship: A is not automatically started in support of B's function, and A may be shut down before B has completed the execution of its business logic. This in turn leads to workarounds involving weak dependencies along with phantom reverse dependencies to produce the desired behavior; all of this is simpler when policy is pulled rather than pushed.

When possible, prefer a design in which policy is pulled rather than pushed.
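
As an illustration only, a pull-oriented design might have B consume a protocol served by A and pull policy on its own schedule, for example with a hanging get; the PolicyProvider protocol and Policy type below are hypothetical:

protocol PolicyProvider {
    // Hanging get: the first call returns the current policy immediately;
    // subsequent calls block until the policy changes. Component B pulls
    // policy when it needs it, so the platform sees B depend on A.
    WatchPolicy() -> (struct {
        policy Policy;
    });
};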

Client libraries: use with care

Ideally, clients interface with protocols defined in FIDL using language-specific client libraries generated by the FIDL compiler. While this approach lets Fuchsia provide high-quality support for a large number of target languages, sometimes the protocol is too low-level to program directly. In such cases, it's appropriate to provide a hand-written client library that interfaces to the same underlying protocol, but is easier to use correctly.

For example, fuchsia.io has a client library, libfdio.so, which provides a POSIX-like frontend to the protocol. Clients that expect a POSIX-style open/close/read/write interface can link against libfdio.so and speak the fuchsia.io protocol with minimal modification. This client library provides value because the library adapts between an existing library interface and the underlying FIDL protocol.

Another kind of client library that provides positive value is a framework. A framework is an extensive client library that provides a structure for a large portion of the application. Typically, a framework provides a significant amount of abstraction over a diverse set of protocols. For example, Flutter is a framework that can be viewed as an extensive client library for the fuchsia.ui protocols.

FIDL protocols should be fully documented regardless of whether the protocol has an associated client library. An independent group of software engineers should be able to understand and correctly use the protocol directly from its definition, without needing to reverse-engineer the client library. When the protocol has a client library, the aspects of the protocol that are low-level and subtle enough to motivate creating a client library should be documented especially clearly.

The main difficulty with client libraries is that they need to be maintained for every target language, which tends to mean client libraries are missing (or lower quality) for less popular languages. Client libraries also tend to ossify the underlying protocols because they cause every client to interact with the server in exactly the same way. The servers grow to expect this exact interaction pattern and fail to work correctly when clients deviate from the pattern used by the client library.

In order to include the client library in the Fuchsia SDK, we should provide implementations of the library in at least two languages.

Service hubs: use with care

A service hub is a Discoverable protocol that simply lets you discover a number of other protocols, typically with explicit names:

// BAD
@discoverable
protocol ServiceHub {
    GetFoo(resource struct {
        foo server_end:Foo;
    });
    GetBar(resource struct {
        bar server_end:Bar;
    });
    GetBaz(resource struct {
        baz server_end:Baz;
    });
    GetQux(resource struct {
        qux server_end:Qux;
    });
};

Particularly if stateless, the ServiceHub protocol does not provide much value over simply making the individual protocol services discoverable directly:

@discoverable
protocol Foo {};

@discoverable
protocol Bar {};

@discoverable
protocol Baz {};

@discoverable
protocol Qux {};

Either way, the client can establish a connection to the enumerated services. In the latter case, the client can discover the same services through the normal mechanism used throughout the system to discover services. Using the normal mechanism lets the core platform apply appropriate policy to discovery.

However, service hubs can be useful in some situations. For example, if the protocol were stateful or were obtained through some process more elaborate than normal service discovery, then the protocol could provide value by transferring state to the obtained services. As another example, if the methods for obtaining the services take additional parameters, then the protocol could provide value by taking those parameters into account when connecting to the services.
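
As a sketch of that second case (the ScopedServices protocol and its scope parameter are hypothetical), a hub whose getters take parameters that influence how the requested protocol is bound can add value over plain discovery:

@discoverable
protocol ScopedServices {
    GetFoo(resource struct {
        // The server takes this parameter into account when binding Foo,
        // for example to scope the resulting connection.
        scope string:64;
        foo server_end:Foo;
    });
};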

Overly object-oriented design: no

Some libraries create separate protocol instances for every logical object in the protocol, but this approach has a number of disadvantages:

  • Message ordering between the different protocol instances is undefined. Messages sent over a single protocol are processed in FIFO order (in each direction), but messages sent over different channels race. When the interaction between the client and the server is spread across many channels, there is a larger potential for bugs when messages are unexpectedly reordered.

  • Each protocol instance has a cost in terms of kernel resources, waiting queues, and scheduling. Although Fuchsia is designed to scale to large numbers of channels, the costs add up across the system as a whole, and creating a proliferation of channels to model every logical object places a large burden on the system.

  • Error handling and teardown are much more complicated because the number of error and teardown states grows exponentially with the number of protocol instances involved in the interaction. When you use a single protocol instance, both the client and the server can cleanly shut down the interaction by closing the protocol. With multiple protocol instances, the interaction can get into states where it is partially shut down or where the two parties have inconsistent views of the shutdown state.

  • Coordination across protocol boundaries is more complex than within a single protocol because multiple protocols need to allow for the possibility that different protocols will be used by different clients, who might not completely trust each other.

However, there are use cases for separating functionality into multiple protocols:

  • Providing separate protocols can be beneficial for security because some clients might have access to only one of the protocols and thereby be restricted in their interactions with the server.

  • Separate protocols can also more easily be used from separate threads. For example, one protocol might be bound to one thread and another protocol might be bound to another thread.

  • Clients and servers pay a (small) cost for each method in a protocol. Having one giant protocol that contains every possible method can be less efficient than having multiple smaller protocols if only a few of the smaller protocols are needed at a time.

  • Sometimes the state held by the server factors cleanly along method boundaries. In those cases, consider factoring the protocol into smaller protocols along those same boundaries to provide separate protocols for interacting with separate state.

A good way to avoid an overly object-oriented design is to use client-assigned identifiers to model logical objects in the protocol. That pattern lets clients interact with a potentially large set of logical objects through a single protocol.
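
For example, a single protocol can manage many logical objects keyed by a client-assigned identifier rather than vending one protocol instance per object; the Canvas protocol and its shape methods below are a hypothetical sketch:

protocol Canvas {
    // The client chooses shape_id; it is meaningful only within this
    // connection, so the server keeps a per-connection id-to-object map.
    CreateShape(struct {
        shape_id uint32;
    });
    MoveShape(struct {
        shape_id uint32;
        x float32;
        y float32;
    });
    RemoveShape(struct {
        shape_id uint32;
    });
};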

Specifying absence with magic values: no

We typically want to instruct the server to set some state, but also to allow for removing that state. The following example uses magic values to signal removal:

// BAD
protocol View3 {
    ReplaceHitRegion(struct {
        id uint64;
        // Set to { -1, -1 } to remove.
        size fuchsia.math.Rect;
    });
    ReplaceName(struct {
        // Set to "" to remove.
        name string;
    });
};

However, FIDL offers optionality on many data types. Using optionality yields a more idiomatic interface:

protocol View4 {
    ReplaceHitRegion(struct {
        id uint64;
        size box<fuchsia.math.Rect>;
    });
    ReplaceName(struct {
        name string:optional;
    });
};

  1. While the FIDL type system is a structural type system when it comes to ABI (names have no bearing; only the structure of types matters), it has nominal, named-type semantics when it comes to API.