Prerequisites
This tutorial builds on the C++ getting started tutorials.
Overview
A common aspect of using FIDL on Fuchsia is passing protocol endpoints across protocols. Many FIDL messages include either the client end or the server end of a channel, where the channel is used to communicate over a different FIDL protocol. In this case, the client end allows making requests to the specified protocol, whereas the server end must implement the specified protocol. An alternate set of terms for client end and server end are protocol and protocol request.
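In the C++ bindings, these two ends are represented by the fidl::ClientEnd<Protocol> and fidl::ServerEnd<Protocol> types. As a minimal sketch (using the fuchsia.examples.Echo protocol that appears later in this tutorial):

// Create both ends of a channel that will speak the Echo protocol.
// |client_end| is a fidl::ClientEnd<fuchsia_examples::Echo>: it can be bound to a
// fidl::Client to make requests, or sent in a FIDL message to another component.
// |server_end| is a fidl::ServerEnd<fuchsia_examples::Echo>: it must be bound to an
// implementation of Echo (for example with fidl::BindServer), or sent to a component
// that will do so.
auto [client_end, server_end] = fidl::Endpoints<fuchsia_examples::Echo>::Create();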
This tutorial covers:
- The usage of these client and server ends, both in FIDL and in the C++ FIDL bindings.
- The protocol request pipelining pattern and its benefits.
The full example code for this tutorial is located at //examples/fidl/cpp/request_pipelining.
The FIDL protocol
This tutorial implements the EchoLauncher protocol from the fuchsia.examples library:
@discoverable
closed protocol EchoLauncher {
    strict GetEcho(struct {
        echo_prefix string:MAX_STRING_LENGTH;
    }) -> (resource struct {
        response client_end:Echo;
    });

    strict GetEchoPipelined(resource struct {
        echo_prefix string:MAX_STRING_LENGTH;
        request server_end:Echo;
    });
};
This is a protocol that lets clients retrieve an instance of the Echo protocol. Clients can specify a prefix, and the resulting Echo instance adds that prefix to every response.
There are two methods that can be used to accomplish this:
- GetEcho: Takes the prefix as a request, and responds with the client end of a channel connected to an implementation of the Echo protocol. After receiving the client end in the response, the client can start making requests on the Echo protocol using the client end.
- GetEchoPipelined: Takes the server end of a channel as one of the request parameters and binds an implementation of Echo to it. The client that made the request is assumed to already hold the client end, and will start making Echo requests on that channel after calling GetEchoPipelined.
As the name suggests, the latter uses a pattern called protocol request pipelining, and is the preferred approach. This tutorial implements both approaches.
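To preview the difference from the client's point of view, here is a rough sketch, assuming an already-connected fidl::Client named launcher for EchoLauncher and an async dispatcher (the complete client code appears later in this tutorial):

// Non-pipelined: the Echo client end arrives in the GetEcho response, so the client
// cannot talk to the new Echo instance until the reply comes back.
launcher->GetEcho({"prefix: "})
    .ThenExactlyOnce([&](fidl::Result<fuchsia_examples::EchoLauncher::GetEcho>& result) {
      ZX_ASSERT(result.is_ok());
      fidl::ClientEnd<fuchsia_examples::Echo> echo_client_end = std::move(result->response());
      // Only now can |echo_client_end| be bound to a client and used to make Echo requests.
    });

// Pipelined: the client creates both ends itself, sends the server end in the request,
// and can queue Echo requests on the client end immediately, without waiting for a reply.
auto [client_end, server_end] = fidl::Endpoints<fuchsia_examples::Echo>::Create();
ZX_ASSERT(launcher->GetEchoPipelined({"prefix: ", std::move(server_end)}).is_ok());
fidl::Client echo(std::move(client_end), dispatcher);
echo->EchoString({"hello!"})
    .ThenExactlyOnce([](fidl::Result<fuchsia_examples::Echo::EchoString>& result) {
      // Handle the echoed response here.
    });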
Implement the server
Implement the Echo protocol
This implementation of Echo allows specifying a prefix in order to distinguish between the different instances of Echo servers:
// Implementation of the Echo protocol that prepends a prefix to every response.
class EchoImpl final : public fidl::Server<fuchsia_examples::Echo> {
 public:
  explicit EchoImpl(std::string prefix) : prefix_(prefix) {}
  // This method is not used in the request pipelining example, so requests are ignored.
  void SendString(SendStringRequest& request, SendStringCompleter::Sync& completer) override {}
  void EchoString(EchoStringRequest& request, EchoStringCompleter::Sync& completer) override {
    FX_LOGS(INFO) << "Got echo request for prefix " << prefix_;
    completer.Reply(prefix_ + request.value());
  }

  const std::string prefix_;
};
The SendString handler is empty as the client just uses EchoString.
Implement the EchoLauncher protocol
// Implementation of EchoLauncher. Each method creates an instance of EchoImpl
// with the specified prefix.
class EchoLauncherImpl final : public fidl::Server<fuchsia_examples::EchoLauncher> {
 public:
  explicit EchoLauncherImpl(async_dispatcher_t* dispatcher) : dispatcher_(dispatcher) {}

  void GetEcho(GetEchoRequest& request, GetEchoCompleter::Sync& completer) override {
    FX_LOGS(INFO) << "Got non-pipelined request";
    auto [client_end, server_end] = fidl::Endpoints<fuchsia_examples::Echo>::Create();
    fidl::BindServer(dispatcher_, std::move(server_end),
                     std::make_unique<EchoImpl>(request.echo_prefix()));
    completer.Reply(std::move(client_end));
  }

  void GetEchoPipelined(GetEchoPipelinedRequest& request,
                        GetEchoPipelinedCompleter::Sync& completer) override {
    FX_LOGS(INFO) << "Got pipelined request";
    fidl::BindServer(dispatcher_, std::move(request.request()),
                     std::make_unique<EchoImpl>(request.echo_prefix()));
  }

  async_dispatcher_t* dispatcher_;
};
For GetEcho, the code first needs to instantiate both ends of the channel. It then launches an Echo instance using the server end, and sends a response back with the client end. For GetEchoPipelined, the client has already done the work of creating both ends of the channel. It keeps one end and has passed the other to the server, so all the code needs to do is bind the server end to a new EchoImpl.
Serve the EchoLauncher protocol
The main loop is the same as in the
server tutorial but serves an EchoLauncher
instead of Echo
.
int main(int argc, char** argv) {
  async::Loop loop(&kAsyncLoopConfigNeverAttachToThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();

  component::OutgoingDirectory outgoing = component::OutgoingDirectory(dispatcher);
  zx::result result = outgoing.ServeFromStartupInfo();
  if (result.is_error()) {
    FX_LOGS(ERROR) << "Failed to serve outgoing directory: " << result.status_string();
    return -1;
  }

  result = outgoing.AddUnmanagedProtocol<fuchsia_examples::EchoLauncher>(
      [dispatcher](fidl::ServerEnd<fuchsia_examples::EchoLauncher> server_end) {
        FX_LOGS(INFO) << "Incoming connection for "
                      << fidl::DiscoverableProtocolName<fuchsia_examples::EchoLauncher>;
        fidl::BindServer(dispatcher, std::move(server_end),
                         std::make_unique<EchoLauncherImpl>(dispatcher));
      });
  if (result.is_error()) {
    FX_LOGS(ERROR) << "Failed to add EchoLauncher protocol: " << result.status_string();
    return -1;
  }

  FX_LOGS(INFO) << "Running echo launcher server" << std::endl;
  loop.Run();
  return 0;
}
Build the server
Optionally, to check that things are correct, try building the server:
Configure your GN build to include the server:
fx set core.x64 --with //examples/fidl/cpp/request_pipelining/server:echo-server
Build the Fuchsia image:
fx build
Implement the client
After connecting to the EchoLauncher
server, the client
code connects to one instance of Echo
using GetEcho
and another using
GetEchoPipelined
and then makes an EchoString
request on each instance.
Non-pipelined client
Here is the client code; the non-pipelined flow is the GetEcho request and its nested callbacks:
int main(int argc, const char** argv) {
  async::Loop loop(&kAsyncLoopConfigNeverAttachToThread);
  async_dispatcher_t* dispatcher = loop.dispatcher();
  int num_responses = 0;

  // Connect to the EchoLauncher protocol
  zx::result launcher_client_end = component::Connect<fuchsia_examples::EchoLauncher>();
  ZX_ASSERT(launcher_client_end.is_ok());
  fidl::Client launcher(std::move(*launcher_client_end), dispatcher);

  // Make a non-pipelined request to get an instance of Echo
  launcher->GetEcho({"non pipelined: "})
      .ThenExactlyOnce([&](fidl::Result<fuchsia_examples::EchoLauncher::GetEcho>& result) {
        ZX_ASSERT(result.is_ok());
        // Take the Echo client end in the response, bind it to another client, and
        // make an EchoString request on it.
        fidl::SharedClient echo(std::move(result->response()), dispatcher);
        echo->EchoString({"hello!"})
            .ThenExactlyOnce(
                // Clone |echo| into the callback so that the client
                // is only destroyed after we receive the response.
                [&, echo = echo.Clone()](fidl::Result<fuchsia_examples::Echo::EchoString>& result) {
                  ZX_ASSERT(result.is_ok());
                  FX_LOGS(INFO) << "Got echo response " << result->response();
                  if (++num_responses == 2) {
                    loop.Quit();
                  }
                });
      });

  auto [client_end, server_end] = fidl::Endpoints<fuchsia_examples::Echo>::Create();
  // Make a pipelined request to get an instance of Echo
  ZX_ASSERT(launcher->GetEchoPipelined({"pipelined: ", std::move(server_end)}).is_ok());
  // A client can be initialized using the client end without waiting for a response
  fidl::Client echo_pipelined(std::move(client_end), dispatcher);
  echo_pipelined->EchoString({"hello!"})
      .ThenExactlyOnce([&](fidl::Result<fuchsia_examples::Echo::EchoString>& result) {
        ZX_ASSERT(result.is_ok());
        FX_LOGS(INFO) << "Got echo response " << result->response();
        if (++num_responses == 2) {
          loop.Quit();
        }
      });

  loop.Run();
  return num_responses == 2 ? 0 : 1;
}
This code has two layers of callbacks:
- The outer layer handles the launcher GetEcho response.
- The inner layer handles the EchoString response.
Inside the GetEcho response callback, the code binds the returned client end to a fidl::SharedClient<Echo>, and places a clone into the EchoString callback so that the client's lifetime is extended until the echo response is received, which will most likely be after the top-level callback returns.
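To isolate that pattern, here is a sketch (assuming a connected client_end for Echo and a dispatcher) of keeping a SharedClient alive by moving a clone into its own result callback:

// Without the clone, |echo| would be destroyed when the enclosing scope exits,
// tearing down the connection before the response arrives. Moving a clone into the
// callback keeps the client alive until the response has been handled.
fidl::SharedClient echo(std::move(client_end), dispatcher);
echo->EchoString({"hello!"})
    .ThenExactlyOnce(
        [echo = echo.Clone()](fidl::Result<fuchsia_examples::Echo::EchoString>& result) {
          // The cloned |echo| is destroyed here, after the response has been handled.
        });
// The original |echo| may now go out of scope; the clone keeps the connection open.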
Pipelined client
Despite having to create a pair of endpoints first, the pipelined code is much simpler:
auto [client_end, server_end] = fidl::Endpoints<fuchsia_examples::Echo>::Create();
// Make a pipelined request to get an instance of Echo
ZX_ASSERT(launcher->GetEchoPipelined({"pipelined: ", std::move(server_end)}).is_ok());
// A client can be initialized using the client end without waiting for a response
fidl::Client echo_pipelined(std::move(client_end), dispatcher);
echo_pipelined->EchoString({"hello!"})
    .ThenExactlyOnce([&](fidl::Result<fuchsia_examples::Echo::EchoString>& result) {
      ZX_ASSERT(result.is_ok());
      FX_LOGS(INFO) << "Got echo response " << result->response();
      if (++num_responses == 2) {
        loop.Quit();
      }
    });
In contrast to the client tutorial, the async loop is run to completion only once, which runs the non-pipelined and pipelined code concurrently so that you can observe the order of operations. The client keeps track of the number of responses received so that it can quit the loop once no more messages from the server are expected.
Build the client
Optionally, to check that things are correct, try building the client:
Configure your GN build to include the client:
fx set core.x64 --with //examples/fidl/cpp/request_pipelining/client:echo-client
Build the Fuchsia image:
fx build
Run the example code
For this tutorial, a realm component is provided to declare the appropriate capabilities and routes for fuchsia.examples.Echo and fuchsia.examples.EchoLauncher.
Configure your build to include the provided package that includes the echo realm, server, and client:
fx set core.x64 --with //examples/fidl/cpp/request_pipelining
Build the Fuchsia image:
fx build
Run the echo_realm component. This creates the client and server component instances and routes the capabilities:
ffx component run /core/ffx-laboratory:echo_realm fuchsia-pkg://fuchsia.com/echo-launcher-cpp#meta/echo_realm.cm
Start the echo_client instance:
ffx component start /core/ffx-laboratory:echo_realm/echo_client
The server component starts when the client attempts to connect to the EchoLauncher protocol. You should see output similar to the following in the device logs (ffx log):
[echo_server][I] Running echo launcher server
[echo_server][I] Incoming connection for fuchsia.examples.EchoLauncher
[echo_server][I] Got non-pipelined request
[echo_server][I] Got pipelined request
[echo_server][I] Got echo request for prefix pipelined:
[echo_server][I] Got echo request for prefix non pipelined:
[echo_client][I] Got echo response pipelined: hello!
[echo_client][I] Got echo response non pipelined: hello!
Based on the print order, you can see that the pipelined case is faster: the echo response for the pipelined case arrives first, even though the non-pipelined request is sent first, because request pipelining saves a round trip between the client and server. Request pipelining also simplifies the code.
For further reading about protocol request pipelining, including how to handle protocol requests that may fail, see the FIDL API rubric.
Terminate the realm component to stop execution and clean up the component instances:
ffx component destroy /core/ffx-laboratory:echo_realm