Prerequisites
This tutorial builds on the HLCPP getting started tutorials.
Overview
A common aspect of using FIDL on Fuchsia is passing protocols themselves across protocols. More precisely, many messages include either the client end or the server end of a channel, where the channel is used to communicate over a specific protocol. In this case, client end means that the remote end of the channel implements the specified protocol, whereas server end means that the remote end is making requests for the specified protocol. An alternate set of terms for client end and server end are protocol and protocol request.
This tutorial covers:
- The usage of these client and server ends, both in FIDL and in the HLCPP FIDL bindings.
- The request pipelining pattern and its benefits.
The full example code for this tutorial is located at
//examples/fidl/hlcpp/request_pipelining
.
The FIDL protocol
To demonstrate these concepts, this tutorial implements the EchoLauncher
protocol from the fuchsia.examples library:
@discoverable
closed protocol EchoLauncher {
    strict GetEcho(struct {
        echo_prefix string:MAX_STRING_LENGTH;
    }) -> (resource struct {
        response client_end:Echo;
    });

    strict GetEchoPipelined(resource struct {
        echo_prefix string:MAX_STRING_LENGTH;
        request server_end:Echo;
    });
};
This is a protocol that lets clients retrieve an instance of the Echo
protocol. Clients can specify a prefix, and the resulting Echo instance
adds that prefix to every response. For example, an instance launched with the
prefix "pipelined: " responds to an EchoString request of "hello!" with
"pipelined: hello!".
There are two methods that can be used to accomplish this:
- GetEcho: Takes the prefix as a request, and responds with the client end of a channel connected to an implementation of the Echo protocol. After receiving the client end in the response, the client can start making requests on the Echo protocol using the client end.
- GetEchoPipelined: Takes the server end of a channel as one of the request parameters and binds an implementation of Echo to it. The client that made the request is assumed to already hold the client end, and will start making Echo requests on that channel after calling GetEchoPipelined.
As the name suggests, the latter uses a pattern called protocol request pipelining, and is the preferred approach. This tutorial implements both approaches.
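In the HLCPP bindings, a client_end:Echo in a FIDL message corresponds to a
fidl::InterfaceHandle<fuchsia::examples::Echo>, and a server_end:Echo corresponds to a
fidl::InterfaceRequest<fuchsia::examples::Echo>. As a rough sketch (not part of the
example code), the two ends of an Echo channel can be created together like this:

// Sketch only: creating both ends of an Echo channel in HLCPP.
// The InterfaceHandle (client end) is what GetEcho returns in its response;
// the InterfaceRequest (server end) is what GetEchoPipelined accepts as a parameter.
fidl::InterfaceHandle<fuchsia::examples::Echo> client_end;
fidl::InterfaceRequest<fuchsia::examples::Echo> server_end = client_end.NewRequest();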
Implement the server
Implement the Echo protocol
This implementation of Echo
allows specifying a prefix in order to
distinguish between the different instances of Echo
servers:
class EchoImpl : public fuchsia::examples::Echo {
 public:
  explicit EchoImpl(std::string prefix) : prefix_(prefix) {}
  void EchoString(std::string value, EchoStringCallback callback) override {
    std::cout << "Got echo request for prefix " << prefix_ << std::endl;
    callback(prefix_ + value);
  }
  void SendString(std::string value) override {}

  const std::string prefix_;
};
The SendString
handler is empty as the client just uses EchoString
.
Implement the EchoLauncher protocol
This class uses
a binding set to keep track of all of the instances of Echo
that it launches:
class EchoLauncherImpl : public fuchsia::examples::EchoLauncher {
 public:
  void GetEcho(std::string echo_prefix, GetEchoCallback callback) override {
    std::cout << "Got non pipelined request" << std::endl;
    fidl::InterfaceHandle<fuchsia::examples::Echo> client_end;
    fidl::InterfaceRequest<fuchsia::examples::Echo> server_end = client_end.NewRequest();
    bindings_.AddBinding(std::make_unique<EchoImpl>(echo_prefix), std::move(server_end));
    callback(std::move(client_end));
  }

  void GetEchoPipelined(std::string echo_prefix,
                        fidl::InterfaceRequest<fuchsia::examples::Echo> server_end) override {
    std::cout << "Got pipelined request" << std::endl;
    bindings_.AddBinding(std::make_unique<EchoImpl>(echo_prefix), std::move(server_end));
  }

  fidl::BindingSet<fuchsia::examples::Echo, std::unique_ptr<fuchsia::examples::Echo>> bindings_;
};
The code explicitly specifies not just the protocol that the binding set is
templated on, but also the pointer type of the bindings that it stores. The code uses
unique_ptr
instead of raw pointers so that the binding set owns the
instances of EchoImpl
.
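For comparison, here is a minimal sketch (not from the example) of the two possible
declarations; by default, fidl::BindingSet stores raw pointers and does not own the
implementations it binds:

// Default form: the binding set holds raw Echo* pointers, so something else
// must keep each implementation alive.
fidl::BindingSet<fuchsia::examples::Echo> unowned_bindings_;
// Form used in this example: the binding set holds unique_ptrs and therefore
// owns each EchoImpl instance it binds.
fidl::BindingSet<fuchsia::examples::Echo, std::unique_ptr<fuchsia::examples::Echo>> bindings_;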
Here is how the two methods are implemented:
For GetEcho
, the code first needs to instantiate both ends of the
channel. It creates a Binding
using the server end, and then sends a response
back with the client end. For GetEchoPipelined
, the client has already done
the work of creating both ends of the channel. It keeps one end and has passed
the other to the server, so all the code needs to do is bind it to an Echo
implementation.
Serve the EchoLauncher protocol
The main loop is the same as in the
server tutorial but serves an EchoLauncher
instead of Echo
.
int main(int argc, const char** argv) {
  async::Loop loop(&kAsyncLoopConfigAttachToCurrentThread);

  EchoLauncherImpl impl;
  fidl::Binding<fuchsia::examples::EchoLauncher> binding(&impl);
  fidl::InterfaceRequestHandler<fuchsia::examples::EchoLauncher> handler =
      [&](fidl::InterfaceRequest<fuchsia::examples::EchoLauncher> request) {
        binding.Bind(std::move(request));
      };
  auto context = sys::ComponentContext::CreateAndServeOutgoingDirectory();
  context->outgoing()->AddPublicService(std::move(handler));

  std::cout << "Running echo launcher server" << std::endl;
  return loop.Run();
}
Build the server
Optionally, to check that things are correct, try building the server:
Configure your GN build to include the server:
fx set core.x64 --with //examples/fidl/hlcpp/request_pipelining/server:echo-server
Build the Fuchsia image:
fx build
Implement the client
After connecting to the EchoLauncher
server, the client
code connects to one instance of Echo
using GetEcho
and another using
GetEchoPipelined
and then makes an EchoString
request on each instance.
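The connection setup itself is not shown in the snippets below; as a sketch, assuming
the standard HLCPP client pattern from the earlier tutorials, it looks roughly like this:

// Sketch of the setup, assuming the usual HLCPP client boilerplate:
// connect an EchoLauncher proxy through the component's service directory.
fuchsia::examples::EchoLauncherPtr echo_launcher;
auto context = sys::ComponentContext::Create();
context->svc()->Connect(echo_launcher.NewRequest());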
This is the non-pipelined code:
fuchsia::examples::EchoPtr echo;
auto callback = [&](fidl::InterfaceHandle<fuchsia::examples::Echo> client_end) {
  std::cout << "Got non pipelined response\n";
  echo.Bind(std::move(client_end));
  echo->EchoString("hello!", [&](std::string response) {
    std::cout << "Got echo response " << response << "\n";
    if (++num_responses == 2) {
      loop.Quit();
    }
  });
};
echo_launcher->GetEcho("not pipelined: ", std::move(callback));
This code has two layers of callbacks:
- The outer layer handles the launcher request.
- The inner layer handles the
EchoString
request.
Also, the code instantiates the EchoPtr in the outer scope and then Binds it
inside of the callback, instead of calling fidl::InterfaceHandle<T>::Bind on the
received client end. This is because the proxy needs to be in scope when the echo
response is received, which will most likely be after the top level callback returns.
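For illustration, a hypothetical variant that binds a proxy locally inside the callback
would drop the response, because the proxy (and its channel) would be destroyed as soon
as the callback returns:

// Hypothetical, broken variant: |local_echo| only lives inside the callback.
auto broken_callback = [&](fidl::InterfaceHandle<fuchsia::examples::Echo> client_end) {
  fuchsia::examples::EchoPtr local_echo = client_end.Bind();
  local_echo->EchoString("hello!", [&](std::string response) {
    // Most likely never runs: |local_echo| is destroyed when the outer callback
    // returns, closing the channel before the echo response arrives.
  });
};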
Despite having to initialize the channels, the pipelined code is much simpler:
fuchsia::examples::EchoPtr echo_pipelined;
echo_launcher->GetEchoPipelined("pipelined: ", echo_pipelined.NewRequest());
echo_pipelined->EchoString("hello!", [&](std::string response) {
  std::cout << "Got echo response " << response << "\n";
  if (++num_responses == 2) {
    loop.Quit();
  }
});
Build the client
Optionally, to check that things are correct, try building the client:
Configure your GN build to include the client:
fx set core.x64 --with //examples/fidl/hlcpp/request_pipelining/client:echo-client
Build the Fuchsia image:
fx build
Run the example code
For this tutorial, a
realm
component is
provided to declare the appropriate capabilities and routes for
fuchsia.examples.Echo
and fuchsia.examples.EchoLauncher
.
Configure your build to include the provided package that includes the echo realm, server, and client:
fx set core.x64 --with //examples/fidl/hlcpp:echo-launcher-hlcpp
Build the Fuchsia image:
fx build
Run the echo_realm component. This creates the client and server component instances and routes the capabilities:
ffx component run /core/ffx-laboratory:echo_realm fuchsia-pkg://fuchsia.com/echo-launcher-hlcpp#meta/echo_realm.cm
Start the echo_client instance:
ffx component start /core/ffx-laboratory:echo_realm/echo_client
The server component starts when the client attempts to connect to the
EchoLauncher
protocol. You should see output similar to the following
in the device logs (ffx log
):
[echo_server][][I] Running echo launcher server
[echo_server][][I] Got non pipelined request
[echo_server][][I] Got pipelined request
[echo_server][][I] Got echo request for prefix pipelined:
[echo_client][][I] Got non pipelined response
[echo_client][][I] Got echo response pipelined: hello!
[echo_server][][I] Got echo request for prefix not pipelined:
[echo_client][][I] Got echo response not pipelined: hello!
Based on the print order, you can see that the pipelined case is faster. The echo response for the pipelined case arrives first, even though the non pipelined request is sent first, since request pipelining saves a roundtrip between the client and server. Request pipelining also simplifies the code.
For further reading about protocol request pipelining, including how to handle protocol requests that may fail, see the FIDL API rubric.
Terminate the realm component to stop execution and clean up the component instances:
ffx component destroy /core/ffx-laboratory:echo_realm