Fuchsia uses Clang as its official compiler.
Prerequisites
You need CMake version 3.13.4 or newer to execute these commands; this is the minimum version required to build LLVM.
While CMake supports different build systems, it is recommended to use Ninja.
Both should be present in your Fuchsia checkout as prebuilts. The commands below assume that cmake and ninja are in your PATH:
export PATH=${FUCHSIA_DIR}/prebuilt/third_party/cmake/${platform}/bin:${PATH}
export PATH=${FUCHSIA_DIR}/prebuilt/third_party/ninja/${platform}/bin:${PATH}
Where ${FUCHSIA_DIR} refers to the root directory of your Fuchsia source tree.
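To confirm that the prebuilt tools are the ones being picked up, you can check the resolved paths and versions (the exact version numbers will vary with your checkout):
which cmake ninja
cmake --version
ninja --version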
Getting Source
The example commands below use ${LLVM_SRCDIR} to refer to the root of your LLVM source tree checkout. You can use the official monorepo https://github.com/llvm/llvm-project maintained by the LLVM community:
LLVM_SRCDIR=${HOME}/llvm/llvm-project
git clone https://github.com/llvm/llvm-project ${LLVM_SRCDIR}
cd ${LLVM_SRCDIR}
git checkout ${REVISION_NUMBER}
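${REVISION_NUMBER} can be a commit hash, branch, or release tag. For example, to check out an upstream release tag (an illustrative choice; substitute whichever revision you need):
git checkout llvmorg-19.1.0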
Fuchsia IDK
To build the runtime libraries that ship with the toolchain, you need a Fuchsia IDK (formerly known as the SDK). The IDK must be located in the directory pointed to by the ${IDK_DIR} variable:
IDK_DIR=${HOME}/fuchsia-idk
To download the latest IDK, you can use the following:
# For Linux
cipd install fuchsia/sdk/core/linux-amd64 latest -root ${IDK_DIR}
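To sanity-check the download, you can list the IDK root; a successful download typically contains a meta/manifest.json describing the IDK contents (an assumption about the IDK layout, which may change between versions):
ls ${IDK_DIR}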
Sysroot for Linux
To include the compiler runtimes and C++ library for Linux, download the sysroot. It must be located in the directory pointed to by the ${SYSROOT_DIR} variable.
SYSROOT_DIR=${FUCHSIA_DIR}/prebuilt/third_party/sysroot
Building a Clang Toolchain for Fuchsia
The Clang CMake build system supports bootstrap (aka multi-stage) builds. Fuchsia uses a two-stage bootstrap build for the Clang compiler. However, for toolchain-related development it is recommended to use the single-stage build.
If your goal is to experiment with Clang, the single-stage build is likely what you are looking for. The first stage compiler is a host-only compiler with some options set that are needed for the second stage. The second stage compiler is the fully optimized compiler intended to ship to users.
Setting up these compilers requires a lot of options. To simplify the configuration, the Fuchsia Clang build settings are contained in CMake cache files, which are part of the Clang codebase (Fuchsia.cmake and Fuchsia-stage2.cmake).
mkdir llvm-build
mkdir llvm-install # For placing stripped binaries here
INSTALL_DIR=$(pwd)/llvm-install
cd llvm-build
Single Stage Build Fuchsia Configuration
When developing Clang for Fuchsia, you can use the cache file to test the Fuchsia configuration, but run only the second stage with LTO disabled. This gives you a faster build time suitable even for incremental development, without having to manually specify all options:
cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug \
-DCMAKE_TOOLCHAIN_FILE=${FUCHSIA_DIR}/scripts/clang/ToolChain.cmake \
-DLLVM_ENABLE_LTO=OFF \
-DLINUX_x86_64-unknown-linux-gnu_SYSROOT=${SYSROOT_DIR}/linux \
-DLINUX_aarch64-unknown-linux-gnu_SYSROOT=${SYSROOT_DIR}/linux \
-DLINUX_riscv64-unknown-linux-gnu_SYSROOT=${SYSROOT_DIR}/ubuntu20.04 \
-DFUCHSIA_SDK=${IDK_DIR} \
-DCMAKE_INSTALL_PREFIX= \
-C ${LLVM_SRCDIR}/clang/cmake/caches/Fuchsia-stage2.cmake \
${LLVM_SRCDIR}/llvm
ninja toolchain-distribution -j1000 # Build the distribution
If the above fails with an error related to Ninja, you may need to add ninja to your PATH. You can find the prebuilt executable at ${FUCHSIA_DIR}/prebuilt/third_party/ninja/${platform}/bin.
ninja toolchain-distribution should be enough for building all binaries, but the Fuchsia build assumes some libraries are stripped, so ninja install-toolchain-distribution-stripped is necessary, as shown below.
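For example, reusing the ${INSTALL_DIR} set up earlier:
DESTDIR=${INSTALL_DIR} ninja install-toolchain-distribution-stripped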
Two-Stage Build Fuchsia Configuration
This is roughly equivalent to what runs on the production builders and is used to build the toolchain that Fuchsia ships to users.
cmake -GNinja \
-DCMAKE_TOOLCHAIN_FILE=${FUCHSIA_DIR}/scripts/clang/ToolChain.cmake \
-DCMAKE_INSTALL_PREFIX= \
-DSTAGE2_LINUX_x86_64-unknown-linux-gnu_SYSROOT=${SYSROOT_DIR}/linux \
-DSTAGE2_LINUX_aarch64-unknown-linux-gnu_SYSROOT=${SYSROOT_DIR}/linux \
-DSTAGE2_LINUX_riscv64-unknown-linux-gnu_SYSROOT=${SYSROOT_DIR}/ubuntu20.04 \
-DSTAGE2_FUCHSIA_SDK=${IDK_DIR} \
-C ${LLVM_SRCDIR}/clang/cmake/caches/Fuchsia.cmake \
${LLVM_SRCDIR}/llvm
ninja stage2-toolchain-distribution -j1000
DESTDIR=${INSTALL_DIR} ninja stage2-install-toolchain-distribution-stripped -j1000
runtime.json
If the Fuchsia build fails due to a missing runtime.json file, you must generate a new runtime.json file by running the following command:
python3 ${FUCHSIA_DIR}/scripts/clang/generate_runtimes.py \
--clang-prefix ${INSTALL_DIR} --sdk-dir ${IDK_DIR} \
--build-id-dir ${INSTALL_DIR}/lib/.build-id > ${INSTALL_DIR}/lib/runtime.json
The generated file contains relative paths used by the Fuchsia build to know where various libraries from the toolchain are located.
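As a quick sanity check, you can verify that the generated file parses as valid JSON:
python3 -m json.tool ${INSTALL_DIR}/lib/runtime.json > /dev/null && echo "runtime.json is valid"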
Putting it All Together
Copy-paste code for building a single-stage toolchain. This code can be run from inside your LLVM build directory and assumes a Linux environment.
cd ${LLVM_BUILD_DIR} # The directory your toolchain will be built in
# Environment setup
FUCHSIA_DIR=${HOME}/fuchsia/ # Replace with wherever Fuchsia lives
LLVM_SRCDIR=${HOME}/llvm/llvm-project # Replace with wherever llvm-project lives
IDK_DIR=${HOME}/fuchsia-idk/
SYSROOT_DIR=${FUCHSIA_DIR}/prebuilt/third_party/sysroot
CLANG_TOOLCHAIN_PREFIX=${FUCHSIA_DIR}/prebuilt/third_party/clang/linux-x64/bin/
INSTALL_DIR=${HOME}/llvm-install # Replace with wherever the stripped toolchain should be installed; used below
# Download necessary dependencies
cipd install fuchsia/sdk/core/linux-amd64 latest -root ${IDK_DIR}
# CMake invocation
cmake -G Ninja -DCMAKE_BUILD_TYPE=Release \
-DCMAKE_TOOLCHAIN_FILE=${FUCHSIA_DIR}/scripts/clang/ToolChain.cmake \
-DLLVM_ENABLE_LTO=OFF \
-DLINUX_x86_64-unknown-linux-gnu_SYSROOT=${SYSROOT_DIR}/linux \
-DLINUX_aarch64-unknown-linux-gnu_SYSROOT=${SYSROOT_DIR}/linux \
-DLINUX_riscv64-unknown-linux-gnu_SYSROOT=${SYSROOT_DIR}/ubuntu20.04 \
-DFUCHSIA_SDK=${IDK_DIR} \
-DCMAKE_INSTALL_PREFIX= \
-C ${LLVM_SRCDIR}/clang/cmake/caches/Fuchsia-stage2.cmake \
${LLVM_SRCDIR}/llvm
# Build and strip binaries and place them in the install directory
ninja toolchain-distribution -j1000
DESTDIR=${INSTALL_DIR} ninja install-toolchain-distribution-stripped -j1000
# Generate runtime.json
python3 ${FUCHSIA_DIR}/scripts/clang/generate_runtimes.py \
--clang-prefix ${INSTALL_DIR} --sdk-dir ${IDK_DIR} \
--build-id-dir ${INSTALL_DIR}/lib/.build-id > ${INSTALL_DIR}/lib/runtime.json
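As a quick smoke test of the freshly installed toolchain, you can query the compiler version:
${INSTALL_DIR}/bin/clang --version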
Building Fuchsia with a Custom Clang
To specify a custom Clang toolchain for building Fuchsia, pass --args clang_prefix=\"${INSTALL_DIR}/bin\" to the fx set command and run fx build:
fx set core.x64 --args=clang_prefix=\"${INSTALL_DIR}/bin\"
fx build
Developing Clang
When developing Clang, you may want to use a setup that is more suitable for incremental development and fast turnaround time.
The simplest way to build LLVM is to use the following commands:
cmake -GNinja \
-DCMAKE_BUILD_TYPE=Debug \
-DLLVM_ENABLE_PROJECTS="clang;clang-tools-extra;lld" \
${LLVM_SRCDIR}/llvm
ninja
You can enable additional projects using the LLVM_ENABLE_PROJECTS variable. To enable all common projects, you would use:
-DLLVM_ENABLE_PROJECTS="clang;clang-tools-extra;lld;compiler-rt;libcxx;libcxxabi;libunwind"
Similarly, you can also enable some projects to be built as runtimes, which means these projects will be built using the just-built compiler rather than the host compiler:
-DLLVM_ENABLE_PROJECTS="clang;clang-tools-extra;lld" \
-DLLVM_ENABLE_RUNTIMES="compiler-rt;libcxx;libcxxabi;libunwind" \
Both LLVM_ENABLE_PROJECTS and LLVM_ENABLE_RUNTIMES are already set in the CMake cache files, so you normally don't need to set them unless you would like to explicitly add more projects or runtimes.
Clang is a large project and compiler performance is absolutely critical. To reduce the build time, it is recommended to use Clang as the host compiler and, if possible, LLD as the host linker. These should ideally be built using LTO and, for the best possible performance, also using Profile-Guided Optimization (PGO).
To set the host compiler to Clang and the host linker to LLD, you can use the following extra flags:
-DCMAKE_C_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang \
-DCMAKE_CXX_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang++ \
-DLLVM_ENABLE_LLD=ON
This assumes that ${CLANG_TOOLCHAIN_PREFIX} points to the bin directory of a Clang installation, with a trailing slash (as this Make variable is used in the Zircon build). For example, to use the compiler from your Fuchsia checkout (on Linux):
CLANG_TOOLCHAIN_PREFIX=${FUCHSIA_DIR}/prebuilt/third_party/clang/linux-x64/bin/
Sanitizers
Most sanitizers can be used on LLVM tools by adding LLVM_USE_SANITIZER=<sanitizer name> to your CMake invocation. MSan is special, however, because some LLVM tools trigger false positives under it. To build with MSan support you first need to build libc++ with MSan support; you can do this in the same build. To set up a build with MSan support, first run CMake with LLVM_USE_SANITIZER=Memory and LLVM_ENABLE_LIBCXX=ON:
cmake -GNinja \
-DCMAKE_BUILD_TYPE=Debug \
-DCMAKE_C_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang \
-DCMAKE_CXX_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang++ \
-DLLVM_ENABLE_PROJECTS="clang;clang-tools-extra;lld;libcxx;libcxxabi;libunwind" \
-DLLVM_USE_SANITIZER=Memory \
-DLLVM_ENABLE_LIBCXX=ON \
-DLLVM_ENABLE_LLD=ON \
${LLVM_SRCDIR}/llvm
Normally you would run Ninja at this point, but we want to build everything using a sanitized version of libc++. If we build now, the tools will use the libc++ from ${CLANG_TOOLCHAIN_PREFIX}, which isn't sanitized. So first build just the cxx and cxxabi targets; these will be used in place of the ones from ${CLANG_TOOLCHAIN_PREFIX} when the tools dynamically link against libc++:
ninja cxx cxxabi
Now that you have a sanitized version of libc++, you can set your build to use it instead of the one from ${CLANG_TOOLCHAIN_PREFIX} and then build everything:
ninja
Putting that all together:
cmake -GNinja \
-DCMAKE_BUILD_TYPE=Debug \
-DCMAKE_C_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang \
-DCMAKE_CXX_COMPILER=${CLANG_TOOLCHAIN_PREFIX}clang++ \
-DLLVM_ENABLE_PROJECTS="clang;clang-tools-extra;lld;libcxx;libcxxabi;libunwind" \
-DLLVM_USE_SANITIZER=Memory \
-DLLVM_ENABLE_LIBCXX=ON \
-DLLVM_ENABLE_LLD=ON \
${LLVM_SRCDIR}/llvm
ninja cxx cxxabi
ninja
Testing Clang
To run Clang tests, you can use the check-<component> target:
ninja check-llvm check-clang
You can also use check-all to run all tests, but keep in mind that this can take a significant amount of time depending on the number of projects you have enabled in your build.
To run only one specific test, you can use the LIT_FILTER environment variable. If the path to the test is clang/test/subpath/testname.cpp, you can use:
LIT_FILTER=testname.cpp ninja check-clang
The same trick can be applied to run tests in other sub-projects by specifying a different check-<component> target.
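For example, to run a single LLD test (testname.s is a hypothetical test name here):
LIT_FILTER=testname.s ninja check-lld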
Downloading Toolchains from CAS
Our Clang toolchain CI builders upload all build artifacts to Content Addressed Storage (CAS), which provides a convenient way to quickly download a specific toolchain without having to build it from scratch. This can greatly speed up investigations into toolchain issues, since you avoid the long LLVM build times on top of building Fuchsia.
Below is an example of how to install the cas tool from scratch and download a specific toolchain into a corpus directory:
$ cipd install infra/tools/luci/cas/linux-amd64 latest -root luci
$ ./luci/cas download -cas-instance chromium-swarm -digest \
ad53e1f315a849955190594fde6b07e11e76b40563db5779fcc69d6a6e04dc71/267 -dir corpus
In the example above, the -digest flag is passed a unique ID, which the cas tool uses to fetch the correct artifacts. The digest can be obtained from Fuchsia's CI builders by selecting the builder you want the toolchain from, expanding the clang -> cas -> archive fields, and then clicking the CAS_UI link.
The resulting page will show you some information about the CAS upload, including the digest.
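Assuming the downloaded artifacts mirror the toolchain install layout with a top-level bin directory (the exact layout depends on the builder), you can then invoke the downloaded toolchain directly:
./corpus/bin/clang --version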
Useful CMake Flags
There are many other CMake flags that are useful for building; the following may be particularly useful for toolchain builds.
-DLLVM_PARALLEL_LINK_JOBS
Increases the number of link jobs that can run in parallel (locally). The appropriate number of link jobs depends on available RAM; for an LTO build you will need at least 10GB per job.
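For example, following the guideline above, an LTO build on a machine with 32GB of RAM could cap itself at three parallel link jobs:
-DLLVM_PARALLEL_LINK_JOBS=3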
Additional Resources
Talks:
- 2016 LLVM Developers’ Meeting: C. Bieneman "Developing and Shipping LLVM and Clang with CMake"
- 2017 LLVM Developers’ Meeting: Petr Hosek "Compiling cross-toolchains with CMake and runtimes build"