This page has moved to https://docs.px4.io/master/en/uavcan/node_firmware.html.
Vectorcontrol ESC Codebase (Pixhawk ESC 1.6 and S2740VC)
Download the ESC code:
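A minimal sketch, assuming the upstream thiemar/vectorcontrol repository (check the project page for the exact clone URL):

```sh
git clone https://github.com/thiemar/vectorcontrol
cd vectorcontrol
```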
Flashing the UAVCAN Bootloader
Before updating firmware via UAVCAN, the Pixhawk ESC 1.6 requires the UAVCAN bootloader be flashed. To build the bootloader, run:
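A minimal sketch, assuming the repository Makefile provides a bootloader target (the exact target name is defined by the vectorcontrol build system and may differ):

```sh
make clean
make bootloader
```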
After building, the bootloader image is located at firmware/px4esc_1_6-bootloader.bin, and the OpenOCD configuration is located at openocd_px4esc_1_6.cfg. Follow these instructions to install the bootloader on the ESC.
Compiling the Main Binary
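A minimal sketch, assuming the default make target builds the node firmware for both boards (check the repository README for the exact invocation):

```sh
make
```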
This will build the UAVCAN node firmware for both supported ESCs. The firmware images will be located at com.thiemar.s2740vc-v1-1.0-1.0.<git hash>.bin and org.pixhawk.px4esc-v1-1.6-1.0.<git hash>.bin.
Sapog Codebase (Pixhawk ESC 1.4 and Zubax Orel 20)
Download the Sapog codebase:
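A minimal sketch, assuming the PX4/sapog repository; the --recursive flag is an assumption in case the project uses git submodules:

```sh
git clone --recursive https://github.com/PX4/sapog
cd sapog
```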
Flashing the UAVCAN Bootloader
Before updating firmware via UAVCAN, the ESC requires the UAVCAN bootloader to be flashed. The bootloader can be built as follows:
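A minimal sketch, assuming the bootloader has its own Makefile in the bootloader/ subdirectory (consistent with the output path given below):

```sh
cd bootloader
make
```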
The bootloader image is located at bootloader/firmware/bootloader.bin, and the OpenOCD configuration is located at openocd.cfg. Follow these instructions to install the bootloader on the ESC.
Compiling the Main Binary
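A minimal sketch, assuming the application is built with make from the firmware/ subdirectory (consistent with the output path given below); check the repository README for the exact options:

```sh
cd firmware
make
```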
Beware, some newer versions of GCC lead to segfaults during linking. Version 4.9 worked at the time of writing.

The firmware image will be located at firmware/build/io.px4.sapog-1.1-1.7.<xxxxxxxx>.application.bin, where <xxxxxxxx> is an arbitrary sequence of numbers and letters. There are two hardware versions of the Zubax Orel 20 (1.0 and 1.1). Make sure you copy the binary to the correct folder in the subsequent description. The ESC firmware will check the hardware version and works on both products.
Zubax GNSS
Please refer to the project page to learn how to build and flash the firmware. Zubax GNSS comes with a UAVCAN-capable bootloader, so its firmware can be updated in a uniform fashion via UAVCAN as described below.
Firmware Installation on the Autopilot
The UAVCAN node file names follow a naming convention which allows the Pixhawk to update all UAVCAN devices on the network, regardless of manufacturer. The firmware files generated in the steps above must therefore be copied to the correct locations on an SD card or the PX4 ROMFS in order for the devices to be updated.
The convention for firmware image names is:
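Inferred from the example below and the ROMFS layout in the following sections, the convention appears to be:

```
<node name>-<hardware version>-<software version>.<git hash>.bin
```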
e.g. com.thiemar.s2740vc-v1-1.0-1.0.68e34de6.bin
However, due to space/performance constraints (names may not exceed 28 characters), the UAVCAN firmware updater requires those filenames to be split and stored in a nested directory structure.
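Inferred from the ROMFS paths listed further below, the split layout appears to be:

```
<node name>/<hardware version>/<short name>-<hardware version>.<git hash>.bin
```

e.g. com.thiemar.s2740vc-v1/1.0/s2740vc-v1-1.0.68e34de6.bin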
The ROMFS-based updater follows that pattern, but prepends the file name with _, so the firmware is added at the locations listed in the next section.
Placing the binaries in the PX4 ROMFS
The resulting final file locations are:
- S2740VC ESC:
ROMFS/px4fmu_common/uavcan/fw/com.thiemar.s2740vc-v1/1.0/_s2740vc-v1-1.0.<git hash>.bin
- Pixhawk ESC 1.6:
ROMFS/px4fmu_common/uavcan/fw/org.pixhawk.px4esc-v1/1.6/_px4esc-v1-1.6.<git hash>.bin
- Pixhawk ESC 1.4:
ROMFS/px4fmu_common/uavcan/fw/org.pixhawk.sapog-v1/1.4/_sapog-v1-1.4.<git hash>.bin
Alternatively, UAVCAN firmware upgrading can be started manually from the NSH console.
PX4 Docker Containers
Docker containers are provided for the complete PX4 development toolchain including NuttX and Linux based hardware, Gazebo simulation and ROS.
This topic shows how to use the available docker containers to access the build environment in a local Linux computer.
Dockerfiles and README can be found on Github here. They are built automatically on Docker Hub.
Prerequisites
PX4 containers are currently only supported on Linux (if you don't have Linux you can run the container inside a virtual machine). Do not use boot2docker with the default Linux image because it contains no X-Server.

Install Docker for your Linux computer, preferably using one of the Docker-maintained package repositories to get the latest stable version. You can use either the Enterprise Edition or (free) Community Edition.
For local installation of non-production setups on Ubuntu, the quickest and easiest way to install Docker is to use the convenience script as shown below (alternative installation methods are found on the same page):
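A sketch using Docker's standard convenience script:

```sh
curl -fsSL get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```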
The default installation requires that you invoke Docker as the root user (i.e. using sudo). If you would like to use Docker as a non-root user, you can optionally add the user to the 'docker' group and then log out/in, as sketched below.
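A minimal sketch (the 'docker' group is created by the standard installation):

```sh
# add the current user to the docker group, then log out/in for it to take effect
sudo usermod -aG docker $USER
```

Container Hierarchy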
The available containers are listed below (from Github):
| Container | Description |
| --- | --- |
| px4-dev-base | Base setup common to all containers |
| px4-dev-nuttx | NuttX toolchain |
| px4-dev-simulation | NuttX toolchain + simulation (jMAVSim, Gazebo) |
| px4-dev-ros | NuttX toolchain, simulation + ROS (incl. MAVROS) |
| px4-dev-raspi | Raspberry Pi toolchain |
| px4-dev-snapdragon | Qualcomm Snapdragon Flight toolchain |
| px4-dev-clang | Clang tools |
| px4-dev-nuttx-clang | Clang and NuttX tools |

The most recent version can be accessed using the latest tag: px4io/px4-dev-ros:latest (available tags are listed for each container on hub.docker.com; for example, the px4-dev-ros tags can be found here). Typically you should use a recent container, but not necessarily the latest (as this changes too often).
Use the Docker Container
The following instructions show how to build PX4 source code on the host computer using a toolchain running in a docker container. The information assumes that you have already downloaded the PX4 source code to src/Firmware, as shown:
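A sketch of getting the source, assuming the (legacy) PX4/Firmware repository location:

```sh
mkdir -p ~/src
cd ~/src
git clone https://github.com/PX4/Firmware.git
cd Firmware
```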
Helper Script (docker_run.sh)
The easiest way to use the containers is via the docker_run.sh helper script. This script takes a PX4 build command as an argument (e.g. make tests). It starts up Docker with a recent (hard-coded) version of the appropriate container and sensible environment settings. For example, to build SITL you would call (from within the Firmware directory):
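A sketch, assuming the legacy posix_sitl_default target (SITL target names vary between PX4 versions):

```sh
./Tools/docker_run.sh 'make posix_sitl_default'
```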
Or to start a bash session using the NuttX toolchain:
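For example (with the same hedging as above regarding the script's behaviour):

```sh
./Tools/docker_run.sh 'bash'
```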
The script is convenient because you don't need to know much about Docker or think about which container to use. However, it is not particularly robust! The manual approach discussed in the section below is more flexible and should be used if you have any problems with the script.
Calling Docker Manually
The syntax of a typical command is shown below. This runs a Docker container that has support for X forwarding (which makes the simulation GUI available from inside the container). It maps the directory <host_src> from your computer to <container_src> inside the container and forwards the UDP port needed to connect QGroundControl. With the --privileged option the container automatically has access to the devices on your host (e.g. a joystick and GPU). If you connect/disconnect a device you have to restart the container.
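The following is a sketch assembled from the options described here; the X11 bind-mount, DISPLAY value and UDP port are illustrative and may need adjusting for your setup:

```sh
docker run -it --privileged \
    --env=LOCAL_USER_ID="$(id -u)" \
    -v <host_src>:<container_src>:rw \
    -v /tmp/.X11-unix:/tmp/.X11-unix:ro \
    -e DISPLAY=:0 \
    -p 14556:14556/udp \
    --name=<local_container_name> <container>:<tag> <build_command>
```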
Where:
- <host_src>: The host computer directory to be mapped to <container_src> in the container. This should normally be the Firmware directory.
- <container_src>: The location of the shared (source) directory when inside the container.
- <local_container_name>: A name for the docker container being created. This can later be used if we need to reference the container again.
- <container>:<tag>: The container with version tag to start, e.g. px4io/px4-dev-ros:2017-10-23.
- <build_command>: The command to invoke on the new container, e.g. bash is used to open a bash shell in the container.
The concrete example below shows how to open a bash shell and share the directory ~/src/Firmware on the host computer.
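A sketch with illustrative values; the in-container path, container name and image tag are assumptions (the tag matches the example given above):

```sh
docker run -it --privileged \
    --env=LOCAL_USER_ID="$(id -u)" \
    -v ~/src/Firmware:/src/firmware/:rw \
    -v /tmp/.X11-unix:/tmp/.X11-unix:ro \
    -e DISPLAY=:0 \
    -p 14556:14556/udp \
    --name=mycontainer px4io/px4-dev-ros:2017-10-23 bash
```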
If everything went well you should be in a new bash shell now. Verify if everything works by running, for example, SITL:
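A sketch, assuming the source was mounted at /src/firmware as in the example above and that the legacy SITL/Gazebo target is available in your PX4 version:

```sh
cd /src/firmware    # this is <container_src>
make posix_sitl_default gazebo
```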
Re-enter the Container
The docker run command can only be used to create a new container. To get back into this container (which will retain your changes), simply restart it and attach a shell, as sketched below. If you need multiple shells connected to the container, just open a new shell and run the docker exec command again.
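A sketch, reusing the example container name mycontainer from above (the name is whatever you passed to --name):

```sh
# restart the stopped container
docker start mycontainer
# open an interactive shell in the running container
docker exec -it mycontainer bash
```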
Clearing the Container
Sometimes you may need to clear a container altogether. You can do so using its name:
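For example, with the container name used above:

```sh
docker rm mycontainer
```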
If you can't remember the name, then you can list inactive container ids and then delete them, as shown below:
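For example, using standard docker commands:

```sh
# list the IDs of all containers, including stopped ones
docker ps -a -q
# remove a specific container by ID
docker rm <container_id>
```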
QGroundControl
When running a simulation instance e.g. SITL inside the docker container and controlling it via QGroundControl from the host, the communication link has to be set up manually. The autoconnect feature of QGroundControl does not work here.
In QGroundControl, navigate to Settings and select Comm Links. Create a new link that uses the UDP protocol. The port depends on the configuration used, e.g. port 14557 for the SITL iris config. The IP address is that of your docker container, usually 172.17.0.1/16 when using the default network.
Troubleshooting
Permission Errors
The container creates files as needed with a default user - typically 'root'. This can lead to permission errors where the user on the host computer is not able to access files created by the container.
The example above uses the line --env=LOCAL_USER_ID="$(id -u)" to create a user in the container with the same UID as the user on the host. This ensures that all files created within the container will be accessible on the host.

Graphics Driver Issues
It's possible that running Gazebo will fail with a graphics-driver related error message.
In that case the native graphics driver for your host system must be installed. Download the right driver and install it inside the container. For Nvidia drivers the following command should be used (otherwise the installer will see the loaded modules from the host and refuse to proceed):
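A sketch, assuming an NVIDIA .run installer has already been downloaded into the container (the file name is a placeholder):

```sh
./NVIDIA-DRIVER.run -a --ui=none --no-kernel-module
```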
More information on this can be found here.
Virtual Machine Support
Any recent Linux distribution should work.
The following configuration is tested:
- OS X with VMWare Fusion and Ubuntu 14.04 (Docker containers with GUI support on Parallels make the X-Server crash).
Memory
Use at least 4GB memory for the virtual machine.
Compilation problems
If compilation fails with errors, try disabling parallel builds.
Allow Docker Control from the VM Host
Edit /etc/defaults/docker and add a line that exposes the Docker daemon on a TCP socket; you can then control docker from your host OS. Sketches of both steps follow.
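Both snippets below are sketches; DOCKER_OPTS is the variable read by the legacy /etc/defaults/docker configuration, and the port and IP address are placeholders:

```sh
# inside the VM, in /etc/defaults/docker: make the daemon also listen on TCP
DOCKER_OPTS="-D -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375"
```

```sh
# on the host OS: point the docker client at the VM
export DOCKER_HOST=tcp://<ip of your VM>:2375
docker ps
```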
Legacy
The ROS multiplatform containers are not maintained anymore: https://github.com/PX4/containers/tree/master/docker/ros-indigo