ROS2

ROS2 Foxy Fitzroy Install Guide

NOTE: ROS2 is new, but we suggest using it over ROS1, as ROS1 will be deprecated in the near future. You may run into issues with the ROS2 section of this Gitbook; if you do, please email landon.haugh@nxp.com if external, or use Teams/email if internal. Also note that MAVROS is not compatible with ROS2: microRTPS and PX4 ROS Com replace MAVROS.

Follow the guide at the link below to install ROS2 Foxy Fitzroy on your NavQ running the Demo image:

https://index.ros.org/doc/ros2/Installation/Foxy/Linux-Install-Debians/#setup-sources

Note 1: At the "Setup Sources" step, curl may return an error. To avoid this, run the following commands first:

sudo rm -rf /usr/lib/libcurl*
sudo apt install curl

Note 2: At the "Install ROS2 packages" step, run the ros-foxy-ros-base installer, as the Desktop tools are not needed on NavQ:

sudo apt install ros-foxy-ros-base

Building and Installing FastRTPS for ROS2 communication to the FMU

FastRTPS and the microRTPS Agent

FastRTPS and the microRTPS agent are needed on NavQ in order to bridge uORB topics from PX4 to ROS2 over a UART or UDP connection. Follow the guide below to build and install these packages.

NOTE: FastRTPS and PX4 ROS Com work differently from MAVROS (ROS1): PX4 ROS Com subscribes to uORB topics rather than MAVLink messages. See below for a diagram of how microRTPS and PX4 ROS Com work.

Follow the link below for more details on microRTPS and PX4 ROS Com:

https://docs.px4.io/master/en/middleware/micrortps.html

Installing FastRTPS and PX4 ROS Com on NavQ

Prerequisites

Install the build tools and Python dependencies:

~$ sudo apt update
~$ sudo apt install cmake python3-pip gradle python3-colcon-common-extensions
~$ pip3 install --user pyros-genmsg

FastRTPS installation

First, we will build and install the FastRTPS project from eProsima, followed by its code generator, Fast-RTPS-Gen. Use the following commands to do so:

~$ mkdir src && cd src
~/src$ git clone --recursive https://github.com/eProsima/Fast-RTPS.git -b 1.8.x FastRTPS-1.8.2
~/src$ cd FastRTPS-1.8.2
~/src/FastRTPS-1.8.2$ mkdir build
~/src/FastRTPS-1.8.2$ cd build
~/src/FastRTPS-1.8.2/build$ cmake -DTHIRDPARTY=ON -DSECURITY=ON ..
~/src/FastRTPS-1.8.2/build$ make
~/src/FastRTPS-1.8.2/build$ sudo make install
cd ~/src
~/src$ git clone --recursive https://github.com/eProsima/Fast-RTPS-Gen.git -b v1.0.4 Fast-RTPS-Gen
~/src$ cd Fast-RTPS-Gen
~/src/Fast-RTPS-Gen$ unset TERM
~/src/Fast-RTPS-Gen$ ./gradlew assemble
~/src/Fast-RTPS-Gen$ sudo su
~/src/Fast-RTPS-Gen# unset TERM
~/src/Fast-RTPS-Gen# ./gradlew install
~/src/Fast-RTPS-Gen# exit
~/src/Fast-RTPS-Gen$

px4_ros_com installation

Next, we will build and install the software that allows us to use ROS2 to communicate with the microRTPS bridge. First, run the following commands:

$ cd ~/
~$ mkdir -p ~/px4_ros_com_ros2/src
~$ git clone https://github.com/PX4/px4_ros_com.git ~/px4_ros_com_ros2/src/px4_ros_com
~$ git clone https://github.com/PX4/px4_msgs.git ~/px4_ros_com_ros2/src/px4_msgs

URGENT: Building px4_ros_com requires a lot of RAM, so enabling a swap file is highly recommended. This will take up 1GB of space on your storage medium.

Run the following commands to enable a 1GB swapfile:

$ sudo fallocate -l 1G /swapfile
$ sudo chmod 600 /swapfile
$ sudo mkswap /swapfile
$ sudo swapon /swapfile
$ sudo vim /etc/fstab
(insert the line: /swapfile swap swap defaults 0 0)
$ sudo swapon --show
(make sure swap is active)
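You can also double-check that the swap space is registered using free, part of the standard procps tools:

$ free -h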

Now, build the workspace:

~$ ./px4_ros_com_ros2/src/px4_ros_com/scripts/build_ros2_workspace.bash

NOTE: This will take a long time to build on NavQ; in our experience, anywhere from 45 minutes to an hour. Make sure you have a stable connection to NavQ over UART or SSH, and do not let the NavQ lose power!

Sourcing ROS2 bash files

In order to run all of your ROS2 software successfully, you must source the install/setup.bash files in each of your ROS2 workspace folders. Add the following lines to your .bashrc to do so:

source /opt/ros/foxy/setup.bash
source ~/px4_ros_com_ros2/install/setup.bash
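One way to append those lines from the command line, assuming the default home directory layout used above, is:

echo "source /opt/ros/foxy/setup.bash" >> ~/.bashrc
echo "source ~/px4_ros_com_ros2/install/setup.bash" >> ~/.bashrc
source ~/.bashrc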

Next steps

Continue to the next page to set up a systemd service that will automatically start the microRTPS agent on your NavQ. The guide will also cover how to automatically start the client on the FMU.


Auto-start microRTPS client/agent on FMU/NavQ

Creating a systemd service to auto-start the microRTPS agent on NavQ

Generate a startup script for the micrortps agent under /usr/local/bin:

sudo nano /usr/local/bin/start_micrortps_agent.sh

with the following content:

#!/bin/bash
## startup script for micro_rtps_agent
## agent will communicate to FMUK66 via UDP
## FMUK66 IPv4 addr = 10.0.0.2
##
## Author: Gerald Peklar <gerald.peklar@nxp.com>

source /opt/ros/foxy/setup.bash
source ~/px4_ros_com_ros2/install/setup.bash

# Comment out the line that you are not using:

# If you're using T1 Ethernet communication:
micrortps_agent -t UDP -i 10.0.0.2

# If you're using UART communication over the UART3 port:
micrortps_agent -d /dev/ttymxc2 -b 921600

Save the file and exit nano. Make the file executable:

sudo chmod +x /usr/local/bin/start_micrortps_agent.sh

Generate a systemd service file to run the startup script at boot:

sudo nano /etc/systemd/system/micrortps_agent.service

with the following content:

[Unit]
Description=PX4 micrortps service
After=network.target

[Service]
Restart=always
TimeoutStartSec=10
User=navq
Group=navq
WorkingDirectory=/home/navq
ExecStart=/usr/local/bin/start_micrortps_agent.sh

[Install]
WantedBy=multi-user.target

Save the file and exit nano. Check that the service starts:

sudo systemctl start micrortps_agent.service
sudo systemctl status micrortps_agent.service

You should see the state active (running); quit the status view with <q>. Finally, enable the service so that it starts at boot:

sudo systemctl enable micrortps_agent.service
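If the service does not reach the active (running) state, you can inspect its output with journalctl, the standard systemd log tool:

sudo journalctl -u micrortps_agent.service -f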

Auto-start the microRTPS client on the FMU

NOTE: To run FastRTPS over Ethernet with the NavQ board, the RDDRONE-T1ADAPT 100BASE-T1 Ethernet adapter is needed.

Building PX4 with microRTPS

NOTE: You will need a Linux VM or computer to complete this step.

In order to use the microRTPS client on NavQ, you'll need to build PX4 with the _rtps tag for the fmuk66-v3 build target. To do this, you will need to have both the FastRTPS and Fast-RTPS-Gen packages installed; you can follow the previous guide on your Linux development VM or computer.

Once you have successfully installed those two packages, navigate to your cloned PX4 repository and run the following:

$ make nxp_fmuk66-v3_rtps

Flashing your FMU with the updated binary

You will need to flash your FMU with the updated RTPS binary. If you don't know how to do this yet, follow the guide here:

Creating a startup file on the SD card

To make the microRTPS client start at boot on the FMU, you will need to have an SD card inserted. On your SD card, create a file at /etc/extras.txt and insert one of the following options:

set +e
# For T1 Ethernet communication:
micrortps_client start -t UDP -i <NavQ_IP_Address>

# For UART communication over the TELEM port:
micrortps_client start -d /dev/ttyS4 -b 921600

# For UART communication over the IR/TELM2 port:
micrortps_client start -d /dev/ttyS1 -b 921600
set -e

NOTE: Calling set +e at the beginning and set -e at the end is needed to prevent boot errors. Further details can be found at https://dev.px4.io/master/en/concept/system_startup.html#replacing-the-system-startup
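As a convenience, here is a minimal sketch of writing the UDP variant of that file from a Linux machine; the mount point /media/$USER/sdcard is an assumption, so substitute wherever your SD card is actually mounted:

# Assumption: the FMU's SD card is mounted at /media/$USER/sdcard
mkdir -p /media/$USER/sdcard/etc
cat > /media/$USER/sdcard/etc/extras.txt << 'EOF'
set +e
micrortps_client start -t UDP -i <NavQ_IP_Address>
set -e
EOF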


Detecting AprilTags with ROS2

[WORK IN PROGRESS]

Overview

NOTE: This guide is currently a work in progress. Some details may not be finished.

In this section, we will guide you through the process needed to detect AprilTags on your NavQ. There are a few things that need to be done to accomplish this:

  1. Install ROS2 image tools

  2. Build and install AprilTag detection nodes

  3. Calibrate the camera on your NavQ using a checkerboard pattern

Prerequisites

Before we start, you will need a few things:

Checkerboard

To create a checkerboard for camera calibration, download this PDF:

https://www.mrpt.org/downloads/camera-calibration-checker-board_9x7.pdf

Desktop Setup

In order to calibrate the camera, you will need to set up your NavQ with a mouse, keyboard, and monitor. Use the included microUSB hub and HDMI connector to do so.

Installing required ROS2 software

Install the following packages with the apt package manager by running the commands below:

$ sudo apt install ros-foxy-cv-bridge \
ros-foxy-image-tools \
ros-foxy-image-transport \
ros-foxy-image-transport-plugins \
ros-foxy-image-pipeline \
ros-foxy-camera-calibration-parsers \
ros-foxy-camera-info-manager \
ros-foxy-launch-testing-ament-cmake

Once that is finished, move on to the next step.

Calibrating the camera

NOTE: This section is a consolidation of the specific commands for the NavQ. If you run into any issues with this section of the guide, email landon.haugh@nxp.com and refer to the official guide:

https://navigation.ros.org/tutorials/docs/camera_calibration.html

Hook up your NavQ to a monitor with the provided HDMI cord and connect a USB mouse and keyboard through the included microUSB hub. Open the terminal by clicking the icon at the top left of the screen and start a bash shell by running:

$ bash

Have your printed checkerboard ready in a well-lit environment and run the camera calibration software with the following commands:

# Start publishing camera images to ROS2 topic /camera/image_raw
$ ros2 run image_tools cam2image --ros-args -p device_id:=0 -p width:=640 -p height:=480 -r /image:=/camera/image_raw > /dev/null 2>&1 &
# Start the camera calibration software
$ ros2 run camera_calibration cameracalibrator --size 7x9 --square 0.02

Now use the link in the note above to run through calibrating the camera.

Detecting AprilTags

Building apriltag_msgs

A prerequisite for the apriltag_ros node is apriltag_msgs. Clone the repo and build it by running these commands:

$ git clone https://github.com/christianrauch/apriltag_msgs
$ cd apriltag_msgs
$ colcon build

Make sure to source the resulting install/setup.bash file so that apriltag_ros can find apriltag_msgs when it is built.
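One way to do that, assuming you cloned apriltag_msgs into your home directory as in the commands above, is to add the source line to your .bashrc:

$ echo "source /home/navq/apriltag_msgs/install/setup.bash" >> ~/.bashrc
$ source ~/.bashrc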

Building the apriltag_ros node

First, in order to detect AprilTags, we need to build the apriltag_ros node written by christianrauch. You can clone his repository with this command:

$ git clone https://github.com/christianrauch/apriltag_ros

To make his repo work with ROS2 Foxy, you will need to make a small change in the CMakeLists.txt file. Go to line 26 of that file and delete the apriltag:: token in the AprilTagNode apriltag::apriltag part.
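If you prefer to make the edit from the shell, a one-liner like this should work (an assumption here is that the apriltag::apriltag token appears only in that one spot):

$ sed -i 's/apriltag::apriltag/apriltag/' CMakeLists.txt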

Next, you'll want to save that file and run colcon build in the apriltag_ros folder. Once it is done building, you'll want to source the install/setup.bash file. Add this line to your .bashrc:

source /home/navq/<apriltag_ros folder>/install/setup.bash

Creating a new package to concatenate camera information to each camera frame

In order to make the apriltag_ros node work, we need to make sure that camera info messages are being sent in sync with each camera frame published by the cam2image node. We have written an example node that does just that. You can download it here:

py_pysub.zip (86KB) - Full Camera Republish Node Workspace

You will need to replace the matrices in the node file to match your camera calibration parameters. The source file is located at py_pysub/py_pysub/publisher_member_function.py. Once you have done that, make sure to build and install the node and source the install/setup.bash file.

Running the code

To run the code, you'll need to run the following ROS nodes:

$ ros2 run image_tools cam2image --ros-args -p device_id:=0 -p width:=640 -p height:=480 -r /image:=/camera_image > /dev/null 2>&1 &
$ ros2 run py_pysub talker > /dev/null 2>&1 &
$ ros2 launch apriltag_ros tag_16h5_all.launch.py --ros-args -p image_transport:=raw > apriltag_log.txt 2>&1 &
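To confirm that detections are flowing, you can list the active topics and echo the detections topic; its exact name depends on the launch file, so <detections_topic> below is a placeholder:

$ ros2 topic list
$ ros2 topic echo <detections_topic>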

Package Management

To use the package manager (apt) on the Demo image, you'll need to change your timezone.

First, you'll need to locate the correct timezone file at /usr/share/zoneinfo. There should be a folder for your country and a file in that folder for the closest city to you.
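If you're unsure which file to use, you can browse the directory, for example:

$ ls /usr/share/zoneinfo
$ ls /usr/share/zoneinfo/America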

$ sudo rm -f /etc/localtime
$ sudo ln -sf /usr/share/zoneinfo/<country>/<city> /etc/localtime

For example, if you're in Central Time USA, you'd use the following commands:

$ sudo rm -f /etc/localtime
$ sudo ln -sf /usr/share/zoneinfo/America/Chicago /etc/localtime

Now, you can run sudo apt update and sudo apt upgrade to get your system up to date.

OpenCV

With OpenCV on NavQ, you will be able to harness a vast library of computer vision tools for use in HoverGames. OpenCV is installed out of the box on the HoverGames-BSP image and can be installed easily through the package manager on the HoverGames-Demo image. If you'd like to get a jump start on OpenCV, follow the guide below to create a program that detects red objects.

Quick Example

Let's go through a quick example of running OpenCV on the NavQ to identify a red object in an image taken with the Google Coral camera. This example is written in Python and uses OpenCV.

Installing OpenCV

If you are using the default OS that is shipped with the NavQ, you can skip this step.

If you're using the HoverGames-Demo image, you'll need to install python3-opencv. To do so, run the following command in your terminal:

$ sudo apt install python3-opencv

Imports

First, create a new Python source file (name it whatever you want!). We only need two imports for this program: OpenCV (cv2) and NumPy. NumPy is used to create arrays of HSV values.

import cv2
import numpy as np

Capturing an image

To capture an image, we must first open a camera object and then read from it.

# Open camera and capture an image from it
cap = cv2.VideoCapture('v4l2src ! video/x-raw,width=640,height=480 ! decodebin ! videoconvert ! appsink', cv2.CAP_GSTREAMER)
ret,frame = cap.read()

Downsizing the image

To make our OpenCV pipeline run faster, we're going to shrink our image down to 640x480 resolution. This resolution isn't so small that the reduced image quality will make a difference in detecting objects, but it will let OpenCV process our image much more quickly.

Another pre-processing step that we will run is a box blur. This removes small artifacts in the image that could throw off our pipeline and makes detecting large objects much easier.

# Resize to make processing faster
frame = cv2.resize(frame, (640,480), interpolation = cv2.INTER_AREA)

# Blur image to make contours easier to find
radius = 10
ksize = int(2 * round(radius) + 1)
image = cv2.blur(frame, (ksize, ksize))

Color filtering

In order to find red objects in our image, we will apply an HSV filter to the image to create a binary mask of the red regions.

# Convert to HSV color space for filtering
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Filter out all colors except red
lower_red = np.array([0,87,211])
upper_red = np.array([36,255,255])

# Create binary mask to detect objects/contours
mask = cv2.inRange(hsv, lower_red, upper_red)

NOTE: The lower_red and upper_red values were found using a program called GRIP. GRIP is a GUI program for OpenCV with tons of great features, including code generation. To check GRIP out, visit its website.

Finding contours

To find the location of the objects in our image, we will find contours in the mask and sort them by area. This lets us filter out smaller objects we aren't interested in, detect each object's position in the image, and draw a box around it.

# Find contours and sort using contour area
cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
for c in cnts:
    # Once we hit smaller contours, stop the loop
    if(cv2.contourArea(c) < 100):
        break

    # Draw bounding box around contours and write "Red Object" text
    x,y,w,h = cv2.boundingRect(c)
    cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,0),2)
    font = cv2.FONT_HERSHEY_SIMPLEX
    cv2.putText(frame,'Red Object', (x,y), font, 1, (0, 255, 0), 2, cv2.LINE_AA)

Storing the generated images

Finally, we will store the images we generated from this program: the mask and the final image with annotations (bounding box and text).

# Write images to disk for debugging
cv2.imwrite('thresh.png', mask)
cv2.imwrite('image.png', frame)

# Close camera
cap.release()

Running the code

To run the code, you'll need to use python3. Run the following command (<file.py> is the filename that you saved the code to):

$ python3 <file.py>

Source code

Here is the complete source code if you'd like to run it on your own NavQ as an example:

# Landon Haugh (NXP) 2020

import cv2
import numpy as np

# Open camera and capture an image from it
cap = cv2.VideoCapture(0)
ret,frame = cap.read()

# Resize to make processing faster
frame = cv2.resize(frame, (640,480), interpolation = cv2.INTER_AREA)

# Blur image to make contours easier to find
radius = 10
ksize = int(2 * round(radius) + 1)
image = cv2.blur(frame, (ksize, ksize))

# Convert to HSV color space for filtering
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Filter out all colors except red
lower_red = np.array([0,87,211])
upper_red = np.array([36,255,255])

# Create binary mask to detect objects/contours
mask = cv2.inRange(hsv, lower_red, upper_red)

# Find contours and sort using contour area
cnts = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
for c in cnts:
    # Once we hit smaller contours, stop the loop
    if(cv2.contourArea(c) < 100):
        break

    # Draw bounding box around contours and write "Red Object" text
    x,y,w,h = cv2.boundingRect(c)
    cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,0),2)
    font = cv2.FONT_HERSHEY_SIMPLEX
    cv2.putText(frame,'Red Object', (x,y), font, 1, (0, 255, 0), 2, cv2.LINE_AA)

# Write images to disk for debugging
cv2.imwrite('thresh.png', mask)
cv2.imwrite('image.png', frame)

# Close camera
cap.release()

Ad-Hoc Streaming using Mobile Hotspot

Configuring Windows

Step 1 - Enable Mobile Hotspot

You must have a WiFi adapter in your Laptop/PC to follow this guide.

To enable Mobile Hotspot on Windows, go to Settings->Network & Internet->Mobile Hotspot. Next, you'll want to edit your mobile hotspot settings to set a password and SSID. Once you've done this, you can enable Mobile Hotspot. You can see a full configuration in the screenshot below.

Step 2 - Enable Port 5000/5600 in Firewall

By default, ports 5000 and 5600 are not open in the Windows firewall, so any UDP stream packets will be blocked. To fix this, go to your Windows search bar, type "Firewall", and select "Windows Defender Firewall".

Once you open Windows Defender Firewall, you'll want to navigate to "Advanced Settings" from the menu on the left.

You will then be brought to a new window with Windows Firewall rules. To create a new rule for QGC streaming, you'll need to click "New Rule" on the right side.

You will be brought to a new window to add a rule. Select "Program" and click "Next".

At the next window, it will ask you to specify the program you are adding a rule for. Paste the following into that field and click "Next":

%ProgramFiles%\QGroundControl\QGroundControl.exe

Once you've done this, you can click "Next" through the rest of the fields and you should be good to go.

On the page that tells you to name your rule, just name it "QGroundControl".

Step 3 - Connect NavQ to new Mobile Hotspot

To connect your NavQ to your new Mobile Hotspot, follow the connecting to WiFi guide in the Gitbook here:

Step 4 - Stream to QGroundControl

Now you can stream to QGroundControl as you normally would. Follow the guide here:

Configuring Ubuntu

Step 1 - Enable Wifi Hotspot

To enable a WiFi hotspot in Ubuntu 20.04, you'll first need to go to Settings->WiFi. Then, at the top right, click the three-dot button and select "Turn On Wi-Fi Hotspot...".

After you click that entry, a window will pop up. Enter a network name and password, and you should be good to go! Follow Steps 3 and 4 in the Windows section above to configure your NavQ.

Controlling your drone from NavQ using MAVROS

MAVLink / MAVROS

The 8MMNavQ can control your HoverGames drone by communicating with the RDDRONE-FMUK66 over MAVROS. A UART cable will be included in the kit that connects the UART3 port on the 8MMNavQ to the TELEM2 port on the RDDRONE-FMUK66.

NOTE: This page is for ROS1 only. MAVLink and MAVROS are deprecated for ROS2 applications; ROS2 uses microRTPS and PX4 ROS Com in place of MAVROS.

NOTICE: When running the off-board script, make sure that you confirm the landing zone for your drone in QGroundControl. The local position parameter in the offboard ROS node is set to x:0, y:0, z:2, which means it will hover at 2 meters above its landing zone. If the drone takes off from a position away from its landing zone, it will quickly return to its landing zone and hover 2 meters above it. This is especially important to note if you turn the drone on indoors and then place it somewhere outside to take off. We don't want your drone to smack into a building!

Prerequisites

Set up TELEM2 on the FMU

Connect to your FMU over USB and open QGroundControl. Navigate to Settings -> Parameters -> MAVLink and set these parameters:

Also, you'll need to make sure that the settings in Settings -> Parameters -> Serial look like this:
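The original page shows these settings as screenshots, which are not reproduced here. As a hedged reference (these are the standard PX4 companion-computer values; verify against your own QGroundControl parameter list), they are typically:

MAV_1_CONFIG = TELEM 2
MAV_1_MODE = Onboard
SER_TEL2_BAUD = 921600 8N1

The MAVLink instance and baud rate must match the roslaunch command shown later (fcu_url:='/dev/ttymxc2:921600').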

Offboard control guide

MAVROS Offboard node example

A coding guide for the ROS node we will be using is located at the link below.

https://dev.px4.io/v1.9.0/en/ros/mavros_offboard.html

This guide will help you install the ROS node outlined in the MAVROS Offboard Example.

Setting up your development environment

To start, you'll want to make sure that you have already set up a development environment for ROS. ROS has a guide on how to get a catkin workspace set up at the link below.

http://wiki.ros.org/ROS/Tutorials/InstallingandConfiguringROSEnvironment

Once you've completed that tutorial, you may want to add an extra line to your ~/.bashrc so that your devel/setup.bash is always sourced when you open a new terminal:

$ echo "source /home/<user>/catkin_ws/devel/setup.bash" >> ~/.bashrc

This will ensure that your development environment is properly set up when you open a new shell.

Installing MAVROS specific packages

Follow the "binary installation" guide on the page below to install the necessary MAVROS packages from apt.

Make sure to use 'noetic' in place of 'kinetic' in the commands they give you on this page. Also, you do NOT need to follow the "Source Installation" section of the guide.

Creating a new package

To create our first ROS package, we will want to navigate to our catkin workspace's src folder and run the following command:

$ catkin_create_pkg offb roscpp mavros_msgs geometry_msgs

This command creates a new package folder named offb and adds the dependencies roscpp, mavros_msgs, and geometry_msgs to the CMakeLists.txt and package.xml files. Next, take the code from the PX4 MAVROS example and create a file named offb_node.cpp in the src/ folder of the offb package. Your directory structure should now look like this:

navq@imx8mmnavq:~/catkin_ws/src/offb$ tree
.
├── CMakeLists.txt
├── include
│   └── offb
├── package.xml
└── src
    └── offb_node.cpp

3 directories, 3 files
navq@imx8mmnavq:~/catkin_ws/src/offb$

Editing CMakeLists

In order to build your ROS package, you'll need to make some edits to CMakeLists.txt so the catkin build system knows where your source files are. Two edits need to be made.

The first edit is to add your executable to CMakeLists. Your source file should be named offb_node.cpp. Uncomment line 136 to add it:

136 add_executable(${PROJECT_NAME}_node src/offb_node.cpp)

The second edit is to link your target libraries (roscpp, mavros_msgs, and geometry_msgs). Uncomment lines 149-151 to do so:

149 target_link_libraries(${PROJECT_NAME}_node
150   ${catkin_LIBRARIES}
151 )

And that's all you need to do for now to set up your workspace!

Building your ROS node

To build your ROS node, return to the root of your catkin_ws/ directory and run:

$ catkin_make && catkin_make install

Running your ROS node

To run our ROS node, we need to make sure that MAVROS is running. On the NavQ, run the following command:

$ roslaunch mavros px4.launch fcu_url:='/dev/ttymxc2:921600' &

This starts roscore and the mavros node pointed at the UART port /dev/ttymxc2 at a 921600 baud rate. To run the ROS node we created, run the following in an SSH terminal:

$ rosrun offb offb_node &

and your drone should take off to an altitude of 2 meters!
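If the node starts but nothing happens, one quick sanity check (using the standard MAVROS state topic) is to confirm the FCU link is up:

$ rostopic echo /mavros/state

The connected field should read True once MAVROS is talking to the FMU.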


Software Support

We have pages for several common software packages. Click the links below or follow the guide on the left of your screen.

  • Package Management
  • ROS1
  • GStreamer
  • OpenCV
  • pyeIQ
  • Gazebo
  • Streaming Video to QGroundControl using NavQ over WiFi

ROS1

ROS on NavQ

ROS on NavQ will allow you to interface with sensors, control your drone using MAVROS, and more. To get started, follow the install guide below and then continue to the next sections.

NOTE: ROS1 support is good, but the focus of NXP's Mobile Robotics team is on ROS2. There is a lot more documentation on ROS1 than ROS2, but ROS2 may be easier to use in the long run. We suggest that you do not cross-pollinate between the two, i.e. use only ROS1 or ROS2, not both. Keep in mind that any documentation under the ROS1 section is for ROS1 only, and vice versa.

Install guide by OS

HoverGames-Demo image

NOTE: HoverGames participants should be using the Demo image. If you flashed your NavQ with the image from the HoverGames website, or if you're using the image that came installed on the SD Card included in your kit, you're using the Demo image.

When you install ROS Noetic on your NavQ, make sure to install the base version of ROS and not the desktop version. If you install the desktop version, critical gstreamer packages for NavQ can be overwritten and therefore become non-functional.

To install ROS, you need to be on the Demo image. You can follow the guide for installing ROS Noetic Ninjemys at:

http://wiki.ros.org/noetic/Installation/Ubuntu

HoverGames-BSP image

If you're using NavQ commercially and are running the HoverGames-BSP image, you'll follow these steps.

ROS Melodic is automatically installed on the HoverGames-BSP image. It includes MAVROS by default. You will need to do a little bit of setup, though, once you first boot your image.

Run the following commands to enable ROS on the HoverGames-BSP image:

$ sudo rosdep init
$ rosdep update
$ source /opt/ros/melodic/setup.bash
$ echo "source /opt/ros/melodic/setup.bash" >> ~/.bashrc
$ source ~/.bashrc

You'll also want to download the following script and run it to install the GeographicLib geoid datasets:

$ wget https://raw.githubusercontent.com/mavlink/mavros/master/mavros/scripts/install_geographiclib_datasets.sh
$ chmod a+x ./install_geographiclib_datasets.sh
$ ./install_geographiclib_datasets.sh

Now, you can continue with the ROS tutorials for setting up a build environment and installing your first package. We will go over this in the next section.

GStreamer

There is an NXP community user guide for GStreamer available here: https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/i-MX-8-GStreamer-User-Guide/ta-p/1098942

Taking a picture

To take a picture on your NavQ using GStreamer, run the following command:

$ gst-launch-1.0 -v v4l2src num-buffers=1 ! jpegenc ! filesink location=capture1.jpeg

To take video, you can run the following pipeline:

$ gst-launch-1.0 v4l2src ! 'video/x-raw,width=1920,height=1080,framerate=30/1' ! vpuenc_h264 ! avimux ! filesink location='/home/navq/video.avi'
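To verify the recording afterwards, one option is to play it back with GStreamer's playbin element (this assumes a display is attached to the NavQ):

$ gst-launch-1.0 playbin uri=file:///home/navq/video.avi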

Gazebo

Where to learn more about Gazebo

Gazebo is one of several simulators that work with PX4 and ROS.

Simulation is important in order to test code without risk of damaging real hardware, and it can be critical in uncovering faults that would otherwise be very difficult to trigger. This is not a tutorial on Gazebo, but a list of some resources to get started.

  • PX4.io Gazebo developer guide: https://dev.px4.io/v1.9.0/en/simulation/gazebo.html

  • YouTube videos, e.g. https://youtu.be/mranHM9wn0g

  • Read about Gazebo on Wikipedia.

  • Try out this simple Gazebo tutorial to control a differential drive robot, which is a fun way to learn both Gazebo and ROS. (smile)

pyeIQ

Python framework for eIQ on i.MX

This page is a work in progress. NOTE: THIS WILL NOT WORK ON NAVQ! Updated 12/03/2020 with notes that this will not work as-is for NavQ using the 8M Mini. Apologies for any confusion. These notes are here only for reference by advanced developers. The 8M Mini does not have any NN acceleration and can only run inference on the processor cores.

pyeIQ is not targeted at the i.MX 8M Mini processor, but it may still work, albeit with much lower performance than if an accelerator were available. We expect to use this more with the upcoming i.MX 8M Plus, which includes a 2.25 TOPS neural net accelerator.

Please refer to the pyeIQ documentation at https://pyeiq.dev/

Note that eIQ support is only included in the imx-image-full-imx8mpevk.wic pre-built image [1]. *** This image is only for the 8M Plus!

Please take a look at the switch_image application; we are using TFLite 2.1.0. This application offers a graphical interface for users to run an object classification demo using either the CPU or the NPU.

# pyeiq --run switch_image

We also have a TFLite example outside of pyeIQ; please refer to the instructions below. Details can be found in the i.MX Linux User's Guide [2].

# cd /usr/bin/tensorflow-lite-2.1.0/examples

# ./label_image -m mobilenet_v1_1.0_224_quant.tflite -i grace_hopper.bmp -l labels.txt

The i.MX Linux User's Guide [2] also provides instructions on how to get our latest Linux BSP [1] up and running. *** NOTE: for the 8M Plus only!

[1]: https://www.nxp.com/webapp/sps/download/license.jsp?colCode=L5.4.47_2.2.0_MX8MP-BETA2&appType=file1&DOWNLOAD_ID=null

[2]: https://www.nxp.com/docs/en/user-guide/IMX_LINUX_USERS_GUIDE.pdf


Streaming Video to QGroundControl using NavQ over WiFi

Prerequisites

Devices required

In this guide, we need a few things:

  1. NavQ Companion Computer mounted with Google Coral Camera attached

  2. Laptop/Phone with QGroundControl Installed

  3. Both NavQ and mobile device connected to the same WiFi network

Setting up QGroundControl

In QGroundControl, click the Q logo in the top left, and configure the video section as seen in the image below:

This will set up your QGroundControl instance to receive the UDP video stream from the NavQ.

Connecting your NavQ to your router and getting IPs

Follow the WiFi setup guide using connman in the Quick Start guide to connect your NavQ to the same router as your mobile device. You will need to use the serial console to do this. Once you have your NavQ connected, you can run ifconfig in the serial console to find the IP address of your NavQ. Your IP address should be next to 'inet' under 'wlan0' if connected over WiFi.

You can SSH into the NavQ to run the GStreamer pipeline once you have the IP.

Running the GStreamer pipeline

With your NavQ on, SSH into it by using the IP address you noted when connected to the serial console. Once you're successfully SSHed in, you should note the IP address that you logged in from as seen here:

This is the IP of your computer that you should be sending the video stream to.

To run the GStreamer pipeline, run the following command:

$ sudo gst-launch-1.0 v4l2src ! video/x-raw,width=640,height=480,framerate=30/1 ! vpuenc_h264 bitrate=500 ! rtph264pay ! udpsink host=xxx.xxx.xxx.xxx port=5600 sync=false

Make sure to replace the 'xxx.xxx.xxx.xxx' with the IP you noted when first SSHing into the NavQ.

Once you run that command, you should be able to see the video stream from your NavQ on QGroundControl!
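If QGroundControl shows nothing, you can sanity-check the stream with a GStreamer receiver on the destination machine (this assumes gst-launch-1.0 and the H.264 decoder plugins are installed there):

$ gst-launch-1.0 udpsrc port=5600 caps='application/x-rtp,media=video,clock-rate=90000,encoding-name=H264' ! rtph264depay ! avdec_h264 ! videoconvert ! autovideosink sync=false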
