Interfacing Depth Sensors with ROS on NXP i.MX6 boards

Document created by Bio_TICFSL on Apr 4, 2016. Last modified by Bio_TICFSL on Oct 5, 2016.

About this document

This document describes the setup details for interfacing, installing, programming (basics), and testing depth cameras with i.MX 6QDL based boards using the Robot Operating System (ROS). If you are not using ROS, you can instead install the proper drivers and compile them on your Ubuntu system as explained in this document:

1. Software & Hardware requirements

Supported NXP HW boards:

  • i.MX 6QuadPlus SABRE-SD Board and Platform
  • i.MX 6Quad SABRE-SD Board and Platform
  • i.MX 6DualLite SABRE-SD Board
  • i.MX 6Quad SABRE-AI Board
  • i.MX 6DualLite SABRE-AI Board
  • Depth sensors tested: Microsoft Kinect, ASUS Xtion.

Software:   GCC, Ubuntu 14.04, OpenCV, OpenNI, Python, ROS.


2. Installation on ROS


For the steps to install ROS on i.MX6 boards, please follow:

Before you can use ROS, you will need to initialize rosdep. It enables you to easily install system dependencies for source you want to compile and is required to run some core components in ROS.

$ sudo rosdep init
$ rosdep update

There are many different libraries and tools in ROS, and not all of them compile fully on ARM. In this case ROS Base is already installed, but any other packages need to be installed individually. First install the following dependencies, which will take some time and space (~1.4 GB):

$ sudo apt-get install --no-install-recommends freeglut3-dev libfreenect-dev \
  libusb-1.0-0-dev libudev-dev doxygen graphviz openjdk-6-jdk \
  ros-indigo-camera-info-manager ros-indigo-dynamic-reconfigure \
  ros-indigo-image-transport ros-indigo-image-proc ros-indigo-depth-image-proc \
  ros-indigo-tf ros-indigo-openni-launch ros-indigo-freenect-* \
  ros-indigo-depthimage-to-laserscan ros-indigo-image-view \
  ros-indigo-openni2-camera ros-indigo-openni2-launch \
  ros-indigo-rqt-common-plugins ros-indigo-rqt-graph
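One of the packages above, ros-indigo-depthimage-to-laserscan, collapses a horizontal slice of the depth image into a planar laser scan. The projection it performs can be sketched as follows (the function name and the fx/cx values are illustrative, not taken from the package):

```python
import math

def depth_row_to_scan(depth_row_m, fx, cx):
    """Project one row of a depth image (metres) into (angle, range) pairs
    via the pinhole model, conceptually what depthimage_to_laserscan does."""
    scan = []
    for u, z in enumerate(depth_row_m):
        if z <= 0:  # zero marks an invalid depth reading
            scan.append((float("nan"), float("nan")))
            continue
        x = (u - cx) * z / fx              # lateral offset of pixel column u
        scan.append((math.atan2(x, z), math.hypot(x, z)))
    return scan

# A flat wall 2 m away, seen through the centre column: range 2.0 at angle 0
print(depth_row_to_scan([2.0, 2.0, 2.0], fx=500.0, cx=1.0)[1])
# -> (0.0, 2.0)
```

Off-centre columns report a slightly longer range, since the ray to the wall is longer than the perpendicular depth.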


No additional installation is needed to run the Kinect with ROS; if you are using a Kinect you can skip ahead to section 3 (Testing The Installation). However, the packages used to run the PrimeSense / ASUS Xtion on the i.MX6 are not available over apt yet, so they need to be compiled from source. To use OpenNI2 with ROS, we only need the shared OpenNI2 libraries and the drivers.

Clone OpenNI2

$ git clone
$ cd OpenNI2

Edit ThirdParty/PSCommon/BuildSystem/Platform.Arm

$ nano ThirdParty/PSCommon/BuildSystem/Platform.Arm

and replace

CFLAGS += -march=armv7-a -mtune=cortex-a9 -mfpu=neon -mfloat-abi=softfp #-mcpu=cortex-a8

with

CFLAGS += -march=armv7-a -mtune=cortex-a9 -mfpu=neon -mfloat-abi=hard

Add support for pthread library:

$ nano ThirdParty/PSCommon/BuildSystem/CommonCppMakefile

Go to line 95 and add the two lines marked with + between the existing lines:

OUTPUT_NAME = $(EXE_NAME)                                               
    # We want the executables to look for the .so's locally first:
    LDFLAGS += -Wl,-rpath ./
+   ifneq ("$(OSTYPE)","Darwin")
+       LDFLAGS += -lpthread
+   endif

Save the file and exit

Then run make to compile the OpenNI2 drivers and libraries:

$ make

Once the compilation is done, run the Linux install script

$ cd Packaging/Linux
$ sudo ./


Copy libraries and includes to the system paths

$ cd ../../
$ sudo cp -r Include /usr/include/openni2
$ sudo cp -r Bin/Arm-Release/OpenNI2 /usr/lib/
$ sudo cp Bin/Arm-Release/libOpenNI2.* /usr/lib/


Create a package config file

$ sudo nano /usr/lib/pkgconfig/libopenni2.pc

and fill it with this (the prefix/libdir/includedir variables match the copy destinations used above; 2.2 is the current OpenNI2 release version at the time of writing):

prefix=/usr
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include/openni2

Name: OpenNI2
Description: A general purpose driver for all OpenNI cameras.
Version: 2.2
Cflags: -I${includedir}
Libs: -L${libdir} -lOpenNI2 -L${libdir}/OpenNI2/Drivers -lDummyDevice -lOniFile
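pkg-config resolves the ${includedir} and ${libdir} references in the Cflags and Libs lines from variable definitions at the top of the .pc file, so make sure yours defines them to match the copy destinations used earlier (/usr/include/openni2 and /usr/lib). The substitution amounts to this (a sketch, not pkg-config's actual code):

```python
import re

# Variable values matching the install paths used above
variables = {"includedir": "/usr/include/openni2", "libdir": "/usr/lib"}

def expand(line, variables):
    """Resolve ${var} references the way pkg-config does for Cflags/Libs."""
    return re.sub(r"\$\{(\w+)\}", lambda m: variables[m.group(1)], line)

print(expand("Cflags: -I${includedir}", variables))
# -> Cflags: -I/usr/include/openni2
```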


This enables Ubuntu to find the location of the drivers, libraries, and include files. To make sure it is correctly found, run

$ pkg-config --modversion libopenni2

which should print the same version as defined in the file above. Now the Xtion is ready to be used. Plug it in (if it is already plugged in, unplug and replug it first), then run the sample program

$ ./Bin/Arm-Release/SimpleRead

Then create a catkin workspace as described here, and check out the following packages into the src folder of the catkin workspace:

$ cd ~/catkin_ws/src
$ git clone
$ git clone
$ git clone



Now the ROS packages checked out above can be compiled in the catkin workspace with catkin_make:

$ cd ~/catkin_ws
$ catkin_make

Once the packages are compiled, the Xtion is ready for use with ROS.
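As a quick sanity check once everything builds, a minimal rospy node can probe the depth stream. This is only a sketch under assumptions: /camera/depth/image_raw and the 16-bit (16UC1, little-endian, millimetres) encoding are the usual openni2_launch defaults, but confirm the actual topic names with rostopic list on your board.

```python
# Sketch: report the centre depth of the Xtion stream (assumed topic and encoding).
def depth_mm_to_m(row):
    """Convert raw 16-bit depth values (mm) to metres; 0 means no reading."""
    return [v / 1000.0 if v else float("nan") for v in row]

def main():
    import struct
    import rospy                      # imported lazily; helper above needs no ROS
    from sensor_msgs.msg import Image

    def callback(msg):
        # 16UC1 depth image: unpack the centre pixel of the middle row
        offset = (msg.height // 2) * msg.step + (msg.width // 2) * 2
        centre_mm = struct.unpack_from("<H", msg.data, offset)[0]
        rospy.loginfo("centre depth: %.3f m", depth_mm_to_m([centre_mm])[0])

    rospy.init_node("depth_probe")
    rospy.Subscriber("/camera/depth/image_raw", Image, callback)
    rospy.spin()

if __name__ == "__main__":
    main()
```

Run it with the openni2 launch file active; each received frame logs the distance straight ahead of the sensor.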

3. Testing The Installation

  1. Kinect. Open at least 3 bash terminals:

Terminal 1: Run ROS

$ roscore


Terminal 2:  launch the Freenect

$ roslaunch freenect_launch freenect.launch

Terminal 3: run the image viewers

$ rosrun image_view image_view image:=camera/rgb/image_color
$ rosrun image_view image_view image:=camera/rgb/image_rect_mono
$ rosrun image_view disparity_view image:=camera/depth/disparity

Each command opens a window showing, respectively, the RGB, rectified mono, and depth-disparity images from the Kinect.
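The disparity image served on camera/depth/disparity is derived from depth through the standard stereo relation; as a sketch (the focal length and the roughly 7.5 cm Kinect baseline below are illustrative values, not calibration results):

```python
# Stereo relation behind the disparity image: d = fx * B / Z
def depth_to_disparity(z_m, fx_px, baseline_m):
    """Disparity (pixels) from focal length (pixels), baseline (m), depth (m)."""
    return fx_px * baseline_m / z_m

def disparity_to_depth(d_px, fx_px, baseline_m):
    """Inverse mapping: recover depth from disparity."""
    return fx_px * baseline_m / d_px
```

Nearby objects therefore appear with large disparity values and distant ones shrink toward zero, which is what the colour ramp in disparity_view visualizes.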



2. ASUS Xtion. Open at least 3 bash terminals:

Terminal 1: Run ROS

$ roscore


Terminal 2:  launch Openni2

$ roslaunch openni2_launch openni2.launch


Terminal 3: use an rqt or RViz session to visualize the sensor, e.g.:

$ rqt 

$ rosrun rqt_graph rqt_graph


In the “rqt” window select “Plugins” -> “Visualization” -> “Image View“


     (optional) Install PySide in case you get an error with the Python rqt graph:

$ pip install PySide
$ cd ~/


Note: rqt and RViz demand a lot of i.MX GPU work, so general graphics performance will be affected. In this case it is suggested to run RViz in a remote ROS network session.



