i.MX Processors Knowledge Base

The i.MX6DQ HDMI dongle board uses the BCM4330, an SDIO wireless module. When we run Ubuntu Oneiric on the HDMI dongle board, after correctly insmod'ing bcm4330.ko we found that the Ubuntu NetworkManager cannot recognize this interface; /var/log/syslog shows the following error:

Jan  1 00:01:08 linaro-ubuntu-desktop NetworkManager[4787]:    SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/wlan0, iface: wlan0)
Jan  1 00:01:08 linaro-ubuntu-desktop NetworkManager[4787]:    SCPlugin-Ifupdown: device added (path: /sys/devices/virtual/net/wlan0, iface: wlan0): no ifupdown configuration found.
Jan  1 00:01:08 linaro-ubuntu-desktop NetworkManager[4787]: <warn> /sys/devices/virtual/net/wlan0: couldn't determine device driver; ignoring...

After some searching we found that the /sys/devices/virtual/net/wlan0 directory does not contain a "device" entry. A network interface should expose this "device" entry; without it, NetworkManager fails with "couldn't determine device driver; ignoring...". The "device" entry identifies the hardware the network interface comes from and should link to the real device on its hardware bus. The bcm4330 Linux driver from Broadcom does not set up the real "device" for its network interface, so we need to set this real "device" before the driver registers the network interface. Refer to the attached diff file for this modification.
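The attached diff contains the actual fix; conceptually it boils down to something like the following sketch, assuming the driver has its struct net_device and struct sdio_func pointers at hand (the names ndev and func are placeholders, not the driver's real symbols):

/* Sketch only: set the parent device of the network interface to the
 * SDIO function before register_netdev(), so the interface is tied to
 * its real bus device and sysfs exposes a "device" link that
 * NetworkManager can resolve to a driver.
 */
#include <linux/netdevice.h>
#include <linux/mmc/sdio_func.h>

static int bcm4330_register_netdev(struct net_device *ndev, struct sdio_func *func)
{
        SET_NETDEV_DEV(ndev, &func->dev);   /* attach wlan0 to the SDIO device */
        return register_netdev(ndev);
}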
View full article
In some cases it is desirable to have progressive content available directly from a TV-IN interface through the V4L2 capture device. In the BSP, HW-accelerated de-interlacing is only supported in the V4L2 output stream. Below is a patch created against a rather old BSP version that adds support for de-interlaced V4L2 capture. The patch might need to be adapted to newer BSPs; however, the logic and functionality are there and should shorten the development time. The patch adds another input device to the V4L2 framework that can be selected to perform the de-interlacing on the way to memory. The selection is done by passing the index "2" as an argument to the VIDIOC_S_INPUT V4L2 ioctl. Also attached is a modified tvin unit test that gives an example of how to use the new driver. An example sequence for running the test is as follows:

modprobe mxc_v4l2_capture
./mxc_v4l2_tvin_vdi.out -ow 720 -oh 480 -ol 10 -ot 20 -f YU12

Some key things to note:
- This driver does not support resizing or color space conversion on the way to memory. The requested format and size should match what the sensor can provide directly.
- The driver was tested on a Sabre AI Rev A board running Linux 12.02.
- This code is not an official delivery and as such no guarantee of support for this code is provided by Freescale.
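For reference, selecting the de-interlacing input from application code looks roughly like the sketch below; the device node and surrounding setup are assumptions, and the attached unit test shows the complete capture sequence:

/* Minimal sketch: select input index 2 (the de-interlaced capture path
 * added by the patch) before configuring formats and streaming.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
        int fd = open("/dev/video0", O_RDWR);   /* capture node, adjust as needed */
        int index = 2;                          /* de-interlaced input added by the patch */

        if (fd < 0 || ioctl(fd, VIDIOC_S_INPUT, &index) < 0) {
                perror("VIDIOC_S_INPUT");
                return 1;
        }
        /* ... VIDIOC_S_FMT, VIDIOC_REQBUFS, buffer queueing and VIDIOC_STREAMON ... */
        return 0;
}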
View full article
Here is an example for the i.MX28 EVK board to support SPI NOR boot in U-Boot, the kernel and MFGTool.

The attached files are the patches to support SPI NOR flash on the i.MX28 EVK board, based on the L2.6.35 ER11.09.01 BSP. They were verified on a Spansion s25fl256s SPI NOR.
"ER11.09.01_uboot_imx28_spi_nor.patch" is the U-Boot patch.
"ER11.09.01_kernel_imx28_spi_nor.patch" is the kernel patch.
"ucl.xml" is the updated MFGTool config file; please use it to replace "Mfgtools-Rel\Profiles\MX28 Linux Update\OS Firmware\ucl.xml".

The U-Boot boot parameters for SPI:

setenv bootargs_base 'setenv bootargs console=ttyAM0,115200'
setenv loadaddr 0x42000000
setenv bootargs_spi 'setenv bootargs ${bootargs} root=/dev/mtdblock2 rootfstype=jffs2 rootwait rw ip=none'
setenv bootcmd_spi 'run bootargs_base bootargs_spi;sf probe 2:0; sf read ${loadaddr} 0x100000 0x300000;bootm'
setenv bootcmd 'run bootcmd_spi'
saveenv

To boot the board from the SPI NOR s25fl256s, the 4KB-page region of the NOR must be placed at the top, i.e. in the last 128KB of the NOR address space. The uboot.sb image is about 220KB, so it cannot be placed in a region that combines 4KB and 64KB pages: the i.MX28 boot ROM can only handle a single page size during boot. A region of all 4KB pages or all 64KB pages both boot fine, but a combined region cannot boot.

By default, the s25fl256s places its 128KB region of 4KB pages at the bottom of the NOR, so we must update the OTP configuration to move this region to the top. In U-Boot, run the following commands to burn the OTP:

MX28 U-Boot -> sf probe 2:0
MX28 U-Boot -> sf set_config_reg 0x04

To boot the i.MX28 EVK board from the SPI2 NOR flash, BM3~BM0 should be set to 0010.

In this example we only used the JFFS2 file system. To support UBIFS there is a known issue: UBIFS uses vmalloc to allocate memory, and if the SPI driver uses DMA, the kernel halts with the error "kernel BUG at arch/arm/mm/dma-mapping.c:409!".

For the 11.09.01 BSP, the default MFGTool rootfs "initramfs.cpio.gz" is bigger than 4MB, but the i.MX28 bootlets code only sets the ramdisk size to 4MB, so we need to raise this limit for MFGTool.

Use the command "./ltib -p imx-bootlets -m prep" to get the bootlets code, then modify "ltib/rpm/BUILD/imx-bootlets-src-11.09.01/linux_prep/core/setup.c", function setup_initrd_tag(): change "params->u.initrd.size =  0x00400000;" to "params->u.initrd.size =  0x00500000;". Also modify "ltib/rpm/BUILD/imx-bootlets-src-11.09.01/updater.bd" and "updater_ivt.bd": change "load 0.b    > 0x40800000..0x40c00000;" to "load 0.b    > 0x40800000..0x40d00000;" (both edits are summarized in the sketch at the end of this article).

Now the MFGTool rootfs size can be up to 5MB.

2013-05-09: Updated hardware rework: On the i.MX28 EVK board, rework J89 as follows and mount R320, R321, R322 and C178.

MX28 U49 Pin1 /CS    <-> NOR Pin7  CS#
MX28 U49 Pin2 D0     <-> NOR Pin8  SI/IO1
MX28 U49 Pin5 DIO    <-> NOR Pin15 SI/IO0
MX28 U49 Pin6 CLK    <-> NOR Pin16 SCK
MX28 U49 Pin8 VCC    <-> NOR Pin2  VCC
MX28 U49 Pin4 GND    <-> NOR Pin10 VSS
MX28 U49 Pin3 /WP    <-> NOR Pin9  WP
MX28 U49 Pin7 /HOLD  <-> NOR Pin1  HOLD

Software reset issue for 32MB SPI NOR: After booting into the kernel, the kernel driver puts a 32MB SPI NOR into 4-byte address mode, but the i.MX28 can only boot from SPI with the NOR in 3-byte address mode. If the i.MX28 board is reset while the SPI NOR is not reset, the board fails to reboot.
Hardware solution: when the i.MX28 is reset, reset the SPI NOR as well, so that it comes back up in its default 3-byte address mode.
Software solution: in the kernel SPI NOR driver, always switch the SPI NOR back to 3-byte address mode after each access, and switch it to 4-byte address mode before each access.
There is no such issue if the SPI NOR is smaller than 32MB.
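For clarity, the two bootlets edits described above can be summarized as the following hand-written pseudo-diff; the context is approximate and the exact file contents may differ between BSP versions:

--- linux_prep/core/setup.c, setup_initrd_tag()
-        params->u.initrd.size =  0x00400000;   /* 4MB ramdisk limit */
+        params->u.initrd.size =  0x00500000;   /* 5MB ramdisk limit */

--- updater.bd (and likewise updater_ivt.bd)
-        load 0.b    > 0x40800000..0x40c00000;
+        load 0.b    > 0x40800000..0x40d00000;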
View full article
The original implementation is from Frias Renato for the Sabreauto board.

How do we define the boot time? The boot time defined here runs from the moment the board is powered up to the moment the main application is working and shown directly to the end user. For example, for a media-playback board, the boot time is counted up to the first video frame shown on the screen.

To minimize the boot time, several methods were tried: optimizing for performance, removing unnecessary modules at boot time, and starting the main application first after the kernel boots.

Optimizing for performance:
U-Boot:
  1: Enable the MMU and L2 cache.
  2: Optimize memset and memcpy.
  3: Implement SDMA to accelerate copying data from NOR flash to memory.
  4: Implement the uSDHC's ADMA to improve SD card read performance.
Kernel:
  1: Optimize the _memcpy_fromio function in arch/arm/kernel/io.c.

Removing unnecessary modules:
U-Boot:
  1: Disable UART output during the U-Boot procedure and add the quiet parameter to the kernel boot arguments.
  2: Remove the boot-up delay in U-Boot.
  3: Disable I2C, SPI and SPLASH_SCREEN in U-Boot.
Kernel: The module removal below is only for the Sabresd board booting from SD card with a MIPI camera overlay on an LVDS screen; for other boards or applications, please do not use it directly.
  1: Modify arch/arm/mach-mx6/board-mx6q_sabresd.c to keep only the necessary module initialization in mx6_sabresd_board_init: iomux configuration, uart0, voda, ipu, ldb, v4l2, mipi-csi, i2c1, uSDHC1, pwm0, dvfs and the MIPI camera clock.
  2: Update the Linux kernel configuration file: keep only the necessary modules and options to minimize size, build non-essential modules externally to the kernel, create the uImage from Image instead of zImage to avoid the kernel self-extraction time (see the mkimage sketch after the boot log below), and use an ext4 file system on the SD card to accelerate rootfs mounting.
Notice: the kernel configuration removes NETWORK support, which includes Unix domain sockets; the udev mechanism needs them, so this kernel configuration cannot support dynamic /dev/ nodes via udev, and all /dev/ nodes must be created in the rootfs before boot.

Starting the main application first after the kernel boots: In the normal boot procedure the init process handles the sysinit script first; this script does initialization and preparation for most user processes, but it normally takes about 1 to 5 seconds to execute. So instead we start the main application before sysinit, and the necessary preparation for the main application is handled by the application itself. See the example below for a MIPI camera overlay on an LVDS screen:

/etc/inittab
::once:/unit_tests/mxc_v4l2_overlay.out -iw 640 -ih 480 -it 0 -il 0 -ow 1024 -oh 768 -ot 0 -ol 0 -r 0 -t -1 -d 0 -fg -fr 30
::once:/etc/rc.d/rcS
::once:/sbin/getty -L ttymxc0 115200 vt100 # GENERIC_SERIAL

Test result of fast boot on the Sabresd board with a MIPI camera overlay on an LVDS screen: the main application starts about 958 ms after the board is powered up.
Running Bootloader
[0.356417 0.356417] [ 0.046637] _regulator_get: get() with no identifier
[0.958425 0.602008] starting pid 21, tty '': '/unit_tests/mxc_v4l2_overlay.out -iw 640 -ih 480 -it 0 -il 0 -ow 1024 -oh 768 -ot 0 -ol 0 -r 0 -t -^@
[0.969609 0.011184] starting pid 22, tty '': '/etc/rc.d/rcS'
[0.973368 0.003759] g_display_width = 1024, g_display_height = 768
[0.977540 0.004172] g_display_top = 0, g_display_left = 0
[0.980927 0.003387] starting pid 23, tty '': '/sbin/getty -L ttymxc0 115200 vt100 '
[1.048454 0.067527] Mounting /proc and /sys
[1.089526 0.041072] Setting the hostname to freescale
[1.116635 0.027109] Mounting filesystems
[1.527320 0.410685] sensor chip is ov5640_mipi_camera
[1.530627 0.003307] sensor frame size is 640x480
[1.533482 0.002855] sensor frame format is UYVY
[1.640221 0.106739] frame_rate is 30
[1.642249 0.002028]
[1.642270 0.000021] frame buffer width 0, height 0, bytesperline 0
[1.989728 0.347458]
[1.990761 0.001033] arm-none-linux-gnueabi-gcc (Freescale MAD -- Linaro 2011.07 -- Built at 2011/08/10 09:20) 4.6.2 20110630 (prerelease)
[2.001161 0.010400] root filesystem built on Tue, 11 Sep 2012 11:43:24 +0800
[2.006249 0.005088] Freescale Semiconductor, Inc.
[2.009394 0.003145]
[2.009531 0.000137] freescale login:

Please see the fast boot video below. Sample code for U-Boot and the kernel is also attached for reference. The patch code is based on L3.0.35_12.09.01_GA.
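As referenced in the kernel configuration notes above, the uImage is created from the uncompressed Image. A typical command for that step is sketched below; the 0x10008000 load/entry address is an assumed i.MX6 value, so adjust it to your memory map:

# wrap the uncompressed kernel Image as a uImage (-C none: no decompression at boot)
mkimage -A arm -O linux -T kernel -C none -a 0x10008000 -e 0x10008000 -n "Linux" -d arch/arm/boot/Image uImage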
View full article
In this article, some experiments are done to verify the video playback capability of the i.MX6DQ under different VPU clocks.

1. Preparation
Board: i.MX6DQ SD
Bitstream: 1080p "sunflower" at 40Mbps, considered one of the toughest H264 clips. The original clip is repeated 20 times to generate a new raw video and then encapsulated into an MP4 container. This removes or at least minimizes the influence of the gstreamer startup workload compared with the VPU unit test.
Kernels: kernels built with different VPU clock settings: 270MHz, 298MHz, 329MHz, 352MHz, 382MHz.
Test setting: 1080p content decoded and displayed on a 1080p device (no resize).

2. Test commands for the VPU unit test and Gstreamer
Tiled-format video playback is faster than NV12, so in the experiments below we use the tiled format.
Unit test command (the frame rate is set with -a 70, higher than the 1080p 60fps HDMI refresh rate):

/unit_tests/mxc_vpu_test.out -D "-i /media/65a78bbd-1608-4d49-bca8-4e009cafac5e/sunflower_2B_2ref_WP_40Mbps.264 -f 2 -y 1 -a 70"

Gstreamer command (free run to get the highest playback speed):

gst-launch filesrc location=/media/65a78bbd-1608-4d49-bca8-4e009cafac5e/sunflower_2B_2ref_WP_40Mbps.mp4 typefind=true ! aiurdemux ! vpudec framedrop=false ! queue max-size-buffers=3 ! mfw_v4lsink sync=false

3. Video playback framerate measurement
During the test we run "echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor" to make sure the CPU always works at its highest frequency, so that it can respond to any interrupt quickly. For each VPU clock setting we run 5 rounds of tests; the max and min playback values are removed, and the remaining 3 values are averaged to get the final playback framerate.

In the unit test rows below, each run is shown as "Dec / Playback" (VPU decode fps / overall playback fps); the GST rows list only the overall playback fps. Min, Max and Avg refer to the playback fps.

VPU clock | Test | Run 1 | Run 2 | Run 3 | Run 4 | Run 5 | Min | Max | Avg
270MHz | unit test | 57.8 / 57.3 | 57.81 / 57.04 | 57.78 / 57.3 | 57.87 / 56.15 | 57.91 / 55.4 | 55.4 | 57.3 | 56.83
270MHz | GST | 53.76 | 54.163 | 54.136 | 54.273 | 53.659 | 53.659 | 54.273 | 54.01967
298MHz | unit test | 60.97 / 58.37 | 60.98 / 58.55 | 60.97 / 57.8 | 60.94 / 58.07 | 60.98 / 58.65 | 57.8 | 58.65 | 58.33
298MHz | GST | 56.755 | 49.144 | 53.271 | 56.159 | 56.665 | 49.144 | 56.755 | 55.365
329MHz | unit test | 63.8 / 59.52 | 63.92 / 52.63 | 63.8 / 58.1 | 63.82 / 58.26 | 63.78 / 59.34 | 52.63 | 59.52 | 58.56667
329MHz | GST | 57.815 | 55.857 | 56.862 | 58.637 | 56.703 | 55.857 | 58.637 | 57.12667
352MHz | unit test | 65.79 / 59.63 | 65.78 / 59.68 | 65.78 / 59.65 | 66.16 / 49.21 | 65.93 / 57.67 | 49.21 | 59.68 | 58.98333
352MHz | GST | 58.668 | 59.103 | 56.419 | 58.08 | 58.312 | 56.419 | 59.103 | 58.35333
382MHz | unit test | 64.34 / 56.58 | 67.8 / 58.73 | 67.75 / 59.68 | 67.81 / 59.36 | 67.77 / 59.76 | 56.58 | 59.76 | 59.25667
382MHz | GST | 59.753 | 58.893 | 58.972 | 58.273 | 59.238 | 58.273 | 59.753 | 59.03433

Some explanation: why does the Gstreamer performance keep improving while the unit test is flatter? Gstreamer uses a VPU wrapper that makes the VPU API more intuitive to call. At first the overall GST playback performance is constrained by the VPU (VPU decode at 57.8 fps). As the VPU decoding performance rises above 60fps with increasing VPU clock, the constraint becomes the 60fps display refresh rate. The video display overhead of Gstreamer is only about 1 fps, similar to the unit test.

Based on the test results, at 352MHz the overall 1080p video playback on a 1080p display can reach ~60fps; alternatively, by time-sharing between two pipelines on two displays, we can do 2 x 1080p @ 30fps video playback.
Note, however, that this experiment covers 1080p video playback on a 1080p display. For interlaced clips, or for displays whose size differs from 1080p, the overall playback performance is limited by post-processing such as de-interlacing and resizing.
View full article
Multiple-display means video playback on multiple screens. If playback needs to go to a single screen instead, the mfw_isink element must be used; some pipeline examples can be found on this link: GStreamer iMX6 Multi-Overlay.

# Set these shell variables before running the pipelines
alias gl=gst-launch
SINK_1="\"mfw_v4lsink device=/dev/video17\""
SINK_2="\"mfw_v4lsink device=/dev/video18\""
SINK_3="\"mfw_v4lsink device=/dev/video20\""
media1=file:///root/media1
media2=file:///root/media2
media3=file:///root/media3

2 displays, HDMI + LVDS
Kernel parameters: video=mxcfb0:dev=hdmi,1920x1080M@60,if=RGB24 video=mxcfb1:dev=ldb,LDB-XGA,if=RGB666
Pipeline: gl playbin2 uri=$media1 video-sink=$SINK_1 playbin2 uri=$media2 video-sink=$SINK_2

2 displays, LVDS + LVDS
Kernel parameters: video=mxcfb0:dev=ldb,LDB-XGA,if=RGB666 video=mxcfb1:dev=ldb,LDB-XGA,if=RGB666
Pipeline: gl playbin2 uri=$media1 video-sink=$SINK_1 playbin2 uri=$media2 video-sink=$SINK_2

2 displays, LCD + LVDS
Kernel parameters: video=mxcfb0:dev=lcd,800x480M@55,if=RGB565 video=mxcfb1:dev=ldb,LDB-XGA,if=RGB666
Pipeline: gl playbin2 uri=$media1 video-sink=$SINK_1 playbin2 uri=$media2 video-sink=$SINK_2

3 displays, HDMI + LVDS + LVDS
Kernel parameters: video=mxcfb0:dev=hdmi,1920x1080M@60,if=RGB24 video=mxcfb1:dev=ldb,LDB-XGA,if=RGB666 video=mxcfb2:dev=ldb,LDB-XGA,if=RGB666
Pipeline: gl playbin2 uri=$media1 video-sink=$SINK_1 playbin2 uri=$media2 video-sink=$SINK_2 playbin2 uri=$media3 video-sink=$SINK_3
View full article
Audio, from a file
gst-launch filesrc location=test.wav ! wavparse ! mfw_mp3encoder ! filesink location=output.mp3

Audio recording
gst-launch alsasrc num-buffers=$NUMBER blocksize=$SIZE ! mfw_mp3encoder ! filesink location=output.mp3
# where
#     duration = $NUMBER*$SIZE*8 / (samplerate * channels * bitwidth)
# Example: 60 seconds recording
gst-launch alsasrc num-buffers=240 blocksize=44100 ! mfw_mp3encoder ! filesink location=output.mp3
# To verify that the recording is correct, do a normal audio playback
gst-launch filesrc location=output.mp3 typefind=true ! beepdec ! audioconvert ! 'audio/x-raw-int,channels=2' ! alsasink

Video, from a test source
gst-launch videotestsrc ! queue ! vpuenc ! matroskamux ! filesink location=./test.avi

Video, from a file
gst-launch filesrc location=sample.yuv blocksize=$BLOCK_SIZE ! 'video/x-raw-yuv,format=(fourcc)I420, width=$WIDTH, height=$HEIGHT, framerate=(fraction)30/1' ! vpuenc codec=$CODEC ! matroskamux ! filesink location=output.mkv sync=false
# where
#     BLOCK_SIZE = WIDTH * HEIGHT * 1.5
#     CODEC = 0 (MPEG4), 5 (H263), 6 (H264) or 12 (MJPG)
# For example, encoding a CIF raw file
gst-launch filesrc location=sample.yuv blocksize=152064 ! 'video/x-raw-yuv,format=(fourcc)I420, width=352, height=288, framerate=(fraction)30/1' ! vpuenc codec=0 ! matroskamux ! filesink location=sample.mkv sync=false

Video, from Web camera
# When the web cam is connected, the device node /dev/video0 should be present. To test the camera without encoding:
gst-launch v4l2src ! mfw_v4lsink
# To record, run:
gst-launch v4l2src num-buffers=-1 ! queue max-size-buffers=2 ! vpuenc codec=0 ! matroskamux ! filesink location=output.mkv sync=false
# where sync=false tells filesink not to synchronize against the clock
# In case a specific width/height is needed, just add the filter caps
gst-launch v4l2src num-buffers=-1 ! 'video/x-raw-yuv,format=(fourcc)I420, width=352, height=288, framerate=(fraction)30/1' ! queue ! vpuenc codec=0 ! matroskamux ! filesink location=output.mkv sync=false
# In case you want to see on the screen what the camera is capturing, add a tee element
gst-launch v4l2src num-buffers=-1 ! tee name=t ! queue ! mfw_v4lsink t. ! queue ! vpuenc codec=0 ! matroskamux ! filesink location=output.mkv sync=false

Video, from Parallel/MIPI camera
# The camera driver needs to be loaded before executing the pipeline; refer to the BSP document to see which driver to load
# MIPI (J5 port):
modprobe ov5640_camera_mipi
modprobe mxc_v4l2_capture
# Parallel (J9 port):
modprobe ov5642_camera
modprobe mxc_v4l2_capture

gst-launch mfw_v4lsrc ! queue ! vpuenc codec=0 ! matroskamux ! filesink location=output.mkv sync=false
# Run 'gst-inspect mfw_v4lsrc' or 'gst-inspect vpuenc' to see other possible settings (resolution, fps, codec, etc.)
View full article
Video, bad performance
gst-launch filesrc location=test.mp4 typefind=true ! aiurdemux ! vpudec ! mfw_v4lsink

Video, better performance
gst-launch filesrc location=sample.mp4 typefind=true ! aiurdemux ! queue max-size-time=0 ! vpudec ! mfw_v4lsink
# typefind=true allows the source file to be 'type found' before negotiating
# max-size-time=0 tells the queue to ignore possible blocking issues
# In case of ASF files
gst-launch filesrc location=sample.asf typefind=true ! aiurdemux ! queue max-size-time=0 ! mfw_wmvdecoder ! mfw_v4lsink

Audio
gst-launch filesrc location=sample.mp3 typefind=true ! beepdec ! audioconvert ! 'audio/x-raw-int, channels=2' ! alsasink

Audio with visualization
gst-launch filesrc location=sample.mp3 typefind=true ! beepdec ! tee name=t ! queue ! audioconvert ! 'audio/x-raw-int, channels=2' ! alsasink t. ! queue ! audioconvert ! goom ! autovideoconvert ! autovideosink

Video/Audio, long version
gst-launch filesrc location=sample.avi typefind=true ! aiurdemux name=demux demux. ! queue max-size-buffers=0 max-size-time=0 ! vpudec ! mfw_v4lsink demux. ! queue max-size-buffers=0 max-size-time=0 ! beepdec ! audioconvert ! 'audio/x-raw-int, channels=2' ! alsasink
# The queue properties max-size-buffers=0 and max-size-time=0 allow a smoother playback; type 'gst-inspect queue' for more info

Video/Audio, short version (gplay)
gplay sample.avi

Video/Audio, short version (playbin2)
gst-launch playbin2 uri=file://<full path to sample file>
View full article
Video rendering
gst-launch videotestsrc ! mfw_v4lsink

Audio rendering
gst-launch audiotestsrc ! alsasink

WAV audio rendering
gst-launch filesrc location=test.wav ! wavparse ! alsasink

Video rendering selecting caps
gst-launch videotestsrc ! capsfilter caps='video/x-raw-yuv,format=(fourcc)I420' ! mfw_v4lsink
gst-launch videotestsrc ! 'video/x-raw-yuv,format=(fourcc)I420' ! mfw_v4lsink
View full article
gst-inspect is a tool to get documentation about GStreamer elements.

Check installed GST elements
gst-inspect | tail -1

Check installed FSL GST elements
gst-inspect | grep imx

Element documentation
gst-inspect <gst element>
View full article
gst-launch is the tool to execute GStreamer pipelines.

Looking at caps
gst-launch -v <gst elements>

Enabling logs
gst-launch --gst-debug=<element>:<level>
gst-launch --gst-debug=videotestsrc:5 videotestsrc ! filesink location=/dev/null
View full article
Fast GPU Image Processing in the i.MX 6x
by Guillermo Hernandez, Freescale

Introduction
Color tracking is useful as a base for complex image processing use cases; for example, determining which parts of an image belong to skin is very important for face detection or hand-gesture applications. In this example we present a method that is robust enough to cope with some noise, blur and different lighting conditions, thanks to the use of OpenGL ES 2.0 shaders running on the i.MX 6X multimedia processor.

Prerequisites
This how-to assumes that the reader is an experienced i.MX developer who is familiar with the tools and techniques around this technology. It also assumes the reader has intermediate graphics knowledge and experience, such as the RGBA structure of pictures and video frames and programming OpenGL-based applications, as we will not dig into the details of the basic setup.

Scope
In this paper we will see how to implement a very fast color tracking application that uses the GPU instead of the CPU, using OpenGL ES 2.0 shaders.

Step 1: Gather all the components
For this example we will use:
1. i.MX6q ARD platform
2. Linux ER5
3. Oneiric rootfs with ER5 release packages
4. OpenCV 2.0.0 source

Step 2: Building everything you need
Refer to the ER5 User's Guide and Release Notes on how to build and boot the board with the Ubuntu Oneiric rootfs. After you are done, you will need to build the OpenCV 2.0.0 source on the board, or you could add it to LTIB and have it built for you.
NOTE: We will be using OpenCV only for convenience, not for any of its advanced math or image processing features (because everything there happens on the CPU, which is what we are trying to avoid), but rather as an easy way of grabbing and managing frames from the USB camera.

Step 3: Application setup
Make sure that at this point you have a basic OpenGL ES 2.0 application running; a simple plane with a texture mapped to it should be enough to start (please refer to the Freescale GPU examples).

Step 4: OpenCV auxiliary code
The basic idea of the workflow is as follows:
a) Get the live feed from the USB camera using the OpenCV capture API (CvCapture) and store it into an IplImage structure.
b) Create an OpenGL texture that reads the IplImage buffer every frame and map it to a plane in OpenGL ES 2.0.
c) Use the fragment shader to perform fast image processing calculations; in this example we will examine the Sobel filter and binary images, which are the foundations of many complex image processing algorithms.
d) If necessary, perform multi-pass rendering to chain several image processing shaders and get an end result.

First we must include the relevant OpenCV headers:

#include "opencv/cv.h"
#include "opencv/cxcore.h"
#include "opencv/cvaux.h"
#include "opencv/highgui.h"

Then we should define a texture size; for this example we will be using 320x240, but this can easily be changed to 640x480:

#define TEXTURE_W 320
#define TEXTURE_H 240

We need to create an OpenCV capture device to enable its V4L camera and get the live feed:

CvCapture *capture;
capture = cvCreateCameraCapture (0);
cvSetCaptureProperty (capture, CV_CAP_PROP_FRAME_WIDTH,  TEXTURE_W);
cvSetCaptureProperty (capture, CV_CAP_PROP_FRAME_HEIGHT, TEXTURE_H);

Note: when we are done, remember to close the camera stream:

cvReleaseCapture (&capture);

OpenCV has a very convenient structure used for storing pixel arrays (a.k.a. images) called IplImage:

IplImage *bgr_img1;
IplImage *frame1;
bgr_img1 = cvCreateImage (cvSize (TEXTURE_W, TEXTURE_H), 8, 4);

OpenCV also has a very convenient function for capturing a frame from the camera and storing it into an IplImage:

frame1 = cvQueryFrame(capture);

Then we will want to separate the camera capture process from the post-processing filters and final rendering; hence, we create a thread to exclusively handle the camera:

#include <pthread.h>
pthread_t camera_thread1;
pthread_create (&camera_thread1, NULL, UpdateTextureFromCamera1, (void *)&thread_id);

Your UpdateTextureFromCamera1() function should be something like this:

void *UpdateTextureFromCamera1 (void *ptr)
{
      while(1)
      {
            frame1 = cvQueryFrame(capture);
            //cvFlip (frame1, frame1, 1);  // mirrored image
            cvCvtColor(frame1, bgr_img1, CV_BGR2BGRA);
      }
      return NULL;
}

Finally, the rendering loop should be something like this:

while (! window->Kbhit ())
{
      tt = (double)cvGetTickCount();
      Render ();
      tt = (double)cvGetTickCount() - tt;
      value = tt/(cvGetTickFrequency()*1000.);
      printf( "\ntime = %gms --- %.2lf FPS", value, 1000.0 / value);
      //key = cvWaitKey (30);
}

Step 5: Map the camera image to a GL texture
As you can see, you need a Render() call every frame. This white paper will not cover in detail the basic OpenGL or EGL setup of the application; we will rather focus on the ES 2.0 shaders.

GLuint _texture;
GLeglImageOES g_imgHandle;
IplImage *_texture_data;

The function to map the texture from our stored pixels in the IplImage is quite simple: we just need to get the image data, which is basically a pixel array:

void GLCVPlane::PlaneSetTex (IplImage *texture_data)
{
      cvCvtColor (texture_data, _texture_data, CV_BGR2RGB);
      glBindTexture(GL_TEXTURE_2D, _texture);
      glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, _texture_w, _texture_h, 0, GL_RGB, GL_UNSIGNED_BYTE, _texture_data->imageData);
}

This function should be called inside our render loop:

void Render (void)
{
  glClearColor (0.0f, 0.0f, 0.0f, 0.0f);
  glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  PlaneSetTex(bgr_img1);
}

At this point the OpenGL texture is ready to be used as a sampler in our fragment shader, mapped to a 3D plane.

Lastly, when you are ready to draw your plane with the texture on it:

// Set the shader program
glUseProgram (_shader_program);
…
// Bind this texture handle so we can load the data into it
/* Select our texture */
glActiveTexture(GL_TEXTURE0);
// Select eglImage
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, g_imgHandle);
glDrawArrays (GL_TRIANGLES, 0, 6);

Step 6: Use the GPU to do image processing
First we need to make sure we have the correct vertex shader and fragment shader. We will focus only on the fragment shader, as this is where we will process our image from the camera.
Below you will find the simplest fragment shader; this one only samples the texture and outputs its pixels unchanged:

const char *planefrag_shader_src =
      "#ifdef GL_FRAGMENT_PRECISION_HIGH             \n"
      "  precision highp float;                      \n"
      "#else                                         \n"
      "  precision mediump float;                    \n"
      "#endif                                        \n"
      "                                              \n"
      "uniform sampler2D s_texture;                  \n"
      "varying  vec3 g_vVSColor;                     \n"
      "varying  vec2 g_vVSTexCoord;                  \n"
      "                                              \n"
      "void main()                                   \n"
      "{                                             \n"
      "    gl_FragColor = texture2D(s_texture, g_vVSTexCoord);    \n"
      "}                                             \n";

Binary image
The simplest image filter is the binary image. It converts a source image to a black/white output; to decide whether a color should be black or white we need a threshold: everything below that threshold becomes black, and any color above it becomes white. The shader code is as follows:

const char* g_strRGBtoBlackWhiteShader =
    #ifdef GL_FRAGMENT_PRECISION_HIGH
    precision highp float;
    #else
    precision mediump float;
    #endif
    varying vec2 g_vVSTexCoord;
    uniform sampler2D s_texture;
    uniform float threshold;

    void main() {
        vec3 current_Color = texture2D(s_texture, g_vVSTexCoord).xyz;
        float luminance = dot(vec3(0.299, 0.587, 0.114), current_Color);
        if (luminance > threshold)
            gl_FragColor = vec4(1.0);
        else
            gl_FragColor = vec4(0.0);
    }

You can see that the main operation is to obtain a luminance value for the pixel; to achieve that we take the dot product of a known weight vector (obtained empirically) with the current pixel, and then simply compare that luminance value with the threshold. Anything below the threshold is black, and anything above it is considered a white pixel.

SOBEL operator
Sobel is a very common filter, since it is used as a foundation for many complex image processing processes, particularly in edge detection algorithms. The Sobel operator is based on convolutions; a convolution uses a particular mask, often called a kernel (in common terms, usually a 3x3 matrix). The Sobel operator calculates the gradient of the image at each pixel, so it tells us how the image changes around the current pixel, that is, how it increases or decreases (from darker to brighter values).
The shader is a bit long, since several operations must be performed; we discuss each of its parts below. First we need to get the texture coordinates from the vertex shader:

const char* plane_sobel_filter_shader_src =
    #ifdef GL_FRAGMENT_PRECISION_HIGH
    precision highp float;
    #else
    precision mediump float;
    #endif
    varying vec2 g_vVSTexCoord;
    uniform sampler2D s_texture;

Then we should define our kernel; as stated before, a 3x3 matrix should be enough, and the following values have been tested with good results:

    mat3 kernel1 = mat3 (-1.0, -2.0, -1.0,
                          0.0,  0.0,  0.0,
                          1.0,  2.0,  1.0);

We also need a convenient way to convert to grayscale, since we only need grayscale information for the Sobel operator; remember that to convert to grayscale you only need an average of the three colors:

    float toGrayscale(vec3 source) {
        float average = (source.x + source.y + source.z)/3.0;
        return average;
    }

Now we get to the important part: actually performing the convolutions. Remember that by the OpenGL ES 2.0 spec, neither recursion nor dynamic indexing is supported, so we need to do our operations the hard way: by defining vectors and multiplying them. See the following code:

    float doConvolution(mat3 kernel) {
        float sum = 0.0;
        float current_pixelColor = toGrayscale(texture2D(s_texture, g_vVSTexCoord).xyz);
        float xOffset = float(1)/1024.0;
        float yOffset = float(1)/768.0;
        float new_pixel00 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x-xOffset, g_vVSTexCoord.y-yOffset)).xyz);
        float new_pixel01 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x, g_vVSTexCoord.y-yOffset)).xyz);
        float new_pixel02 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x+xOffset, g_vVSTexCoord.y-yOffset)).xyz);
        vec3 pixelRow0 = vec3(new_pixel00, new_pixel01, new_pixel02);
        float new_pixel10 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x-xOffset, g_vVSTexCoord.y)).xyz);
        float new_pixel11 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x, g_vVSTexCoord.y)).xyz);
        float new_pixel12 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x+xOffset, g_vVSTexCoord.y)).xyz);
        vec3 pixelRow1 = vec3(new_pixel10, new_pixel11, new_pixel12);
        float new_pixel20 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x-xOffset, g_vVSTexCoord.y+yOffset)).xyz);
        float new_pixel21 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x, g_vVSTexCoord.y+yOffset)).xyz);
        float new_pixel22 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x+xOffset, g_vVSTexCoord.y+yOffset)).xyz);
        vec3 pixelRow2 = vec3(new_pixel20, new_pixel21, new_pixel22);
        vec3 mult1 = (kernel[0]*pixelRow0);
        vec3 mult2 = (kernel[1]*pixelRow1);
        vec3 mult3 = (kernel[2]*pixelRow2);
        sum = mult1.x+mult1.y+mult1.z + mult2.x+mult2.y+mult2.z + mult3.x+mult3.y+mult3.z;
        return sum;
    }

If you look at the last part of the function, you can see that we are adding the multiplication results into a sum; with this sum we measure the variation of each pixel with respect to its neighbors.
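Note that the main() function shown next also calls doConvolution(kernel2) for the vertical gradient. kernel2 is not listed in the text above; a minimal sketch, assuming the standard Sobel convention of using the transpose of kernel1 for the other gradient direction, would be:

    // assumed vertical-gradient kernel: the transpose of kernel1
    mat3 kernel2 = mat3 (-1.0, 0.0, 1.0,
                         -2.0, 0.0, 2.0,
                         -1.0, 0.0, 1.0);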
The last part of the shader is where we use all our previous functions; it is worth noting that the convolution needs to be applied both horizontally and vertically for this technique to be complete:

    void main() {
        float horizontalSum = 0.0;
        float verticalSum = 0.0;
        float averageSum = 0.0;
        horizontalSum = doConvolution(kernel1);
        verticalSum = doConvolution(kernel2);
        if ((verticalSum > 0.2) || (horizontalSum > 0.2) || (verticalSum < -0.2) || (horizontalSum < -0.2))
            averageSum = 0.0;
        else
            averageSum = 1.0;
        gl_FragColor = vec4(averageSum, averageSum, averageSum, 1.0);
    }

Conclusions and future work
At this point, if you have your application up and running, you can see that image processing can be done quite fast, even with images larger than 640x480. This approach can be expanded to a variety of techniques like tracking, feature detection and face detection. However, those techniques are out of scope for now, because such algorithms need multiple rendering passes (like face detection), where we need to perform an operation, write the result to an offscreen buffer and use that buffer as input for the next shader, and so on. Freescale is planning to release an application note in Q4 2012 that will expand this white paper and cover these techniques in detail.
View full article
Dithering Implementation for Eink Display Panel
by Daiyu Ko, Freescale

1. Dithering
a. Dithering in digital image processing
Dithering is a technique used in computer graphics to create the illusion of color depth in images with a limited color palette (color quantization). In a dithered image, colors not available in the palette are approximated by a diffusion of colored pixels from within the available palette. The human eye perceives the diffusion as a mixture of the colors within it (see color vision). Dithered images, particularly those with relatively few colors, can often be distinguished by a characteristic graininess or speckled appearance.

Figure 1. Original photo; note the smoothness in the detail.
http://en.wikipedia.org/wiki/File:Dithering_example_undithered_web_palette.png
Figure 2. Original image using the web-safe color palette with no dithering applied. Note the large flat areas and loss of detail.
http://en.wikipedia.org/wiki/File:Dithering_example_dithered_web_palette.png
Figure 3. Original image using the web-safe color palette with Floyd–Steinberg dithering. Note that even though the same palette is used, the application of dithering gives a better representation of the original.

b. Applications
Display hardware, including early computer video adapters and many modern LCDs used in mobile phones and inexpensive digital cameras, shows a much smaller color range than more advanced displays. One common application of dithering is to more accurately display graphics containing a greater range of colors than the hardware is capable of showing. For example, dithering might be used to display a photographic image containing millions of colors on video hardware that is only capable of showing 256 colors at a time. The 256 available colors would be used to generate a dithered approximation of the original image. Without dithering, the colors in the original image would simply be "rounded off" to the closest available color, resulting in a new image that is a poor representation of the original. Dithering takes advantage of the human eye's tendency to "mix" two colors in close proximity to one another. Since an Eink panel shows grayscale images only, we can use a dithering algorithm to reduce the grayscale level, even down to black/white only, and still get a better visual result.

c. Algorithm
There are several algorithms designed to perform dithering. One of the earliest, and still one of the most popular, is the Floyd–Steinberg dithering algorithm, developed in 1975. One of the strengths of this algorithm is that it minimizes visual artifacts through an error-diffusion process; error-diffusion algorithms typically produce images that more closely represent the original than simpler dithering algorithms.
(Example images: Original, Threshold, Bayer (ordered) dithering.)

Error-diffusion dithering is a feedback process that diffuses the quantization error to neighboring pixels.
- Floyd–Steinberg dithering only diffuses the error to the neighboring pixels. This results in very fine-grained dithering.
- Jarvis, Judice and Ninke dithering also diffuses the error to pixels one step further away. The dithering is coarser but has fewer visual artifacts. It is slower than Floyd–Steinberg dithering because it distributes errors among 12 nearby pixels instead of 4.
- Stucki dithering is based on the above, but is slightly faster. Its output tends to be clean and sharp.
(Example images: Floyd–Steinberg; Jarvis, Judice & Ninke; Stucki.)

Error-diffusion dithering (continued):
- Sierra dithering is based on Jarvis dithering, but is faster while giving similar results.
- Filter Lite is an algorithm by Sierra that is much simpler and faster than Floyd–Steinberg, while still yielding similar (according to Sierra, better) results.
- Atkinson dithering, developed by Apple programmer Bill Atkinson, resembles Jarvis dithering and Sierra dithering, but is faster. Another difference is that it does not diffuse the entire quantization error, but only three quarters of it. It tends to preserve detail well, but very light and dark areas may appear blown out.
(Example images: Sierra, Sierra Lite, Atkinson.)

2. Eink display panel characteristics
a. Low resolution
Eink has only a couple of resolution modes for display:
DU      (1-bit, black/white)
GC4     (2-bit, grayscale)
GC16    (4-bit, grayscale)
A2      (1-bit, black/white, fast update mode)
b. Slow update time
For an 800x600 panel (per frame):
DU      300ms
GC4     450ms
GC16    600ms
A2      125ms

3. Effect of dithering on an Eink display panel
a. Low resolution with better visual quality
By dithering the original grayscale image we get a better-looking result. Even when the image becomes black and white only, the dithering algorithm preserves the feeling of a grayscale image.
b. Faster update with Eink's animation waveform
Since the DU/A2 modes update the Eink panel faster than the grayscale modes, with dithering we not only get a better-looking result, we can also use the DU/A2 fast update modes to show animation or even normal video files.

4. Our current dithering implementation
a. Choose a simple and effective algorithm
Considering the Eink panel's characteristics, we compared a couple of dithering algorithms and decided to use the Atkinson dithering algorithm. It is simple and the result is better, especially for the Eink black/white display case (a minimal sketch of the algorithm is given at the end of this article).
b. Heavy optimization so that dithering does not affect the update time too much
Because the Atkinson dithering algorithm is simple, we could also put a lot of effort into optimization to reduce the dithering processing time and make it practical for actual use.
c. Current algorithm performance and result
Currently, with the Atkinson dithering algorithm, our processing time is about 70ms.

5. Availability
a. We implemented both Y8->Y1 and Y8->Y4 dithering with the same dithering algorithm.
b. It is implemented in our EPDC driver in the i.MX6SL Linux 3.0.35 release.
c. It is also implemented in our Video for Eink demo.

6. References
a. Part of the dithering introduction is from www.wikipedia.org
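The production EPDC code is not reproduced here; as an illustration of the algorithm chosen above, a minimal Y8-to-Y1 Atkinson dithering sketch in C could look like the following (the buffer layout, the 128 threshold and the function names are assumptions for illustration, and the real driver code is optimized very differently):

#include <stdint.h>

/* add a share of the quantization error to one neighbouring pixel,
 * clamping to the 0..255 range and skipping out-of-bounds neighbours */
static void diffuse(uint8_t *img, int w, int h, int x, int y, int e)
{
        if (x < 0 || x >= w || y >= h)
                return;
        int v = img[y * w + x] + e;
        img[y * w + x] = (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
}

/* convert an 8-bit grayscale buffer to black/white in place,
 * diffusing 6/8 (three quarters) of the error, Atkinson style */
static void atkinson_y8_to_y1(uint8_t *img, int w, int h)
{
        for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                        int old = img[y * w + x];
                        int bw  = (old < 128) ? 0 : 255;
                        int err = (old - bw) / 8;     /* each neighbour gets 1/8 */

                        img[y * w + x] = (uint8_t)bw;

                        diffuse(img, w, h, x + 1, y,     err);
                        diffuse(img, w, h, x + 2, y,     err);
                        diffuse(img, w, h, x - 1, y + 1, err);
                        diffuse(img, w, h, x,     y + 1, err);
                        diffuse(img, w, h, x + 1, y + 1, err);
                        diffuse(img, w, h, x,     y + 2, err);
                }
        }
}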
View full article
Note: All these GStreamer pipelines have been tested on an i.MX6Q board with kernel version 3.0.35-2026-geaaf30e.

Tools:
gst-launch
gst-inspect

FSL Pipeline Examples:
GStreamer i.MX6 Decoding
GStreamer i.MX6 Encoding
GStreamer Transcoding and Scaling
GStreamer i.MX6 Multi-Display
GStreamer i.MX6 Multi-Overlay
GStreamer i.MX6 Camera Streaming
GStreamer RTP Streaming

Other plugins:
GStreamer ffmpeg
GStreamer i.MX6 Image Capture
GStreamer i.MX6 Image Display

Misc:
Testing GStreamer
Tracing GStreamer Pipelines
GStreamer miscellaneous
View full article
Detailed Features List of i.MX35 PDK board
i.MX35 CPU Card
Additional Resources
i.MX35 PDK Board Flashing SD Card
i.MX35 PDK Board Flashing NAND
i.MX35 PDK Linux Booting SD
Loading Redboot Binary Directly to RAM
Fixing Redboot RAM Bug
Fixing Redboot RAM bug (CSD1 not activated)
View full article
If you are a Windows user and don't want to install Linux on your machine, VMware is a virtual machine solution for running Linux under Windows. It's a good way to start with Linux (if you're unfamiliar with it) and also to start your i.MX development.

Installing VMware

- VMware Workstation [VMWare Workstation (Click here to go to Download page)]
VMware Workstation is available in commercial and trial versions. With Workstation it is possible to create your own installation image, installing a new operating system just as you would install it on a new machine.

- VMware Player [VMWare Player (Click here to go to Download page)]
VMware Player is available as a free version. With Player it is only possible to run previously made images.

- VMware Images at the ThoughtPolice site [ThoughtPolice site (Click here to go to Download page)]
This site has many ready-made VMware images for many Linux distributions. They just need to be downloaded and unzipped, and they are ready to be used with VMware Workstation or Player.
View full article
On Host:
libx11-dev libpng-dev libjpeg-dev libxext-dev x11proto-xext-dev qt3-dev-tools-embedded libxtst-dev

On Target (i.MX device):
alsa-utils libpng tslib
View full article
OpenGL
OpenGL is the premier environment for developing portable, interactive 2D and 3D graphics applications. Since its introduction in 1992, OpenGL has become the industry's most widely used and supported 2D and 3D graphics application programming interface (API), bringing thousands of applications to a wide variety of computer platforms. OpenGL fosters innovation and speeds application development by incorporating a broad set of rendering, texture mapping, special effects, and other powerful visualization functions. Developers can leverage the power of OpenGL across all popular desktop and workstation platforms, ensuring wide application deployment.
Source: http://www.opengl.org/about/overview/

On i.MX processors, OpenGL takes advantage of the GPU (Graphics Processing Unit) block to improve 3D performance.

Installing and running demos
Get more information on how to install and run demos using OpenGL on the i.MX31 in this application note: AN3723 - Using OpenGL Applications on the i.MX31 ADS Board - http://www.freescale.com/files/dsp/doc/app_note/AN3723.pdf?fsrch=1

Develop a simple OpenGL ES 2.0 application under Linux
This tutorial shows how to develop a simple OpenGL ES 2.0 application with LTIB and an i.MX51 EVK board. The tutorial can be adapted to the i.MX53, which shares the same 3D GPU core (Z430) and API (OpenGL ES 2.0).
View full article
NFS
Network File System (NFS) is a network file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a network as easily as if the network devices were attached to its local disks. Using NFS makes developing user-space applications easy and fast, since the entire target root file system is located on the host (PC), where applications can be developed and cross-compiled for the target system. The target uses this file system located on the host as if it were located on the target itself; the NFS service exports the root file system from the host to the target.

NFS resources are listed below:
All Boards Deploy NFS
All Boards NFS on Fedora / NFS on Fedora
All Boards NFS on Slackware / NFS on Slackware
All Boards NFS on Ubuntu / NFS on Ubuntu
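As a rough illustration of this setup (the paths, IP addresses, console device and export options below are assumptions; adjust them to your host distribution and BSP), the host exports the root file system and the target mounts it as its rootfs through kernel boot arguments:

# /etc/exports on the host: export the rootfs directory to the target subnet
/home/user/rootfs   192.168.1.0/24(rw,no_root_squash,sync,no_subtree_check)

# reload the export table on the host
exportfs -ra

# example kernel boot arguments for the target, e.g. set from U-Boot
setenv bootargs console=ttymxc0,115200 root=/dev/nfs rw ip=dhcp nfsroot=192.168.1.10:/home/user/rootfs,v3,tcp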
View full article
On power-up of a system, the bootloader performs initial hardware configuration and is responsible for loading the Linux kernel into memory. Several bootloaders are available which support i.MX SoCs:
Barebox (http://www.barebox.org/)
RedBoot (http://ecos.sourceware.org/redboot/)
U-Boot (http://www.denx.de/wiki/U-Boot/)
Qi (http://wiki.openmoko.org/wiki/Qi)
View full article