i.MX Processors Knowledge Base

Use the EPDC Unit Test to exercise your EPD panel

1. Introduction

The i.MX 6Solo/6DualLite and i.MX 6SoloLite processors contain an Electrophoretic Display Controller (EPDC) designed to drive E-INK(TM) EPD panels supporting a wide variety of TFT backplanes. Detailed information about the EPDC is available in the i.MX 6Solo/6DualLite and i.MX 6SoloLite Reference Manuals. Information about programming the EPDC and integrating support for a particular panel into the Linux BSP is provided in the Linux BSP Reference Manual.

Now that you have your panel hardware and software integrated, how can you tell if it is all working? Or perhaps you're using a standard panel that doesn't seem to be working - how can you test it at the lowest level? The answer is the EPDC Unit Test.

2. Running the EPDC Unit Test

Most i.MX processor hardware modules have an associated unit test in the unit_tests directory of the root file system image. The basics of running the EPDC unit test are simple. After logging in as root on the debug console, change to the unit tests directory and execute the test as follows:

$ cd /unit_tests
$ ./mxc_epdc_fb_test.out

Without any arguments, the unit test runs thirteen individual tests and takes about 8 minutes to complete. You will get a lot of output on the panel, and it may not be obvious that the images are displaying correctly. Running the unit test with no arguments is a nice automated way to make sure your panel is at least displaying something and is not causing a crash or hang. But to really verify correctness, each test should be run individually and the output carefully reviewed. This can be accomplished by passing a test number option as follows:

$ ./mxc_epdc_fb_test.out -n 1

To show a list of all the available options, as well as a list of the individual test numbers, use the "-h" argument. The help output is shown below.

$ ./mxc_epdc_fb_test.out -h
EPDC framebuffer driver test program.
Usage: mxc_epdc_fb_test [-h] [-a] [-p delay] [-u s/q/m] [-n <exp>]
        -h        Print this message
        -a        Enable animation waveforms for fast updates (tests 8-9)
        -p        Provide a power down delay (in ms) for the EPDC driver
                  0 - Immediate (default)
                  -1 - Never
                  <ms> - After <ms> milliseconds
        -u        Select an update scheme
                  s - Snapshot update scheme
                  q - Queue update scheme
                  m - Queue and merge update scheme (default)
        -n        Execute the tests specified in expression
                  Expression is a set of comma-separated numeric ranges
                  If not specified, tests 1-13 are run
Example: ./mxc_epdc_fb_test.out -n 1-3,5,7
EPDC tests:
1 - Basic Updates
2 - Rotation Updates
3 - Grayscale Framebuffer Updates
4 - Auto-waveform Selection Updates
5 - Panning Updates
6 - Overlay Updates
7 - Auto-Updates
8 - Animation Mode Updates
9 - Fast Updates
10 - Partial to Full Update Transitions
11 - Test Pixel Shifting Effect
12 - Colormap Updates
13 - Collision Test Mode
14 - Stress Test

In addition to "-n" to run individual tests, the "-a" and "-u" options are provided to set the animation mode and the update scheme used, respectively. These options make sense only for some of the individual tests, as noted in the next section. The "-u s" (snapshot) scheme purposely does not allow pending updates, so some of the tests will cause the driver to issue the following warning in snapshot mode:

imx_epdc_fb: No free intermediate buffers available.
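The options can also be combined. For example, the following (illustrative) invocation runs the animation-mode tests with the queue update scheme and a 1-second power-down delay:

$ ./mxc_epdc_fb_test.out -u q -p 1000 -a -n 8-9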
The "-p" (power down delay) option allows you to specify how soon the device driver automatically powers down the controller after all pending updates have completed. The default is 0 (zero), meaning power down immediately. Use "-1" to disable power down (i.e. never power down). Sidebar: You can use another unit test, "dump-clocks.sh", to view the state of the EPDC clocks (i.e. the power state of the module). In the example below, a "1" in the third column indicates that the clock is running; a "0" indicates the clock is gated: $ ./dump-clocks.sh | grep epdc epdc_pix_clk             pll5_video_main_clk       1   26666667 epdc_axi_clk             pll2_pfd2_400M             1  198000000 3.  Individual Test Details Important! The notes below assume you are running the version of the unit test binary in the tar file attached to this How-to. Please extract the file mxc_epdc_fb_tests.out from the tar file into your rootfs unit_tests directory before running the test. Most tests begin with a screen blank operation (all white pixels). Each test works with all three update schemes (snapshot, queue, queue and merge) unless otherwise noted. 3.1 Test 1: Basic Updates Draws the following patterns: Crosshatch Squares Text Ginger image Colorbar 3.2 Test 2: Rotation Updates Draws text, squares, crosshatch, and square outlines in all rotation modes: No rotation Clockwise (90 degrees) Upside down (180 degrees) Counterclockwise (270 degrees) The square outlines are drawn first in RGB format and then in Y8 format. 3.3 Test 3: Grayscale Framebuffer Updates Draws a top half black screen and then a colorbar, rotated clockwise. First draws in normal Y8 format and then in inverted Y8 format. 3.4 Test 4: Auto-waveform Selection Updates Draws several small squares using auto-waveform selection. Note: unlike the i.MX508 EPDC driver, the i.MX 6 driver does not report the final waveform selection to user-level applications. This test also verifies that squares at non-8-bit aligned pixels can be drawn. 3.5 Test 5: Panning Updates Draws a colorbar to an off-screen region. Draws the Ginger image to the frame buffer. Sets the focus (pan) to the colorbar, then updates portions of the screen. You should see the colorbar poke through. Slowly pans from Ginger to the entire colorbar. Use pan to flip between black and white buttons. Flashes buttons using pixel inversion. Flashes buttons using panning. 3.6 Test 6: Overlay Updates Switches between the frame buffer (FB) and the alternate (overlay) frame buffer as follows: Draws Ginger to the FB. Draws the colorbar to the alternate frame buffer. Shows the FB (Ginger). Shows the alternate FB (colorbar). Shows FB again (Ginger). Shows half FB, half alternate FB. Shows cutout region of alternate FB. Shows cutout in upper corner. Shows black screen. Shows clockwise-rotated text overlay in center. 3.7 Test 7: Auto-Updates Important! The auto-update mechanism must be enabled in the kernel configuration for this test to work. Enable it using the LTIB Kernel configuration menu item "Device Drivers->Graphics support->E-Ink Auto-update Mode Support." Also, please check the Linux BSP Release Notes for any issues with this feature. 3.8 Test 8: Animation Mode Updates Shows how normal (gray level) and monochrome (black and white) updates compare in appearance and performance. Quickly flashes back and forth between black and white screens. Draws normal squares. Draws black and white squares. Draws Ginger in gray scale and monochrome. Draws colorbar in gray scale and monochrome. 
- Draws a normal Y8 colorbar and a monochrome colorbar.
- Draws an inverted normal Y8 colorbar and an inverted monochrome colorbar.

You can run this test in animation mode using the "-a" command line option. Animation mode only updates monochrome pixels, so you will notice a drawing speed improvement. You will also see the implementation of one of the "rules" of using this mode: the screen must be blanked (all white or all black) when switching in and out of animation mode.

3.9 Test 9: Fast Updates

Animates a square across the screen and down one side. This test can also be run in animation mode using "-a". See the animation mode notes in the Test 8 description above.

3.10 Test 10: Partial to Full Update Transitions

Draws small gray squares using separate updates in partial update mode (only pixels that change are updated). Then re-draws the entire screen in one update using full update mode. In full update mode, you will notice the entire screen transition from black to the final gray value.

3.11 Test 11: Test Pixel Shifting Effect

Draws a short, two-pixel line and then sends two updates one pixel apart in distance and 5 seconds apart in time. Nothing much to see here; this just verifies that a one-pixel update shift works.

3.12 Test 12: Colormap Updates

Creates a colormap and uses it to draw full-screen blanks and to draw a colorbar. In this test, you should follow along with the text printed on the debug console and verify each state:

Screen should be black.
Screen should be white now.
Screen should still be white.
Should be inverted color bar (white to black, left to right).
Colorbar again, with no CMAP (black to white, left to right).
Posterized colorbar.

In the above output, "CMAP" means colormap, and a "Posterized colorbar" is a colorbar drawn from only black and white components.

3.13 Test 13: Collision Test Mode

Draws two overlapping rectangles. Tests for collision on the first rectangle, which should result in the message:

Collision test result = 0

Then draws the same overlapping rectangles, this time testing for collision on the second rectangle. The result should be:

Collision test result = 1

Note: This test cannot be run using the snapshot scheme. If you try to use "-u s", it will print a message and return.

3.14 Test 14: Stress Test

Draws thousands of random rectangles on the screen in different rotations. Runs for about 8 minutes. This test must be explicitly specified on the command line; it does not run by default. For example:

$ ./mxc_epdc_fb_test.out -n 14

Note: This test cannot be run using the snapshot scheme. If you try to use "-u s", it will print a message and return.

4. Customizing

A great way to know what's really going on in each test is to look at the source code. You may want to customize the source as well - say, to add a new test that exercises some cool feature of your panel. Fortunately, the source is included in the BSP distribution so you can extract, customize, and rebuild it as needed. The magic of LTIB is beyond the scope of this article, but here are some hints:

# Extract the unit test source from the imx-test package:
$ ./ltib -m prep -p imx-test
# Source is now in rpm/BUILD/imx-test-12.08.00/test/mxc_fb_test/mxc_epdc_fb_test.c
# (your package version number may be different).
# Build from source:
$ ./ltib -m scbuild -p imx-test
# Deploy all unit test binaries to the ltib rootfs directory:
$ ./ltib -m scdeploy -p imx-test
# Alternatively, you can copy just the EPDC unit test binary as follows:
$ sudo cp rpm/BUILD/imx-test-12.08.00/platform/IMX6S/mxc_epdc_fb_test.out rootfs/unit_tests/

As noted in the Individual Test Details section, you should replace the file mxc_epdc_fb_test.c referenced above with the one found in the tar file attached to this How-to.

5. Something is wrong!

Of course there are many things that can go wrong that are beyond the scope of this article, but assuming your hardware is working and the panel options are correctly configured in the BSP (again, beyond our scope), here are some things to check:

- Is the kernel configured correctly for EPDC support? Check that the following item is enabled in the LTIB kernel configuration menu: "Device Drivers->Graphics support->E-Ink Panel Framebuffer."
- Are the U-Boot kernel command line arguments set correctly for EPDC? For example, "video=mxcepdcfb:E060SCM consoleblank=0". The "consoleblank=0" argument is useful to prevent entering low-power suspend mode while you are testing.
- Does the "Tux" image display correctly on your panel after system boot? If so, the EPDC driver was at least able to perform the initial framebuffer update to your panel. Note: for Tux to display at boot, you need to disable LCDIF support at "Device Drivers->Graphics support->Support MXC ELCDIF framebuffer."
- Are there any EPDC-related error messages in the kernel log after boot? You can check with "dmesg | grep epdc".
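The checks above can be run as a quick sanity sequence on the target; this is just a sketch using standard commands:

$ cat /proc/cmdline        # confirm video=mxcepdcfb:... and consoleblank=0 are present
$ dmesg | grep -i epdc     # look for EPDC probe or update errors
$ ls /dev/fb*              # the EPDC frame buffer device node(s) should exist
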
In some cases it is desirable to have progressive content directly available from a TV-IN interface through the V4L2 capture device. In the BSP, HW-accelerated de-interlacing is only supported in the V4L2 output stream. Below is a patch created against a rather old BSP version that adds support for de-interlaced V4L2 capture. The patch might need to be adapted to newer BSPs; however, the logic and functionality are there and should shorten the development time.

This patch adds another input device to the V4L2 framework that can be selected to perform the de-interlacing on the way to memory. The selection is done by passing the index "2" as an argument to the VIDIOC_S_INPUT V4L2 ioctl.

Attached is also a modified tvin unit test to give an example of how to use the new driver. An example sequence for running the test is as follows:

modprobe mxc_v4l2_capture
./mxc_v4l2_tvin_vdi.out -ow 720 -oh 480 -ol 10 -ot 20 -f YU12

Some key things to note:
- This driver does not support resize or color space conversion on the way to memory. The requested format and size should match what can be provided directly by the sensor.
- The driver was tested on a Sabre AI Rev A board running Linux 12.02.
- This code is not an official delivery, and as such no guarantee of support for this code is provided by Freescale.
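For reference, selecting the de-interlacing input from application code amounts to a single ioctl. This is a minimal sketch; the capture node /dev/video0 is an assumption and depends on your board:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

int main(void)
{
    int fd = open("/dev/video0", O_RDWR);  /* capture node; adjust for your board */
    int input = 2;                         /* index 2 selects the de-interlacing input added by the patch */

    if (fd < 0 || ioctl(fd, VIDIOC_S_INPUT, &input) < 0) {
        perror("VIDIOC_S_INPUT");
        return 1;
    }
    /* ... continue with the usual VIDIOC_S_FMT / VIDIOC_REQBUFS capture setup ... */
    return 0;
}
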
The original implementation is from Frias Renato for the Sabreauto board.

How do we define the boot time? The boot time measured here is from the moment the board is powered up until the main application is working and visible to the end user. For example, for a media-playback board, the boot time is counted up to the moment the first video frame is shown on the screen.

To minimize the boot time, several methods were tried:
- Optimizing for performance.
- Removing unnecessary modules at boot time.
- Starting the main application first, immediately after the kernel boots.

Optimizing for performance:

U-Boot:
  1: Enable the MMU and L2 cache.
  2: Optimize memset and memcpy.
  3: Implement SDMA to accelerate copying data from NOR flash to memory.
  4: Implement uSDHC's ADMA to improve SD card read performance.

Kernel:
  1: Optimize the _memcpy_fromio function in arch/arm/kernel/io.c.

Removing unnecessary modules:

U-Boot:
  1: Disable UART output during the U-Boot procedure and add the quiet parameter to the kernel boot arguments.
  2: Remove the boot-up delay in U-Boot.
  3: Disable I2C, SPI, and SPLASH_SCREEN in U-Boot.

Kernel: The module removals below are only for a Sabresd board booting from SD card and running a MIPI camera overlay on an LVDS screen; for other boards and applications, please do not use them directly.
  1: Modify arch/arm/mach-mx6/board-mx6q_sabresd.c to keep only the necessary module initialization in mx6_sabresd_board_init: iomux configuration, uart0, vdoa, ipu, ldb, v4l2, mipi-csi, i2c1, uSDHC1, pwm0, dvfs, and the MIPI camera clock.
  2: Update the Linux kernel configuration file. Keep only the necessary modules and configuration options to minimize the size. Build non-essential modules as external modules instead of into the kernel itself. Create the uImage from Image instead of zImage to avoid the kernel self-extraction time. Use the ext4 file system on the SD card to accelerate rootfs mounting.

Notice: This kernel configuration removes NETWORK support, which includes Unix domain sockets. The udev mechanism needs them, so this kernel configuration cannot support udev-managed dynamic /dev/ nodes; all /dev/ nodes must be created in the rootfs before boot.

Starting the main application first, immediately after the kernel boots:

In the normal boot procedure, the init process handles the sysinit script first; this script does initialization and preparation for most user processes, but it normally takes about 1-5 seconds to execute. Instead, we start the main application before sysinit, and the preparation the main application needs is handled inside the application itself. See the following example for a MIPI camera overlay on an LVDS screen:

/etc/inittab
::once:/unit_tests/mxc_v4l2_overlay.out -iw 640 -ih 480 -it 0 -il 0 -ow 1024 -oh 768 -ot 0 -ol 0 -r 0 -t -1 -d 0 -fg -fr 30
::once:/etc/rc.d/rcS
::once:/sbin/getty -L ttymxc0 115200 vt100 # GENERIC_SERIAL

Test result of fast boot on the Sabresd board for a MIPI camera overlay on an LVDS screen: the main application is executed about 958 ms after the board is powered up.
Running Bootloader
[0.356417 0.356417] [    0.046637] _regulator_get: get() with no identifier
[0.958425 0.602008] starting pid 21, tty '': '/unit_tests/mxc_v4l2_overlay.out -iw 640 -ih 480 -it 0 -il 0 -ow 1024 -oh 768 -ot 0 -ol 0 -r 0 -t -^@
[0.969609 0.011184] starting pid 22, tty '': '/etc/rc.d/rcS'
[0.973368 0.003759] g_display_width = 1024, g_display_height = 768
[0.977540 0.004172] g_display_top = 0, g_display_left = 0
[0.980927 0.003387] starting pid 23, tty '': '/sbin/getty -L ttymxc0 115200 vt100 '
[1.048454 0.067527] Mounting /proc and /sys
[1.089526 0.041072] Setting the hostname to freescale
[1.116635 0.027109] Mounting filesystems
[1.527320 0.410685] sensor chip is ov5640_mipi_camera
[1.530627 0.003307] sensor frame size is 640x480
[1.533482 0.002855] sensor frame format is UYVY
[1.640221 0.106739] frame_rate is 30
[1.642249 0.002028]
[1.642270 0.000021] frame buffer width 0, height 0, bytesperline 0
[1.989728 0.347458]
[1.990761 0.001033] arm-none-linux-gnueabi-gcc (Freescale MAD -- Linaro 2011.07 -- Built at 2011/08/10 09:20) 4.6.2 20110630 (prerelease)
[2.001161 0.010400] root filesystem built on Tue, 11 Sep 2012 11:43:24 +0800
[2.006249 0.005088] Freescale Semiconductor, Inc.
[2.009394 0.003145]
[2.009531 0.000137] freescale login:

Please see the fast boot video below. Sample code for U-Boot and the kernel is also attached for reference. The patch code is based on L3.0.35_12.09.01_GA.
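As a reference for the uImage step above, a uImage can be generated from the uncompressed Image with U-Boot's mkimage tool. This is a sketch; the load/entry address 0x10008000 is typical for i.MX6 boards with DDR at 0x10000000 and must match your board:

$ mkimage -A arm -O linux -T kernel -C none \
    -a 0x10008000 -e 0x10008000 -n "Linux kernel" \
    -d arch/arm/boot/Image uImage
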
Graphics are a big topic in the Android platform, comprising the Java/JNI graphic framework and the 2D/3D graphic engines (skia, OpenGL ES, RenderScript). This document describes the general Android graphic stack and UI features on Freescale devices.

1. Android Graphic Stacks

All Android 3D apps and games use the 3D graphic stack. The Android system UI and all app UIs follow the 2D graphic stack, where the hardware renderer accelerates the Android 2D UI with GPU HW OpenGL ES 2.0 to improve overall UI performance.

Hardware acceleration can be disabled on i.MX 6 in device/fsl/imx6/soc/imx6dq.mk:

USE_OPENGL_RENDERER := false

Then rebuild frameworks/base/core/jni and replace libandroid_runtime.so.

Surfaceflinger is responsible for composing all surface layers and then generating the framebuffer pixmap for the display devices; these graphic surface layers come from the 2D/3D apps. Hwcomposer is an alternative to Surfaceflinger composition with OpenGL ES; it is used to combine the specific surface layers supported by specific vendor devices. Freescale i.MX 6 devices use the GPU 2D core to combine most surface layers, and system power can be reduced by using the GPU 2D core instead of the GPU 3D core. The typical power-saving case is video playback. Hwcomposer with GPU 2D can also offload the GPU 3D core when running games and benchmarks; it has been shown to improve overall system performance by about 20%.

2. Performance Measurement

Show FPS for Android system performance:
- For NFS boot, set "debug.sf.showfps" to 1 in init.freescale.rc ("setprop debug.sf.showfps 1") and then reboot the system.
- For SD or eMMC boot, issue the command "setprop debug.sf.showfps 1" in the console, then find the system_server thread with top and kill it to reset the system.

Graphic benchmarks for 3D capability measurement:
- Quadrant: full test benchmark covering CPU, memory, IO, 2D and 3D
- GLBenchmark: http://www.glbenchmark.com/
- NenaMark2: https://market.android.com/details?id=se.nena.nenamark2
- An3DBench: http://www.androidzoom.com/android_applications/tools/an3dbench_hnog.html
- AnTutu: http://www.antutu.com/software.html
- 3DMark: http://www.futuremark.com/benchmarks/3dmark06/introduction/

Browser benchmarks:
- http://www.webkit.org/perf/sunspider/sunspider.html
- http://v8.googlecode.com/svn/data/benchmarks/current/run.html
- http://www.craftymind.com/guimark2/
- http://www.craftymind.com/factory/guimark/GUIMark_HTML4.html
- http://themaninblue.com/writing/perspective/2010/03/22/

3. Android UI Features

Dual display with same content: This feature is supported in the default image in the Android i.MX 6 release package. With this feature, an LVDS panel and HDMI output can be driven simultaneously. It is only enabled when an HDMI TV is connected to the board.

Overscan for TV devices: Some TVs may fail to display the contents of the overscan area. To avoid losing content in the overscan area, the common approach is to underscan with an adjustable black border and let the viewer adjust the width of the black border. The downscan operation is done by Surfaceflinger when it composes surfaces through HW OpenGL ES. There is no performance impact since all the work is done by the GPU HW. Overscan can be configured in the display settings in visual mode.

32-bit color depth: A 32bpp UI can be enabled by adding "bpp=32" to the kernel command line in U-Boot, for example:

setenv bootargs '… video=mxcdi1fb:RGB666,XGA,bpp=32 …'

It can also be configured in the display settings.
Enabling a 32bpp frame buffer means application surface buffers are allocated in RGBA8888 format instead of the default RGB565 format; that is, more system memory is allocated. After enabling 32bpp, if some application still does not show better UI quality, check whether it hard-codes a request for RGB565 surfaces (it should request RGBA8888 surfaces to get better quality). Sample code is attached to test for 32bpp (left is 16bpp, right is 32bpp).

Display visual settings: The display setting is an add-on feature in the FSL Android release; it makes it very convenient for end users to change display properties, mostly the following:
- Dual display enablement
- Display color depth setting (16bpp, 32bpp)
- Overscan adjustment in horizontal and vertical orientation

4. Issue Diagnosis

Application compatibility: Some Android applications may not run correctly on some Android releases. This may be an application compatibility issue, so check the application on other platforms. For example, Neocore and Asphalt 5 run on Eclair, Froyo, and Gingerbread, but do not run correctly on Honeycomb.

GPU compatibility: Some game UIs may not display correctly on our Android release. When encountering this kind of issue, check whether it is caused by the game using an OpenGL extension that our GPU does not support. You can download another data package (for example, one without the extension data) to check.

Others: Enlarge the GPU memory if you encounter abnormal UI displays after running an application for a while. Some applications need Wi-Fi connections, so monitor the console log to see whether there are any error reports.
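For the "enlarge the GPU memory" suggestion above, the reserved GPU memory pool on i.MX 6 can be set from the kernel command line; the size below is only an example and must fit your DDR budget:

setenv bootargs ${bootargs} gpumem=128M
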
Trace malloc and expose invalid accesses to freed memory

Introduction

Bionic libc has a malloc debug framework that supports different debuggers. Each debugger is provided as a library that overrides the default malloc/free/calloc/realloc and hooks in before calling the real functions.

NOTE: This tip assumes that you are working with an eng or userdebug build of the platform, not on a production device.

Trace the low-level malloc/free in bionic

Bionic has a malloc debugger called the leak debugger, which can record all low-level malloc/free calls. Developers can use DDMS on the host to check each block of memory allocated on the heap by malloc, and DDMS supports converting caller function addresses to names. That makes it easy to check which component and which function allocated how much memory.

You can turn on memory tracking with debug level 1:

$ adb shell setprop libc.debug.malloc 1
$ adb shell stop
$ adb shell start

You need to restart the runtime so that zygote and all processes launched from it are restarted with the property set. Now all Dalvik processes have memory tracking turned on. You can look at these with DDMS, but first you need to turn on its native memory UI:

- Open ~/.android/ddms.cfg
- Add a line "native=true"

Upon relaunching DDMS and selecting a process, you can switch to the new native allocation tab and populate it with a list of allocations. This is especially useful for debugging memory leaks.

NOTE: To resolve the module symbols, export these two environment variables on the HOST:

$ export PATH=$PATH:<android src>/prebuilt/linux-x86/toolchain/arm-linux-androideabi-4.4.x/bin
$ export ANDROID_PRODUCT_OUT=<android src>/out/target/product/<platform>

Expose memory accesses to freed areas

In this release, Electric Fence 2.2.0 has been ported to Android. It is also a memory debugger, like the leak debugger above. It helps you detect two common programming bugs: software that overruns the boundaries of a malloc() memory allocation, and software that touches a memory allocation that has been released by free(). It dumps the call stack and the mmap of the process if malloc/free is not called with correct parameters, and it raises a segmentation fault when an application accesses an address that has already been freed.

The usage of efence is almost the same as above; its debug level is 15:

$ adb shell setprop libc.debug.malloc 15

After setting this property, you can run your applications, and if there is any invalid memory access, logcat will show the information.
Example: Access memory which has already been freed:

root@android:/data # setprop libc.debug.malloc 15
I/libc    ( 4136): setprop using MALLOC_DEBUG = 1 (leak checker)
root@android:/data # ./memtest
I/libc    ( 4138): ./memtest using MALLOC_DEBUG = 15 (efence)
F/libc    ( 4138): Fatal signal 11 (SIGSEGV) at 0x4015bff4 (code=2)
I/DEBUG  ( 3856): *** *** *** *** *** *** *** *** *** *** *** *** *** *** *** ***
I/DEBUG  ( 3856): Build fingerprint: 'freescale/sabresd_6dq/sabresd_6dq:4.0.4/R13.5-rc1/eng.b03824.20120711.111338:eng/test-keys'
I/DEBUG  ( 3856): pid: 4138, tid: 4138  >>> ./memtest <<<
I/DEBUG  ( 3856): signal 11 (SIGSEGV), code 2 (SEGV_ACCERR), fault addr 4015bff4
I/DEBUG  ( 3856):  r0 00000000  r1 00000002  r2 40151c6c  r3 00000000
I/DEBUG  ( 3856):  r4 4015bff4  r5 bec75a54  r6 00000001  r7 bec75a5c
I/DEBUG  ( 3856):  r8 00000000  r9 00000000  10 00000000  fp 00000000
I/DEBUG  ( 3856):  ip 00000000  sp bec75a20  lr 40133f8f  pc 0000a554  cpsr 60000010
I/DEBUG  ( 3856):  d0  203f810033915fe5  d1  0000000000000000
I/DEBUG  ( 3856):  d2  0000000000000000  d3  0000000000000000
I/DEBUG  ( 3856):  d4  0000000000000000  d5  0000000000000000
I/DEBUG  ( 3856):  d6  0000000000000000  d7  204cb48d00000000
I/DEBUG  ( 3856):  d8  0000000000000000  d9  0000000000000000
I/DEBUG  ( 3856):  d10 0000000000000000  d11 0000000000000000
I/DEBUG  ( 3856):  d12 0000000000000000  d13 0000000000000000
I/DEBUG  ( 3856):  d14 0000000000000000  d15 0000000000000000
I/DEBUG  ( 3856):  d16 41c0265a46a47ae1  d17 3f50624dd2f1a9fc
I/DEBUG  ( 3856):  d18 41c9c8aff2800000  d19 0000000000000000
I/DEBUG  ( 3856):  d20 0000000000000000  d21 0000000000000000
I/DEBUG  ( 3856):  d22 0000000000000000  d23 0000000000000000
I/DEBUG  ( 3856):  d24 0000000000000000  d25 0000000000000000
I/DEBUG  ( 3856):  d26 0000000000000000  d27 0000000000000000
I/DEBUG  ( 3856):  d28 0000000000000000  d29 0000000000000000
I/DEBUG  ( 3856):  d30 0000000000000000  d31 0000000000000000
I/DEBUG  ( 3856):  scr 00000010
I/DEBUG  ( 3856):
I/DEBUG  ( 3856):          #00  pc 0000a554  /data/memtest
I/DEBUG  ( 3856):          #01  pc 00016834  /system/lib/libc.so (__libc_init)
I/DEBUG  ( 3856):
I/DEBUG  ( 3856): code around pc:
I/DEBUG  ( 3856): 0000a534 e3a0000a ebfff8d1 e3a01001 e1a00004  ................
I/DEBUG  ( 3856): 0000a544 e5c4100a ebfff8d9 e3560001 e3a03000  ..........V..0..
I/DEBUG  ( 3856): 0000a554 e5c43000 0a000064 e59f21f0 e2857004  .0..d....!...p..
I/DEBUG  ( 3856): 0000a564 e5954004 e08f1002 e1a00004 ebfff8c0  .@..............
I/DEBUG  ( 3856): 0000a574 e3500000 0a00003f e59fc1d4 e1a00004  ..P.?...........
I/DEBUG  ( 3856):
I/DEBUG  ( 3856): code around lr:
I/DEBUG  ( 3856): 40133f6c 68200701 0001f020 454e1046 2e00d018  .. h ...F.NE....
I/DEBUG  ( 3856): 40133f7c 2102da01 1c81e000 43394338 f7eb4622  ...!....8C9C"F..
I/DEBUG  ( 3856): 40133f8c 4605ed56 d1ec2800 da062e00 0101f008  V..F.(..........
I/DEBUG  ( 3856): 40133f9c f06f4620 f7ea4200 4628e8d4 87f0e8bd  Fo..B....(F....
I/DEBUG  ( 3856): 40133fac eff0f7e9 6003234b 30fff04f bf00e7f6  ....K#.`O..0....
I/DEBUG  ( 3856):
I/DEBUG  ( 3856): memory map around addr 4015bff4:
I/DEBUG  ( 3856): 40150000-4015a000
I/DEBUG  ( 3856): 4015a000-4015d000
I/DEBUG  ( 3856): 4015d000-4015e000
I/DEBUG  ( 3856):
I/DEBUG  ( 3856): stack:
I/DEBUG  ( 3856):    bec759e0  00000000
I/DEBUG  ( 3856):    bec759e4  00000000
I/DEBUG  ( 3856):    bec759e8  00000000
I/DEBUG  ( 3856):    bec759ec  4012090f  /system/lib/libefence.so
I/DEBUG  ( 3856):    bec759f0  4015a014
I/DEBUG  ( 3856):    bec759f4  00002000
I/DEBUG  ( 3856):    bec759f8  00000004
I/DEBUG  ( 3856):    bec759fc  40120df1  /system/lib/libefence.so
I/DEBUG  ( 3856):    bec75a00  4015bff4
I/DEBUG  ( 3856):    bec75a04  bec75a54  [stack]
I/DEBUG  ( 3856):    bec75a08  00000001
I/DEBUG  ( 3856):    bec75a0c  bec75a5c  [stack]
I/DEBUG  ( 3856):    bec75a10  00000000
I/DEBUG  ( 3856):    bec75a14  400d9167  /system/lib/libc.so
I/DEBUG  ( 3856):    bec75a18  df0027ad
I/DEBUG  ( 3856):    bec75a1c  00000000
I/DEBUG  ( 3856): #00 bec75a20  00008924  /data/memtest
I/DEBUG  ( 3856):    bec75a24  bec75a54  [stack]
I/DEBUG  ( 3856):    bec75a28  00000001
I/DEBUG  ( 3856):    bec75a2c  bec75a5c  [stack]
I/DEBUG  ( 3856):    bec75a30  00000000
I/DEBUG  ( 3856):    bec75a34  400d9837  /system/lib/libc.so
I/DEBUG  ( 3856): #01 bec75a38  00000000
I/DEBUG  ( 3856):    bec75a3c  00000000
I/DEBUG  ( 3856):    bec75a40  00000000
I/DEBUG  ( 3856):    bec75a44  00000000
I/DEBUG  ( 3856):    bec75a48  00000000
I/DEBUG  ( 3856):    bec75a4c  b00046ef  /system/bin/linker
I/DEBUG  ( 3856):    bec75a50  00000001
I/DEBUG  ( 3856):    bec75a54  bec75b79  [stack]
I/DEBUG  ( 3856):    bec75a58  00000000
I/DEBUG  ( 3856):    bec75a5c  bec75b83  [stack]
I/DEBUG  ( 3856):    bec75a60  bec75b8f  [stack]
I/DEBUG  ( 3856):    bec75a64  bec75ba2  [stack]
I/DEBUG  ( 3856):    bec75a68  bec75bc5  [stack]
I/DEBUG  ( 3856):    bec75a6c  bec75bde  [stack]
I/DEBUG  ( 3856):    bec75a70  bec75c08  [stack]
I/DEBUG  ( 3856):    bec75a74  bec75c20  [stack]
I/DEBUG  ( 3856):    bec75a78  bec75c57  [stack]
I/DEBUG  ( 3856):    bec75a7c  bec75c61  [stack]
I/BootReceiver( 3551): Copying /data/tombstones/tombstone_00 to DropBox (SYSTEM_TOMBSTONE)
D/dalvikvm( 3551): GC_CONCURRENT freed 398K, 10% free 8805K/9735K, paused 3ms+5ms
[2] + Segmentation fault  ./memtest
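The memtest binary shown in the log is not included here; a minimal use-after-free program like the following hypothetical sketch reproduces the same efence fault:

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *p = malloc(16);   /* allocation goes through the efence malloc hooks */
    strcpy(p, "hello");
    free(p);                /* the page backing p is now protected by efence */
    p[0] = 'x';             /* use-after-free: efence turns this into a SIGSEGV */
    return 0;
}
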
Introduction

This guide provides a step-by-step explanation of what is involved in adding a new WiFi driver and making a new WiFi card work well in a custom Android build. (This guide was written for Android 4.1 but should be applicable to previous Android releases and hopefully future releases.)

Contents
- Understand how Android WiFi works
- Port the WiFi driver
- Compile a proper wpa_supplicant in your BoardConfig.mk
- Modify wifi.c in the HAL
- Launch the wpa_supplicant and dhcpcd services in init.rc
- Several debug tips

Understand How Android WiFi Works

As the following figure shows, the Android wireless architecture can be divided into three parts:
- Java framework (WifiManager, WifiMonitor, etc.)
- HAL (wifi.c, wpa_supplicant, netd)
- Kernel-space modules (wireless stack, WiFi drivers)

The Java framework communicates with wpa_supplicant through a native interface (wifi.c). wpa_supplicant and netd use wireless extensions or nl80211 to control the WiFi drivers.

Port the WiFi Driver

Usually a WiFi driver is provided as a kernel module. There are mainly two types of Android WiFi architecture: nl80211 and wext. With the implementation of nl80211/cfg80211, many wireless drivers in the mainline kernel support the nl80211 interface instead of wireless extensions. For a given vendor's WiFi driver, writing an Android.mk that adds its compilation to the Android build is all you should need to do. Here, Atheros's ath6kl is taken as an example:

ath6kl_module_file := drivers/net/wireless/ath/ath6kl/ath6kl_sdio.ko
$(ATH_ANDROID_SRC_BASE)/$(ath6kl_module_file): $(mod_cleanup) $(TARGET_PREBUILT_KERNEL) $(ACP)
        $(MAKE) -C $(ATH_ANDROID_SRC_BASE) O=$(ATH_LINUXPATH) ARCH=arm CROSS_COMPILE=$(ARM_EABI_TOOLCHAIN)/arm-eabi- KLIB=$(ATH_LINUXPATH) KLIB_BUILD=$(ATH_LINUXPATH)
        $(ACP) -fpt $(ATH_ANDROID_SRC_BASE)/compat/compat.ko $(TARGET_OUT)/lib/modules/
        $(ACP) -fpt $(ATH_ANDROID_SRC_BASE)/net/wireless/cfg80211.ko $(TARGET_OUT)/lib/modules/

include $(CLEAR_VARS)
LOCAL_MODULE := ath6kl_sdio.ko
LOCAL_MODULE_TAGS := optional
LOCAL_MODULE_CLASS := ETC
LOCAL_MODULE_PATH := $(TARGET_OUT)/lib/modules
LOCAL_SRC_FILES := $(ath6kl_module_file)
include $(BUILD_PREBUILT)

Compile a Proper wpa_supplicant in Your BoardConfig.mk

In Android's external directory there are two wpa_supplicant_* projects. For a wext-based WiFi driver, wpa_supplicant_6 can be used. For an nl80211-based WiFi driver, only wpa_supplicant_8 can be used. If the WiFi vendor supplies its own customized wpa_supplicant, it will be much easier to debug the communication between wpa_supplicant and the WiFi driver. No matter which supplicant you choose, control its compilation in your BoardConfig.mk.
Take Atheros's ath6kl as an example:

ifeq ($(BOARD_WLAN_VENDOR),ATHEROS)
BOARD_WLAN_DEVICE                := ar6003
BOARD_HAS_ATH_WLAN               := true
WPA_SUPPLICANT_VERSION           := VER_0_8_ATHEROS
WIFI_DRIVER_MODULE_PATH          := "/system/lib/modules/ath6kl_sdio.ko"
WIFI_DRIVER_MODULE_NAME          := "ath6kl_sdio"
WIFI_DRIVER_MODULE_ARG           := "suspend_mode=3 wow_mode=2 ar6k_clock=26000000 ath6kl_p2p=1"
WIFI_DRIVER_P2P_MODULE_ARG       := "suspend_mode=3 wow_mode=2 ar6k_clock=26000000 ath6kl_p2p=1 debug_mask=0x2413"
WIFI_SDIO_IF_DRIVER_MODULE_PATH  := "/system/lib/modules/cfg80211.ko"
WIFI_SDIO_IF_DRIVER_MODULE_NAME  := "cfg80211"
WIFI_SDIO_IF_DRIVER_MODULE_ARG   := ""
WIFI_COMPAT_MODULE_PATH          := "/system/lib/modules/compat.ko"
WIFI_COMPAT_MODULE_NAME          := "compat"
WIFI_COMPAT_MODULE_ARG           := ""
endif

Then you need to provide a proper wpa_supplicant.conf for your device. wpa_supplicant.conf is very important because the control socket for Android is specified in this file (ctrl_interface=). This file should be copied to /system/etc/wifi.

Minimum required config options in wpa_supplicant.conf: there are two different ways in which wpa_supplicant can be configured. One is to use a "private" socket in the Android namespace, created by the socket_local_client_connect() function in wpa_ctrl.c, and the other is to use a standard UNIX socket.

Android private socket:
ctrl_interface=wlan0
update_config=1

UNIX standard socket:
ctrl_interface=DIR=/data/system/wpa_supplicant GROUP=wifi
update_config=1

Modify wifi.c in the HAL

Here, what you should do is modify code like wifi_load_driver and wifi_unload_driver. For Broadcom's or CSR's WiFi driver, you can directly use the original wifi.c. But for Atheros's ath6kl driver, there are three .ko modules in total to install, so some macros and code need to be changed to adapt to it.

Launch the wpa_supplicant and dhcpcd Services in init.rc

If you have configured the build to use the Android private socket, do this:

service wpa_supplicant /system/bin/wpa_supplicant -Dwext -iwlan0 -c/data/misc/wifi/wpa_supplicant.conf
    socket wpa_wlan0 dgram 660 wifi wifi
    disabled
    oneshot

If you have configured it to use the UNIX standard socket, do this:

service wpa_supplicant /system/bin/wpa_supplicant -Dwext -iwlan0 -c/data/misc/wifi/wpa_supplicant.conf
    disabled
    oneshot

If the WiFi driver is not "wext" but "nl80211", change the option to -Dnl80211. For dhcpcd, launch it as follows:

service dhcpcd_wlan0 /system/bin/dhcpcd -ABKL
    class late_start
    disabled
    oneshot

The parameters "-ABKL" can largely improve WiFi connection speed. For what "ABKL" stands for, refer to the dhcpcd GNU manual.

Several Debug Tips

1. Incorrect permissions will result in wpa_supplicant not being able to create/open the control socket, and libhardware_legacy/wifi/wifi.c won't connect. Since Google modified wpa_supplicant to run as the wifi user/group, the directory structure and file ownership should belong to the wifi user/group (see the os_program_init() function in wpa_supplicant/os_unix.c). Otherwise, errors like the following will appear:

E/WifiHW  (    ): Unable to open connection to supplicant on "/data/system/wpa_supplicant/wlan0": No such file or directory

Also, wpa_supplicant.conf should belong to the wifi user/group because wpa_supplicant will want to modify this file.

2. How to enable debug output for wpa_supplicant:
By default, wpa_supplicant is set to MSG_INFO, which doesn't tell much. To enable more messages:
- modify common.c and set wpa_debug_level = MSG_DEBUG
- modify common.h and change the #define of wpa_printf from if ((level) >= MSG_INFO) to if ((level) >= MSG_DEBUG)

3. WiFi driver softmac: For most vendors' WiFi drivers, the MAC address is fixed. You should add a softmac rule so that the WiFi driver's MAC address is unique for each board, as sketched below.
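A minimal sketch of such a rule, assuming a busybox-style ifconfig is available on the target and that the last octets are derived per board (the address below is purely illustrative):

# set a per-board MAC before wpa_supplicant brings the interface up
ifconfig wlan0 down
ifconfig wlan0 hw ether 00:19:88:12:34:56
ifconfig wlan0 up
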
System Memory Usage and Configuration

Introduction

This document describes i.MX Android memory usage, layout, and configuration for the entire system.

Total DDR memory usage

When i.MX Android is running, the DDR memory is used by the following components:

- Linux kernel reserved space, including:
  - kernel text and data sections, and initrd
  - kernel page tables
- Normal zone space managed by the kernel's memory manager (the high memory zone is also included):
  - used by applications through brk() or malloc() in libc
  - used by the kernel through MM APIs such as kmalloc, dma_alloc, vmalloc
- Reserved memory for the GPU drivers, used by the GPU libraries and drivers:
  - Android surface view and normal surface buffers
  - VPU working buffer and bitstream (the VPU buffer is allocated from the GPU driver to unify the allocation method)
- Reserved space for framebuffer BG triple buffers:
  - framebuffer displays are always required to have triple, large buffers

Memory layout

The following diagram shows the default memory usage and layout on the i.MX6Q/DL platform.

Memory configuration

Depending on the type of product and the hardware configuration (DDR size, screen resolution, camera), customers may configure the memory layout and usage to optimize their system. Some memory reservations can be configured on the command line or by modifying the code. The kernel reserved space cannot be adjusted; it is controlled by the kernel. The Normal zone size depends on the total DDR size and the reserved spaces.

The reserved GPU memory size can be adjusted by adding a "gpumem=" parameter to the kernel command line. Its size depends heavily on the screen resolution, the video stream decoding requirements, and the camera resolution and fps:

gpumem=<size>M

The reserved memory size for the BG (background) framebuffers can be configured on the command line:

fbmem=<fb0 size>,<fb2 size>,<fb4 size>,<fb5 size>

For example, if you have two display devices, one an XGA LVDS panel and the other an HDMI 1080p device (default 32bpp), you have to specify the BG buffer size for each:

fbmem=10M,24M

The size is calculated as xres * yres * (bytes per pixel) * 3:
10M ~= 1024x768x4(32bpp)x3(triple buffer)
24M ~= 1920x1080x4(32bpp)x3(triple buffer)
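Putting both parameters together, a kernel command line for the example above might look like the following sketch; the console device and remaining arguments depend on your board:

setenv bootargs console=ttymxc0,115200 init=/init gpumem=128M fbmem=10M,24M video=mxcfb0:dev=ldb,LDB-XGA,if=RGB666 video=mxcfb1:dev=hdmi,1920x1080M@60,if=RGB24
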
Dumping the pipeline elements into an image file

# On target, run the pipeline
$ export GST_DEBUG_DUMP_DOT_DIR=<folder where dot files are created>
$ gst-launch playbin2 uri=file://${avi}
$ # Move the .dot files to a host machine (scp, etc)

# On host
dot <dot file> -Tpng -o out.png
# the dot command is part of the graphviz package

Querying which elements are being used on a gst-launch command

GST_DEBUG=GST_ELEMENT_FACTORY:3 gst-launch playbin2 uri=file://`pwd`/<media file>

Interrupting a gst-launch process running in the background

kill -INT $PID # where $PID is the process ID

Using only SW codecs

# Back up and remove the FSL plugins
$ find /usr/lib/gstreamer-0.10 -name "libmfw*" | grep -v sink | xargs tar cvf /libmfw_gst.tar
$ find /usr/lib/gstreamer-0.10 -name "libmfw*" | grep -v sink | xargs rm
# Run your pipeline. This time SW codecs are used
$ gst-launch playbin2 uri=file://`pwd`/media_file
# To 'install' the FSL plugins again, just untar the file
$ cd / && tar xvf libmfw_gst.tar && cd -
# Then run your pipeline. This time HW codecs are used
$ gst-launch playbin2 uri=file://`pwd`/media_file
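A pipeline usually produces several .dot files (one per state change); they can be converted in one pass on the host with a small loop like this sketch:

for f in *.dot; do dot -Tpng "$f" -o "${f%.dot}.png"; done
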
Video decoding

gst-launch filesrc location=sample.mp4 ! qtdemux ! ffdec_h264 ! mfw_v4lsink

Notes:
- On LTIB BSP 3.0.35_4.0.0, prep the package and apply the attached patch on top, then build.
- On Yocto, the easy way to add the gst-ffmpeg package is by adding these two lines to the conf/local.conf file:

IMAGE_INSTALL_append = " gst-ffmpeg"
LICENSE_FLAGS_WHITELIST = "commercial"
Multiple-Display means video playback on multiple screens. In case playback needs to be on a unique screen, the mfw_isink element must be used; some pipeline examples can be found at this link: GStreamer iMX6 Multi-Overlay.

# Set these shell variables before running the pipelines
alias gl=gst-launch
SINK_1="\"mfw_v4lsink device=/dev/video17\""
SINK_2="\"mfw_v4lsink device=/dev/video18\""
SINK_3="\"mfw_v4lsink device=/dev/video20\""
media1=file:///root/media1
media2=file:///root/media2
media3=file:///root/media3

Two displays, HDMI + LVDS
Kernel parameters:
  video=mxcfb0:dev=hdmi,1920x1080M@60,if=RGB24 video=mxcfb1:dev=ldb,LDB-XGA,if=RGB666
Pipeline:
  gl playbin2 uri=$media1 video-sink=$SINK_1 playbin2 uri=$media2 video-sink=$SINK_2

Two displays, LVDS + LVDS
Kernel parameters:
  video=mxcfb0:dev=ldb,LDB-XGA,if=RGB666 video=mxcfb1:dev=ldb,LDB-XGA,if=RGB666
Pipeline:
  gl playbin2 uri=$media1 video-sink=$SINK_1 playbin2 uri=$media2 video-sink=$SINK_2

Two displays, LCD + LVDS
Kernel parameters:
  video=mxcfb0:dev=lcd,800x480M@55,if=RGB565 video=mxcfb1:dev=ldb,LDB-XGA,if=RGB666
Pipeline:
  gl playbin2 uri=$media1 video-sink=$SINK_1 playbin2 uri=$media2 video-sink=$SINK_2

Three displays, HDMI + LVDS + LVDS
Kernel parameters:
  video=mxcfb0:dev=hdmi,1920x1080M@60,if=RGB24 video=mxcfb1:dev=ldb,LDB-XGA,if=RGB666 video=mxcfb2:dev=ldb,LDB-XGA,if=RGB666
Pipeline:
  gl playbin2 uri=$media1 video-sink=$SINK_1 playbin2 uri=$media2 video-sink=$SINK_2 playbin2 uri=$media3 video-sink=$SINK_3
Audio transcoding

# Transcode the input file into MP3
gst-launch filesrc location=media_file typefind=true ! beepdec ! mfw_mp3encoder ! matroskamux ! filesink location=output_audio_file.mk

Audio transcoding + streaming

# Transcode the input file into MP3 and stream it
# On host, get the transcoded audio data
gst-launch udpsrc port=8880 ! <CAPS_FROM_THE_TARGET> ! queue ! ffdec_mp3 ! alsasink
# where <CAPS_FROM_THE_TARGET> can be something like 'audio/mpeg, mpegversion=(int)1, layer=(int)3, rate=(int)44100, channels=(int)2'
# On target, run the pipeline and check the caps
gst-launch filesrc location=media_file typefind=true ! queue ! beepdec ! mfw_mp3encoder ! udpsink host=10.112.103.77 port=8880 -v

Video transcoding*

# Transcode the input file into AVC (H.264)
gst-launch filesrc location=media_file typefind=true ! aiurdemux name=demux ! queue ! vpudec ! vpuenc codec=avc ! matroskamux name=mux ! filesink location=output_media_file.mk

Video transcoding + scaling*

# Transcode the input file into AVC (H.264) and rescale the video to 480p
gst-launch filesrc location=media_file typefind=true ! aiurdemux name=demux ! queue ! vpudec ! mfw_ipucsc ! 'video/x-raw-yuv, width=(int)720, height=(int)480' ! vpuenc codec=avc ! matroskamux name=mux ! filesink location=output_media_file.mk

Video transcoding + scaling + streaming*

# NOTE: Run the pipelines in the presented order
# On host, get the transcoded+scaled video data
$ gst-launch udpsrc port=8888 ! <CAPS_FROM_THE_TARGET> ! queue ! ffdec_h264 ! xvimagesink
# On target, run the pipeline and check the caps
gst-launch filesrc location=media_file typefind=true ! aiurdemux name=demux ! queue ! vpudec ! mfw_ipucsc ! 'video/x-raw-yuv, width=(int)720, height=(int)480' ! vpuenc codec=avc ! udpsink host=$HOST_IP port=8888 -v

* There is a limit to the number of pipelines which can be run simultaneously for high-resolution input files: at most two for 1080p and four for 720p.
Audio, from a file

gst-launch filesrc location=test.wav ! wavparse ! mfw_mp3encoder ! filesink location=output.mp3

Audio recording

gst-launch alsasrc num-buffers=$NUMBER blocksize=$SIZE ! mfw_mp3encoder ! filesink location=output.mp3
# where
#     duration = $NUMBER * $SIZE * 8 / (samplerate * channels * bitwidth)
# Example: 60 seconds of recording
# gst-launch alsasrc num-buffers=240 blocksize=44100 ! mfw_mp3encoder ! filesink location=output.mp3
#
# To verify that it is correct, do a normal audio playback
gst-launch filesrc location=output.mp3 typefind=true ! beepdec ! audioconvert ! 'audio/x-raw-int,channels=2' ! alsasink

Video, from a test source

gst-launch videotestsrc ! queue ! vpuenc ! matroskamux ! filesink location=./test.avi

Video, from a file

gst-launch filesrc location=sample.yuv blocksize=$BLOCK_SIZE ! 'video/x-raw-yuv,format=(fourcc)I420, width=$WIDTH, height=$HEIGHT, framerate=(fraction)30/1' ! vpuenc codec=$CODEC ! matroskamux ! filesink location=output.mkv sync=false
# where
#     BLOCK_SIZE = WIDTH * HEIGHT * 1.5
#     CODEC = 0(MPEG4), 5(H263), 6(H264) or 12(MJPG).
#
# For example, encoding a CIF raw file
gst-launch filesrc location=sample.yuv blocksize=152064 ! 'video/x-raw-yuv,format=(fourcc)I420, width=352, height=288, framerate=(fraction)30/1' ! vpuenc codec=0 ! matroskamux ! filesink location=sample.mkv sync=false

Video, from a web camera

# When the web cam is connected, the device node /dev/video0 should be present. To test the camera, without encoding:
gst-launch v4l2src ! mfw_v4lsink
# To record, run:
gst-launch v4l2src num-buffers=-1 ! queue max-size-buffers=2 ! vpuenc codec=0 ! matroskamux ! filesink location=output.mkv sync=false
# where sync=false tells filesink not to use clock synchronization
#
# In case a specific width/height is needed, just add the filter caps:
gst-launch v4l2src num-buffers=-1 ! 'video/x-raw-yuv,format=(fourcc)I420, width=352, height=288, framerate=(fraction)30/1' ! queue ! vpuenc codec=0 ! matroskamux ! filesink location=output.mkv sync=false
#
# In case you want to see on the screen what the camera is capturing, add a tee element:
gst-launch v4l2src num-buffers=-1 ! tee name=t ! queue ! mfw_v4lsink t. ! queue ! vpuenc codec=0 ! matroskamux ! filesink location=output.mkv sync=false

Video, from a parallel/MIPI camera

# The camera driver needs to be loaded before executing the pipeline; refer to the BSP document to see which driver to load
# MIPI (J5 port):
modprobe ov5640_camera_mipi
modprobe mxc_v4l2_capture

# Parallel (J9 port):
modprobe ov5642_camera
modprobe mxc_v4l2_capture

gst-launch mfw_v4lsrc ! queue ! vpuenc codec=0 ! matroskamux ! filesink location=output.mkv sync=false

# Do a 'gst-inspect mfw_v4lsrc' or 'gst-inspect vpuenc' to see other possible settings (resolution, fps, codec, etc.)
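As a worked instance of the duration formula above, this sketch computes num-buffers for a 30-second capture at 44100 Hz, stereo, 16-bit, with blocksize=44100 bytes per buffer (values are illustrative):

#   bytes_per_second = 44100 * 2 * 16 / 8 = 176400
#   num_buffers      = 30 * 176400 / 44100 = 120
gst-launch alsasrc num-buffers=120 blocksize=44100 ! mfw_mp3encoder ! filesink location=output30s.mp3
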
Video, bad performance

gst-launch filesrc location=test.mp4 typefind=true ! aiurdemux ! vpudec ! mfw_v4lsink

Video, better performance

gst-launch filesrc location=sample.mp4 typefind=true ! aiurdemux ! queue max-size-time=0 ! vpudec ! mfw_v4lsink
# typefind=true allows 'type finding' the source file before negotiating
# max-size-time=0 indicates to ignore possible blocking issues

# In case of ASF files
gst-launch filesrc location=sample.asf typefind=true ! aiurdemux ! queue max-size-time=0 ! mfw_wmvdecoder ! mfw_v4lsink

Audio

gst-launch filesrc location=sample.mp3 typefind=true ! beepdec ! audioconvert ! 'audio/x-raw-int, channels=2' ! alsasink

Audio with visualization

gst-launch filesrc location=sample.mp3 typefind=true ! beepdec ! tee name=t ! queue ! audioconvert ! 'audio/x-raw-int, channels=2' ! alsasink t. ! queue ! audioconvert ! goom ! autovideoconvert ! autovideosink

Video/Audio, long version

gst-launch filesrc location=sample.avi typefind=true ! aiurdemux name=demux demux. ! queue max-size-buffers=0 max-size-time=0 ! vpudec ! mfw_v4lsink demux. ! queue max-size-buffers=0 max-size-time=0 ! beepdec ! audioconvert ! 'audio/x-raw-int, channels=2' ! alsasink
# queue properties max-size-buffers=0 and max-size-time=0 allow smoother playback; type 'gst-inspect queue' for more info

Video/Audio, short version

gplay sample.avi

Video/Audio, short version (playbin2)

gst-launch playbin2 uri=file://<full path to sample file>
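playbin2 picks its sinks automatically; to force the HW video path shown above, the sink can be passed explicitly (a sketch reusing the sink syntax from the multi-display article on this page):

gst-launch playbin2 uri=file:///root/sample.avi video-sink="mfw_v4lsink device=/dev/video17"
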
Fast GPU Image Processing in the i.MX 6x

by Guillermo Hernandez, Freescale

Introduction

Color tracking is useful as a base for complex image processing use cases, like determining which parts of an image belong to skin, which is very important for face detection or hand gesture applications. In this example we will present a method that is robust enough to handle some noise, blur, and different lighting conditions, thanks to the use of OpenGL ES 2.0 shaders running on the i.MX 6X multimedia processor.

Prerequisites

This how-to assumes that the reader is an experienced i.MX developer and is familiar with the tools and techniques around this technology. It also assumes the reader has intermediate graphics knowledge and experience, such as the RGBA structure of pictures and video frames and programming OpenGL-based applications, as we will not dig into the details of the basic setup.

Scope

Within this paper, we will see how to implement a very fast color tracking application that uses the GPU instead of the CPU, using OpenGL ES 2.0 shaders.

Step 1: Gather all the components

For this example we will use:
1. i.MX6q ARD platform
2. Linux ER5
3. Oneiric rootfs with ER5 release packages
4. OpenCV 2.0.0 source

Step 2: Building everything you need

Refer to the ER5 User's Guide and Release Notes on how to build and boot the board with the Ubuntu Oneiric rootfs. After you are done, you will need to build the OpenCV 2.0.0 source on the board, or you can add it to ltib and have it built for you.

NOTE: We will be using OpenCV only for convenience purposes; we will not use any of its advanced math or image processing features (because everything there happens on the CPU, and that is what we are trying to avoid), but rather as an easy way of grabbing and managing frames from the USB camera.

Step 3: Application setup

Make sure that at this point you have a basic OpenGL ES 2.0 application running; a simple plane with a texture mapped to it should be enough to start. (Please refer to the Freescale GPU examples.)

Step 4: OpenCV auxiliary code

The basic idea of the workflow is as follows:
a) Get the live feed from the USB camera using the OpenCV function cvCapture() and store it into an IplImage structure.
b) Create an OpenGL texture that reads the IplImage buffer every frame and map it to a plane in OpenGL ES 2.0.
c) Use the fragment shader to perform fast image processing calculations; in this example we will examine the Sobel filter and binary images, which are the foundations of many complex image processing algorithms.
d) If necessary, perform multi-pass rendering to chain several image processing shaders and get an end result.

First we must include the relevant OpenCV headers:

#include "opencv/cv.h"
#include "opencv/cxcore.h"
#include "opencv/cvaux.h"
#include "opencv/highgui.h"

Then we should define a texture size; for this example we will be using 320x240, but this can be easily changed to 640x480:

#define TEXTURE_W 320
#define TEXTURE_H 240

We need to create an OpenCV capture device to enable its V4L camera and get the live feed:

CvCapture *capture;
capture = cvCreateCameraCapture (0);
cvSetCaptureProperty (capture, CV_CAP_PROP_FRAME_WIDTH,  TEXTURE_W);
cvSetCaptureProperty (capture, CV_CAP_PROP_FRAME_HEIGHT, TEXTURE_H);

Note: when we are done, remember to close the camera stream:

cvReleaseCapture (&capture);

OpenCV has a very convenient structure used for storing pixel arrays (a.k.a.
images) called IplImage:

IplImage *bgr_img1;
IplImage *frame1;
bgr_img1 = cvCreateImage (cvSize (TEXTURE_W, TEXTURE_H), 8, 4);

OpenCV also has a very convenient function for capturing a frame from the camera and storing it into an IplImage:

frame2 = cvQueryFrame(capture2);

Then we will want to separate the camera capture process from the post-processing filters and final rendering; hence, we should create a thread to exclusively handle the camera:

#include <pthread.h>

pthread_t camera_thread1;
pthread_create (&camera_thread1, NULL, UpdateTextureFromCamera1, (void *)&thread_id);

Your UpdateTextureFromCamera() function should be something like this:

void *UpdateTextureFromCamera2 (void *ptr)
{
      while(1)
      {
            frame2 = cvQueryFrame(capture);
            //cvFlip (frame2, frame2, 1);  // mirrored image
            cvCvtColor(frame2, bgr_img2, CV_BGR2BGRA);
      }
      return NULL;
}

Finally, the rendering loop should be something like this:

while (! window->Kbhit ())
{
      tt = (double)cvGetTickCount();
      Render ();
      tt = (double)cvGetTickCount() - tt;
      value = tt/(cvGetTickFrequency()*1000.);
      printf( "\ntime = %gms --- %.2lf FPS", value, 1000.0 / value);
      //key = cvWaitKey (30);
}

Step 5: Map the camera image to a GL texture

As you can see, you need a Render function call every frame. This white paper will not cover in detail the basic OpenGL or EGL setup of the application; we will rather focus on the ES 2.0 shaders.

GLuint _texture;
GLeglImageOES g_imgHandle;
IplImage *_texture_data;

The function to map the texture from our stored pixels in IplImage is quite simple: we just need to get the image data, which is basically a pixel array:

void GLCVPlane::PlaneSetTex (IplImage *texture_data)
{
      cvCvtColor (texture_data, _texture_data, CV_BGR2RGB);
      glBindTexture(GL_TEXTURE_2D, _texture);
      glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, _texture_w, _texture_h, 0, GL_RGB, GL_UNSIGNED_BYTE, _texture_data->imageData);
}

This function should be called inside our render loop:

void Render (void)
{
  glClearColor (0.0f, 0.0f, 0.0f, 0.0f);
  glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  PlaneSetTex(bgr_img1);
}

At this point the OpenGL texture is ready to be used as a sampler in our fragment shader, mapped to a 3D plane.

Lastly, when you are ready to draw your plane with the texture in it:

// Set the shader program
glUseProgram (_shader_program);
…
// Bind this texture handle so we can load the data into it
/* Select our texture */
glActiveTexture(GL_TEXTURE0);
// Select eglImage
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, g_imgHandle);
glDrawArrays (GL_TRIANGLES, 0, 6);

Step 6: Use the GPU to do image processing

First we need to make sure we have the correct vertex shader and fragment shader. We will focus only on the fragment shader, as this is where we will process our image from the camera.
Below you will find the simplest fragment shader; this one only samples pixels from the texture (stored in the source as the C string planefrag_shader_src):

#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif

uniform sampler2D s_texture;
varying vec3 g_vVSColor;
varying vec2 g_vVSTexCoord;

void main()
{
    gl_FragColor = texture2D(s_texture, g_vVSTexCoord);
}

Binary image

The simplest image filter is the binary image. It converts a source image to a black/white output; to decide whether a color should be black or white we need a threshold. Everything below that threshold will be black, and any color above it will be white. The shader code (g_strRGBtoBlackWhiteShader) is as follows:

#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif

varying vec2 g_vVSTexCoord;
uniform sampler2D s_texture;
uniform float threshold;

void main()
{
    vec3 current_Color = texture2D(s_texture, g_vVSTexCoord).xyz;
    float luminance = dot(vec3(0.299, 0.587, 0.114), current_Color);
    if (luminance > threshold)
        gl_FragColor = vec4(1.0);
    else
        gl_FragColor = vec4(0.0);
}

You can see that the main operation is to get a luminance value for the pixel. To achieve that, we take the dot product of a known weighting vector (obtained empirically) with the current pixel color, then we simply compare that luminance value with the threshold. Anything below the threshold will be black, and anything above it will be white.

The Sobel operator

Sobel is a very common filter, since it is used as a foundation for many complex image processing processes, particularly edge detection algorithms. The Sobel operator is based on convolutions; the convolution is made with a particular mask, often called a kernel (in common terms, usually a 3x3 matrix). The Sobel operator calculates the gradient of the image at each pixel, so it tells us how the image changes from the pixels surrounding the current pixel, meaning how it increases or decreases (darker to brighter values).
The shader is a bit long, since several operations must be performed; we discuss each of its parts below. First we need the precision qualifiers, the texture coordinates coming from the Vertex Shader, and the texture sampler:

const char *plane_sobel_filter_shader_src =
      "#ifdef GL_FRAGMENT_PRECISION_HIGH\n"
      "  precision highp float;\n"
      "#else\n"
      "  precision mediump float;\n"
      "#endif\n"
      "\n"
      "varying vec2 g_vVSTexCoord;\n"
      "uniform sampler2D s_texture;\n"

Then we define our kernels. As stated before, a 3x3 matrix is enough, and the following values have been tested with good results. (The original listing defines only kernel1; kernel2, which main() below relies on, is missing from it, so the standard transposed Sobel kernel is added here.)

      "mat3 kernel1 = mat3 (-1.0, -2.0, -1.0,\n"
      "                      0.0,  0.0,  0.0,\n"
      "                      1.0,  2.0,  1.0);\n"
      "// horizontal counterpart (transpose of kernel1); not in the original listing\n"
      "mat3 kernel2 = mat3 (-1.0,  0.0,  1.0,\n"
      "                     -2.0,  0.0,  2.0,\n"
      "                     -1.0,  0.0,  1.0);\n"

We also need a convenient way to convert to grayscale. Since the Sobel operator only needs grayscale information, a plain average of the three color channels is enough:

      "float toGrayscale(vec3 source)\n"
      "{\n"
      "    float average = (source.x + source.y + source.z) / 3.0;\n"
      "    return average;\n"
      "}\n"

Now for the important part: actually performing the convolutions. Remember that, per the OpenGL ES 2.0 spec, neither recursion nor dynamic indexing is supported, so we have to do our operations the hard way: by defining vectors and multiplying them. See the following code:

      "float doConvolution(mat3 kernel)\n"
      "{\n"
      "    float sum = 0.0;\n"
      "    float current_pixelColor = toGrayscale(texture2D(s_texture, g_vVSTexCoord).xyz);\n"
      "    float xOffset = 1.0 / 1024.0;\n"
      "    float yOffset = 1.0 / 768.0;\n"
      "    float new_pixel00 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x - xOffset, g_vVSTexCoord.y - yOffset)).xyz);\n"
      "    float new_pixel01 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x, g_vVSTexCoord.y - yOffset)).xyz);\n"
      "    float new_pixel02 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x + xOffset, g_vVSTexCoord.y - yOffset)).xyz);\n"
      "    vec3 pixelRow0 = vec3(new_pixel00, new_pixel01, new_pixel02);\n"
      "    float new_pixel10 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x - xOffset, g_vVSTexCoord.y)).xyz);\n"
      "    float new_pixel11 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x, g_vVSTexCoord.y)).xyz);\n"
      "    float new_pixel12 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x + xOffset, g_vVSTexCoord.y)).xyz);\n"
      "    vec3 pixelRow1 = vec3(new_pixel10, new_pixel11, new_pixel12);\n"
      "    float new_pixel20 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x - xOffset, g_vVSTexCoord.y + yOffset)).xyz);\n"
      "    float new_pixel21 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x, g_vVSTexCoord.y + yOffset)).xyz);\n"
      "    float new_pixel22 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x + xOffset, g_vVSTexCoord.y + yOffset)).xyz);\n"
      "    vec3 pixelRow2 = vec3(new_pixel20, new_pixel21, new_pixel22);\n"
      "    vec3 mult1 = kernel[0] * pixelRow0;\n"
      "    vec3 mult2 = kernel[1] * pixelRow1;\n"
      "    vec3 mult3 = kernel[2] * pixelRow2;\n"
      "    sum = mult1.x + mult1.y + mult1.z + mult2.x + mult2.y + mult2.z + mult3.x + mult3.y + mult3.z;\n"
      "    return sum;\n"
      "}\n"

If you look at the last part of the function, you will notice that all the products are accumulated into a single sum; this sum tells us how much the pixel varies with respect to its neighbors.
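Note that xOffset and yOffset hard-code the distance to the neighboring texels for a 1024x768 texture. A more flexible variant (a sketch; the u_texSize uniform name is an assumption, not from the original sources) passes the texture dimensions in from the application and derives the offsets in the shader:

      "uniform vec2 u_texSize;\n"
      …
      "    float xOffset = 1.0 / u_texSize.x;\n"
      "    float yOffset = 1.0 / u_texSize.y;\n"

with the host side setting the uniform once after glUseProgram:

GLint tex_size_loc = glGetUniformLocation (_shader_program, "u_texSize");
glUniform2f (tex_size_loc, (float)TEXTURE_W, (float)TEXTURE_H);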
The last part of the shader is where all the previous functions are used. It is worth noting that the convolution must be applied both horizontally and vertically (kernel1 and kernel2) for the technique to be complete:

      "void main()\n"
      "{\n"
      "    float horizontalSum = 0.0;\n"
      "    float verticalSum = 0.0;\n"
      "    float averageSum = 0.0;\n"
      "    horizontalSum = doConvolution(kernel1);\n"
      "    verticalSum = doConvolution(kernel2);\n"
      "    if ((verticalSum > 0.2) || (horizontalSum > 0.2) ||\n"
      "        (verticalSum < -0.2) || (horizontalSum < -0.2))\n"
      "        averageSum = 0.0;\n"
      "    else\n"
      "        averageSum = 1.0;\n"
      "    gl_FragColor = vec4(averageSum, averageSum, averageSum, 1.0);\n"
      "}\n";

Conclusions and future work

At this point, if you have your application up and running, you will notice that image processing on the GPU can be quite fast, even with images larger than 640x480. This approach can be extended to a variety of techniques such as tracking, feature detection and face detection. However, those techniques are out of scope for now, because such algorithms need multiple rendering passes (face detection, for example), where we perform an operation, write the result to an offscreen buffer, and then use that buffer as the input of the next shader, and so on. Freescale is planning to release an Application Note in Q4 2012 that will expand this white paper and cover these techniques in detail.
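The multi-pass scheme just described is standard render-to-texture in OpenGL ES 2.0. As a minimal, illustrative sketch of the offscreen part (the fbo and fbo_texture names are ours, not from the white paper):

// Create a texture that will receive the intermediate result
GLuint fbo, fbo_texture;
glGenTextures (1, &fbo_texture);
glBindTexture (GL_TEXTURE_2D, fbo_texture);
glTexImage2D (GL_TEXTURE_2D, 0, GL_RGBA, TEXTURE_W, TEXTURE_H, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri (GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Attach it to a framebuffer object and render the first pass into it
glGenFramebuffers (1, &fbo);
glBindFramebuffer (GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D (GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, fbo_texture, 0);
// ... glUseProgram (first shader), glDrawArrays ...

// Switch back to the default framebuffer; the result of pass one
// is now available as a regular texture for the next shader
glBindFramebuffer (GL_FRAMEBUFFER, 0);
glBindTexture (GL_TEXTURE_2D, fbo_texture);
// ... glUseProgram (second shader), glDrawArrays ...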
Note: All these GStreamer pipelines have been tested on an i.MX6Q board running kernel version 3.0.35-2026-geaaf30e.

Tools:
gst-launch
gst-inspect

FSL Pipeline Examples:
GStreamer i.MX6 Decoding
GStreamer i.MX6 Encoding
GStreamer Transcoding and Scaling
GStreamer i.MX6 Multi-Display
GStreamer i.MX6 Multi-Overlay
GStreamer i.MX6 Camera Streaming
GStreamer RTP Streaming

Other plugins:
GStreamer ffmpeg
GStreamer i.MX6 Image Capture
GStreamer i.MX6 Image Display

Misc:
Testing GStreamer
Tracing GStreamer Pipelines
GStreamer miscellaneous
That Python script exercises the i.MX serial download protocol in UART mode. It can be used with the i.MX21/27/25/31/35/51/53, since they are all based on the same protocol. The details of the protocol can be found in the "System Boot" section of the relevant i.MX reference manual.

Requirements:
- Python 2.7 (not tested with other versions)
- The pyserial module (http://pyserial.sourceforge.net)

The i.MX must boot in serial mode with a serial cable connected to the host running the script. COM1 is used, configured at 115200 baud, no parity, 1 stop bit, 8 data bits. If another COM port is used, you will have to make the appropriate changes in the script. The script accepts only hex-formatted addresses and data for the command parameters.

The following command returns the HAB status whenever it is used, so it helps to check that the setup is functional. Once some code has been downloaded, this same command triggers its execution. The returned value is only meaningful when doing a secure boot, and does not matter otherwise. By default, it returns the following in hex format:

> iMX_Serial_Download_Protocol.py get_status
Status is: F0 F0 F0 F0

Typical usage to download and execute some code:

1. Ensure that the protocol is ready:
> iMX_Serial_Download_Protocol.py get_status

2. Configure the memories and other resources such as I/O, as the DCD would normally do:
> iMX_Serial_Download_Protocol.py write_mem memory_address access_size data
As this configures only one register at a time, it has to be called several times to configure something like SDRAM. Of course, feel free to enhance the script, for instance by adding a memory-write command driven from a file 🙂 (a starting point is sketched at the end of this article).

3. Download the executable binary:
> iMX_Serial_Download_Protocol.py write_file memory_address file
memory_address must be a valid address in the i.MX memory map, meaning a memory area directly accessible by the ARM core (registers, RAM).

4. Run the executable by making the ROM code jump to the loaded code:
> iMX_Serial_Download_Protocol.py get_status
This must return 88 88 88 88, which means that the ROM has successfully jumped to the entry point of the executable. That entry point must be specified in the flash header or Image Vector Table (IVT), depending on the i.MX. Consequently, a valid flash header or IVT must be placed at offset 0x0 of the downloaded code. In a boot image this structure is commonly placed at offset 0x400, so it is easy to build another one at offset 0x0, which is usually empty space. The pointer to the DCD should remain null.

The script is provided "as is" without any warranty, and is not an official tool supported by Freescale. The script is here: 19-iMX_Serial_Download_Protocol.py
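The "load from file" enhancement mentioned in step 2 can be prototyped without touching the script at all. Below is a minimal sketch (not part of the original script; the helper name and file format are assumptions) that replays a list of register writes from a text file, one "address access_size data" triplet per line, by invoking the documented write_mem command for every line:

# replay_writes.py - hypothetical companion helper, not part of the
# original script. Input file format (an assumption): one register
# write per line, "<address> <access_size> <data>", all hex-formatted.
import subprocess
import sys

def replay_writes(path):
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) == 3:
                # one documented write_mem invocation per register
                subprocess.check_call(
                    ["python", "iMX_Serial_Download_Protocol.py",
                     "write_mem"] + fields)

if __name__ == "__main__":
    replay_writes(sys.argv[1])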