i.MX Processors Knowledge Base
This document shows how to use the i.MX8QXP DPU to do image warping.

SW: i.MX Linux BSP L5.4.24_2.1.0 release, plus the patches attached to this document
HW: i.MX8QXP MEK board, ov5640 camera, HDMI display

1. Introduction
Image warping is the process of digitally manipulating image data such that the image's projection precisely matches a specific projection surface or shape.

The i.MX8QXP DPU can do image warping with its blit engine and its display engine; I chose to use the blit engine's FetchWarp9 unit. The i.MX8QXP RM describes the blit engine's image warp support as: "Performs a re-sampling of the source image with any pattern. The sample point positions are read from a compressed coordinate buffer." So you need to prepare two input buffers: one holding the original image data, the other holding the re-sample point coordinates. The DPU blit engine reads both buffers through the FetchWarp9 unit, then outputs a result buffer containing the warped image data.

Note that for the input image buffer, the FetchWarp9 unit supports RGB and YUV 4:4:4 formats. The contents of the coordinate buffer depend on which warp transformation your use case needs; the per-point coordinate formats are described in the FetchWarp unit section of the i.MX8QXP RM. This document uses the 2xs12.4 format: each x coordinate uses 12+4 bits (12 integer bits, 4 fractional bits), and the same applies to each y coordinate. The register settings needed to enable the FetchWarp9 unit for image warp are also described in the i.MX8QXP RM.

2. Patch notes and test code
- imx8-dpu-warp-kernel.diff contains the kernel-side changes: DRM ioctl API permission and a vmap function added to the ION dma_buf_ops.
- libg2d.so is the binary with the warp feature added.
- g2d.h is the header file that adds the G2D_WARP and G2D_YUV4 defines.
- imx8-ov5640-dpu-warp-render.c is sample code showing how to call the G2D library for image warping; the G2D_WARP flag must be set. It contains example coordinate-buffer calculations for rotation, swirl, barrel distortion, affine transformation, perspective transformation and wave transformation. The code reads camera input frames, warps them, then renders the warped frames to the display.

The test command usage is shown below: read 1080p frames from the ov5640 camera, warp them, then render the warped image to a DRM plane. Because the DPU FetchWarp9 unit expects YUV 4:4:4 input frames, the YUV4 parameter must be set; it makes the ISI driver output YUV 4:4:4 frames.

imx8-ov5640-dpu-warp-render -i /dev/video0 -f YUV4 -S 1920,1080 -M imx-drm -p 91:38 -F XB24 -b 6 -e g2d -t 5
        -i <video-node>                 set video node (default: /dev/video0)
        -f <fourcc>                     set input format using 4cc
        -S <width,height>               set input resolution
        -s <width,height>@<left,top>    set crop area
        -M <drm-module>                 set DRM module
        -o <connector_id>:<crtc_id>:<mode>      set a mode
        -p <connector_id>:<crtc_id>     output to a plane
        -F <fourcc>                     set output format using 4cc
        -t <warptype>                   0 neutral, 1 rotate, 2 swirl, 3 division model, 4 affine, 5 perspective, 6 wave
        -b <buffer_count>               set number of buffers

3. Example
Original image:

Reference:
https://www.nxp.com/webapp/Download?colCode=IMX8DQXPRM
https://www.nxp.com/webapp/Download?colCode=L5.4.24_2.1.0_MX8QXPC0&appType=license
https://en.wikipedia.org/wiki/Image_geometry_correction
https://lists.freedesktop.org/archives/dri-devel/2012-March/019778.html
https://store.kde.org/p/1246558
https://github.com/ImageMagick/ImageMagick
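As an appendix to section 1, the 2xs12.4 coordinate packing can be illustrated with a small helper that converts a floating-point sample position into 12.4 fixed point. This is a minimal sketch in C; the exact bit layout and buffer organization of the coordinate buffer are defined in the i.MX8QXP RM, so treat the packing order here as an assumption for illustration:

#include <stdint.h>

/* Pack one re-sample point into 2xs12.4 fixed point: each coordinate
 * takes 16 bits, 12 integer bits plus 4 fractional bits. Placing x in
 * the low half-word is an assumption for illustration; the real layout
 * is defined in the FetchWarp unit section of the RM. */
static uint32_t pack_2xs12_4(float x, float y)
{
    uint16_t fx = (uint16_t)(x * 16.0f); /* 12.4: scale by 2^4 */
    uint16_t fy = (uint16_t)(y * 16.0f);
    return ((uint32_t)fy << 16) | fx;
}

/* Fill a w x h coordinate buffer with the identity mapping (no warp).
 * A real use case replaces (u, v) with the transformed sample position
 * (rotate, swirl, barrel distortion, ...). */
static void fill_identity(uint32_t *coord, int w, int h)
{
    for (int v = 0; v < h; v++)
        for (int u = 0; u < w; u++)
            coord[v * w + u] = pack_2xs12_4((float)u, (float)v);
}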
View full article
BSP: L5.4.47-2.2.0-rc2
Board: i.MX8QM B0
HW: LVDS2HDMI, MIPIDSI2HDMI

This is a port of the i.MX8QM DPU-loopback-to-ISI feature to the 5.4.y kernel, with the addition of the MIPI-DSI loopback and the HDMI loopback.

Overview of the DC capture configuration:
- For enabling the capture, only DC 0 Stream 0 and DC 1 Stream 1 can be captured.
- The pixel link master address should be set to 3, because the receiver address at the ISI is 3 and cannot be changed.
- To continue displaying the stream, the receiver address at the LVDS and DSI or HDMI side should also be changed to 3. The receiver address can be changed by using the GPIOs of the modules.

Patches:
- Create a V4L2 device enabling ISI capture of the DC loopbacks.
- Enable ISI capture from DSI 0 / LVDS 1 in 1920x1080 (at the same time).
- Enable ISI capture from HDMI at 3840x2160 (half width, even pixels only) as 1920x2160.
- While the ISI is capturing, the captured screen continues to be displayed.

Remark: ov5640 cameras are also enabled in the same dtb, so four streams at 1920x1080 can be captured at the same time.

Installation and GStreamer commands: see the readme.
View full article
This document is a user guide for the GStreamer 1.0 based accelerated solution included in all the i.MX 8 family SoCs supported by the NXP BSP L5.4.24_1.1.0. Some instructions assume a host machine running a Linux distribution, such as Ubuntu, connected to an i.MX 8 device. These commands were tested using Ubuntu 18.04 LTS; while Ubuntu is not required on the host machine, other distributions have not been tested.

These instructions are targeted for use with the following hardware:
• i.MX 8MQ EVK
• i.MX 8MM EVK
• i.MX 8MN EVK
• i.MX 8QXP MEK B0
• i.MX 8QM MEK B0

Release History
v1.0 - Mar 2020 - Initial release.
v2.0 - Sep 2020 - Added the following content:
- Mux/Demux Examples
- Audio Examples
- Image Examples
- Transcode Examples
- Streaming Examples
- Multi-Display Examples
- Scaling and Rotation Examples
- Zero-copy Examples
- Debug Examples

Maintainers:
- Marco Franchi
- Pedro Jardim
View full article
[Chinese translation] See the attachment.

Original article: https://community.nxp.com/docs/DOC-345148
View full article
This document describes the i.MX 8MM EVK mini-SAS connector features for Linux and Android use cases, covering the supported daughter cards, the process to change Device Tree (DTS) files or boot images, and how to enable these different display options on the board.
View full article
This document describes all the i.MX 8 MIPI-CSI use cases, showing the available cameras and daughter cards supported by the boards, the compatible Device Tree (DTS) files, and how to enable these different camera options on the i.MX 8 boards. It also covers some advanced camera use cases, such as multiple-camera output using the imxcompositor_g2d plugin, GStreamer zero-copy pipelines, and V4L2 API extra-controls examples.
View full article
SW: Linux 4.14.98_2.2.0 BSP release, GPU SDK 5.3.0 and the patches in this document
HW: i.MX8QXP MEK board

Inspired by the GIMP color-curve adjustment tool, I wrote a similar GUI test application for i.MX8QXP gamma curve tuning. The display first shows a linear curve; the mouse can be used to change any part of this curve. The application then calculates the values of the points on the changed curve and passes the related values to the i.MX8QXP DPU GammaCor unit, so the display output changes accordingly.

At startup the test application draws the curve Y = X on the display, running from (0,0) through (500,500) to (1023,1023). Dragging part of the curve with the mouse generates a new point (x_new, y_new) that is not on the original Y = X curve, so a new curve must be drawn that passes through (0,0), (500,500), (x_new, y_new) and (1023,1023). I chose Catmull-Rom splines to generate it. To approximate a curve through n points with Catmull-Rom splines, n+2 control points are needed; the two extra points can be chosen freely and affect the shape of the curve. A Catmull-Rom spline passes through all of its control points, is C1 continuous, and has no convex hull property.

For example, with four control points p0, p1, p2, p3, the curve through all of them is divided into the segments p0-p1, p1-p2 and p2-p3. For segment p1-p2, the four control points p0, p1, p2, p3 are used in the Catmull-Rom formula. For segment p0-p1, the four control points p0, p0, p1, p2 must be used. For segment p2-p3, the four control points p1, p2, p3, p3 must be used.

In this test code, a GPU vertex shader calculates each curve segment, transform feedback reads the point values of each segment back to the CPU side, and the CPU passes the related values to the DPU GammaCor unit for gamma tuning.

Test steps:
1. Apply 8qxp_dpu_gammacor_4.14.98_2.2.0.diff on linux-imx for the i.MX8QXP DPU device driver.
2. Apply dpu_gamma_curve_gpusdk5.3.0.diff on imx-gpu-sdk to build this GUI test code.
3. Update the new kernel image and copy the test code to the rootfs.
4. Run any other application first to draw something on the screen, for example /usr/share/cinematicexperience-1.0/Qt5_CinematicExperience.
5. Run the GUI test code, S01_SimpleTriangle_Wayland. A linear curve is shown on the display; change it as you like by moving the mouse cursor close to the curve, pressing the mouse button, dragging and releasing. The new curve is drawn on the display, and the display output changes as well.

Reference:
1> https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8x-family-arm-cortex-a35-3d-graphics-4k-video-dsp-error-correcting-code-on-ddr:i.MX8X
2> https://www.nxp.com/design/software/embedded-software/linux-software-and-development-tools/embedded-linux-for-i-mx-applications-processors:IMXLINUX?tab=Design_Tools_Tab
3> https://github.com/NXPmicro/gtec-demo-framework
4> https://en.wikipedia.org/wiki/Cubic_Hermite_spline
5> http://www.mvps.org/directx/articles/catmull
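For reference, the Catmull-Rom segment evaluation that the vertex shader performs can be sketched on the CPU side as below. This is a minimal illustration of the standard Catmull-Rom basis with the duplicated end points described above, not the actual shader code from the patch:

#include <stdio.h>

typedef struct { float x, y; } pt;

/* Evaluate one Catmull-Rom segment between p1 and p2 at t in [0,1],
 * using p0 and p3 as the neighboring control points. */
static pt catmull_rom(pt p0, pt p1, pt p2, pt p3, float t)
{
    float t2 = t * t, t3 = t2 * t;
    pt r;
    r.x = 0.5f * (2*p1.x + (-p0.x + p2.x) * t +
          (2*p0.x - 5*p1.x + 4*p2.x - p3.x) * t2 +
          (-p0.x + 3*p1.x - 3*p2.x + p3.x) * t3);
    r.y = 0.5f * (2*p1.y + (-p0.y + p2.y) * t +
          (2*p0.y - 5*p1.y + 4*p2.y - p3.y) * t2 +
          (-p0.y + 3*p1.y - 3*p2.y + p3.y) * t3);
    return r;
}

int main(void)
{
    /* Curve through (0,0), (500,500), (1023,1023); the first and last
     * control points are duplicated, as described in the text above. */
    pt p[] = { {0,0}, {0,0}, {500,500}, {1023,1023}, {1023,1023} };
    for (int seg = 0; seg < 3; seg++)
        for (float t = 0.0f; t < 1.0f; t += 0.25f) {
            pt q = catmull_rom(p[seg], p[seg+1], p[seg+2], p[seg+3], t);
            printf("(%.0f, %.0f)\n", q.x, q.y);
        }
    return 0;
}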
View full article
[Chinese translation] See the attachment.

Original article: https://community.nxp.com/docs/DOC-342059
View full article
This document shows how to use the i.MX8QXP Display Controller GammaCor unit to tune gamma.

HW: i.MX8QXP MEK board, HDMI monitor
SW: i.MX Linux 4.14.98_2.2.0 BSP release, plus the patches in this document

1. Introduction to gamma
The terms gamma, gamma correction, gamma encoding and gamma compression all refer to one kind of operation; see the Wikipedia page on gamma correction. Devices used for image capture/print/display follow this power law. For example, for a camera-captured image to look as good on a display device as the originally captured scene, gamma encoding is applied when the camera saves sensor data to an image file, and gamma decoding is applied when that image file is displayed on a PC LCD monitor.

2. i.MX8QXP Display Controller Gamma Correction unit
The Gamma Correction unit sits between the FrameGen unit and the TCon unit; for more detail see the i.MX8QXP RM. The GammaCor unit can therefore be used to adjust display gamma, brightness or contrast. To use it, follow the steps in RM section 15.9.2.4.4.8.3.

Some points to note:
- 33 sample point values must be programmed into the register; the sample point values range from 0 to 1023. The first write is the start sample point value; each following write is a delta value: the current sample point value minus the previous sample point value.
- The GammaCor unit can be used on any of the R/G/B channels.
- For a normalized function f(x), the following formula should be used: clut[i = 0..32] = round(f(i * 32 / 1023) * 1023)

3. i.MX8QXP Linux device driver patch and test code
Apply the attached patch 8qxp_dpu_gammacor_4.14.98_2.2.0.diff on the Linux kernel. In the kernel patch, in the function dpu_gammacor_update, I chose not to calculate the delta values between sample points in the kernel; the user-space application calculates the delta values and passes them to the kernel.

Apply 8qxp-dpu-gammacor-modetst.diff on libdrm-imx to get the test application, which is based on modetest. The test application reads one greyscale image file, 720P.rgb (put it in the same folder as the test application), calculates the sample point values with the pow function, and calls drmModeCrtcSetGamma to pass the related values to the kernel. Each loop iteration changes the sample point values, and the greyscale image on the HDMI monitor changes accordingly.

After the system boots up, run the commands below to check the result of the test application:
systemctl stop weston
./gamma_show_rgba.out -P 29@32:1280x720@AB24

Reference:
a> https://www.nxp.com/webapp/Download?colCode=IMX8DQXPRM
b> https://www.nxp.com/webapp/Download?colCode=L4.14.98_2.2.0_MX8QXP&appType=license
c> https://source.codeaurora.org/external/imx/libdrm-imx/
d> https://en.wikipedia.org/wiki/Gamma_correction
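As an illustration of the sample-point and delta encoding described in section 2, the sketch below builds a 33-entry gamma table with the pow function and converts it to the start-plus-delta form the GammaCor unit expects. The names are hypothetical; the real user-space code is in the modetest-based patch:

#include <math.h>
#include <stdio.h>

#define NSAMPLES 33

int main(void)
{
    double gamma = 2.2;     /* example gamma to apply */
    int clut[NSAMPLES], prog[NSAMPLES];

    /* 33 sample points of the normalized curve f(x) = x^(1/gamma),
     * scaled to the 0..1023 range used by the GammaCor unit. */
    for (int i = 0; i < NSAMPLES; i++) {
        double x = (double)(i * 32) / 1023.0;
        if (x > 1.0)
            x = 1.0;
        clut[i] = (int)round(pow(x, 1.0 / gamma) * 1023.0);
    }

    /* First value is programmed as-is, the rest as deltas
     * (current sample minus previous sample). */
    prog[0] = clut[0];
    for (int i = 1; i < NSAMPLES; i++)
        prog[i] = clut[i] - clut[i - 1];

    for (int i = 0; i < NSAMPLES; i++)
        printf("write[%2d] = %d\n", i, prog[i]);
    return 0;
}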
View full article
The i.MX8QuadMax SMARC System on Module integrates the i.MX8 QuadMax SoC (dual Cortex-A72 + quad Cortex-A53 cores, dual GPUs, a 4K H.265-capable VPU and a dual failover-ready display controller) with an on-SOM dual 10/100/1000 Mbps Ethernet PHY, USB 3.0 hub and IEEE 802.11a/b/g/n/ac Wi-Fi & Bluetooth 5.0 module.
View full article
The D-PHY PLL (in the red circle in the picture below) is the PLL that drives the MIPI clock lane. It must be set in accordance with the video to be sent to the display.

Calculating the video bandwidth
The video bandwidth is calculated with the following equation:
Bits per second = Horizontal res. x Vertical res. x Frame rate x Bits per pixel
Taking as example the 1080p60 OLED display RM67191:
Bits per second = 1920 x 1080 x 60 x 24 = 2985984000 ≈ 2.99 Gbit/s

Pixel clock calculation
The display pixel clock can be obtained from the display driver. In this example for the RM67191, the pixel clock is 132 Mpixel/s; see drivers/gpu/drm/panel/panel-raydium-rm67191.c in linux-imx, line 530:
.pixelclock = { 66000000, 132000000, 132000000 },
Or the number can be obtained with the following equation:
pixel clock = (hactive + hfront_porch + hsync_len + hback_porch) x (vactive + vfront_porch + vsync_len + vback_porch) x frame rate
pixel clock = (1080 + 20 + 2 + 34) x (1920 + 10 + 2 + 4) x 60
pixel clock = 132000000 (rounded up)

Bit clock calculation (clock lane)
The MIPI D-PHY bit_clk is the output clock; it is calculated in drivers/gpu/drm/bridge/sec-dsim.c in linux-imx (line 1283). The bit clock can be calculated with the following equation:
bit_clk = pixel clock x bits per pixel / number of lanes
In the case of 1080p60 (Raydium display), it is:
bit_clk = 132000000 x 24 / 4
bit_clk = 792000000

Other important timing parameters like 'p', 'm' and 's' are obtained from the table in the header file drivers/gpu/drm/imx/sec_mipi_dphy_ln14lpp.h in linux-imx.
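The two formulas above can be condensed into a small helper; this is just the arithmetic for illustration, using the RM67191 timings quoted in the text:

#include <stdio.h>

/* Compute the DSI pixel clock and D-PHY bit clock from display
 * timings; the values below are the RM67191 numbers quoted above. */
int main(void)
{
    unsigned long hactive = 1080, hfp = 20, hsync = 2, hbp = 34;
    unsigned long vactive = 1920, vfp = 10, vsync = 2, vbp = 4;
    unsigned long fps = 60, bpp = 24, lanes = 4;

    unsigned long long pixclk =
        (unsigned long long)(hactive + hfp + hsync + hbp) *
        (vactive + vfp + vsync + vbp) * fps;
    unsigned long long bit_clk = pixclk * bpp / lanes;

    printf("pixel clock: %llu Hz\n", pixclk);  /* ~132 MHz */
    printf("bit clock:   %llu Hz\n", bit_clk); /* ~792 MHz */
    return 0;
}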
View full article
This document shows how, on the i.MX8QXP MEK board, to configure the ov5640 sensor (parallel interface) to output 5MP (2592x1944) RAW (Bayer) data at 15 fps, capture the RAW RGB data with the Parallel Capture Subsystem and Image Sensor Interface (ISI), and debayer the RAW data on the i.MX8QXP GPU before displaying the image.

HW: i.MX8QXP MEK board, MEK base board (to hold the parallel camera), ov5640 sensor.
SW: Linux 4.14.98_2.0.0 BSP, plus the patches in this document.

1. Configuration on the camera sensor side
A Bayer filter is a color filter array (CFA) for arranging RGB color filters on a square grid of photosensors. The filter pattern is 50% green, 25% red and 25% blue, and is hence also called BGGR, RGBG, GRGB or RGGB. The ov5640 has an image array capable of operating at up to 15 fps at 5 megapixel (2592x1944) resolution, and supports the output formats RAW (Bayer), RGB565/555/444, CCIR656, YUV422/420, YCbCr422 and compression. To make the ov5640 output 5MP RAW data at 15 fps, check my kernel patch imx8-ov5640-raw-capture-driver-4.14.98_2.0.0.diff, which applies on the i.MX Linux 4.14.98_2.0.0 BSP kernel code. For the parallel-interface ov5640 it uses the ov5640_raw_setting[] array in drivers/media/platform/imx8/ov5640_v3.c; this register setting comes from the ov5640 software application note and data sheet.

2. Configuration on the i.MX8QXP side
The Parallel Capture Subsystem consists of the Parallel Capture Interface (BT 656) and associated peripherals, and interfaces to the parallel CSI sensor. It allows up to 24 RGB data bits in parallel, or RGB components on consecutive clocks (up to 10-bit color depth). The supported formats are RGB, RAW and YUV 422 (see the Parallel Capture Subsystem diagram in the i.MX8QXP RM). For RAW format data, the CSI_CTRL_REG of the Parallel Capture Subsystem must be configured as in my patch, otherwise correct data cannot be captured.

The multiple input sources (MIPI CSI, Parallel Capture) capture the pixel data and feed it to the ISI. The ISI is responsible for capturing and pre-processing the pixel data from multiple input sources and storing it in memory (see the ISI diagram in the i.MX8QXP RM). For RAW format data, all ISI processing pipelines should be bypassed; the ISI is only used to write the data to memory.

3. Capture test code
To capture the RAW data and save it to a file, check my patch imx8_ov5640_raw_captupre_test_4.14.98_2.0.0_ga.diff, which applies on the i.MX Linux 4.14.98_2.0.0 BSP unit test code. The usage is:
./imx8_cap.out -of -cam 1 -fr 15 -fmt BA81 -ow 2592 -oh 1944 -num 100

4. Displaying the RAW data
The RAW data cannot be displayed directly; a debayer process is needed to recover the full red, green and blue values for each pixel. Running the debayer on the CPU costs much CPU time; to save CPU time, the debayer can be done by the GPU: the captured RAW data is uploaded to the GPU as a texture, the GPU does the debayer to recover the full color of each pixel, and the result is displayed.

To upload RAW camera data to the GPU with zero memory copy, I use the i.MX8QXP GPU extension GL_VIV_direct_texture. It creates a texture with direct access support: the API glTexDirectVIVMap supports mapping a user-space memory or a physical address into the texture surface. Since glTexDirectVIVMap needs the logical and physical addresses of the data buffer, I allocate the data buffer from the G2D library (it is a dma-buffer, and both the logical and physical addresses of the buffer are available), queue it as DMABUF to the V4L2 capture driver, and after dequeuing the RAW camera data, pass it to the GPU for debayering. On the GPU side, I use the OpenGL shader code from "Efficient, High-Quality Bayer Demosaic Filtering on GPUs".

Check my patch imx8_debayer-gpusdk-5.3.0.diff, which applies on the i.MX GPU SDK 5.3.0 code. Note that only the debayer is done here, no extra processing.

5. Known issue
With the ov5640 outputting 5MP at 15 fps there is more noise in the camera data than at 5 fps. My debugging suggests this noise comes from the ov5640 itself.

Reference:
a> https://www.nxp.com/webapp/Download?colCode=IMX8DQXPRM
b> https://www.nxp.com/webapp/Download?colCode=L4.14.98_2.0.0_MX8QXP&appType=license
c> https://github.com/NXPmicro/gtec-demo-framework
d> ov5640 data sheet
e> ov5640 software application note
f> Efficient, High-Quality Bayer Demosaic Filtering on GPUs
https://www.semanticscholar.org/paper/Efficient%2C-High-Quality-Bayer-Demosaic-Filtering-on-McGuire/088a2f47b7ab99c78d41623bdfaf4acdb02358fb
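The zero-copy texture upload described in section 4 can be outlined as below. glTexDirectVIVMap and glTexDirectInvalidateVIV belong to the Vivante GL_VIV_direct_texture extension; the buffer variables are placeholders for the logical/physical addresses obtained from the G2D allocator, and mapping 8-bit Bayer data as a luminance texture is an assumption for illustration, so treat this as a sketch rather than the code from the patch:

#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

/* Map a camera dma-buffer straight into a texture (zero copy).
 * 'logical' and 'physical' come from the G2D buffer allocation;
 * names and error handling are simplified for illustration. */
static GLuint map_raw_buffer(void *logical, GLuint physical,
                             int width, int height)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* 8-bit Bayer data mapped as a single-channel texture; the
     * debayer shader reconstructs RGB from the Bayer pattern. */
    glTexDirectVIVMap(GL_TEXTURE_2D, width, height, GL_LUMINANCE,
                      &logical, &physical);
    glTexDirectInvalidateVIV(GL_TEXTURE_2D); /* flush after each new frame */
    return tex;
}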
View full article
This document shows how, on the i.MX6Q SabreSD board, to configure the ov5640 sensor (parallel or MIPI interface) to output 5MP (2592x1944) RAW (Bayer) data at 15 fps, capture the RAW RGB data with the i.MX6Q IPU, and debayer the RAW data on the i.MX6Q GPU before displaying the image.

HW: i.MX6Q SabreSD board, ov5640 sensor.
SW: Linux 4.14.98_2.0.0 BSP, plus the patches in this document.

1. Configuration on the camera sensor side
A Bayer filter is a color filter array (CFA) for arranging RGB color filters on a square grid of photosensors. The filter pattern is 50% green, 25% red and 25% blue, and is hence also called BGGR, RGBG, GRGB or RGGB. The ov5640 has an image array capable of operating at up to 15 fps at 5 megapixel (2592x1944) resolution, and supports the output formats RAW (Bayer), RGB565/555/444, CCIR656, YUV422/420, YCbCr422 and compression. To make the ov5640 output 5MP RAW data at 15 fps, check my patch imx6_ov5640_dvp_mipi_raw_capture_driver-4.14.98_2.0.0.diff, which applies on the i.MX Linux 4.14.98_2.0.0 BSP kernel code:
- Parallel-interface ov5640: uses the ov5640_raw_setting[] array in drivers/media/platform/mxc/capture/ov5640.c. This register setting comes from the ov5640 software application note and data sheet.
- MIPI-interface ov5640: uses the ov5640_mipi_raw_setting[] array in drivers/media/platform/mxc/capture/ov5640_mipi.c. This register setting combines the original code's setting (with the ISP register settings removed), the PLL register settings for the MIPI interface, and some data format register settings.

2. Configuration on the i.MX6Q side
The i.MX6Q IPU camera port (CSI-2 module) supports the data formats Raw (Bayer), RGB, YUV 4:4:4, YUV 4:2:2 and grayscale, up to 16 bits per value; the camera data routing and the IPU block diagram can be found in the i.MX6Q RM. The CSI-2 of the IPU is responsible for synchronizing and packing the video (or generic data) and sending it to other blocks. The video data received by CSI-2 can be sent to three other blocks: SMFC, VDI and IC. RAW (Bayer) data capture should go through this path:
CSI-2 --> SMFC --> IDMAC --> DDR memory
This means RAW data is received as generic data (see IPU_PIX_FMT_GENERIC in my patch); the IPU cannot process this kind of data, it is just received into DDR memory.

For a MIPI-interface camera, note that the i.MX6 side MIPI D-PHY clock must be calibrated to the actual clock range of the camera sensor's D-PHY clock, and the calibrated value must be equal to or greater than the camera sensor clock; for details see AN5305. Taking the MIPI ov5640 as an example:
Pixel clock = 2592 x 1944 x 15 fps x (1/2 cycle/pixel) x 1.35 blanking interval = 51 MHz
MIPI data rate = 51 MHz x 16 bit = 816 Mb/s
so the i.MX6 side D-PHY clock is 816 / 2 = 408 MHz (the lane clock is half the data rate because of DDR signaling). Because one Bayer pixel is 8 bits and the i.MX6 MIPI data bus is 16 bits, 1/2 cycle per pixel is used above. From ov5640_mipi_raw_setting[], the sensor side D-PHY clock is about 672 / 2 = 336 MHz. Per AN5305, the i.MX6 register MIPI_CSI2_PHY_TST_CTRL1 should be set to 0xC, but here I keep the default BSP value 0x14.

3. Capture test code
I changed the unit test mxc_v4l2_capture.c to capture the RAW data and save it to a file. Check my patch imx6_ov5640_raw_captupre_test_4.14.98_2.0.0_ga.diff, which applies on the i.MX Linux 4.14.98_2.0.0 BSP unit test code. The usage is:
./cap.out -c 1 -i 1 -fr 15 -m 6 -iw 2592 -ih 1944 -ow 2592 -oh 1944 -f BA81 -d /dev/video1 savefile.dmp
The parameter -i 1 means CSI-to-MEM mode; /dev/video1 is the MIPI ov5640, /dev/video0 is the parallel ov5640.

4. Displaying the RAW data
The RAW data cannot be displayed directly; a debayer process is needed to recover the full red, green and blue values for each pixel. Running the debayer on the CPU costs much CPU time; to save CPU time, the debayer can be done by the GPU: the captured RAW data is uploaded to the GPU as a texture, the GPU does the debayer to recover the full color of each pixel, and the result is displayed.

To upload RAW camera data to the GPU with zero memory copy, I use the i.MX6Q GPU extension GL_VIV_direct_texture. It creates a texture with direct access support: the API glTexDirectVIVMap supports mapping a user-space memory or a physical address into the texture surface. Since glTexDirectVIVMap needs the logical and physical addresses of the data buffer, I allocate the data buffer from /dev/mxc_ipu (it is a dma-buffer, and both the logical and physical addresses of the buffer are available), queue it as USERPTR to the IPU V4L2 capture driver, and after dequeuing the RAW camera data, pass it to the GPU for debayering. On the GPU side, I use the OpenGL shader code from "Efficient, High-Quality Bayer Demosaic Filtering on GPUs". Check my patch imx6-5640-debayer-testcode-gpusdk-5.2.0.diff, which applies on the i.MX GPU SDK 5.2.0 code. Note that only the debayer is done here, no extra processing.

5. Known issue
With the ov5640 outputting 5MP at 15 fps there is more noise in the camera data than at 5 fps. My debugging suggests this noise comes from the ov5640 itself.

Reference:
a> https://www.nxp.com/webapp/Download?colCode=IMX6DQRM
b> https://www.nxp.com/webapp/Download?colCode=L4.14.98_2.0.0_MX6QDLSOLOX&appType=license
c> https://github.com/NXPmicro/gtec-demo-framework
d> https://www.nxp.com/docs/en/application-note/AN5305.pdf
e> ov5640 data sheet
f> ov5640 software application note
g> Efficient, High-Quality Bayer Demosaic Filtering on GPUs
https://www.semanticscholar.org/paper/Efficient%2C-High-Quality-Bayer-Demosaic-Filtering-on-McGuire/088a2f47b7ab99c78d41623bdfaf4acdb02358fb
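The D-PHY clock budget in section 2 can be checked with a few lines of arithmetic; this only reproduces the calculation quoted in the text above, with the 1.35 blanking factor taken from there:

#include <stdio.h>

/* Rough MIPI D-PHY clock check for the 5MP@15fps RAW case above:
 * 8-bit Bayer on a 16-bit bus -> 1/2 cycle per pixel, plus ~35%
 * blanking overhead (factor taken from the text above). */
int main(void)
{
    double pixels = 2592.0 * 1944.0 * 15.0;   /* pixels per second   */
    double pixclk = pixels * 0.5 * 1.35;      /* ~51 MHz             */
    double datarate = pixclk * 16.0;          /* ~816 Mb/s           */
    double dphy_clk = datarate / 2.0;         /* DDR: clock = rate/2 */

    printf("pixel clock : %.1f MHz\n", pixclk / 1e6);
    printf("data rate   : %.1f Mb/s\n", datarate / 1e6);
    printf("D-PHY clock : %.1f MHz\n", dphy_clk / 1e6); /* ~408 MHz */
    return 0;
}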
View full article
In the default Linux BSP, NXP implemented LVDS-to-HDMI (it6263) and MIPI-DSI-to-HDMI (adv7535) bridge chip drivers. These drivers read the EDID from the display and then apply the timing parameters to the DRM driver. But in the bridge chip -> serializer -> deserializer -> LCD panel use case, there is no EDID. Attached are reference patches for such a use case: they connect the bridge chip to the panel directly, and no EDID is needed. The patches were tested on an i.MX8QXP MEK in bridge chip + panel mode; with both patches, the fb0 device appears under /sys/class/graphics/ and the card appears under /sys/class/drm/. The display works fine with the 720P panel mode selected in the DTS.

[2020-06-24]: Added patches for the L4.14.98 kernel:
Android_Auto_P9.0.0_GA2.1.0_Kernel_No_EDID_IT6263.patch
L4.14.98-iMX8QXP-MEK-ADV7535-MIPI-DSI-to-HDMI-bridge-chip-com.patch
View full article
This document describes the i.MX 8QXP MEK mini-SAS connector features for Linux and Android use cases, covering the supported daughter cards, the process to change Device Tree (DTS) files or boot images, and how to enable these different display options on the board.
View full article
This document describes the i.MX 8MQ EVK HDMI output and mini-SAS connector features for Linux and Android use cases, covering the supported daughter board, the process to change Device Tree (DTS) files or boot images, and how to enable these different display options on the board.
View full article
Requirements:
- Host machine with Ubuntu 14.04
- UDOO Quad/Dual board
- uSD card with at least 8 GB
- Download the documentation and install the latest official UDOObuntu OS (at the moment of writing: UDOObuntu 2.1.2), https://www.udoo.org/downloads/

Overview:
This document describes how to install and test Keras (an open-source neural network library) and Theano (a numerical computation library for Python) for deep learning usage on the i.MX6QD UDOO board.

Installation:
$ sudo apt-get update && sudo apt-get upgrade
Update your system date, e.g.:
$ sudo date -s "07/08/2017 12:00"
First satisfy the run-time and build-time dependencies:
$ sudo apt-get install python-software-properties software-properties-common make unzip zlib1g-dev git pkg-config autoconf automake libtool curl python-pip python-numpy libblas-dev liblapack-dev python-dev libatlas-base-dev gfortran libhdf5-serial-dev libhdf5-dev python-setuptools libyaml-dev libpython2.7-dev
$ sudo easy_install scipy
The last step installs scipy through pip and can take several hours.

Theano
First, we have a few more dependencies to get:
$ sudo pip install scikit-learn
$ sudo pip install pillow
$ sudo pip install h5py
With these dependencies met, we can install a stable Theano release from the git source:
$ git clone https://github.com/Theano/Theano
$ cd Theano
Numpy 1.9 causes conflicts on armv7, so we need to change the setup.py configuration:
$ sudo nano setup.py
Remove the line
#       install_requires=['numpy>=1.9.1', 'scipy>=0.14', 'six>=1.9.0'],
and add
setup_requires=["numpy"],
install_requires=["numpy"],
Then install it:
$ sudo python setup.py install

Keras
The installation can occur with the following commands (this can take a lot of time!):
$ cd ..
$ git clone https://github.com/fchollet/keras.git
$ cd keras
$ sudo python setup.py install
$ LC_ALL=C
$ sudo pip install --upgrade keras
After Keras is installed, you will want to edit the Keras configuration file ~/.keras/keras.json to use Theano instead of the default TensorFlow backend. If it isn't there, you can create it. This requires changing two lines. The first change is:
"image_dim_ordering": "tf"  -->  "image_dim_ordering": "th"
and the second:
"backend": "tensorflow"  -->  "backend": "theano"
(The final file should look like the example below.)
$ sudo nano ~/.keras/keras.json
{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "image_data_format": "channels_last",
    "backend": "theano"
}
You can also define the environment variable KERAS_BACKEND, which overrides what is defined in your config file:
$ KERAS_BACKEND=theano python -c "from keras import backend"

Testing
Quick test:
udooer@udoo:~$ python
Python 2.7.6 (default, Oct 26 2016, 20:46:32)
[GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import keras
Using Theano backend.
>>>
Test 2 (be aware this test takes some time, ~1 hr on a UDOO Dual):
$ curl -sSL -k https://github.com/fchollet/keras/raw/master/examples/mnist_mlp.py | python

Output:
For demonstration we use the deep-learning-models repository from the fchollet git (also covered by PyImageSearch), which provides three Keras models (VGG16, VGG19, and ResNet50) online — these networks are pre-trained on the ImageNet dataset, meaning that they can recognize 1,000 common object classes out of the box.
$ cd keras
$ git clone https://github.com/fchollet/deep-learning-models
$ cd deep-learning-models
$ ls -l
Notice how we have four Python files. The resnet50.py, vgg16.py, and vgg19.py files correspond to their respective network architecture definitions. The imagenet_utils file, as the name suggests, contains a couple of helper functions that allow us to prepare images for classification as well as obtain the final class label predictions from the network.

Classify ImageNet classes with ResNet50
ResNet50 is a model with weights pre-trained on ImageNet. This model is available for both the Theano and TensorFlow backends, and can be built both with the "channels_first" data format (channels, height, width) and the "channels_last" data format (height, width, channels). The default input size for this model is 224x224.

We are now ready to write some Python code to classify image contents utilizing convolutional neural networks (CNNs) pre-trained on the ImageNet dataset. For the UDOO Quad/Dual, use ResNet50 to avoid space conflicts. We are also going to use ImageNet (http://image-net.org/), an image database organized according to the WordNet hierarchy, in which each node of the hierarchy is depicted by hundreds and thousands of images.

from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np

model = ResNet50(weights='imagenet')

# for this sample I downloaded the image from: http://i.imgur.com/wpxMwsR.jpg
img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])

Save the file and run it. Results for the elephant image: top prediction was 0.8890 for African elephant.
Testing with this image: http://i.imgur.com/4FIOwAN.jpg
Results: top prediction was 0.7799 for golden_retriever.

Now your UDOO is ready to use Keras and Theano as deep learning libraries; next time we are going to show some usage examples for image classification models with OpenCV.

References:
- GitHub - fchollet/keras: Deep Learning library for Python. Runs on TensorFlow, Theano, or CNTK.
- GitHub - Theano/Theano: Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions.
- GitHub - fchollet/deep-learning-models: Keras code and weights files for popular deep learning models.
- Installing Keras for deep learning - PyImageSearch
View full article
Overview
As more and more communication is required between online and offline, the QR code is widely used in mobile payment, mobile small apps, industry thing identification and so on. The i.MX6UL/ULL has the CSI and PXP IPs for camera connection and image CSC/flip/rotation acceleration, and an LCDIF IP supporting the display, but no 3D IP. This low-power, entry-level AP is therefore very suitable for the industry HMI segment, which does not require a cool 3D graphic display, but a simple and straightforward GUI for interaction. A QR code scanner is one of the use cases in the industry segment that more and more customers are focusing on.

The CPU frequency of the i.MX6UL is about 500 MHz and it has no GPU IP, so a lightweight GUI and window system is required. Here we recommend Qt with the Wayland backend (without X11), which makes the window system smaller and faster than a traditional X11 UI. Qt was chosen because it has an open-source version, rich components, platform independence, good performance for embedded systems, and strong development tools like QtCreator for creating applications. To enable the Qt development environment, check this: Enable QT developement for i.MX6UL (v2)

Here I made a QR code scanner demo based on Qt 5.6 + QZXing (a QR/bar code scan engine) running on the i.MX6UL EVK board with a UVC camera (at least 640x480 resolution is required) and a 480x272 LCD. The source code is open here (License Apache 2.0): https://github.com/muddog/QRScanner

Implementation
For camera preview and capture, GStreamer is the first thing to think of: it is easy to use and has the acceleration pads that NXP implemented for the i.MX6UL. It is very easy to enable a preview in the console:
$ gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=640,height=320 ! imxvideoconvert_pxp ! video/x-raw,format=RGB16 ! waylandsink
This works on the i.MX6UL EVK, with the PXP IP doing the color space conversion from YUY2 to RGB16 as well as any scaling of the image. The CPU loading of this is about 20-30%; if you replace "imxvideoconvert_pxp" with the "videoconvert" component, CSC and scaling are done by the CPU and the loading increases to 50-60%. "/dev/video1" is the device node for the UVC camera; it may be different in your environment.

So the target is clear: create such a pipeline (with PXP acceleration) in the Qt application, use an appsink to get the preview images, and "sink" each one to a QWidget by drawing the image on the widget surface for preview (say every 50 ms for 20 fps). Then, in another thread, fetch the preview buffer at a fixed interval (like every 0.5 s) and feed it into the ZXing engine to decode the strings inside the image. These are the classes in the source code:

ScannerQWidgetSink
Acts as a GStreamer sink for preview rendering. It initializes the pipeline and creates a timer with a 50 ms timeout. In the timer handler, the appsink is used to copy the camera buffer from GStreamer, and the ViewfinderWidget is told to update (re-draw event). A generic sketch of the appsink usage is shown after this article.

ViewfinderWidget
This class inherits from QWidget and draws the preview buffer as a QImage onto its own surface using QPainter. The QImage is created at the very beginning with the image buffer provided by the ScannerQWidgetSink. Because QImage itself does not maintain the image buffer, the buffer must stay alive while it is in use, so the buffer is kept for the ScannerQWidgetSink life cycle and the appsink buffer from the pipeline is copied into it for preview.

MainWindow
Creates the main window, which has no title bar or border; starts the animation for the red scan-line bar; creates the instances of DecoderThread and ScannerQWidgetSink, then sets them up and starts them.

DecoderThread
An infinite loop that waits for a buffer released by the ScannerQWidgetSink every 0.5 s. It copies the buffer data to its own buffer (imgData) to avoid any change to the buffer by the sink during decoding, then feeds this copy into the ZXing engine and shows the decoder result on the QLabel.

Screenshot under the Wayland (Weston) desktop:

Customization
Camera instance
The demo uses a UVC camera plugged into the USB host, with device node /dev/video1. To use CSI or another device, change the construction parameters of ScannerQWidgetSink():
sink = new ScannerQWidgetSink(ui->widget, QString("v4l2src device=/dev/video1"));

Image resolution for capture and preview
Change the static member values of the ScannerQWidgetSink class:
uint ScannerQWidgetSink::CAPTURE_HEIGHT = 480;
uint ScannerQWidgetSink::CAPTURE_WIDTH = 640;

Preview fps and decoding frequency
Find the "framerate=20/1" string in ScannerQWidgetSink::GstPipelineInit() and change it to your fps. You also have to change the renderTimer start timeout value in ::StartRender(). The decoding frequency is determined by renderCnt, which determines after how many preview frames the decoder is fed.

Main window size
The main window has a fixed size; change it in mainwindow.ui, which is easy to do in the QtCreator Designer.

FAQ
Why not use a CSI camera in the demo?
Honestly, I do not have a CSI camera module; it is also DNP when you buy the board on NXP.com. So a widely used UVC camera is preferred; it is also easy to scan a QR code on your phone, your display panel, etc.

Why not use QCamera for preview and capture?
The QCamera class in the Qt Multimedia component uses the camerabin2 GStreamer plugin, which creates a very long pipeline for the different usages of viewfinder, image capture and video encoding. Camerabin2 eats too much CPU and memory; taking pictures and recording are very, very slow. The 30 fps preview eats about 70-80% CPU loading even after I hacked it to use imxvideoconvert_pxp instead of the software videoconvert. So I finally gave up implementing the QRScanner based on QCamera.

How to make sure only one instance of the Qt app is running?
We can use QSharedMemory to create a shared memory block with a unique KEY. When a second instance of the app is started, it checks whether the shared memory with this KEY has already been created. If the shm is there, another instance is already running, and the new one has to exit().

But as the Qt documentation mentions, the QSharedMemory block cannot be destroyed correctly when the app crashes, which means we have to handle each termination signal and do the delete ourselves:

static QSharedMemory *gShm = NULL;

static void terminate(int signum)
{
    if (gShm) {
        delete gShm;
        gShm = NULL;
    }
    qDebug() << "Terminate with signal:" << signum;
    exit(128 + signum);
}

int main(int argc, char *argv[])
{
    QApplication a(argc, argv);

    // Handle any further termination signals to ensure the
    // QSharedMemory block is deleted even if the process crashes
    signal(SIGHUP,  terminate); // 1
    signal(SIGINT,  terminate); // 2
    signal(SIGQUIT, terminate); // 3
    signal(SIGILL,  terminate); // 4
    signal(SIGABRT, terminate); // 6
    signal(SIGFPE,  terminate); // 8
    signal(SIGBUS,  terminate); // 10
    signal(SIGSEGV, terminate); // 11
    signal(SIGSYS,  terminate); // 12
    signal(SIGPIPE, terminate); // 13
    signal(SIGALRM, terminate); // 14
    signal(SIGTERM, terminate); // 15
    signal(SIGXCPU, terminate); // 24
    signal(SIGXFSZ, terminate); // 25

    gShm = new QSharedMemory("QRScannerNXP");
    if (!gShm->create(4, QSharedMemory::ReadWrite)) {
        delete gShm;
        qDebug() << "Only allow one instance of QRScanner";
        exit(0);
    }
.....
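For reference, the appsink side of such a pipeline typically looks like the sketch below. This is generic GStreamer appsink usage in C, not the demo's actual ScannerQWidgetSink code; the function name is made up for illustration:

#include <gst/gst.h>
#include <gst/app/gstappsink.h>
#include <string.h>

/* Pull one preview frame from the appsink and copy it out, the way
 * a render-timer handler would before drawing it into a QImage. */
static gboolean pull_preview_frame(GstAppSink *sink, guint8 *dst, gsize dst_size)
{
    GstSample *sample = gst_app_sink_pull_sample(sink);
    if (!sample)
        return FALSE; /* EOS or pipeline stopped */

    GstBuffer *buf = gst_sample_get_buffer(sample);
    GstMapInfo map;
    gboolean ok = FALSE;

    if (buf && gst_buffer_map(buf, &map, GST_MAP_READ)) {
        gsize n = map.size < dst_size ? map.size : dst_size;
        memcpy(dst, map.data, n); /* copy RGB16 pixels for the widget */
        gst_buffer_unmap(buf, &map);
        ok = TRUE;
    }
    gst_sample_unref(sample);
    return ok;
}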
View full article
When configuring or debugging the i.MX6 IPU IDMAC CPMEM parameters, it is hard to find the value of a parameter inside the 160-bit words. This web tool separates the 160-bit words into parameters, making it easier to check their values. Link: i.MX Tools
View full article
Two IPUs are supported in the i.MX6Q SoC, each supporting up to 1920x1080 resolution, and four kinds of interfaces on the i.MX6Q pads can be used to connect display devices: HDMI, digital RGB24, LVDS1+LVDS2, and MIPI DSI. This document discusses how to add a VGA port on the digital RGB interface and realize dual display via HDMI & VGA. The solution has been validated on the Android 4.2.2 and Android 4.4.2 BSPs released by NXP.

1. Expanding a VGA port based on the digital RGB24 interface with the ADV7125
The schematic is in the attachments.

2. Configuration of environment variables in u-boot
The following settings are for booting via NFS; users can adjust them to boot from Flash on the board.
setenv ipaddr 192.168.1.103
setenv serverip 192.168.1.102
setenv gateway 192.168.1.1
setenv ethaddr 00:04:9f:00:ea:d3
setenv bootargs_base 'setenv bootargs console=ttymxc0,115200'
setenv bootargs_android 'setenv bootargs ${bootargs} init=/init androidboot.console=ttymxc0 androidboot.hardware=freescale'
setenv bootargs_nfs 'setenv bootargs ${bootargs} ip=${ipaddr}:${serverip}:${gateway}:${netmask}::eth0 off root=/dev/nfs nfsroot=${serverip}:${nfsroot}'
setenv bootargs_disp 'setenv bootargs ${bootargs} video=mxcfb0:dev=hdmi,1920x1080M@60,if=RGB24,bpp=32 video=mxcfb1:dev=lcd,1920x1080M@60,if=RGB24,bpp=32 video=mxcfb2:off fbmem=32M vmalloc=400M'
setenv bootcmd_net 'run bootargs_base bootargs_android bootargs_nfs bootargs_disp;tftpboot ${loadaddr} uImage;bootm'

3. Configuration in the BSP files
(1) IOMUX of the DISP0 port (in the header file)
    MX6Q_PAD_DI0_DISP_CLK__IPU1_DI0_DISP_CLK,
    MX6Q_PAD_DI0_PIN15__IPU1_DI0_PIN15,    /* DE */
    MX6Q_PAD_DI0_PIN2__IPU1_DI0_PIN2,      /* HSync */
    MX6Q_PAD_DI0_PIN3__IPU1_DI0_PIN3,      /* VSync */
    MX6Q_PAD_DISP0_DAT0__IPU1_DISP0_DAT_0,
    MX6Q_PAD_DISP0_DAT1__IPU1_DISP0_DAT_1,
    MX6Q_PAD_DISP0_DAT2__IPU1_DISP0_DAT_2,
    MX6Q_PAD_DISP0_DAT3__IPU1_DISP0_DAT_3,
    MX6Q_PAD_DISP0_DAT4__IPU1_DISP0_DAT_4,
    MX6Q_PAD_DISP0_DAT5__IPU1_DISP0_DAT_5,
    MX6Q_PAD_DISP0_DAT6__IPU1_DISP0_DAT_6,
    MX6Q_PAD_DISP0_DAT7__IPU1_DISP0_DAT_7,
    MX6Q_PAD_DISP0_DAT8__IPU1_DISP0_DAT_8,
    MX6Q_PAD_DISP0_DAT9__IPU1_DISP0_DAT_9,
    MX6Q_PAD_DISP0_DAT10__IPU1_DISP0_DAT_10,
    MX6Q_PAD_DISP0_DAT11__IPU1_DISP0_DAT_11,
    MX6Q_PAD_DISP0_DAT12__IPU1_DISP0_DAT_12,
    MX6Q_PAD_DISP0_DAT13__IPU1_DISP0_DAT_13,
    MX6Q_PAD_DISP0_DAT14__IPU1_DISP0_DAT_14,
    MX6Q_PAD_DISP0_DAT15__IPU1_DISP0_DAT_15,
    MX6Q_PAD_DISP0_DAT16__IPU1_DISP0_DAT_16,
    MX6Q_PAD_DISP0_DAT17__IPU1_DISP0_DAT_17,
    MX6Q_PAD_DISP0_DAT18__IPU1_DISP0_DAT_18,
    MX6Q_PAD_DISP0_DAT19__IPU1_DISP0_DAT_19,
    MX6Q_PAD_DISP0_DAT20__IPU1_DISP0_DAT_20,
    MX6Q_PAD_DISP0_DAT21__IPU1_DISP0_DAT_21,
    MX6Q_PAD_DISP0_DAT22__IPU1_DISP0_DAT_22,
    MX6Q_PAD_DISP0_DAT23__IPU1_DISP0_DAT_23,
    MX6Q_PAD_NANDF_D4__GPIO_2_4,           /* ADV7125 power on/off */

(2) Frame buffers
static struct ipuv3_fb_platform_data qcorein_fb_data[] = {
/*
    {
    .disp_dev = "ldb",
    .interface_pix_fmt = IPU_PIX_FMT_RGB666,
    .mode_str = "LDB-WXGA",
    .default_bpp = 16,
    .int_clk = false,
    .late_init = false,
    },
*/
    {
    .disp_dev = "hdmi",
    .interface_pix_fmt = IPU_PIX_FMT_RGB24,
    .mode_str = "1920x1080M@60",
    .default_bpp = 32,
    .int_clk = false,
    .late_init = false,
    },
    {
    .disp_dev = "lcd",
    .interface_pix_fmt = IPU_PIX_FMT_RGB24,
    .mode_str = "1920x1080",
    .default_bpp = 32,
    .int_clk = false,
    },
};

(3) IPU settings
/* HDMI -- IPU1_DI0 */
static struct fsl_mxc_hdmi_core_platform_data hdmi_core_data = {
    .ipu_id = 1,
    .disp_id = 1,
};

/* RGB24 DISP0 LCD (here RGB24 --> VGA via ADV7125) -- IPU0_DI0 */
static struct fsl_mxc_lcd_platform_data lcdif_data = {
    .ipu_id = 0,
    .disp_id = 0,
    .default_ifmt = IPU_PIX_FMT_RGB24,
};

(4) Adding the LCD data in the board_init() function
static void __init mx6_qcorein_board_init(void)
{
    ...
    imx6q_add_lcdif(&lcdif_data);
    ...
}

(5) Adding the mxc_lcd.c driver to the Linux kernel
When using make menuconfig to configure the kernel, don't forget to add the LCD driver to the kernel.

Note: Users of other versions of the Android BSP can port this solution following the same approach.

Weidong Sun
NXP TIC team
View full article