i.MX Processors Knowledge Base

This document describes all the i.MX 8 MIPI-CSI use cases, showing the available cameras and daughter cards supported by the boards, the compatible Device Tree (DTS) files, and how to enable these different camera options on the i.MX 8 boards. In addition, it covers some advanced camera use cases, such as multiple-camera output using the imxcompositor_g2d plugin, GStreamer zero-copy pipelines, and V4L2 API extra-controls examples.
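As a quick illustration of the multiple-camera case, the sketch below composes two cameras side by side with imxcompositor_g2d; the device nodes, pad geometry and output size are assumptions and depend on the board, BSP release and connected cameras:
gst-launch-1.0 imxcompositor_g2d name=comp \
    sink_0::xpos=0   sink_0::ypos=0 sink_0::width=960 sink_0::height=540 \
    sink_1::xpos=960 sink_1::ypos=0 sink_1::width=960 sink_1::height=540 ! \
    video/x-raw,width=1920,height=540 ! waylandsink \
  v4l2src device=/dev/video0 ! comp.sink_0 \
  v4l2src device=/dev/video1 ! comp.sink_1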
Freescale does not provide a dedicated GStreamer element for JPEG encoding, so the standard 'jpegenc' element should be used.
Image capture with a web camera:
gst-launch v4l2src num-buffers=1 ! jpegenc ! filesink location=sample.jpeg
With an embedded camera:
gst-launch mfw_v4lsrc num-buffers=1 ! jpegenc ! filesink location=sample.jpeg
More pipelines on GStreamer i.MX6 Pipelines
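If the camera supports several resolutions, the capture size can be pinned with a capsfilter before the encoder; a minimal sketch assuming a GStreamer 0.10 BSP (as in the pipelines above) and a source that can deliver 640x480 YUV:
gst-launch v4l2src num-buffers=1 ! 'video/x-raw-yuv,width=640,height=480' ! jpegenc ! filesink location=sample.jpeg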
There is no Freescale GStreamer element that does JPEG decoding, so we must rely on a standard one, such as 'jpegdec'. If your Linux system was built using LTIB, follow these steps to get the jpegdec element included in gst-plugins-good:
1. In the LTIB menuconfig, make sure the following packages are selected: gstreamer-plugins-good, libjpeg, libpng
2. Remove the configure parameters '--disable-libpng' and '--disable-jpeg' in the file './dist/lfs-5.1/gst-plugins-good/gst-plugins-good.spec'
3. Rebuild and flash your board (or SD card) again.
Image display:
VSALPHA=1 gst-launch filesrc location=sample.jpeg ! jpegdec ! imagefreeze ! mfw_isink
Important: widths and heights that are not 8-pixel aligned are treated as an unsupported format by the isink plugin.
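The configure-parameter edit in the LTIB spec file mentioned above can also be scripted instead of done by hand; a sketch, using the spec file path quoted in step 2 (adjust it if your LTIB tree differs):
$ sed -i 's/--disable-libpng//g; s/--disable-jpeg//g' ./dist/lfs-5.1/gst-plugins-good/gst-plugins-good.spec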
Test environment
i.MX8MP EVK, LVDS0, LVDS-HDMI bridge (it6263), Uboot2022, Uboot2023

Background
Some customers need to show a logo using an LVDS panel. The current BSP does not support an LVDS driver in U-Boot. This patch provides i.MX8M Plus LVDS driver support in U-Boot. If you want to connect an LVDS panel, you need to port your LVDS panel driver, similar to simple-panel.c.

Update
[2022.9.19] Verified on L5.15.32_2.0.0: 0001-L5.15.32-Add-i.MX8MP-LVDS-driver-in-uboot
'probe device is failed, ret -2, probe video device failed, ret -19' is caused by the code below; the fix has been merged in the attachment.
// /* Only handle devices that have a valid ofnode */
// if (dev_has_ofnode(dev) && !(dev->driver->flags & DM_FLAG_IGNORE_DEFAULT_CLKS)) {
//     /*
//      * Process 'assigned-{clocks/clock-parents/clock-rates}'
//      * properties
//      */
//     ret = clk_set_defaults(dev, CLK_DEFAULTS_PRE);
//     if (ret)
//         goto fail;
// }

[2023.3.14] Verified on L5.15.71: 0001-L5.15.71-Add-i.MX8MP-LVDS-support-in-uboot

[2023.9.12] For some panels with low DE, you need to uncomment the CTRL_INV_DE line and set this bit to 1.
 #include <linux/string.h>
@@ -110,9 +111,8 @@ static void lcdifv3_set_mode(struct lcdifv3_priv *priv,
 	writel(CTRL_INV_HS, (ulong)(priv->reg_base + LCDIFV3_CTRL_SET));
 	/* SEC MIPI DSI specific */
-	writel(CTRL_INV_PXCK, (ulong)(priv->reg_base + LCDIFV3_CTRL_CLR));
-	writel(CTRL_INV_DE, (ulong)(priv->reg_base + LCDIFV3_CTRL_CLR));
-
+	//writel(CTRL_INV_PXCK, (ulong)(priv->reg_base + LCDIFV3_CTRL_CLR));
+	//writel(CTRL_INV_DE, (ulong)(priv->reg_base + LCDIFV3_CTRL_CLR));
 }

[2024.5.15] If you are using simple-panel.c, use the patch below to set the display timing from the panel into the LCDIF controller.
diff --git a/drivers/video/simple_panel.c b/drivers/video/simple_panel.c
index f9281d5e83..692c96dcaa 100644
--- a/drivers/video/simple_panel.c
+++ b/drivers/video/simple_panel.c
@@ -18,12 +18,27 @@ struct simple_panel_priv {
 	struct gpio_desc enable;
 };
+/* define your panel timing here and
+ * copy it in simple_panel_get_display_timing */
+static const struct display_timing boe_ev121wxm_n10_1850_timing = {
+	.pixelclock.typ = 71143000,
+	.hactive.typ = 1280,
+	.hfront_porch.typ = 32,
+	.hback_porch.typ = 80,
+	.hsync_len.typ = 48,
+	.vactive.typ = 800,
+	.vfront_porch.typ = 6,
+	.vback_porch.typ = 14,
+	.vsync_len.typ = 3,
+};
+
@@ -100,10 +121,18 @@ static int simple_panel_probe(struct udevice *dev)
 	return 0;
 }
+static int simple_panel_get_display_timing(struct udevice *dev,
+					   struct display_timing *timings)
+{
+	memcpy(timings, &boe_ev121wxm_n10_1850_timing, sizeof(*timings));
+
+	return 0;
+}
 static const struct panel_ops simple_panel_ops = {
 	.enable_backlight = simple_panel_enable_backlight,
 	.set_backlight = simple_panel_set_backlight,
+	.get_display_timing = simple_panel_get_display_timing,
 };
 static const struct udevice_id simple_panel_ids[] = {
@@ -115,6 +144,7 @@ static const struct udevice_id simple_panel_ids[] = {
 	{ .compatible = "lg,lb070wv8" },
 	{ .compatible = "sharp,lq123p1jx31" },
 	{ .compatible = "boe,nv101wxmn51" },
+	{ .compatible = "boe,ev121wxm-n10-1850" },
 	{ }
 };

[2024.7.23] Updated patch for L6.6.23 (Uboot2023)
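For reference, a typical way to apply the attached patch and rebuild U-Boot for the i.MX8MP EVK; the defconfig name, toolchain prefix and the .patch file extension are assumptions, not taken from the original post:
$ cd uboot-imx
$ git am 0001-L5.15.32-Add-i.MX8MP-LVDS-driver-in-uboot.patch
$ make imx8mp_evk_defconfig
$ make -j$(nproc) CROSS_COMPILE=aarch64-linux-gnu-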
This doc shows how, on the i.MX8QXP MEK board, to configure the ov5640 sensor (parallel interface) to output 5MP (2592x1944) RAW (Bayer) data at 15 fps, have the Parallel Capture Subsystem and Image Sensor Interface capture the RAW RGB data, and have the i.MX8QXP GPU debayer the RAW data and display the image. HW: i.MX8QXP MEK board, MEK base board (to place the parallel camera), ov5640 sensor. SW: Linux 4.14.98_2.0.0 BSP, and the patches in this doc.

Configure the camera sensor side
A Bayer filter is a color filter array (CFA) for arranging RGB color filters on a square grid of photosensors. The filter pattern is 50% green, 25% red and 25% blue, and depending on the arrangement is also called BGGR, RGBG, GRGB, or RGGB. The ov5640 has an image array capable of operating at up to 15 fps at 5 megapixel (2592x1944) resolution. The OV5640 supports the output formats RAW (Bayer), RGB565/555/444, CCIR656, YUV422/420, YCbCr422, and compression. To make the ov5640 output 5MP RAW data at 15 fps, check my kernel patch imx8-ov5640-raw-capture-driver-4.14.98_2.0.0.diff, which applies on the i.MX Linux 4.14.98_2.0.0 BSP kernel code. The parallel-interface ov5640 uses the ov5640_raw_setting[] array of drivers/media/platform/imx8/ov5640_v3.c. This register setting comes from the ov5640 software application note and data sheet.

Configure the i.MX8QXP side
The Parallel Capture Subsystem consists of the Parallel Capture Interface (BT 656) and associated peripherals. It interfaces to the parallel CSI sensor. This allows for up to 24 RGB data bits in parallel, or for RGB components on consecutive clocks (up to 10-bit color depth). The formats supported are RGB, RAW and YUV 422. Below is the Parallel Capture Subsystem diagram. For RAW format data, the CSI_CTRL_REG of the Parallel Capture Subsystem needs to be configured as in my patch; otherwise correct data cannot be captured.
The multiple input sources (MIPI CSI, Parallel Capture) capture the pixel data and feed it to the ISI. The ISI is responsible for capturing and pre-processing the pixel data from multiple input sources and storing it into memory. Below is the ISI diagram. For RAW format data, it should bypass any processing pipeline of the ISI; just use the ISI to save it to memory.

Capture test code
To capture the RAW data and save it to a file, check my patch imx8_ov5640_raw_captupre_test_4.14.98_2.0.0_ga.diff, which applies on the i.MX Linux 4.14.98_2.0.0 BSP unit test code. The usage is:
./imx8_cap.out -of -cam 1 -fr 15 -fmt BA81 -ow 2592 -oh 1944 -num 100

Display RAW data
The RAW data cannot be displayed directly; a debayer process is needed to get the complete red, green and blue color for each pixel. If the debayer process runs on the CPU, it costs a lot of CPU time. To save CPU time, debayering can be done by the GPU. The method is: the captured RAW data is uploaded to the GPU as a texture, the GPU does the debayering to recover the full color of each pixel, and the result is displayed. To upload RAW camera data to the GPU with zero memory copy, I use the i.MX8QXP GPU extension GL_VIV_direct_texture. It creates a texture with direct access support, and the API glTexDirectVIVMap supports mapping a user-space memory or a physical address into the texture surface. glTexDirectVIVMap needs the logical and physical addresses of the data buffer, so I allocate the data buffer from the g2d lib (it is a DMA buffer, and the logical/physical addresses of the buffer are available), then queue it as DMABUF to the v4l2 capture driver; after dequeuing, the RAW camera data is passed to the GPU for debayering. On the GPU side, I use the OpenGL shader code from "Efficient, High-Quality Bayer Demosaic Filtering on GPUs".
Check my patch imx8_debayer-gpusdk-5.3.0.diff, which applies on the i.MX GPU SDK 5.3.0 code. Note that only debayering is done here, no extra processing.

Known issue
When the ov5640 outputs 5MP at 15 fps, there is more noise in the camera data than when outputting 5MP at 5 fps. My debugging indicates this noise seems to come from the ov5640 itself.

Reference:
a> https://www.nxp.com/webapp/Download?colCode=IMX8DQXPRM
b> https://www.nxp.com/webapp/Download?colCode=L4.14.98_2.0.0_MX8QXP&appType=license
c> https://github.com/NXPmicro/gtec-demo-framework
d> ov5640 data sheet
e> ov5640 software application note
f> Efficient, High-Quality Bayer Demosaic Filtering on GPUs https://www.semanticscholar.org/paper/Efficient%2C-High-Quality-Bayer-Demosaic-Filtering-on-McGuire/088a2f47b7ab99c78d41623bdfaf4acdb02358fb
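To try the patches, a typical flow is to apply them to the BSP sources and then confirm on the target that the capture node really exposes the Bayer format before running the capture test above; a sketch, assuming a git-managed 4.14.98_2.0.0 kernel tree named linux-imx and v4l-utils installed on the target (both assumptions):
$ cd linux-imx && git apply imx8-ov5640-raw-capture-driver-4.14.98_2.0.0.diff
$ v4l2-ctl -d /dev/video0 --list-formats-ext    # on the target: the device node is an assumption; check that a Bayer (BA81) format is listed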
Hardware connection: there are two board-to-board connectors on the E-INK daughter card IMXEBOOKDC4, while there is only one on the i.MX7D Sabre board, as in the picture below. This can be a bit confusing when connecting the two. After checking internally: the original design wires both the eLCDIF and EPDC buses out to one daughter card, adding the flexibility to have different configurations (LCD/EPD) on one display daughter card. On the i.MX7D Sabre board, only one connector is available, for the EPDC bus. Here is how we connect the i.MX7D Sabre and the IMXEBOOKDC4:
Software setup: here we use the pre-built L3.14.38_6UL7D_Beta Linux as our boot image. Steps to set up, boot and test EPDC:
1. Download and decompress the BSP pre-built image package "L3.14.38_beta_images_MX6UL7D.tar.gz"; you should be able to find the SD image in it -- "fsl-image-gui-x11-imx7dsabresd.sdcard"
2. Program the SD image onto your SD card (>800 MBytes) with the command (I'm running this in Ubuntu): "dd if=fsl-image-gui-x11-imx7dsabresd.sdcard of=/dev/sdb;sync"
3. Insert the SD card into the slot (J6) on the i.MX7D Sabre board, connect the debug UART and power on the board
4. Modify the U-Boot environment variables as below (see the consolidated commands after this list):
   a.) setenv fdt_file imx7d-sdb-epdc.dtb (originally this is "fdt_file=imx7d-sdb.dtb")
   b.) setenv mmcargs 'setenv bootargs console=${console},${baudrate} root=${mmcroot} epdc video=mxcepdcfb:E060SCM,bpp=16' (originally this is "mmcargs=setenv bootargs console=${console},${baudrate} root=${mmcroot}")
5. Boot into the Linux kernel and run the unit test: "/unit_tests/mxc_epdc_fb_test.out"; you should see test patterns running on the EPD.
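For convenience, the step 4 changes entered at the U-Boot prompt; the two setenv lines are quoted from the steps above, while saveenv and boot are the usual follow-up and are an assumption here:
=> setenv fdt_file imx7d-sdb-epdc.dtb
=> setenv mmcargs 'setenv bootargs console=${console},${baudrate} root=${mmcroot} epdc video=mxcepdcfb:E060SCM,bpp=16'
=> saveenv
=> boot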
The D-PHY PLL (in the red circle in the picture below) is the PLL that drives the MIPI clock lane. It must be set in accordance with the video to be sent to the display.
Calculating the video bandwidth
The video bandwidth is calculated with the following equation:
Bandwidth (bits per second) = Horizontal res. x Vertical res. x Frame rate x Bits per pixel
Taking as an example the 1080p60 OLED display RM67191:
Bandwidth = 1920 x 1080 x 60 x 24 = 2985984000 bit/s, roughly 2.99 Gbit/s
Pixel clock calculation
The display pixel clock can be obtained from the display driver. In this example for the RM67191, the pixel clock is 132 Mpixel/s, see file: panel-raydium-rm67191.c\panel\drm\gpu\drivers - linux-imx - i.MX Linux kernel
Line 530: .pixelclock = { 66000000, 132000000, 132000000 },
Or the number can be obtained with the following equation:
pixel clock = (hactive + hfront_porch + hsync_len + hback_porch) x (vactive + vfront_porch + vsync_len + vback_porch) x frame rate
pixel clock = (1080 + 20 + 2 + 34) x (1920 + 10 + 2 + 4) x 60
pixel clock = 132000000 (rounded up)
Bit clock calculation (clock lane)
The MIPI D-PHY bit_clk is the output clock and is calculated in the file sec-dsim.c (line 1283): sec-dsim.c\bridge\drm\gpu\drivers - linux-imx - i.MX Linux kernel
The bit clock can be calculated with the following equation:
bit_clk = pixel clock * bits per pixel / number of lanes
In the case of 1080p60 (Raydium display), it is:
bit_clk = 132000000 * 24 / 4
bit_clk = 792000000
Other important timing parameters like 'p', 'm', 's' are obtained from the table in the following header file: sec_mipi_dphy_ln14lpp.h\imx\drm\gpu\drivers - linux-imx - i.MX Linux kernel
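The arithmetic above can be double-checked quickly in a shell; a small sketch using the timing numbers quoted from the panel driver above:
$ echo $(( (1080 + 20 + 2 + 34) * (1920 + 10 + 2 + 4) * 60 ))   # 131957760, i.e. ~132 MHz pixel clock
$ echo $(( 132000000 * 24 / 4 ))                                # 792000000, the bit_clk for 24 bpp over 4 lanes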
It is based on the 3.0.35 GA 4.1.0 BSP.
0001-Correct-mipi-camera-virtual-channel-setting-in-ipu_c.patch: the updated IPU code for the MIPI ID and SMFC settings in ipu_capture.c. These settings should not be combined with the MIPI virtual channel value; they should be fixed to ID 0.
0002-Use-virtual-channel-3-for-ov5640-mipi-camera-on-iMX6.patch: sample code to modify the ov5640_mipi camera to use virtual channel 3 on the SabreSD board.
The following command can be used to verify the MIPI camera function after booting into Linux:
$ gst-launch mfw_v4lsrc capture-mode=1 device=/dev/video1 ! mfw_v4lsink
2014-09-30 update: added the patch for the 3.10.17_GA1.0.0 BSP: "L3.10.17_1.0.0_mipi_camera_virtual_channel_3.zip"
File related to the following question: MX53 u-boot Splash Screen support
It is based on the L3.0.35_GA4.1.0 BSP.
In the default Linux BSP, there are 3 de-interlace modes, motion = 0, 1, 2. Motion modes 0 and 1 use three fields for de-interlacing, and motion mode 2 uses one field, so the overall output rate is 30 fps. In this mode, for motion modes 0 and 1, fields 1,2,3 are used for the first VDI output frame of the display, fields 3,4,5 for the second, and fields 5,6,7 for the third. Some field data (such as fields 2,4,6) is used only once, so data is lost.
After applying these patches, the VDI de-interlace output becomes 60 fps: for motion modes 0 and 1, fields 0,1,2 are used for the first VDI output frame, fields 1,2,3 for the second, and fields 2,3,4 for the third. All field data is used twice, no video data is lost, and the VDI quality is improved.
Kernel patches:
0001-Add-MEM-to-VDI-to-MEM-support-for-IPU.patch
0002-Add-IPU-IC-memcpy-support.patch
0003-IPU-VDI-support-switch-odd-and-even-field-in-motion-.patch
0004-IPU-VDI-correct-vdi-top-field-setting.patch
mxc_v4l2_tvin_imx6_vdi_60fps.zip: this is the test application sample code.
Test command; the parameter "-vd" means double-fps VDI:
./mxc_v4l2_tvin.out -ol 0 -ot 0 -ow 720 -oh 480 -m 0 -vd
This work is the result of my daughter's idea; she finished it with my guidance.
Cradle-1, a palm-size mini-HPC: the world's first full-function heterogeneous mini-HPC. This is what it looks like:
1 Architecture
Overall: CPU+GPU heterogeneous, 4 nodes, connected by a 100M Ethernet switch.
Nodes: Freescale i.MX6 Quad mini-PC, with 4 ARM Cortex-A9 cores and 1 Vivante GC2000 GPU.
2 Software
OS: Ubuntu 11.10 linaro
OpenCL driver: Vivante GC2000 OpenCL driver
Compiler: C/C++: gcc 4.6.1, Fortran90/95: gfortran 4.6.1
MPI parallel computing: MPICH2 1.4-1
NFS network file system: nfs-kernel-server 1.2.4
SSH security: openssh 1:5.8
3 Hardware
The hardware of all nodes is the same; only the software configurations are slightly different. One of them is assigned as the master node, the others are slave nodes. They were TV sticks originally, with Android 4.0 installed. The node hardware specification is:
CPU: 4 x 1.2G Cortex-A9 cores
GPU: 1 Vivante GC2000 GPU
RAM: 1G DDR
ROM: 8G SD
NIC: USB 2.0 100M Ethernet adapter (this NIC is not a TV-stick component, we added it)
WIFI: 150M
Display interface: HDMI
Network switch: 5-port 100M Ethernet switch
4 Network
Each node has one USB 2.0 NIC and one WIFI interface; the WIFI is used as the backup connection for the NIC connection. Network configuration (baby1 - baby4 are the four computing nodes):
baby1: 100M NIC 192.168.10.1, WIFI 192.168.0.111
baby2: 100M NIC 192.168.10.2, WIFI 192.168.0.112
baby3: 100M NIC 192.168.10.3, WIFI 192.168.0.113
baby4: 100M NIC 192.168.10.4, WIFI 192.168.0.114
5 Performance
Cradle-1 has 16 1.2G ARM Cortex-A9 cores and 4 Vivante GC2000 GPU cores; the total computing power of these 20 computing devices is more than 100 GFLOPS, more powerful than an ordinary desktop. The whole machine is only a little bigger than a palm, and the total power consumption is less than 15 watts.
The overall architecture of Cradle-1 is almost the same as the Chinese Tianhe-1A or the Titan at the Oak Ridge lab; they use the same software stack, Linux + OpenCL + OpenMPI. Cradle-1 supports C/C++ and Fortran90/95, and almost all kinds of parallel computing algorithms can run on it; the only difference is the scale.
We coded an MPI parallel computing program for large matrix multiplication with 4 processes; each process had 5 threads, four threads for the four CPU cores and one thread for GPU computing.
6 Appearance
Front, Back, Top, Left, Right views. One node: it has three interfaces, on the right the HDMI interface, upper-left the wireless adapter for keyboard and mouse, lower-left the power connection. One node running Ubuntu 11.10. A simple OpenCL program displaying OpenCL driver information. On a notebook, using remote desktop access to obtain node baby1's desktop; this is the sign-in desktop of the baby1 node (baby1 has an X11VNC server installed). Signed in to baby1 and opened a terminal. Ran an MPI testing program, ensuring that all babies (baby1 - baby4) were working.
Any comments? Please mail to audrey.tao@hotmail.com
gst-launch is the tool to execute GStreamer pipelines.
Task / Pipeline:
Looking at caps: gst-launch -v <gst elements>
Enable log: gst-launch --gst-debug=<element>:<level>
  e.g. gst-launch --gst-debug=videotestsrc:5 videotestsrc ! filesink location=/dev/null
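Often useful alongside gst-launch: gst-inspect lists the installed elements and, for a given element, its pads, caps and properties. A sketch using one of the i.MX sink elements mentioned elsewhere on this page (element availability depends on the BSP; on GStreamer 1.x the tool is gst-inspect-1.0):
$ gst-inspect mfw_v4lsink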
There is a GPU SDK for i.MX6D/Q/DL/S: IMX_GPU_SDK. This is to share the experience of compiling the example code from the SDK with the Linux BSP releases L3.0.35_1.1.0_121218 and L3.0.35_4.0.0_130424. The minimal profile is used, and this has been verified on both the i.MX6Q SDP and the i.MX6DL SDP.
To start: please make sure "gpu-viv-bin-mx6q" has been selected in the package list and compiled into your rootfs. After the rootfs compilation finishes, you should find some newly added libraries for GLES1.0, GLES2.0, OpenVG and EGL in <ltib>/rootfs/usr/lib. However, you will find that libOpenVG.so is actually copied from libOpenVG_3D.so:
vmuser@ubuntu:~/ltib_src/ltib/rootfs/usr/lib$ ls -al libOpen*
-rwxr-xr-x 1 root root 115999 2013-06-06 18:31 libOpenCL.so
-rwxr-xr-x 1 root root 515174 2013-06-06 18:31 libOpenVG_355.so
-rwxr-xr-x 1 root root 272156 2013-06-06 18:31 libOpenVG_3D.so
-rwxr-xr-x 1 root root 272156 2013-06-06 18:31 libOpenVG.so
So, built this way, i.MX6D/Q will not use libOpenVG_355.so. Also, if you run over NFS, libOpenVG.so changes to a symbolic link:
for example, on the i.MX6Q SDP it links to /usr/lib/libOpenVG_355.so, and on the i.MX6DL SDP it links to /usr/lib/libOpenVG_3D.so.
This can make compiling the OpenVG example code very confusing, so pay attention when doing the compilation. For example, delete the symbolic link and make a copy of the corresponding library:
For i.MX6D/Q, please do this:
$ sudo /bin/rm libOpenVG.so
$ sudo cp libOpenVG_355.so libOpenVG.so
For i.MX6S/DL, please do this:
$ sudo /bin/rm libOpenVG.so
$ sudo cp libOpenVG_3D.so libOpenVG.so
To compile the sample code in the GPU SDK, refer to iMXGraphicsSDK_OpenGLES2.0.pdf or iMXGraphicsSDK_OpenGLES1.1.pdf in ~/gpu_sdk_v1.00.tar/Documentation/Tutorials to set up the cross-compilation environment; this assumes the LTIB and the rootfs are ready.
$ export ROOTFS=/home/vmuser/ltib_src/ltib/rootfs
$ export CROSS_COMPILE=/opt/freescale/usr/local/gcc-4.6.2-glibc-2.13-linaro-multilib-2011.12/fsl-linaro-toolchain/bin/arm-none-linux-gnueabi-
For OpenVG:
$ cd ~/gpu_sdk_v1.00/Samples/OpenVG
$ make -f Makefile.fbdev clean
$ make -f Makefile.fbdev
$ make -f Makefile.fbdev install
The executable will then be copied to this directory: ~/gpu_sdk_v1.00/Samples/OpenVG/bin/OpenVG_fbdev
For GLES2.0:
$ cd ~/gpu_sdk_v1.00/Samples/GLES2.0
$ make -f Makefile.fbdev clean
$ make -f Makefile.fbdev
$ make -f Makefile.fbdev install
The executable will then be copied to this directory: ~/gpu_sdk_v1.00/Samples/GLES2.0/bin/GLES20_fbdev
For GLES1.1, please modify Makefile.fbdev to remove the compilation of the example codes "18_VertexBufferObjects" and "19_Beizer", which do not exist. Then:
$ cd ~/gpu_sdk_v1.00/Samples/GLES1.1
$ make -f Makefile.fbdev clean
$ make -f Makefile.fbdev
$ make -f Makefile.fbdev install
The executable will then be copied to this directory: ~/gpu_sdk_v1.00/Samples/GLES1.1/bin/GLES11_fbdev
Finally, you can copy the executables to the rootfs and test on an i.MX6Q SDP/SDB or i.MX6DL SDP board.
NOTE: the newly added makefiles.tgz contains Makefile.x11, hacked from the GLES2.0 example code, to make OpenVG compile and run on an Ubuntu 11.10 rootfs.
This is a How-To document for OpenCL on i.MX6 using LTIB; it contains all the necessary steps and sample code to create, build and run a HelloWorld application.
Qt framework
Qt is a complete cross-platform development framework with tools designed to streamline the creation of stunning native applications and amazing user interfaces for desktop, embedded and mobile platforms. Qt's cross-platform framework and tools enable developers to target various desktop, embedded, mobile and real-time operating systems with one code base. Qt brings freedom to the developer, saving development time, adding efficiency and ultimately shortening time to market.
Building Qt
Compile Qt for i.MX28
Building QT5 for i.MX53
Building QT for i.MX6
Qt on iMX6
Installing tools
Installing and Configuring QT Creator (Ubuntu)
Qt5 with Qt3D over Wayland rootfs
Demos
Qt5 Cinematic Experience Demo on i.MX6
Video - IMx 53 Qt5 qt3d demo
Qt5 with Qt3D over Wayland rootfs
Information
Qt5 on i.MX6 DO's and DONT's
Best Practices for QML
Video rendering: gst-launch videotestsrc ! mfw_v4lsink
Audio rendering: gst-launch audiotestsrc ! alsasink
WAV audio rendering: gst-launch filesrc location=test.wav ! wavparse ! alsasink
Video rendering selecting caps: gst-launch videotestsrc ! capsfilter caps='video/x-raw-yuv,format=(fourcc)I420' ! mfw_v4lsink
or: gst-launch videotestsrc ! 'video/x-raw-yuv,format=(fourcc)I420' ! mfw_v4lsink
Overview
As more and more communication is required between online and offline, QR codes are widely used in mobile payment, mobile mini-apps, industrial item identification, etc. The i.MX6UL/ULL has CSI and PXP IPs for camera connection and image CSC/flip/rotation acceleration. An LCDIF IP supports the display, but there is no 3D IP. This makes this low-power, low-end AP very suitable for the industrial HMI segment, which does not require fancy 3D graphics, just a simple and straightforward GUI for interaction. A QR code scanner is one of the use cases in the industrial segment that more and more customers are focusing on. The CPU frequency of the i.MX6UL is about 500 MHz and it has no GPU IP, so a lightweight GUI and window system is required. Here we recommend Qt with the Wayland backend (without X11), which makes the window system smaller and faster than a traditional X11 UI. Qt was chosen because it has an open-source version, rich components, platform independence, good performance on embedded systems, and strong development tools like QtCreator for creating applications. To enable the Qt development environment, check this: Enable QT developement for i.MX6UL (v2). Here I made a QR code scanner demo based on QT5.6 + QZXing (QR/bar code scan engine) running on the i.MX6UL EVK board with a UVC camera (at least 640x480 resolution is required) and a 480x272 LCD. The source code is open here (License Apache 2.0): https://github.com/muddog/QRScanner
Implementation
To do camera preview and capture, think of GStreamer first: it is easy to use and has the acceleration plugins implemented by NXP for i.MX6UL. It is very easy to enable the preview from the console, for example:
$ gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=640,height=320 ! imxvideoconvert_pxp ! video/x-raw,format=RGB16 ! waylandsink
This works on the i.MX6UL EVK, with the PXP IP accelerating the color space conversion from YUY2 to RGB16 and, potentially, scaling of the image. The CPU load of this pipeline is about 20-30%; if you use the "videoconvert" element instead of "imxvideoconvert_pxp", CSC and scaling are done by the CPU and the load increases to 50-60%. "/dev/video1" is the device node of the UVC camera; it may differ in your environment. So our target is clear: create such a pipeline (with PXP acceleration) in the Qt application, use an appsink to get preview images, and "sink" them to a QWidget by drawing each image on the widget surface for preview (say every 50 ms for 20 fps). Then, in another thread, fetch the preview buffer at a fixed frequency (like every 0.5 s) and feed it into the ZXing engine to decode the strings inside the image. Here are the classes created in the source code:
ScannerQWidgetSink
Acts as a GStreamer sink for preview rendering. It initializes the pipeline and creates a timer with a 50 ms timeout. In the timer handler, an appsink copies the camera buffer from GStreamer and tells the ViewfinderWidget to update (re-draw event).
ViewfinderWidget
This class inherits from QWidget and draws the preview buffer as a QImage onto its own surface using QPainter. The QImage is created at the very beginning with the image buffer created by the ScannerQWidgetSink. Because QImage itself does not maintain the image buffer, the buffer must stay alive while it is in use, so we keep this buffer for the ScannerQWidgetSink's life cycle and copy the appsink buffer from the pipeline into it for preview.
MainWindow
Creates the main window, which has no title bar or border. Starts the animation for the red scan-line bar. Creates instances of DecoderThread and ScannerQWidgetSink, sets them up and starts them.
DecoderThread
An infinite loop that waits for an available buffer released by the ScannerQWidgetSink every 0.5 s. It copies the buffer data into its own buffer (imgData) to avoid the sink changing the buffer while decoding, then feeds this copy into the ZXing engine to get the decoding result, which is shown on a QLabel.
Screenshot under the Wayland (Weston) desktop:
Customize
Camera instance
Currently a UVC camera plugged into the USB host is used, with device node /dev/video1. If you want to use CSI or another device, change the construction parameters of ScannerQWidgetSink():
sink = new ScannerQWidgetSink(ui->widget, QString("v4l2src device=/dev/video1"));
Captured and preview image resolution
Change the static member values of the ScannerQWidgetSink class:
uint ScannerQWidgetSink::CAPTURE_HEIGHT = 480;
uint ScannerQWidgetSink::CAPTURE_WIDTH = 640;
Preview fps and decoding frequency
Find the "framerate=20/1" string in ScannerQWidgetSink::GstPipelineInit() and change it to your fps. You also have to change the renderTimer start timeout value in ::StartRender(). The decoding frequency is determined by renderCnt, which determines after how many preview frames the decoder is fed.
Main window size
The main window has a fixed size; you have to change mainwindow.ui. This is easy to do in the QtCreator Designer.
FAQ
Why not use a CSI camera in the demo?
Honestly, I do not have a CSI camera module; it is also DNP when you buy the board on NXP.com. So a widely used UVC camera is preferred; it also makes it easy for you to scan QR codes on your phone, your display panel, etc.
Why not use QCamera for preview and capture?
The QCamera class in the Qt Multimedia component uses the camerabin2 GStreamer plugin, which creates a very long pipeline for the different usages of viewfinder, image capture and video encoding. Camerabin2 eats too much CPU and memory; taking pictures and recording are very slow. A 30 fps preview eats about 70-80% CPU load even after I hacked it to use imxvideoconvert_pxp instead of the software videoconvert. In the end I gave up implementing the QRScanner based on QCamera.
How to make sure only one instance of the Qt app is running?
We can use QSharedMemory to create a shared memory block with a unique KEY. When a second instance of the app is started, it checks whether the shared memory with this KEY has already been created. If it has, there is already one instance running, and the new one has to exit().
But as the Qt documentation mentions, the QSharedMemory block may not be destroyed correctly when the app crashes, which means we have to handle each termination signal and do the delete ourselves:

static QSharedMemory *gShm = NULL;

static void terminate(int signum)
{
   if (gShm) {
      delete gShm;
      gShm = NULL;
   }
   qDebug() << "Terminate with signal:" << signum;
   exit(128 + signum);
}

int main(int argc, char *argv[])
{
   QApplication a(argc, argv);

   // Handle any further termination signals to ensure the
   // QSharedMemory block is deleted even if the process crashes
   signal(SIGHUP,  terminate); // 1
   signal(SIGINT,  terminate); // 2
   signal(SIGQUIT, terminate); // 3
   signal(SIGILL,  terminate); // 4
   signal(SIGABRT, terminate); // 6
   signal(SIGFPE,  terminate); // 8
   signal(SIGBUS,  terminate); // 10
   signal(SIGSEGV, terminate); // 11
   signal(SIGSYS,  terminate); // 12
   signal(SIGPIPE, terminate); // 13
   signal(SIGALRM, terminate); // 14
   signal(SIGTERM, terminate); // 15
   signal(SIGXCPU, terminate); // 24
   signal(SIGXFSZ, terminate); // 25

   gShm = new QSharedMemory("QRScannerNXP");
   if (!gShm->create(4, QSharedMemory::ReadWrite)) {
      delete gShm;
      qDebug() << "Only allow one instance of QRScanner";
      exit(0);
   }
.....
This document describes the i.MX 8QXP MEK mini-SAS connector features in Linux and Android use cases, covering the supported daughter cards, the process to change Device Tree (DTS) files or boot images, and how to enable these different display options on the board.
Requirements:
Host machine with Ubuntu 14.04
UDOO Quad/Dual board
uSD card with at least 8 GB
Download the documentation and install the latest official UDOObuntu OS (at the time of writing: UDOObuntu 2.1.2), https://www.udoo.org/downloads/
Overview:
This document describes how to install and test Keras (open-source neural network library) and Theano (numerical computation library for Python) for deep learning library usage on the i.MX6QD UDOO board.
Installation:
$ sudo apt-get update && sudo apt-get upgrade
Update your system date, e.g.:
$ sudo date -s "07/08/2017 12:00"
First satisfy the run-time and build-time dependencies:
$ sudo apt-get install python-software-properties software-properties-common make unzip zlib1g-dev git pkg-config autoconf automake libtool curl python-pip python-numpy libblas-dev liblapack-dev python-dev libatlas-base-dev gfortran libhdf5-serial-dev libhdf5-dev python-setuptools libyaml-dev libpython2.7-dev
$ sudo easy_install scipy
The last step installs scipy and can take several hours.
Theano
First, we have a few more dependencies to get:
$ sudo pip install scikit-learn
$ sudo pip install pillow
$ sudo pip install h5py
With these dependencies met, we can install a stable Theano release from the git source:
$ git clone https://github.com/Theano/Theano
$ cd Theano
Numpy 1.9 causes conflicts with armv7, so we need to change the setup.py configuration:
$ sudo nano setup.py
Remove the line
#       install_requires=['numpy>=1.9.1', 'scipy>=0.14', 'six>=1.9.0'],
and add
setup_requires=["numpy"],
install_requires=["numpy"],
Then install it:
$ sudo python setup.py install
Keras
The installation can be done with the following commands (this could take a lot of time!):
$ cd ..
$ git clone https://github.com/fchollet/keras.git
$ cd keras
$ sudo python setup.py install
$ LC_ALL=C
$ sudo pip install --upgrade keras
After Keras is installed, you will want to edit the Keras configuration file ~/.keras/keras.json to use Theano instead of the default TensorFlow backend. If it isn't there, you can create it. This requires changing two lines. The first change is:
"image_dim_ordering": "tf"  --> "image_dim_ordering": "th"
and the second:
"backend": "tensorflow" --> "backend": "theano"
(The final file should look like the example below.)
$ sudo nano ~/.keras/keras.json
{
    "image_dim_ordering": "th",
    "epsilon": 1e-07,
    "floatx": "float32",
    "image_data_format": "channels_last",
    "backend": "theano"
}
You can also define the environment variable KERAS_BACKEND, which overrides what is defined in your config file:
$ KERAS_BACKEND=theano python -c "from keras import backend"
Testing
Quick test:
udooer@udoo:~$ python
Python 2.7.6 (default, Oct 26 2016, 20:46:32) [GCC 4.8.4] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import keras
Using Theano backend.
>>>
Test 2 (be aware this test takes some time, ~1 hr on a UDOO Dual):
$ curl -sSL -k https://github.com/fchollet/keras/raw/master/examples/mnist_mlp.py | python
Output:
For demonstration, the deep-learning-models repository provided by pyimagesearch and by the fchollet git also has three Keras models (VGG16, VGG19, and ResNet50) online; these networks are pre-trained on the ImageNet dataset, meaning that they can recognize 1,000 common object classes out of the box.
$ cd keras
$ git clone https://github.com/fchollet/deep-learning-models
$ cd deep-learning-models
$ ls -l
Notice how we have four Python files.
The resnet50.py, vgg16.py, and vgg19.py files correspond to their respective network architecture definitions. The imagenet_utils file, as the name suggests, contains a couple of helper functions that allow us to prepare images for classification and obtain the final class label predictions from the network.
Classify ImageNet classes with ResNet50
ResNet50 is a model with weights pre-trained on ImageNet. It is available for both the Theano and TensorFlow backends, and can be built with either the "channels_first" data format (channels, height, width) or the "channels_last" data format (height, width, channels). The default input size for this model is 224x224. We are now ready to write some Python code to classify image contents using convolutional neural networks (CNNs) pre-trained on the ImageNet dataset. For the UDOO Quad/Dual, use ResNet50 to avoid space conflicts. We are also going to use ImageNet (http://image-net.org/), an image database organized according to the WordNet hierarchy, in which each node of the hierarchy is depicted by hundreds and thousands of images.
from keras.applications.resnet50 import ResNet50
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input, decode_predictions
import numpy as np

model = ResNet50(weights='imagenet')

# for this sample I downloaded the image from: http://i.imgur.com/wpxMwsR.jpg
img_path = 'elephant.jpg'
img = image.load_img(img_path, target_size=(224, 224))
x = image.img_to_array(img)
x = np.expand_dims(x, axis=0)
x = preprocess_input(x)

preds = model.predict(x)
# decode the results into a list of tuples (class, description, probability)
# (one such list for each sample in the batch)
print('Predicted:', decode_predictions(preds, top=3)[0])
Save the file and run it.
Results for the elephant image: the top prediction was 0.8890 for African elephant.
Testing with this image: http://i.imgur.com/4FIOwAN.jpg
Results: the top prediction was 0.7799 for golden_retriever.
Now your UDOO is ready to use Keras and Theano as deep learning libraries; next time we are going to show some usage examples of image classification models with OpenCV.
References:
GitHub - fchollet/keras: Deep Learning library for Python. Runs on TensorFlow, Theano, or CNTK.
GitHub - Theano/Theano: Theano is a Python library that allows you to define, optimize, and evaluate mathematical expres…
GitHub - fchollet/deep-learning-models: Keras code and weights files for popular deep learning models.
Installing Keras for deep learning - PyImageSearch
Hello everyone,
We have recently migrated our source code from CAF (CodeAurora) to GitHub, so old i.MX NXP recipes/manifests that point to CodeAurora will eventually be modified to point correctly to GitHub, to avoid any issues when fetching with Yocto. Also, all repo init commands for old releases should be changed from:
$ repo init -u https://source.codeaurora.org/external/imx/imx-manifest -b <branch name> [ -m <release manifest>]
to:
$ repo init -u https://github.com/nxp-imx/imx-manifest -b <branch name> [ -m <release manifest>]
This also applies to all source code that was stored in CodeAurora; the new repository for all i.MX NXP source code is: https://github.com/nxp-imx
For any issues regarding this, please create a community thread and/or a support ticket.
Regards,
Aldo.
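For example, a filled-in version of the new command for one release; the branch and manifest names here are illustrative, so check the release notes of your BSP for the exact ones:
$ repo init -u https://github.com/nxp-imx/imx-manifest -b imx-linux-kirkstone -m imx-5.15.71-2.2.0.xml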