i.MX Processors Knowledge Base

This document explains how to bring up U-Boot and Linux via JTAG. This procedure has been tested on:
- i.MX6 Solo X Sabre SD
- i.MX6UL EVK

Prerequisites:
- Get the latest BSP for your board. This procedure was tested with L4.1.15.
- Build the 'core-image-minimal' image to bring up your board (detailed steps here).
- Optional: build a meta-toolchain for your device.

1.- Set the board to boot from Serial Downloader mode, or set it to boot from the SD card and remove the SD card. We basically want the board to stall in boot ROM so we can attach to the target.

2.- Connect the JTAG probe and turn on the board. The device will stall trying to establish a connection to download an image; this allows us to attach to the target.

3.- Load the Device Configuration Data (DCD). In a normal boot sequence the boot ROM reads the DCD and configures the device accordingly, but since we are skipping that sequence we need to configure the device manually. The script used by Lauterbach to parse and apply the DCD is called dcd_interpreter.cmm and can be found here; search for the package for your specific device. The DCD configuration for your board is in your U-Boot directory: yocto_build_dir/tmp/work/<your board>imx6ulevk/u-boot-imx/<u-boot_version>2016.03-r0/git under board/freescale/<name of your board>mx6ul_14x14_evk/imximage.cfg. This file (imximage.cfg) contains all the data needed to bring up DRAM, among other early configuration options.

4.- Load U-Boot. If an SREC file of U-Boot is not present, build it (a meta-toolchain installation is required); the SREC file contains all the information required by the probe to load it, which makes this process easier. To build the SREC simply type:
make <your board defconfig>mx6ul_14x14_evk_defconfig   (all supported boards are found under u-boot_dir/configs)
make
If you cannot build an SREC or do not want to, you can use the u-boot.imx (located under yocto_build_dir/tmp/deploy/images/<your board name>/) or u-boot.bin files, but you will need to figure out the start address and load address for these files; this can be done by examining the IVT in u-boot.imx (here is a useful document explaining the structure of the IVT). Let U-Boot run and you should see its output on the console. It will try to boot from several sources, fail, and drop you at the U-Boot prompt.
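If you need to pull the load and entry addresses out of u-boot.imx yourself, a quick way is to dump the first bytes of the image and read the IVT words. This is a sketch based on the i.MX6 IVT layout described in the reference manual; on the images I have checked, u-boot.imx starts directly with the IVT, but verify this on your own build:
# Dump the first 48 bytes of the image (the IVT should sit at the very start of u-boot.imx)
hexdump -C -n 48 u-boot.imx
# The 32-bit word at offset 0x04 is the entry point; the word at offset 0x14 is the
# "self" (load) address of the IVT. Both are stored little-endian, so read each
# 4-byte group in reverse byte order.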
5.- Create a RAMDisk. After building core-image-minimal you will have all the required files under yocto_build_dir/tmp/deploy/images/<your board name>/. You will need:
- zImage - zImage--<Linux Version>--<your board>.bin
- Device tree blob - zImage--<Linux Version>--<your board>.dtb
- Root file system - core-image-minimal-<your board>.rootfs.ext4

We need to create a RAMDisk out of the root file system we now have. These are the steps to do so:

Compress the current root file system using gzip:
gzip core-image-minimal-<your board>.rootfs.ext4
If you want to keep the original file use:
gzip -c core-image-minimal-<your board>.rootfs.ext4 > core-image-minimal-<your board>.rootfs.ext4.gz

Create the RAMDisk using mkimage:
mkimage -A arm -O linux -T ramdisk -C gzip -n core-image-minimal -d core-image-minimal-<your board>.rootfs.ext4.gz core-image-minimal-RAMDISK.rootfs.ext4.gz.u-boot

Output:
Image Name: core-image-minimal
Created: Tue May 23 11:28:55 2017
Image Type: ARM Linux RAMDisk Image (gzip compressed)
Data Size: 3017939 Bytes = 2947.21 kB = 2.88 MB
Load Address: 00000000
Entry Point: 00000000

Here are some details on mkimage usage:
Usage: mkimage -l image
          -l ==> list image header information
       mkimage [-x] -A arch -O os -T type -C comp -a addr -e ep -n name -d data_file[:data_file...] image
          -A ==> set architecture to 'arch'
          -O ==> set operating system to 'os'
          -T ==> set image type to 'type'
          -C ==> set compression type 'comp'
          -a ==> set load address to 'addr' (hex)
          -e ==> set entry point to 'ep' (hex)
          -n ==> set image name to 'name'
          -d ==> use image data from 'datafile'
          -x ==> set XIP (execute in place)
       mkimage [-D dtc_options] [-f fit-image.its|-F] fit-image
          -D => set options for device tree compiler
          -f => input filename for FIT source
       Signing / verified boot not supported (CONFIG_FIT_SIGNATURE undefined)
       mkimage -V ==> print version information and exit

6.- Modify U-Boot's environment variables. Now we need to modify U-Boot's bootargs as follows:
setenv bootargs console=${console},${baudrate} root=/dev/ram rw

We also need to find out the addresses where U-Boot expects the zImage, the device tree and the initial RAMDisk. We can do this as follows:
=> printenv fdt_addr
fdt_addr=0x83000000
=> printenv initrd_addr
initrd_addr=0x83800000
=> printenv loadaddr
loadaddr=0x80800000
Where:
fdt_addr -> device tree blob load address
initrd_addr -> RAMDisk load address
loadaddr -> zImage load address

7.- Load the zImage, DTB and RAMDisk. Now we know where to load our zImage, device tree blob and RAMDisk. On Lauterbach this can be achieved by stopping the target and running the following commands:
data.load.binary zImage.bin 0x80800000
data.load.binary Your_device.dtb 0x83000000
data.load.binary core-image-minimal-RAMDISK.rootfs.ext4.gz.u-boot 0x83800000

Let the device run again and detach from it; in Lauterbach this is achieved by:
go
SYStem.mode.NoDebug

Start the boot process in U-Boot as follows:
bootz ${loadaddr} ${initrd_addr} ${fdt_addr}

You should now see the Linux kernel boot process on your terminal, and after the kernel boots you should see its prompt. Since we are running out of RAM there is no way for us to save U-Boot's environment variables, but you can modify the source and compile U-Boot with the new bootargs; by doing so you can create a load script that loads all the binaries, hits go, and lets the boot process continue automatically.
One way to achieve this is to modify the configuration file under U-boot_dir/include/configs/<your board>.h: find the mfgtool_args and modify them accordingly. The images attached to this thread have been modified as mentioned.
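As a quick check before loading anything over JTAG, you can list the header of the RAMDisk generated in step 5 with the -l option shown in the mkimage usage above; the file name matches the one created earlier:
mkimage -l core-image-minimal-RAMDISK.rootfs.ext4.gz.u-boot
# The image type should read "ARM Linux RAMDisk Image (gzip compressed)" and the
# data size should roughly match the compressed rootfs.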
View full article
Most engineers should incorporate the following fundamental methodology when designing and bringing up a new board design:
1. Review the schematics and layout to ensure proper connectivity of all devices.
2. Once the board returns from the manufacturer, measure and document all of the voltage rails of each IC on the board (especially the SoC and DRAM).
3. Ensure JTAG debugger connectivity (due to the complexity of systems today, every new board design should have some “hooks” to allow JTAG connectivity, even if these are simply test points).
4. Bring up and ensure proper DRAM functionality. It is imperative that the first three steps are precisely accomplished: oftentimes, DRAM instability or non-functionality is due to an improper connection (including not being connected to the voltage net) or poor layout.
Once these four steps are completed, the board can then proceed to a broader checkout of other peripherals using some type of compiled test code executed from DRAM. More often than not, the end user’s board will differ from Freescale reference design boards either in how the DRAMs are connected or simply by using a different DRAM vendor. As such, tools were created to aid in the development of DRAM initialization scripts. The resulting script, though targeted for the RealView development system (aka include files), can easily be ported to another debugger’s command syntax or to assembly code for use in boot loaders. These tools are Excel spreadsheet based and include a “How To Use” tab, making the tool usage relatively self-explanatory. Each tool is unique to a specific i.MX processor and to the DRAM technology used with each processor. The attached files are the tools available for the following i.MX SoCs:
View full article
NXP i.MX 8 series application processors support running Armv8-A 64-bit and Armv7-A 32-bit user space programs. A Hello World program that prints the size of a long int is cross-compiled as 32-bit and as 64-bit from an Ubuntu host, and then each binary is copied to an MCIMX8MQ-EVK and run.

Resources:
- Ubuntu 18.04 LTS host
- i.MX 8M Evaluation Kit | NXP MCIMX8MQ-EVK
- Linux Binary Demo Files - i.MX 8MQuad EVK L4.9.88_2.0.0_GA release

Source Code: Create a file with the contents below using your favorite editor, for example hello-sizeInt.c:

#include <stdio.h>

int main (int argc, char **argv)
{
    printf ("Hello World, size of long int: %zd\n", sizeof (long int));
    return 0;
}

Ubuntu host packages:
$ sudo apt-get install -y gcc-arm-linux-gnueabihf
$ sudo apt-get install -y gcc-aarch64-linux-gnu
Line 1 installs the Armv7-A cross-compile tools: arm-linux-gnueabihf-gcc is used to cross-compile on the Ubuntu host.
Line 2 installs the Armv8-A cross-compile tools: aarch64-linux-gnu-gcc is used to cross-compile on the Ubuntu host.

Create Linux User Space Applications
Build each application and use the -static option to gcc to include the run-time libraries.

Build the Armv7-A 32-bit application:
$ arm-linux-gnueabihf-gcc -static hello-sizeInt.c -o hello-armv7a-static

Build the Armv8-A 64-bit application:
$ aarch64-linux-gnu-gcc -static hello-sizeInt.c -o hello-armv8a-static

Copy the Hello applications from the Ubuntu host and run them on the MCIMX8MQ-EVK
Using an SD card written with images from the L4.9.88_2.0.0 Linux release (see resources for the image link), power on the EVK with Ethernet connected to the network and the serial console port connected to a Windows 10 PC. Launch a terminal client (TeraTerm) to access the console port. Login credentials: root, no password needed. Since Ethernet was connected, a DHCP IP address was acquired, 192.168.1.241 on the EVK. On the Ubuntu host, secure copy the hello applications to the EVK:
$ scp hello-armv7a-static root@192.168.1.241:~/
hello-armv7a-static                           100%  389KB   4.0MB/s   00:00
$ scp hello-armv8a-static root@192.168.1.241:~/
hello-armv8a-static                           100%  605KB   4.7MB/s   00:00

Run:
root@imx8mqevk:~# ./hello-armv8a-static
Hello World, size of long int: 8
root@imx8mqevk:~# ./hello-armv7a-static
Hello World, size of long int: 4
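Before copying the binaries to the board, you can confirm each one was built for the intended architecture with the standard file utility (output abbreviated; exact wording varies with the file version):
$ file hello-armv7a-static hello-armv8a-static
# hello-armv7a-static: ELF 32-bit LSB executable, ARM, EABI5 ... statically linked
# hello-armv8a-static: ELF 64-bit LSB executable, ARM aarch64 ... statically linked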
View full article
1. Follow all instructions from Freescale's github repo except the last bitbake command.
2. Run hob under the build folder:
build$ hob &
3. On the GUI, select the machine and image, then build.
4. In case you need to flash an SD card, hob does not produce an .sdcard image, so as a workaround, close hob and on the same console run:
build$ bitbake <image>
where image must be the same as the one you chose with hob.
5. Flash your SD card:
build$ sudo dd if=tmp/deploy/images/fsl-image-gui-imx6qsabresd.sdcard of=/dev/sdX bs=4M

NOTES: In case of building issues, please follow this link. In case of booting issues, make sure:
1. the board DIP switches are set correctly
2. you have chosen the correct machine before baking
If issues persist, report it to the community.
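Since dd will happily overwrite the wrong disk, it is worth double-checking the device node before flashing and flushing caches afterwards; /dev/sdX below is a placeholder for your actual SD card device:
# Identify the SD card device node first (check the size column)
lsblk
# Flash, then make sure all data is written out before removing the card
build$ sudo dd if=tmp/deploy/images/fsl-image-gui-imx6qsabresd.sdcard of=/dev/sdX bs=4M
sync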
View full article
In this article, some experiments are done to verify the capability of the i.MX6DQ on video playback under different VPU clocks.

1. Preparation
Board: i.MX6DQ SD
Bitstream: 1080p sunflower with 40Mbps, considered the toughest H264 clip. The original clip is copied 20 times to generate a new raw video (repeating the sun-flower clip 20 times) and then encapsulated into an mp4 container. This is to remove and minimize the influence of the startup workload of gstreamer compared to the vpu unit test.
Kernels: Generate different kernels with different VPU clock settings: 270MHz, 298MHz, 329MHz, 352MHz, 382MHz.
Test setting: 1080p content decoding and display on a 1080p device (no resize).

2. Test command for VPU unit test and Gstreamer
Tiled format video playback is faster than NV12 format, so in the experiment below we choose tiled format during video playback.
Unit test command (we set the frame rate -a 70, higher than the 1080p 60fps HDMI refresh rate):
    /unit_tests/mxc_vpu_test.out -D "-i /media/65a78bbd-1608-4d49-bca8-4e009cafac5e/sunflower_2B_2ref_WP_40Mbps.264 -f 2 -y 1 -a 70"
Gstreamer command (free run to get the highest playback speed):
    gst-launch filesrc location=/media/65a78bbd-1608-4d49-bca8-4e009cafac5e/sunflower_2B_2ref_WP_40Mbps.mp4 typefind=true ! aiurdemux ! vpudec framedrop=false ! queue max-size-buffers=3 ! mfw_v4lsink sync=false

3. Video playback framerate measurement
During the test, we enter the command "echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor" to make sure the CPU always works at the highest frequency, so that it can respond to any interrupt quickly.
For each testing point with a different VPU clock, we do 5 rounds of tests. The max and min values are removed, and the remaining 3 data points are averaged to get the final playback framerate. In the results below, "Dec" means the VPU decoding fps, "Playback" means the overall playback fps, and Min/Max/Avg are taken over the Playback values.

270M unit test: Dec/Playback = 57.8/57.3, 57.81/57.04, 57.78/57.3, 57.87/56.15, 57.91/55.4; Min 55.4, Max 57.3, Avg 56.83
270M GST: Playback = 53.76, 54.163, 54.136, 54.273, 53.659; Min 53.659, Max 54.273, Avg 54.01967
298M unit test: Dec/Playback = 60.97/58.37, 60.98/58.55, 60.97/57.8, 60.94/58.07, 60.98/58.65; Min 57.8, Max 58.65, Avg 58.33
298M GST: Playback = 56.755, 49.144, 53.271, 56.159, 56.665; Min 49.144, Max 56.755, Avg 55.365
329M unit test: Dec/Playback = 63.8/59.52, 63.92/52.63, 63.8/58.1, 63.82/58.26, 63.78/59.34; Min 52.63, Max 59.52, Avg 58.56667
329M GST: Playback = 57.815, 55.857, 56.862, 58.637, 56.703; Min 55.857, Max 58.637, Avg 57.12667
352M unit test: Dec/Playback = 65.79/59.63, 65.78/59.68, 65.78/59.65, 66.16/49.21, 65.93/57.67; Min 49.21, Max 59.68, Avg 58.98333
352M GST: Playback = 58.668, 59.103, 56.419, 58.08, 58.312; Min 56.419, Max 59.103, Avg 58.35333
382M unit test: Dec/Playback = 64.34/56.58, 67.8/58.73, 67.75/59.68, 67.81/59.36, 67.77/59.76; Min 56.58, Max 59.76, Avg 59.25667
382M GST: Playback = 59.753, 58.893, 58.972, 58.273, 59.238; Min 58.273, Max 59.753, Avg 59.03433

Some explanation: why does the Gstreamer performance still improve while the unit test is more flat? On Gstreamer there is a VPU wrapper which is used to make the VPU API more intuitive to call. So at first, the overall GST playback performance is constrained by the VPU (VPU dec 57.8 fps). Finally, as the VPU decoding performance goes above 60fps when the VPU clock increases, the constraint becomes the display refresh rate of 60fps. The video display overhead of Gstreamer is only about 1 fps, similar to the unit test.

Based on the test results, we can see that at 352MHz the overall 1080p video playback on a 1080p display can reach ~60fps. Or, if time-sharing two pipelines with two displays, we can do 2 x 1080p @ 30fps video playback.
However, this experiment is only valid for 1080p video playback on a 1080p display. For interlaced clips, or for displays whose size is not 1080p, the overall playback performance is limited by post-processing such as de-interlacing and resizing.
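As a side note on the measurement setup in section 3: if you want to confirm that the CPU really stays at its top frequency while playback runs, you can read the current frequency back from the same cpufreq sysfs interface used above:
# Currently applied CPU frequency (in kHz) while playback is running
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq
# It should stay at the maximum reported here:
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq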
View full article
Freescale does not have a specific GStreamer element to do JPEG encoding, so the standard 'jpegenc' should be used.

Image Capture
With a web camera:
gst-launch v4l2src num-buffers=1 ! jpegenc ! filesink location=sample.jpeg
With an embedded camera:
gst-launch mfw_v4lsrc num-buffers=1 ! jpegenc ! filesink location=sample.jpeg

More pipelines on GStreamer i.MX6 Pipelines
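If you want to force a specific capture resolution instead of the camera default, you can add a caps filter between the source and the encoder. This is a sketch for the GStreamer 0.10 pipelines used above, and it assumes your camera actually supports 640x480:
gst-launch v4l2src num-buffers=1 ! 'video/x-raw-yuv,width=640,height=480' ! jpegenc ! filesink location=sample-vga.jpeg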
View full article
Test environment
- i.MX8MP EVK, LVDS0
- LVDS-HDMI bridge (it6263)
- Uboot2022, Uboot2023

Background
Some customers need to show a logo using an LVDS panel. The current BSP doesn't support an LVDS driver in U-Boot. This patch provides i.MX8MPlus LVDS driver support in U-Boot. If you want to connect it to an LVDS panel, you need to port your LVDS panel driver like simple-panel.c.

Update
[2022.9.19] Verified on L5.15.32_2.0.0: 0001-L5.15.32-Add-i.MX8MP-LVDS-driver-in-uboot
'probe device is failed, ret -2, probe video device failed, ret -19' is caused by the code below. It has been merged in the attachment.
//	/* Only handle devices that have a valid ofnode */
//	if (dev_has_ofnode(dev) && !(dev->driver->flags & DM_FLAG_IGNORE_DEFAULT_CLKS)) {
//		/*
//		 * Process 'assigned-{clocks/clock-parents/clock-rates}'
//		 * properties
//		 */
//		ret = clk_set_defaults(dev, CLK_DEFAULTS_PRE);
//		if (ret)
//			goto fail;
//	}

[2023.3.14] Verified on L5.15.71: 0001-L5.15.71-Add-i.MX8MP-LVDS-support-in-uboot

[2023.9.12] For some panels with low DE, you need to uncomment the CTRL_INV_DE line and set this bit to 1.
#include <linux/string.h>
@@ -110,9 +111,8 @@ static void lcdifv3_set_mode(struct lcdifv3_priv *priv,
 	writel(CTRL_INV_HS, (ulong)(priv->reg_base + LCDIFV3_CTRL_SET));
 	/* SEC MIPI DSI specific */
-	writel(CTRL_INV_PXCK, (ulong)(priv->reg_base + LCDIFV3_CTRL_CLR));
-	writel(CTRL_INV_DE, (ulong)(priv->reg_base + LCDIFV3_CTRL_CLR));
-
+	//writel(CTRL_INV_PXCK, (ulong)(priv->reg_base + LCDIFV3_CTRL_CLR));
+	//writel(CTRL_INV_DE, (ulong)(priv->reg_base + LCDIFV3_CTRL_CLR));
 }

[2024.5.15] If you are using simple-panel.c, you need the patch below to pass the display timing from the panel to the LCDIF controller.
diff --git a/drivers/video/simple_panel.c b/drivers/video/simple_panel.c
index f9281d5e83..692c96dcaa 100644
--- a/drivers/video/simple_panel.c
+++ b/drivers/video/simple_panel.c
@@ -18,12 +18,27 @@ struct simple_panel_priv {
 	struct gpio_desc enable;
 };

+/* define your panel timing here and
+ * copy it in simple_panel_get_display_timing */
+static const struct display_timing boe_ev121wxm_n10_1850_timing = {
+	.pixelclock.typ = 71143000,
+	.hactive.typ = 1280,
+	.hfront_porch.typ = 32,
+	.hback_porch.typ = 80,
+	.hsync_len.typ = 48,
+	.vactive.typ = 800,
+	.vfront_porch.typ = 6,
+	.vback_porch.typ = 14,
+	.vsync_len.typ = 3,
+};
+
@@ -100,10 +121,18 @@ static int simple_panel_probe(struct udevice *dev)
 	return 0;
 }

+static int simple_panel_get_display_timing(struct udevice *dev,
+					    struct display_timing *timings)
+{
+	memcpy(timings, &boe_ev121wxm_n10_1850_timing, sizeof(*timings));
+
+	return 0;
+}
 static const struct panel_ops simple_panel_ops = {
 	.enable_backlight = simple_panel_enable_backlight,
 	.set_backlight = simple_panel_set_backlight,
+	.get_display_timing = simple_panel_get_display_timing,
 };

 static const struct udevice_id simple_panel_ids[] = {
@@ -115,6 +144,7 @@ static const struct udevice_id simple_panel_ids[] = {
 	{ .compatible = "lg,lb070wv8" },
 	{ .compatible = "sharp,lq123p1jx31" },
 	{ .compatible = "boe,nv101wxmn51" },
+	{ .compatible = "boe,ev121wxm-n10-1850" },
 	{ }
 };

[2024.7.23] Update patch for L6.6.23 (Uboot2023)
View full article
Author: Fourier
Email: samssmarm@gmail.com

AMP (Asymmetric Multi-Processing)
Scenario: CPU core 0 runs Linux, CPU core 1 runs the uC/OS-II RTOS. An HDMI display panel is connected to Linux, and an LCD display panel is connected to the uC/OS-II RTOS.

Platform:
- Mars Board (Freescale i.MX6 dual Cortex-A9 cores, 1GB 64-bit DDR3)
- Panda Board (TI OMAP4460 dual Cortex-A9 cores, 1GB 32-bit DDR3)
- Altera SoC EVM Board (dual Cortex-A9 cores, (512MB + 256MB ECC) DDR3 on HPS, 512MB on FPGA)

Video demo on Mars Board:
Youtube:
http://youtu.be/yb6KC6Cf8i4
http://youtu.be/1uzrX-YZBnQ
Youku:
http://v.youku.com/v_show/id_XNTMyNTAzNjky.html

AMP Port:
The Linux SMP boot procedure is not covered here; for details please refer to the document at http://www.linux-arm.org/LinuxBootLoader/SMPBoot. In this AMP implementation, the secondary-core boot procedure is simply moved from Linux to U-Boot, as shown in figure 1. Figure 2 describes the GIC relationship between the two cores and the physical memory layout between Linux and uC/OS-II.
Figure 1
Figure 2

Display subsystem block on Mars Board and Panda Board:
Figure 3: i.MX6 display subsystem (Mars Board)
Figure 4: OMAP4460 display subsystem (Panda Board)
View full article
There is no Freescale GStreamer element which does JPEG decoding, so we must rely on a standard one, like 'jpegdec'. In case your Linux system was built using LTIB, follow these steps to have the jpegdec element included in gst-plugins-good:
- In the LTIB menuconfig, make sure the following packages are selected: gstreamer-plugins-good, libjpeg, libpng
- Remove the configure parameters '--disable-libpng' and '--disable-jpeg' in the file './dist/lfs-5.1/gst-plugins-good/gst-plugins-good.spec'
- Rebuild and flash your board (or SD card) again.

Image display
VSALPHA=1 gst-launch filesrc location=sample.jpeg ! jpegdec ! imagefreeze ! mfw_isink
Important: a width or height that is not 8-pixel aligned is treated as an unsupported format by the isink plugin.
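Since the steps above also enable libpng in gst-plugins-good, the same style of pipeline should work for PNG files via the standard 'pngdec' element; this is a sketch along the same lines as the JPEG pipeline, assuming pngdec was built into your gst-plugins-good:
VSALPHA=1 gst-launch filesrc location=sample.png ! pngdec ! imagefreeze ! mfw_isink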
View full article
i.MX6 Series - Crystal Drive Level guidance; includes calculator.  
View full article
This document describes the setup details for installing OpenCV 2.4.9 on Ubuntu 14.04 running on i.MX6QDL based boards.

1. Software & Hardware requirements
Supported NXP HW boards:
- i.MX 6QuadPlus SABRE-SD Board and Platform
- i.MX 6Quad SABRE-SD Board and Platform
- i.MX 6DualLite SABRE-SD Board
- i.MX 6Quad SABRE-AI Board
- i.MX 6DualLite SABRE-AI Board
- i.MX 6SoloX SABRE-SD Board
- i.MX 6SoloX SABRE-AI Board
Other tested i.MX6 boards:
- UDOO-QDL Board
Software: gcc, Ubuntu 14.04 installed on your board.

2. Installation
In order to install OpenCV on i.MX6 boards you need to have an Ubuntu 14.04 rootfs; for the installation steps please follow: https://community.freescale.com/docs/DOC-330147

Install build dependencies:
Welcome to Ubuntu 14.04.4 LTS (GNU/Linux 3.14.52 armv7l)
imx6Q@ubuntu:~$ sudo apt-get update && sudo apt-get upgrade
$ sudo apt-get install gedit git cmake cmake-curses-gui cython autoconf build-essential \
checkinstall libass-dev libfaac-dev libgpac-dev libjack-jackd2-dev libmp3lame-dev libopencore-amrnb-dev \
libopencore-amrwb-dev librtmp-dev libsdl1.2-dev libtheora-dev libtool libva-dev libvdpau-dev libvorbis-dev \
libx11-dev libxext-dev libxfixes-dev pkg-config texi2html zlib1g-dev

Install OpenCV image libraries:
$ sudo apt-get -y install libtiff4-dev libjpeg-dev

Install video libraries:
$ sudo apt-get -y install libav-tools libavcodec-dev libavformat-dev libswscale-dev libxine-dev libgstreamer0.10-dev libgstreamer-plugins-base0.10-dev \
gstreamer1.0* libv4l-dev v4l-utils v4l-conf

Install the Python development environment:
$ sudo apt-get -y install python-dev python-numpy python-scipy python-matplotlib

Install the Qt dev library:
$ sudo apt-get -y install libqt4-dev libgtk2.0-dev

Install other dependencies:
$ sudo apt-get -y install patch subversion ruby librtmp0 librtmp-dev libfaac-dev libmp3lame-dev libopencore-amrnb-dev libopencore-amrwb-dev libvpx-dev \
libxvidcore-dev libdc1394-utils libdc1394-22-dev libdc1394-22 libjpeg-dev libpng-dev libtiff-dev libjasper-dev libtbb-dev python-pip libc6-armel-cross libc6-dev-armel-armhf-cross \
binutils-arm-none-eabi libncurses5-dev gcc-arm* alsa-utils libportaudio0 libportaudio2 libportaudiocpp0 libportaudio-dev festival* lshw sox ubuntu-restricted-extras mplayer \
mpg321 festvox-ellpc11k vlc vlc-plugin-pulse portaudio19-dev unzip libjasper-dev

Install OpenCV:
$ cd ~/
$ wget http://downloads.sourceforge.net/project/opencvlibrary/opencv-unix/2.4.9/opencv-2.4.9.zip
$ unzip opencv-2.4.9.zip -d ~/
$ cd ~/opencv-2.4.9
$ mkdir build
$ cd build/
$ cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local -D BUILD_NEW_PYTHON_SUPPORT=ON -D INSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -D BUILD_EXAMPLES=ON -D WITH_FFMPEG=OFF ..
$ sudo make -j4
$ sudo make install
$ sudo ldconfig

3. Testing the installation: using OpenCV with gcc and CMake

Load an image
$ mkdir OCV_Sample1
$ cd OCV_Sample1
Download a jpg image from the web and save it in this directory.
You can check the installation by putting the following code in a file called Sample1.cpp. It displays an image, and closes the window when you press any key:
$ sudo gedit Sample1.cpp

#include <stdio.h>
#include <opencv2/opencv.hpp>

using namespace cv;

int main(int argc, char** argv)
{
    if (argc != 2) {
        printf("usage: DisplayImage.out <Image_Path>\n");
        return -1;
    }

    Mat image;
    image = imread(argv[1], 1);

    if (!image.data) {
        printf("No image data\n");
        return -1;
    }

    namedWindow("Display Image", WINDOW_AUTOSIZE);
    imshow("Display Image", image);
    waitKey(0);

    return 0;
}

Now you have to create your CMakeLists.txt file. It should look like this:
$ sudo gedit CMakeLists.txt

cmake_minimum_required(VERSION 2.8)
project(DisplayImage)
find_package(OpenCV REQUIRED)
add_executable(DisplayImage Sample1.cpp)
target_link_libraries(DisplayImage ${OpenCV_LIBS})

Generate the executable:
$ cmake .
$ make

Results: by now you should have an executable (called DisplayImage in this case). You just have to run it, giving an image location as an argument, i.e.:
$ ./DisplayImage name_of_your_downloaded.jpg
You should get a nice window as the one shown below:

Object Detection: Template Matching Sample
This sample was taken for testing purposes from: http://docs.opencv.org/2.4.9/modules/imgproc/doc/object_detection.html#matchtemplate
What does this program do?
- Loads an input image and an image patch (template)
- Performs a template matching procedure using the OpenCV function matchTemplate with any of the 6 matching methods described before. The user can choose the method by entering its selection in the Trackbar.
- Normalizes the output of the matching procedure
- Localizes the location with the highest matching probability
- Draws a rectangle around the area corresponding to the highest match
Downloadable code: Click here
Code at glance:

#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
#include <stdio.h>

using namespace std;
using namespace cv;

/// Global Variables
Mat img; Mat templ; Mat result;
char* image_window = "Source Image";
char* result_window = "Result window";

int match_method;
int max_Trackbar = 5;

/// Function Headers
void MatchingMethod(int, void*);

/** @function main */
int main(int argc, char** argv)
{
    /// Load image and template
    img = imread(argv[1], 1);
    templ = imread(argv[2], 1);

    /// Create windows
    namedWindow(image_window, CV_WINDOW_AUTOSIZE);
    namedWindow(result_window, CV_WINDOW_AUTOSIZE);

    /// Create Trackbar
    char* trackbar_label = "Method: \n 0: SQDIFF \n 1: SQDIFF NORMED \n 2: TM CCORR \n 3: TM CCORR NORMED \n 4: TM COEFF \n 5: TM COEFF NORMED";
    createTrackbar(trackbar_label, image_window, &match_method, max_Trackbar, MatchingMethod);

    MatchingMethod(0, 0);

    waitKey(0);
    return 0;
}

/**
 * @function MatchingMethod
 * @brief Trackbar callback
 */
void MatchingMethod(int, void*)
{
    /// Source image to display
    Mat img_display;
    img.copyTo(img_display);

    /// Create the result matrix
    int result_cols = img.cols - templ.cols + 1;
    int result_rows = img.rows - templ.rows + 1;
    result.create(result_rows, result_cols, CV_32FC1);

    /// Do the Matching and Normalize
    matchTemplate(img, templ, result, match_method);
    normalize(result, result, 0, 1, NORM_MINMAX, -1, Mat());

    /// Localizing the best match with minMaxLoc
    double minVal; double maxVal; Point minLoc; Point maxLoc;
    Point matchLoc;
    minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc, Mat());

    /// For SQDIFF and SQDIFF_NORMED, the best matches are lower values. For all the other methods, the higher the better
    if (match_method == CV_TM_SQDIFF || match_method == CV_TM_SQDIFF_NORMED)
        { matchLoc = minLoc; }
    else
        { matchLoc = maxLoc; }

    /// Show me what you got
    rectangle(img_display, matchLoc, Point(matchLoc.x + templ.cols, matchLoc.y + templ.rows), Scalar::all(0), 2, 8, 0);
    rectangle(result, matchLoc, Point(matchLoc.x + templ.cols, matchLoc.y + templ.rows), Scalar::all(0), 2, 8, 0);

    imshow(image_window, img_display);
    imshow(result_window, result);

    return;
}

Execution and results:
$ sudo gedit CMakeLists.txt

cmake_minimum_required(VERSION 2.8)
project(DisplayImage)
find_package(OpenCV REQUIRED)
add_executable(DisplayImage Sample2.cpp)
target_link_libraries(DisplayImage ${OpenCV_LIBS})

Generate the executable:
$ cmake .
$ make

Test the program with an input image such as:
$ ./DisplayImage name_of_your_test_image.jpg Template_image.jpg
E.g. ./DisplayImage Mario.jpg Mario_coin.jpg

As an example:
Test Image:
Template Image:
Results:

References:
1. http://docs.opencv.org/
2. https://github.com/sgjava/install-opencv
3. http://www.udoo.org/
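After 'make install' and 'ldconfig', a quick way to confirm that the libraries and Python bindings were installed is shown below; this assumes the default pkg-config setup on Ubuntu 14.04 and Python 2:
$ pkg-config --modversion opencv
2.4.9
$ python -c "import cv2; print cv2.__version__"
2.4.9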
View full article
I've done some research on Android boot optimization in the past months and have some findings. This page is for recording and sharing purposes only. It is intended to provide some hints and directions for Android boot optimization. It is NOT a Freescale official document or patch release. The code/doc inside is only for reference.

Background:
1. I've used SabreSD + Android KK 4.4.2 GA 1.0 as the reference platform.
2. I'm not using popular optimization approaches such as "hibernation" or "suspend". I'm trying to optimize the boot process by re-arranging it so that GUI-related processes run earlier, and by fine-tuning some boot code to run faster.
3. It targets Android IVI products, so some features that will never be used in an IVI environment are disabled or removed (a minor number of them).

I've come up with a patch package (the latest is milestone 4, "_m4" in the version for short) and a training document. I didn't find any confidential information in the patch or doc, so I'm sharing them openly here.

Updated on 2016/01/08 for the new version (milestone m5):
---------------------------------------------------------------------------------------
Change log against the previous (milestone 4) version:
1. BSP base changed to Android KK 4.4.3 GA 2.0, which has a Linux kernel 3.10.53.
2. Linux kernel and U-Boot optimization added. Kernel boot time (POR -> Android init entry) is less than 1.5s.
3. Some bug fixes.
4. Document updated accordingly.
Total boot time tested on SabreSDP is about 8s.
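A common way to see where the time goes during an Android boot, independent of the patches above, is to read the boot_progress events from the event log once the system is up; this is generic Android tooling, not part of the patch package, and timestamps are milliseconds since boot:
# Dump the boot milestones recorded in the Android event log
adb logcat -b events -d | grep boot_progress
# boot_progress_start, boot_progress_ams_ready and boot_progress_enable_screen roughly mark
# kernel-to-zygote handoff, ActivityManager ready, and first screen-on.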
View full article
1. Set up HDMI
Set up your kernel to use HDMI by adding the following code to the bootargs in u-boot:
video=mxcfb0:dev=hdmi,1920x1080M@60,if=RGB24

2. Test raw audio
In order to test only raw audio, use the following command:
aplay -D hw:1,0 Kaleidoscope.wav

3. Make HDMI audio the default output
In order to configure audio output over HDMI, please replace the content of the file ~/.asoundrc with the following:

pcm.dmix_48000{
     type dmix
     ipc_key 5678293
     ipc_key_add_uid yes
     slave{
          pcm "hw:1,0"
          period_time 0
          period_size 2048
          buffer_size 24576
          format S16_LE
          rate 48000
     }
}
pcm.!dsnoop_44100{
     type dsnoop
     ipc_key 5778293
     ipc_key_add_uid yes
     slave{
          pcm "hw:0,0"
          period_time 0
          period_size 2048
          buffer_size 24576
          format S16_LE
          rate 44100
     }
}
pcm.!dsnoop_48000{
     type dsnoop
     ipc_key 5778293
     ipc_key_add_uid yes
     slave{
          pcm "hw:1,0"
          period_time 0
          period_size 2048
          buffer_size 24576
          format S16_LE
          rate 48000
     }
}
pcm.asymed{
     type asym
     playback.pcm "dmix_48000"
     capture.pcm "dsnoop_44100"
}
pcm.dsp0{
     type plug
     slave.pcm "asymed"
}
pcm.!default{
     type plug
     route_policy "average"
     slave.pcm "asymed"
}
ctl.mixer0{
     type hw
     card 0
}

This will configure alsa to use sound card hw:1,0. Please pay attention to use the proper audio card name for your device. In order to see the available sound cards on the board:
root@imx53qsb:~# aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: imx3stack [imx-3stack], device 0: SGTL5000 SGTL5000-0 []
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: imx3stackspdif [imx-3stack-spdif], device 0: IMX SPDIF mxc spdif-0 []
  Subdevices: 1/1
  Subdevice #0: subdevice #0
For details on how to create asound.conf, please see the alsa-lib configuration introduction.

4. Encoded audio
For encoded (i.e. AC3, DTS) audio, you can use, for example, ac3dec, a utility provided by alsa-tools, with the following command line:
ac3dec -D hw:1,0 -C test.ac3
This works for both HDMI audio and SPDIF audio. Double-check your hardware and/or schematic in order to know which one to use.
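Once ~/.asoundrc is in place, you can verify that the default device now routes to HDMI without specifying hw:1,0 explicitly; speaker-test ships with alsa-utils, and the channel count here is just an example:
# Play one loop of a test tone through the new default (HDMI) device
speaker-test -c 2 -t sine -l 1
# Or play the same wav file without the -D hw:1,0 option used earlier
aplay Kaleidoscope.wav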
View full article
ADB is very well known as the tool to manually install APKs, but there are some other useful commands. ADB is a command line tool that acts as the bridge between you and your Android device. I want to show you some of them, but first, let's make sure we have everything needed to use ADB.

Requirements
First, you need to have Java and the Android SDK installed on your PC. You can download them from here:
Java SDK: Java SE - Downloads | Oracle Technology Network | Oracle
Android SDK: http://developer.android.com/sdk/index.html
Once installed, it is recommended to update the SDK Manager. Once you have done this, flash the Freescale Android BSP onto your board, then enable USB Debugging in the Developer Options of your board.

Here are the steps to enable USB debugging:
1. Go to Settings.
2. Click on "About tablet".
3. Scroll down to the last row (Build Number) and tap that row 7 times.
4. Return to the previous screen and click on "Developer options".
5. Confirm that "USB debugging" is checked.
Once you enable it, your OS (assuming you are using Windows) will look for the Android ADB Interface driver. Windows systems are the only ones that need this ADB driver. After this procedure, you can start using ADB.

Open a terminal window and go to the platform-tools directory of your SDK installation to find the ADB program. The usual path is: adt-bundle-windows-x86 >> sdk >> platform-tools

adb start-server              :: Starts the ADB server in case it is not running already
adb kill-server               :: Terminates the ADB server
adb devices                   :: Checks and prints the status of each device plugged into your PC (if you don't see your device, make sure USB debugging is enabled on your tablet)
adb install 'apk file'        :: Installs an APK on the tablet. The APK must be located in the same folder as adb.
adb uninstall 'apk file'      :: Uninstalls an APK
adb pull 'file'               :: Copies a file or directory (and sub-directories) from your device to the PC
adb push 'file'               :: Copies a file or directory (and sub-directories) from your PC to the device
adb logcat                    :: Prints the log data to the screen
adb logcat -c                 :: Clears the buffer to remove any old log data
adb bugreport                 :: Prints dumpsys, dumpstate and logcat
adb shell pm list packages -f :: Lists all installed packages
adb shell input keyevent 26   :: Sends the power button event to turn the device on/off
adb shell screencap -p /sdcard/screen.png :: Takes a screenshot of the Android display
adb shell screenrecord /sdcard/demo.mp4   :: Records any activity on the Android display

Using the window manager
There is also a very useful tool to manage the display. It can be run through a terminal connection to your board, from the path /system/bin of your Android BSP. These are some of the commands available:
wm density 'density number'   :: Changes the display density
wm size 'display size'        :: Changes your display's resolution

Using the activity manager
In the same path is the activity manager, which has several other commands:
am start 'package'            :: Starts an activity
am monitor                    :: Monitors activities
am bug-report                 :: Requests a bug report
am restart                    :: Restarts the OS
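The screenshot and screen recording commands above leave their output on the device, so a typical follow-up is to pull the files back to the PC, for example:
adb shell screencap -p /sdcard/screen.png
adb pull /sdcard/screen.png
adb shell screenrecord /sdcard/demo.mp4
adb pull /sdcard/demo.mp4
# Stop screenrecord with Ctrl-C (or let it hit its time limit) before pulling the mp4.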
View full article
This patch adds i.MX6 ECSPI controller slave mode (SPI timing modes 0 and 3) support.

Hardware preparation:
Connect two i.MX6 SabreSD boards, remove the U14 SPI NOR device, and connect the two boards like this:
MISO --- MISO
MOSI --- MOSI
SS   --- SS
CLK  --- CLK
GND  --- GND

Software preparation:
1> Apply the patch spi_slave_2013_10_12.patch on the 3.0.35_4.1.0 Linux BSP release.
Note: both boards need CONFIG_IMX6_SDP_MISCSPI and CONFIG_SPI_SPIDEV selected in the kernel.
Symbol: IMX6_SDP_MISCSPI [=y]
Location:
  -> Device Drivers
    -> Misc devices (MISC_DEVICES [=y])
Symbol: SPI_SPIDEV [=y]
Location:
  -> Device Drivers
    -> SPI support (SPI [=y])
The SPI master board selects CONFIG_SPI_IMX_VER_2_3:
Symbol: SPI_IMX_VER_2_3 [=y]
Location:
  -> Device Drivers
    -> SPI support (SPI [=y])
      -> Choose IMX SPI work mode (<choice> [=y])
The SPI slave board selects CONFIG_SPI_IMX_VER_2_3_SLAVE:
Symbol: SPI_IMX_VER_2_3_SLAVE [=y]
Location:
  -> Device Drivers
    -> SPI support (SPI [=y])
      -> Choose IMX SPI work mode (<choice> [=y])
2> Compile the test application mxc_spi_test1.c to generate mxc_spi_test.
3> Test steps: first, on the SPI slave board, run mxc-spi-test -D 0 -b 32 -L 32. Then, on the SPI master board, run mxc-spi-test -D 0 -b 32 -L 32. This tool writes its buffer (the content is the same on both sides) to the other board through the SPI bus, then reads data from the other board and compares it with its write buffer.
View full article
Sometimes we need to use a proxy to access the network over Ethernet. Here are the steps for how to set a proxy in Gingerbread and ICS.

Gingerbread
1. Enable the http proxy:
> sqlite3 /data/data/com.android.providers.settings/databases/settings.db "INSERT INTO secure VALUES (99, 'http_proxy', 'wwwgate0.freescale.net:1080');"
With this setting, you can access the network for web browsing. If you want to play some http streaming content, you need to set a property for the player:
> setprop rw.HTTP_PROXY http://wwwgate0-az.freescale.net:1080
2. Disable the http proxy:
> sqlite3 /data/data/com.android.providers.settings/databases/settings.db "delete from secure where name='http_proxy'"
> setprop rw.HTTP_PROXY ""

ICS
1. Enable the http proxy:
> setprop net.proxy wwwgate0-az.freescale.net:1080
With this setting, you can access the network for web browsing. If you want to play some http streaming content, you need to set a proxy property for the player:
> setprop rw.HTTP_PROXY http://wwwgate0-az.freescale.net:1080
2. Disable the http proxy:
> setprop net.proxy ""
> setprop rw.HTTP_PROXY ""
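To check what is currently configured, you can read the values back the same way they were set; these commands mirror the ones above:
ICS: read the properties back
> getprop net.proxy
> getprop rw.HTTP_PROXY
Gingerbread: query the settings database
> sqlite3 /data/data/com.android.providers.settings/databases/settings.db "select * from secure where name='http_proxy';"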
View full article
Platform: i.MX8QM MEK, imx-yocto-L4.14.98_2.0.0_ga

Building steps:
1. Apply the nvp6324 patches.
Symbol: IMX8_NVP6324 [=y]
Type : tristate
Prompt: IMX8 NVP6324 Driver
Location:
  -> Device Drivers
    -> Multimedia support (MEDIA_SUPPORT [=y])
      -> V4L platform devices (V4L_PLATFORM_DRIVERS [=y])
        -> MX8 Video For Linux Video Capture (VIDEO_MX8_CAPTURE [=y])
          -> IMX8 Camera ISI/MIPI Features support
2. Select nvp6324 to y in menuconfig (the default is 'y').
3. Run "make Image -j8" and "make freescale/fsl-imx8qm-mek-nvp6324.dtb" to build the kernel and dts. The outputs are at:
work/imx8qmmek-poky-linux/linux-imx/4.14.98-r0/build/arch/arm64/Image
work/imx8qmmek-poky-linux/linux-imx/4.14.98-r0/build/arch/arm64/boot/dts/freescale/fsl-imx8qm-mek-nvp6324.dtb
Copy them to the SD boot partition.
4. Reboot the board. You can test the first camera with the command:
./mx8_v4l2_cap_drm.out -cam 1 -ow 1280 -oh 720
mx8_v4l2_cap_drm.out is at /unit_tests/V4L2/mx8_v4l2_cap_drm.out.
The corresponding video devices should default to /dev/video0 ~ /dev/video3.
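After booting the new kernel and dtb, a quick way to confirm that the nvp6324 driver probed and the capture nodes were created is shown below; these are generic Linux commands, not specific to this patch set:
# Check that the nvp6324 driver probed without errors
dmesg | grep -i nvp6324
# Confirm the capture video nodes exist
ls /dev/video*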
View full article
Tested on Android 10 (android_Q10.0.0_1.0.0).

After your first BSP build, the kernel sources are at:
${MY_ANDROID}/vendor/nxp-opensource/kernel_imx/

For the i.MX8M Mini, you can check the defconfig files being used in:
${MY_ANDROID}/device/fsl/imx8m/evk_8mm/UbootKernelBoardConfig.mk
# imx8mm kernel defconfig
TARGET_KERNEL_DEFCONFIG := android_defconfig
TARGET_KERNEL_ADDITION_DEFCONF := android_addition_defconfig

You can change one of them to add the desired configuration:
- android_defconfig is ${MY_ANDROID}/vendor/nxp-opensource/kernel_imx/arch/arm64/configs/android_defconfig
- android_addition_defconfig is in the same folder, ${MY_ANDROID}/device/fsl/imx8m/evk_8mm/
"merge_config.sh" is called to generate the final defconfig file prior to building the kernel.
Check out: https://source.android.com/devices/architecture/kernel/config

For example, I want to add DEVMEM support to my build:
1. Change the defconfig
I add the line below to android_addition_defconfig:
CONFIG_DEVMEM=y
(It could also have been added to android_defconfig.)
2. Build the kernel
./imx-make.sh kernel -c -j8
3. Verify your change
After compiling, you can confirm your change by reading:
${MY_ANDROID}/out/target/product/evk_8mm/obj/KERNEL_OBJ/.config
Then rebuild boot.img and reprogram the target.
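Step 3 can be done with a simple grep against the generated .config, and once the board is reprogrammed you can also check the running kernel if it exposes /proc/config.gz (that depends on CONFIG_IKCONFIG_PROC being enabled in your kernel):
# On the host: confirm the option made it into the generated kernel config
grep CONFIG_DEVMEM ${MY_ANDROID}/out/target/product/evk_8mm/obj/KERNEL_OBJ/.config
# On the target (only if /proc/config.gz is available):
adb shell "zcat /proc/config.gz | grep CONFIG_DEVMEM"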
View full article
Hardware connection: there are two board-to-board connectors on the E-INK daughter card IMXEBOOKDC4, while there is only one on the i.MX7D Sabre board, as in the picture below. This might be a bit confusing when connecting the two: checking internally, the original design tried to wire both the eLCDIF and EPDC buses out to one daughter card, adding the flexibility to have different configurations (LCD/EPD) on one display daughter card. On the i.MX7D Sabre board, only one connector is available for the EPDC bus. Here is how we connect the i.MX7D Sabre and IMXEBOOKDC4:

Software setup: here we use the pre-built L3.14.38_6UL7D_Beta Linux as our boot image. Steps to set up, boot and test EPDC:
1. Download and decompress the BSP pre-built image package "L3.14.38_beta_images_MX6UL7D.tar.gz"; you should be able to find the SD image in it -- "fsl-image-gui-x11-imx7dsabresd.sdcard".
2. Program the SD image onto your SD card (>800 MBytes) with the command (I'm running this in Ubuntu): "dd if=fsl-image-gui-x11-imx7dsabresd.sdcard of=/dev/sdb;sync"
3. Insert the SD card into the slot (J6) on the i.MX7D Sabre board, connect the debug UART and power on the board.
4. Modify the u-boot environment variables as below:
a.) setenv fdt_file imx7d-sdb-epdc.dtb (originally this is "fdt_file=imx7d-sdb.dtb")
b.) setenv mmcargs 'setenv bootargs console=${console},${baudrate} root=${mmcroot} epdc video=mxcepdcfb:E060SCM,bpp=16' (originally this is "mmcargs=setenv bootargs console=${console},${baudrate} root=${mmcroot}")
5. Boot into the Linux kernel and run the unit test: "/unit_tests/mxc_epdc_fb_test.out". You should see test patterns running on the EPD.
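If you want the modified variables from step 4 to survive a power cycle instead of re-typing them on every boot, you can save them before continuing; these are standard u-boot commands (assuming the environment storage is writable on this board setup):
=> setenv fdt_file imx7d-sdb-epdc.dtb
=> setenv mmcargs 'setenv bootargs console=${console},${baudrate} root=${mmcroot} epdc video=mxcepdcfb:E060SCM,bpp=16'
=> saveenv
=> boot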
View full article
This doc shows how, on the i.MX8QXP MEK board, to configure the ov5640 sensor (parallel interface) to output 5MP (2592x1944) RAW (Bayer) data at 15fps, have the Parallel Capture Subsystem and Image Sensor Interface capture the RAW RGB data, and have the i.MX8QXP GPU debayer the RAW data and display the image.

HW: i.MX8QXP MEK board, MEK base board (to place the parallel camera), ov5640 sensor.
SW: Linux 4.14.98_2.0.0 BSP, and the patches in this doc.

Configuration at the camera sensor side
A Bayer filter is a color filter array (CFA) for arranging RGB color filters on a square grid of photosensors. The filter pattern is 50% green, 25% red and 25% blue, hence it is also called BGGR, RGBG, GRGB, or RGGB. The ov5640 has an image array capable of operating at up to 15 fps in 5 megapixel (2592x1944) resolution. The ov5640 supports the output formats RAW (Bayer), RGB565/555/444, CCIR656, YUV422/420, YCbCr422, and compression. To make the ov5640 output 5MP RAW data at 15fps, check my kernel patch imx8-ov5640-raw-capture-driver-4.14.98_2.0.0.diff, which applies on the i.MX Linux 4.14.98_2.0.0 BSP kernel code. For the parallel interface ov5640, the ov5640_raw_setting[] array of drivers/media/platform/imx8/ov5640_v3.c is used. This register setting comes from the ov5640 software application note and data sheet.

Configuration at the i.MX8QXP side
The Parallel Capture Subsystem consists of the Parallel Capture Interface (BT 656) and associated peripherals. It interfaces to the parallel CSI sensor. This allows for up to 24 RGB data bits in parallel or for RGB components on consecutive clocks (up to 10-bit color depth). The formats supported are RGB, RAW and YUV 422. Below is the Parallel Capture Subsystem diagram:
For RAW format data, CSI_CTRL_REG of the Parallel Capture Subsystem needs to be configured as in my patch; otherwise I found that correct data cannot be captured.

The multiple input sources (MIPI CSI, Parallel Capture) capture the pixel data and feed it to the ISI. The ISI is responsible for capturing and pre-processing the pixel data from multiple input sources and storing it into memory. Below is the ISI diagram:
For RAW format data, any processing pipeline of the ISI should be bypassed; the ISI is used only to write the data to memory.

Capture test code
To capture the RAW data and save it to a file, check my patch imx8_ov5640_raw_captupre_test_4.14.98_2.0.0_ga.diff, which applies on the i.MX Linux 4.14.98_2.0.0 BSP unit test code. Note the usage is:
./imx8_cap.out -of -cam 1 -fr 15 -fmt BA81 -ow 2592 -oh 1944 -num 100

Display RAW data
The RAW data cannot be displayed directly; a debayer process is needed to recover complete red, green and blue values for each pixel. The debayer process, if run on the CPU, costs a lot of CPU time. To save CPU time, debayering can be done by the GPU. The method is: the captured RAW data is uploaded to the GPU as a texture, the GPU performs the debayer to produce the full color of each pixel, and the result is displayed. To upload the RAW camera data to the GPU with zero memory copy, the i.MX8QXP GPU extension GL_VIV_direct_texture is used. It creates a texture with direct access support. The API glTexDirectVIVMap supports mapping a user space memory or a physical address into the texture surface. glTexDirectVIVMap needs the logical and physical addresses of the data buffer, so the data buffer is allocated from the g2d lib (it is a dma-buffer, and both the logical and physical addresses of the buffer can be obtained), then queued as DMABUF to the v4l2 capture driver; after dequeue, the RAW camera data is passed to the GPU for debayering. On the GPU side, the OpenGL shader code from "Efficient, High-Quality Bayer Demosaic Filtering on GPUs" is used.
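As an aside before the GPU part: a quick sanity check on the file produced by the capture command above. With the 8-bit Bayer format requested (BA81), each frame is one byte per pixel, so a 2592x1944 frame should be 2592 x 1944 = 5,038,848 bytes, and 100 frames should give roughly 504 MB, assuming the unit test writes raw frames back to back (which is how I read the test patch):
# Expected size of one 8-bit Bayer frame at 2592x1944
echo $((2592 * 1944))        # prints 5038848 (bytes per frame)
# Compare against the size of the captured output file
ls -l <captured raw file>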
For the GPU debayer side, check my patch imx8_debayer-gpusdk-5.3.0.diff, which applies on the i.MX GPU SDK 5.3.0 code. Note that it only performs the debayer; there is no extra processing.

Known issue
When the ov5640 outputs 5MP at 15fps, there is more noise in the camera data than when it outputs 5MP at 5fps. My debugging suggests this noise comes from the ov5640 itself.

Reference:
a> https://www.nxp.com/webapp/Download?colCode=IMX8DQXPRM
b> https://www.nxp.com/webapp/Download?colCode=L4.14.98_2.0.0_MX8QXP&appType=license
c> https://github.com/NXPmicro/gtec-demo-framework
d> ov5640 data sheet
e> ov5640 software application note
f> Efficient, High-Quality Bayer Demosaic Filtering on GPUs
   https://www.semanticscholar.org/paper/Efficient%2C-High-Quality-Bayer-Demosaic-Filtering-on-McGuire/088a2f47b7ab99c78d41623bdfaf4acdb02358fb
View full article