i.MX Processors Knowledge Base

Note: All these GStreamer pipelines have been tested on an i.MX6Q board running kernel version 3.0.35-2026-geaaf30e.

Tools: gst-launch, gst-inspect

FSL Pipeline Examples:
- GStreamer i.MX6 Decoding
- GStreamer i.MX6 Encoding
- GStreamer Transcoding and Scaling
- GStreamer i.MX6 Multi-Display
- GStreamer i.MX6 Multi-Overlay
- GStreamer i.MX6 Camera Streaming
- GStreamer RTP Streaming

Other plugins:
- GStreamer ffmpeg
- GStreamer i.MX6 Image Capture
- GStreamer i.MX6 Image Display

Misc:
- Testing GStreamer
- Tracing GStreamer Pipelines
- GStreamer miscellaneous
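A quick sanity check of the two tools before trying any of the pipelines (standard GStreamer 0.10 commands):

    # list all registered plugins/elements
    gst-inspect | head
    # dump the properties of one element used throughout these examples
    gst-inspect vpudec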
View full article
In this article, some experiments are done to verify the video playback capability of the i.MX6DQ under different VPU clocks.

1. Preparation

Board: i.MX6DQ SD
Bitstream: 1080p "sunflower" at 40 Mbps, considered one of the toughest H.264 clips. The original clip is repeated 20 times to generate a new raw stream, which is then encapsulated into an MP4 container. This minimizes the influence of GStreamer's startup overhead compared to the VPU unit test.
Kernels: kernels generated with different VPU clock settings: 270 MHz, 298 MHz, 329 MHz, 352 MHz and 382 MHz.
Test setting: 1080p content decoded and displayed on a 1080p device (no resize).

2. Test commands for the VPU unit test and GStreamer

Tiled-format video playback is faster than NV12 format, so the experiments below use the tiled format.

Unit test command (the frame rate is set with -a 70, higher than the 1080p60 HDMI refresh rate):

    /unit_tests/mxc_vpu_test.out -D "-i /media/65a78bbd-1608-4d49-bca8-4e009cafac5e/sunflower_2B_2ref_WP_40Mbps.264 -f 2 -y 1 -a 70"

GStreamer command (free run, to reach the highest playback speed):

    gst-launch filesrc location=/media/65a78bbd-1608-4d49-bca8-4e009cafac5e/sunflower_2B_2ref_WP_40Mbps.mp4 typefind=true ! aiurdemux ! vpudec framedrop=false ! queue max-size-buffers=3 ! mfw_v4lsink sync=false

3. Video playback framerate measurement

During the test, we run "echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor" to make sure the CPU always works at its highest frequency, so that it can respond to any interrupt quickly. For each VPU clock setting, five rounds of tests are run; the max and min values are removed and the remaining three are averaged to obtain the final playback framerate.

| VPU clock | Test | #1 Dec/Play | #2 Dec/Play | #3 Dec/Play | #4 Dec/Play | #5 Dec/Play | Min | Max | Avg |
|-----------|------|-------------|-------------|-------------|-------------|-------------|-----|-----|-----|
| 270 MHz | unit test | 57.8 / 57.3 | 57.81 / 57.04 | 57.78 / 57.3 | 57.87 / 56.15 | 57.91 / 55.4 | 55.4 | 57.3 | 56.83 |
| 270 MHz | GST | 53.76 | 54.163 | 54.136 | 54.273 | 53.659 | 53.659 | 54.273 | 54.01967 |
| 298 MHz | unit test | 60.97 / 58.37 | 60.98 / 58.55 | 60.97 / 57.8 | 60.94 / 58.07 | 60.98 / 58.65 | 57.8 | 58.65 | 58.33 |
| 298 MHz | GST | 56.755 | 49.144 | 53.271 | 56.159 | 56.665 | 49.144 | 56.755 | 55.365 |
| 329 MHz | unit test | 63.8 / 59.52 | 63.92 / 52.63 | 63.8 / 58.1 | 63.82 / 58.26 | 63.78 / 59.34 | 52.63 | 59.52 | 58.56667 |
| 329 MHz | GST | 57.815 | 55.857 | 56.862 | 58.637 | 56.703 | 55.857 | 58.637 | 57.12667 |
| 352 MHz | unit test | 65.79 / 59.63 | 65.78 / 59.68 | 65.78 / 59.65 | 66.16 / 49.21 | 65.93 / 57.67 | 49.21 | 59.68 | 58.98333 |
| 352 MHz | GST | 58.668 | 59.103 | 56.419 | 58.08 | 58.312 | 56.419 | 59.103 | 58.35333 |
| 382 MHz | unit test | 64.34 / 56.58 | 67.8 / 58.73 | 67.75 / 59.68 | 67.81 / 59.36 | 67.77 / 59.76 | 56.58 | 59.76 | 59.25667 |
| 382 MHz | GST | 59.753 | 58.893 | 58.972 | 58.273 | 59.238 | 58.273 | 59.753 | 59.03433 |

Note: "Dec" is the VPU decoding fps; "Play" is the overall playback fps. Min/Max/Avg are computed over the playback values.

Some explanation: why does the GStreamer performance still improve with clock while the unit test is flatter? GStreamer uses a VPU wrapper that makes the VPU API more intuitive to call. At first the overall GStreamer playback performance is constrained by the VPU (VPU decode at 57.8 fps). As the VPU decoding performance rises above 60 fps with increasing VPU clock, the constraint becomes the 60 fps display refresh rate. The video display overhead of GStreamer is only about 1 fps, similar to the unit test.

Based on the test results, at 352 MHz the overall 1080p video playback on a 1080p display can reach ~60 fps. Alternatively, time-sharing the VPU between two pipelines on two displays allows 2 x 1080p @ 30 fps video playback.
However, this experiment is only valid for 1080p video playback on a 1080p display. For interlaced clips, or displays whose size differs from 1080p, the overall playback performance is limited by post-processing such as de-interlacing and resizing.
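A minimal sketch of one measurement pass, using the governor setting and unit-test command from above (the loop itself is added here only for illustration):

    # pin the CPU at its highest frequency so it services interrupts quickly
    echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
    # run five rounds; drop the min and max playback fps and average the remaining three
    for i in 1 2 3 4 5; do
        /unit_tests/mxc_vpu_test.out -D "-i /media/65a78bbd-1608-4d49-bca8-4e009cafac5e/sunflower_2B_2ref_WP_40Mbps.264 -f 2 -y 1 -a 70"
    done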
View full article
Multiple-Display means video playback on multiple screens. In case playback needs to be on a single screen, the mfw_isink element must be used; pipeline examples can be found at this link: GStreamer i.MX6 Multi-Overlay.

# Set these shell variables before running the pipelines
alias gl=gst-launch
SINK_1="\"mfw_v4lsink device=/dev/video17\""
SINK_2="\"mfw_v4lsink device=/dev/video18\""
SINK_3="\"mfw_v4lsink device=/dev/video20\""
media1=file:///root/media1
media2=file:///root/media2
media3=file:///root/media3

2 displays, HDMI + LVDS
Kernel parameters: video=mxcfb0:dev=hdmi,1920x1080M@60,if=RGB24 video=mxcfb1:dev=ldb,LDB-XGA,if=RGB666
Pipeline: gl playbin2 uri=$media1 video-sink=$SINK_1 playbin2 uri=$media2 video-sink=$SINK_2

2 displays, LVDS + LVDS
Kernel parameters: video=mxcfb0:dev=ldb,LDB-XGA,if=RGB666 video=mxcfb1:dev=ldb,LDB-XGA,if=RGB666
Pipeline: gl playbin2 uri=$media1 video-sink=$SINK_1 playbin2 uri=$media2 video-sink=$SINK_2

2 displays, LCD + LVDS
Kernel parameters: video=mxcfb0:dev=lcd,800x480M@55,if=RGB565 video=mxcfb1:dev=ldb,LDB-XGA,if=RGB666
Pipeline: gl playbin2 uri=$media1 video-sink=$SINK_1 playbin2 uri=$media2 video-sink=$SINK_2

3 displays, HDMI + LVDS + LVDS
Kernel parameters: video=mxcfb0:dev=hdmi,1920x1080M@60,if=RGB24 video=mxcfb1:dev=ldb,LDB-XGA,if=RGB666 video=mxcfb2:dev=ldb,LDB-XGA,if=RGB666
Pipeline: gl playbin2 uri=$media1 video-sink=$SINK_1 playbin2 uri=$media2 video-sink=$SINK_2 playbin2 uri=$media3 video-sink=$SINK_3
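Depending on the BSP, the framebuffers backing the extra displays may come up blanked; a hedged sketch of waking them before running the pipelines (the exact fbN indices vary per board and kernel configuration):

    # unblank the framebuffers that back the secondary displays
    echo 0 > /sys/class/graphics/fb1/blank
    echo 0 > /sys/class/graphics/fb2/blank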
View full article
Multiple-Overlay (or Multi-Overlay) means several videos playing back on a single screen. In case multiple screens are needed, check the dual-display case: GStreamer i.MX6 Multi-Display.

$ export VSALPHA=1
$ SAMPLE1=sample1.avi; SAMPLE2=sample2.avi; SAMPLE3=sample3.avi; SAMPLE4=sample4.avi
$ WIDTH=320; HEIGHT=240; SEP=20

Four videos (2x2 grid):

$ gst-launch \
playbin2 uri=file://`pwd`/$SAMPLE1 video-sink="mfw_isink axis-top=0 axis-left=0 disp-width=$WIDTH disp-height=$HEIGHT" \
playbin2 uri=file://`pwd`/$SAMPLE2 video-sink="mfw_isink axis-top=0 axis-left=`expr $WIDTH + $SEP` disp-width=$WIDTH disp-height=$HEIGHT" \
playbin2 uri=file://`pwd`/$SAMPLE3 video-sink="mfw_isink axis-top=`expr $HEIGHT + $SEP` axis-left=0 disp-width=$WIDTH disp-height=$HEIGHT" \
playbin2 uri=file://`pwd`/$SAMPLE4 video-sink="mfw_isink axis-top=`expr $HEIGHT + $SEP` axis-left=`expr $WIDTH + $SEP` disp-width=$WIDTH disp-height=$HEIGHT"

Basic rotation (2x1, normal and inverted):

$ gst-launch \
playbin2 uri=file://`pwd`/$SAMPLE1 video-sink="mfw_isink axis-top=0 axis-left=0 disp-width=$WIDTH disp-height=$HEIGHT rotation=0" \
playbin2 uri=file://`pwd`/$SAMPLE2 video-sink="mfw_isink axis-top=`expr $HEIGHT + $SEP` axis-left=0 disp-width=$WIDTH disp-height=$HEIGHT rotation=3"
View full article
In some cases it is desirable to have progressive content directly available from a TV-IN interface through the V4L2 capture device. In the BSP, HW-accelerated de-interlacing is only supported in the V4L2 output stream. Below is a patch, created against a rather old BSP version, that adds support for de-interlaced V4L2 capture. The patch might need to be adapted to newer BSPs; however, the logic and functionality are there and should shorten the development time. The patch adds another input device to the V4L2 framework that can be selected to perform the de-interlacing on the way to memory. The selection is done by passing the index "2" as an argument to the VIDIOC_S_INPUT V4L2 ioctl. Also attached is a modified tvin unit test that gives an example of how to use the new driver. An example sequence for running the test is as follows:

modprobe mxc_v4l2_capture
./mxc_v4l2_tvin_vdi.out -ow 720 -oh 480 -ol 10 -ot 20 -f YU12

Some key things to note:
- This driver does not support resize or color space conversion on the way to memory. The requested format and size should match what the sensor can provide directly.
- The driver was tested on a Sabre AI Rev A board running Linux 12.02.
- This code is not an official delivery, and as such Freescale provides no guarantee of support for it.
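For a quick check from the command line, the same input selection can be exercised with v4l2-ctl from v4l-utils (assuming the tool is present in your rootfs; otherwise the attached unit test performs the ioctl itself):

    # list the inputs the capture driver registers, then select index 2 (the de-interlacing path)
    v4l2-ctl -d /dev/video0 --list-inputs
    v4l2-ctl -d /dev/video0 --set-input=2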
View full article
(DEPRECATED. Please check this document for Real Time Streaming.) A server can be streaming video while a client, in this case an i.MX6 target, receives and decodes it. For example, a server with GStreamer and a connected web camera can stream with the following command:

$ # Pipeline 1
$ gst-launch v4l2src ! 'video/x-raw-yuv, format=(fourcc)I420, width=(int)1280, height=(int)800' ! ffenc_mpeg4 ! tcpserversink host=$CLIENT_IP port=$PORT

and on the target, the client receives, decodes and displays with

$ # Pipeline 2
$ gst-launch tcpclientsrc host=$SERVER_IP port=$PORT ! 'video/mpeg, width=(int)1280, height=(int)800, framerate=(fraction)10/1, mpegversion=(int)4, systemstream=(boolean)false' ! vpudec ! mfw_isink

The filter caps between the tcpclientsrc and the decoder (vpudec) depend on the sink caps coming from the server encoder (ffenc_mpeg4), so these may change depending on your needs. Running the above pipelines requires the environment variables SERVER_IP, CLIENT_IP and PORT. In case you want the i.MX6 to act as a server, just change the video source (either mfw_v4lsrc or v4l2src) and the encoder (vpuenc):

$ # Pipeline 3
$ gst-launch v4l2src ! 'video/x-raw-yuv, format=(fourcc)I420, width=(int)640, height=(int)480, interlaced=(boolean)false, framerate=(fraction)10/1' ! vpuenc ! tcpserversink host=$CLIENT_IP port=$PORT

For testing purposes, set SERVER_IP=127.0.0.1, CLIENT_IP=127.0.0.1 and PORT=500, and run pipelines 3 and 2 in two different consoles. Check the CPU usage with 'top' and see that the VPU is actually doing most of the work.
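A minimal sketch of that local test, with the client caps adjusted here to match the 640x480 MPEG-4 encode of pipeline 3 (variable values follow the suggestions above):

    $ # both consoles share these variables
    $ export SERVER_IP=127.0.0.1 CLIENT_IP=127.0.0.1 PORT=500
    $ # console 1: i.MX6 acting as server (pipeline 3)
    $ gst-launch v4l2src ! 'video/x-raw-yuv, format=(fourcc)I420, width=(int)640, height=(int)480, interlaced=(boolean)false, framerate=(fraction)10/1' ! vpuenc ! tcpserversink host=$CLIENT_IP port=$PORT
    $ # console 2: client caps match the 640x480 encoder output (pipeline 2 with resized caps)
    $ gst-launch tcpclientsrc host=$SERVER_IP port=$PORT ! 'video/mpeg, width=(int)640, height=(int)480, framerate=(fraction)10/1, mpegversion=(int)4, systemstream=(boolean)false' ! vpudec ! mfw_isink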
View full article
This document describes the i.MX 8QXP MEK mini-SAS connector features for Linux and Android use cases, covering the supported daughter cards, the process to change Device Tree (DTS) files or boot images, and how to enable the different display options on the board.
View full article
Notes: First run the playback pipeline, then the streaming pipeline. The example below streams H.263 video and AMR audio; change the codec format to your needs. In the case where the i.MX is the streaming machine, the audio encoder 'amrnbenc' must be installed first. That scenario has not been tested.

Shell variables and pipelines

Playback machine (receiver):

# On the playback machine, set either the IMX2PC or PC2IMX variables, then run the pipeline

## IMX2PC: case where the PC does the playback
AUDIO_DEC_SINK="rtpamrdepay ! amrnbdec ! alsasink "
VIDEO_CAPS="\"application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-1998\""
VIDEO_DEC_SINK="rtph263pdepay ! ffdec_h263 ! autovideosink"
## End of IMX2PC settings

## PC2IMX: case where the i.MX does the playback
AUDIO_DEC_SINK="rtpamrdepay ! mfw_amrdecoder ! alsasink "
VIDEO_CAPS="\"application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-1998\""
VIDEO_DEC_SINK="rtph263pdepay ! vpudec ! mfw_v4lsink "
## End of PC2IMX settings

PLAYBACK_AUDIO="udpsrc caps=\"application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)AMR,encoding-params=(string)1,octet-align=(string)1\" \
        port=5002 ! rtpbin.recv_rtp_sink_1 \
        rtpbin. ! $AUDIO_DEC_SINK \
        udpsrc port=5003 ! rtpbin.recv_rtcp_sink_1 \
        rtpbin.send_rtcp_src_1 ! udpsink port=5007 sync=false async=false"

PLAYBACK_VIDEO="udpsrc caps=$VIDEO_CAPS port=5000 ! rtpbin.recv_rtp_sink_0 \
        rtpbin. ! $VIDEO_DEC_SINK \
        udpsrc port=5001 ! rtpbin.recv_rtcp_sink_0 \
        rtpbin.send_rtcp_src_0 ! udpsink port=5005 sync=false async=false"

PLAYBACK_AV="$PLAYBACK_VIDEO $PLAYBACK_AUDIO"

# Playback pipeline
gst-launch -v gstrtpbin name=rtpbin $PLAYBACK_AV

Streaming machine (sender):

# On the streaming machine, set either the IMX2PC or PC2IMX variables, then run the pipeline

## IMX2PC: case where the i.MX does the streaming
IP=x.x.x.x # IP address of the playback machine
VIDEO_SRC="mfw_v4lsrc"
VIDEO_ENC="vpuenc codec=h263 ! rtph263ppay "
AUDIO_ENC="audiotestsrc ! amrnbenc ! rtpamrpay "
## End of IMX2PC settings

## PC2IMX: case where the PC does the streaming
IP=y.y.y.y # IP address of the playback machine
VIDEO_SRC="v4l2src"
VIDEO_ENC="ffenc_h263 ! rtph263ppay "
AUDIO_ENC="audiotestsrc ! amrnbenc ! rtpamrpay "
## End of PC2IMX settings

STREAM_AUDIO="$AUDIO_ENC ! rtpbin.send_rtp_sink_1 \
        rtpbin.send_rtp_src_1 ! udpsink host=$IP port=5002 \
        rtpbin.send_rtcp_src_1 ! udpsink host=$IP port=5003 sync=false async=false \
        udpsrc port=5007 ! rtpbin.recv_rtcp_sink_1"

STREAM_VIDEO="$VIDEO_SRC ! $VIDEO_ENC ! rtpbin.send_rtp_sink_0 \
        rtpbin.send_rtp_src_0 ! queue ! udpsink host=$IP port=5000 \
        rtpbin.send_rtcp_src_0 ! udpsink host=$IP port=5001 sync=false async=false \
        udpsrc port=5005 ! rtpbin.recv_rtcp_sink_0"

STREAM_AV="$STREAM_VIDEO $STREAM_AUDIO"

# Stream pipeline
gst-launch -v gstrtpbin name=rtpbin $STREAM_AV
View full article
This document describes the i.MX 8MQ EVK HDMI output and mini-SAS connector features for Linux and Android use cases, covering the supported daughter board, the process to change Device Tree (DTS) files or boot images, and how to enable the different display options on the board.
View full article
Watch the Freescale i.MX team boot up Android 5.0 Lollipop on i.MX6 application processors. The Freescale i.MX Android team has booted Android 5.0 Lollipop on the SABRE platform for the i.MX6 series. Google pushed all of the latest source for its Android release to AOSP on Nov. 5, and the Freescale Android team started their work immediately. After six days of bring-up, the team had enabled the basic features (connectivity, audio/video playback, sensors, input and display) by day 7! You can see some of the changes in the demo video at the beginning of the post. The Freescale i.MX Android team has closely followed almost every Android version since its release to AOSP and has solid experience with it. Below are some snapshots and pictures of Android Lollipop.
View full article
This document is a user guide for the GStreamer 1.0 based accelerated solution included in all i.MX 8 family SoCs supported by NXP BSP L5.4.24_1.1.0. Some instructions assume a host machine running a Linux distribution, such as Ubuntu, connected to an i.MX 8 device. These commands were tested using Ubuntu 18.04 LTS; while Ubuntu is not required on the host machine, other distributions have not been tested. These instructions are targeted for use with the following hardware:
• i.MX 8MQ EVK
• i.MX 8MN EVK
• i.MX 8QXP MEK B0
• i.MX 8QM MEK B0

Release History
v1.0 - Mar 2020 - Initial release.
v2.0 - Sep 2020 - Added the following content:
- Mux/Demux Examples
- Audio Examples
- Image Examples
- Transcode Examples
- Streaming Examples
- Multi-Display Examples
- Scaling and Rotation Examples
- Zero-copy Examples
- Debug Examples

Maintainers:
- Marco Franchi
- Pedro Jardim
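As a quick taste of the GStreamer 1.0 syntax the guide covers, a minimal hedged playback sketch (the media path is a placeholder; playbin autoplugs the accelerated decoders provided by the BSP):

    # decode and display a local file using the autoplugging playbin element
    gst-launch-1.0 playbin uri=file:///home/root/test.mp4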
View full article
Hello, here Jorge. In this post I will explain how to configure, record and play audio using an i.MX 8MIC-RPI-MX8 board.

Requirements:
- i.MX 8M Mini EVK
- Linux Binary Demo Files - i.MX 8M Mini EVK (L5.15.52_2.1.0)
- i.MX 8MIC-RPI-MX8 board
- Serial console emulator (Tera Term, PuTTY, etc.)
- Headphones/speakers

The 8MIC-RPI-MX8 accessory board is designed for voice-enabled application prototyping and development on the i.MX 8M family. The board plugs directly into the 40-pin expansion connector on the i.MX 8M Mini and Nano EVKs. Some features of this board are:
- 8 PDM microphones
- 8 monochrome LEDs
- 4 multi-color LEDs
- 2 status LEDs
- 4 pushbuttons
- Microphone mute switch
- Microphone geometry switch

Connecting the i.MX 8MIC-RPI-MX8 board. The i.MX 8MIC-RPI-MX8 board has a 40-pin expansion connector that plugs directly into the EVK board. Ensure that pin 1 of the 8MIC-RPI-MX8 is aligned with pin 1 of the EVK J1001, as shown in the next figure.

Selecting the device tree on the board. Once the pre-compiled image is flashed on the board (Flashing Linux BSP using UUU) and the 8MIC-RPI-MX8 is connected, it is necessary to select the correct device tree to handle the 8MIC board. In U-Boot, check the available .dtb files in the BSP with:

u-boot=> fatls mmc 2:1

You will get the list of .dtb files. In this case we are working with an i.MX 8M Mini EVK, so the corresponding .dtb file is imx8mm-evk-8mic-revE.dtb. To select it, set the environment variable and save it with:

u-boot=> setenv fdtfile imx8mm-evk-8mic-revE.dtb
u-boot=> saveenv

Double-check it using:

u-boot=> printenv fdtfile

Now it is time to boot Linux:

u-boot=> boot

Recording audio with the i.MX 8MIC-RPI-MX8 board. The Advanced Linux Sound Architecture (ALSA) provides audio and MIDI functionality to the Linux operating system. ALSA has the following significant features:
- Efficient support for all types of audio interfaces, from consumer sound cards to professional multichannel audio interfaces.
- Fully modularized sound drivers.
- SMP and thread-safe design.
- User space library (alsa-lib) to simplify application programming and provide higher-level functionality.
- Support for the older Open Sound System (OSS) API, providing binary compatibility for most OSS programs.

Once in Linux, check the audio codecs detected on the board using:

arecord -l

To record audio, use the ALSA arecord command; the different options are documented at the next link. In this case we use:

arecord -D hw:imxaudiomicfil -c8 -f s16_le -r48000 -d10 sample.wav

-D: selects the device.
-c: selects the number of channels for the recording.
-f: selects the sample format.
-r: selects the sample rate.
-d: sets the recording duration in seconds.
sample.wav: the name of the resulting audio file.

Running the last command starts the recording. It is time to make some noise and record it!

Playing audio from i.MX 8 boards. Now connect headphones or speakers to the jack. As with arecord, you can list the devices that can play audio with:

aplay -l

You will get all the codecs available for playback. To play our recording, use the ALSA aplay command; it is important to select the correct audio codec so the audio comes out of the jack on the board:

aplay -Dplughw:3,0 sample.wav

-D: selects the device.
sample.wav: the name of the audio file to play.

Hope this is helpful for people who want to record audio using PDM microphones and play audio from i.MX 8 boards. Best regards.
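The card index used with plughw (3,0 above) varies between boards and images; a quick standard-ALSA way to map card names to indices before playing:

    # list the registered sound cards with their indices
    cat /proc/asound/cards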
View full article
meta-avs-demos Yocto layer

meta-avs-demos is a Yocto meta layer (complementary to the NXP BSP release for i.MX), published on CodeAurora, that includes the additional packages required to support Amazon's Alexa Voice Service SDK (AVS_SDK) applications. The build procedure is described in the README.md of the corresponding branch. We now have two functional branches:
- imx-alexa-sdk: support for Morty-based i.MX releases
- imx7d-pico-avs-sdk_4.1.15-1.0.0: legacy support for Jethro releases

The master branch is only used to collect manifest files; used with the repo init/sync commands, these fetch the whole environment for the two specially supported boards: the i.MX7D Pico Pi and the i.MX8M EVK. However, meta-avs-demos can be used with any other i.MX board as well.

Recipes to include Amazon's Alexa Voice Service in your applications. The meta-avs-demos layer provides the recipes required to build an i.MX image with support for running the Alexa SDK. The imx-alexa-sdk branch is based on Morty and kernel 4.9.x, and it supports the following builds:
- i.MX7D Pico Pi
- i.MX8M EVK
- Generic i.MX board

For the i.MX7D Pico Pi and i.MX8M EVK there is extended support for additional (external) sound cards:
- TechNexion VoiceHat: 2-mic array board with DSP Concepts SW support
- Synaptics card: 2 mics with Sensory WakeWord support

The generic i.MX build is for any other regular i.MX board supported by the official NXP BSP releases. Only the default (on-board) sound card is supported, and the Sensory wakeword is currently only enabled for the ARMv7 architecture. Supporting an external board like the VoiceHat or the Synaptics card is up to the user, including the additional patches/changes required.

Build Instructions
Follow the corresponding README file for the steps to build an image with Alexa SDK support:
- README-IMX7D-PICOPI.md
- README-IMX8M-EVK.md
- README-IMX-GENERIC.md
View full article
1. Set up HDMI
Set up your kernel to use HDMI by adding the following code to the bootargs in U-Boot:

video=mxcfb0:dev=hdmi,1920x1080M@60,if=RGB24

2. Test raw audio
In order to test only raw audio, use the following command:

aplay -D hw:1,0 Kaleidoscope.wav

3. Make HDMI audio the default output
In order to route audio output over HDMI, replace the content of the file ~/.asoundrc with the following:

pcm.dmix_48000{
     type dmix
     ipc_key 5678293
     ipc_key_add_uid yes
     slave{
          pcm "hw:1,0"
          period_time 0
          period_size 2048
          buffer_size 24576
          format S16_LE
          rate 48000
     }
}
pcm.!dsnoop_44100{
     type dsnoop
     ipc_key 5778293
     ipc_key_add_uid yes
     slave{
          pcm "hw:0,0"
          period_time 0
          period_size 2048
          buffer_size 24576
          format S16_LE
          rate 44100
     }
}
pcm.!dsnoop_48000{
     type dsnoop
     ipc_key 5778293
     ipc_key_add_uid yes
     slave{
          pcm "hw:1,0"
          period_time 0
          period_size 2048
          buffer_size 24576
          format S16_LE
          rate 48000
     }
}
pcm.asymed{
     type asym
     playback.pcm "dmix_48000"
     capture.pcm "dsnoop_44100"
}
pcm.dsp0{
     type plug
     slave.pcm "asymed"
}
pcm.!default{
     type plug
     route_policy "average"
     slave.pcm "asymed"
}
ctl.mixer0{
     type hw
     card 0
}

This configures ALSA to use sound card hw:1,0. Please pay attention to use the proper audio card name for your device. In order to see the available sound cards on the board:

root@imx53qsb:~# aplay -l
**** List of PLAYBACK Hardware Devices ****
card 0: imx3stack [imx-3stack], device 0: SGTL5000 SGTL5000-0 []
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: imx3stackspdif [imx-3stack-spdif], device 0: IMX SPDIF mxc spdif-0 []
  Subdevices: 1/1
  Subdevice #0: subdevice #0

For details on how to create asound.conf, please see the alsa-lib configuration introduction.

4. Encoded audio
For encoded (i.e. AC3, DTS) audio you can use, for example, ac3dec, a utility provided by alsa-tools, with the following command line:

ac3dec -D hw:1,0 -C test.ac3

This works for both HDMI audio and SPDIF audio. Double-check your hardware and/or schematic in order to know which one to use.
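To verify the default routing without a media file, speaker-test from alsa-utils (assuming it is included in your image) can drive the asymed/dmix chain defined above:

    # play a 2-channel test tone at 48 kHz through the default device
    speaker-test -D default -c 2 -r 48000 -t sine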
View full article
Overview
As more and more communication is required between the online and offline worlds, QR codes are widely used in mobile payment, mobile mini apps, industrial item identification and so on. The i.MX6UL/ULL has CSI and PXP IPs for camera input and image CSC/flip/rotation acceleration, and an LCDIF IP supporting the display, but no 3D IP. This low-power, low-end AP is therefore very suitable for the industrial HMI segment, which does not require fancy 3D graphics but a simple and straightforward GUI for interaction. A QR code scanner is one of the use cases in this segment that more and more customers are focusing on. The i.MX6UL CPU frequency is about 500 MHz and it has no GPU IP, so a lightweight GUI and window system are required. Here we recommend QT with the Wayland backend (without X11), which makes the window system smaller and faster than a traditional X11 UI. We chose QT because it has an open source version, rich components, platform independence, good performance on embedded systems and strong development tools like QtCreator for creating applications. To enable the QT development environment, check this: Enable QT development for i.MX6UL (v2).

Here I made a QR code scanner demo based on QT 5.6 + QZXing (a QR/bar code scanning engine) running on the i.MX6UL EVK board with a UVC camera (at least 640x480 resolution is required) and a 480x272 LCD. The source code is open here (License Apache 2.0): https://github.com/muddog/QRScanner

Implementation
For camera preview and capture, GStreamer is the first thing to think of: it is easy to use and has the acceleration pads implemented by NXP for the i.MX6UL. It is very easy to enable a preview from the console:

$ gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,format=YUY2,width=640,height=320 ! imxvideoconvert_pxp ! video/x-raw,format=RGB16 ! waylandsink

This works on the i.MX6UL EVK, with the PXP IP doing the color space conversion from YUY2 to RGB16 (plus any scaling of the image). The CPU load is about 20-30%; if you replace "imxvideoconvert_pxp" with the "videoconvert" component, CSC and scaling are done by the CPU and the load increases to 50-60%. "/dev/video1" is the device node for the UVC camera; it may differ in your environment.

So our target is clear: create such a pipeline (with PXP acceleration) in the QT application and use an appsink to fetch preview images, "sinking" each one to a QWidget by drawing the image on the widget surface for preview (say every 50 ms for 20 fps). Then, in another thread, we fetch the preview buffer at a fixed interval (such as every 0.5 s) and feed it into the ZXing engine to decode the strings inside the image. These are the classes in the source code:

ScannerQWidgetSink
Acts as a GStreamer sink for preview rendering. Initializes the pipeline and creates a timer with a 50 ms timeout. In the timer handler, we use the appsink to copy the camera buffer from GStreamer and tell the ViewfinderWidget to update (redraw event).

ViewfinderWidget
This class inherits from QWidget and draws the preview buffer as a QImage onto its own surface using QPainter. The QImage is created at the very beginning with the image buffer allocated by the ScannerQWidgetSink. Because QImage itself does not own the image buffer, the buffer must stay alive while it is in use, so we keep this buffer for the ScannerQWidgetSink's life cycle and copy the appsink buffer from the pipeline into it for preview.

MainWindow
Creates the main window, which has no title bar or border. Starts the animation of the red scan-line bar. Creates the DecoderThread and ScannerQWidgetSink instances, then sets them up and starts them.

DecoderThread
An infinite loop waiting for a buffer released by the ScannerQWidgetSink every 0.5 s. It copies the buffer data into its own buffer (imgData), to prevent the sink from changing the buffer during decoding, then feeds this copy into the ZXing engine and shows the decoder result in a QLabel.

Screenshot under the Wayland (Weston) desktop.

Customize
Camera instance: here a UVC camera plugged into the USB host is used, with device node /dev/video1. If you want to use a CSI or other camera, change the construction parameters of ScannerQWidgetSink():

sink = new ScannerQWidgetSink(ui->widget, QString("v4l2src device=/dev/video1"));

Captured and preview image resolution: change the static member values of the ScannerQWidgetSink class:

uint ScannerQWidgetSink::CAPTURE_HEIGHT = 480;
uint ScannerQWidgetSink::CAPTURE_WIDTH = 640;

Preview fps and decoding frequency: find the "framerate=20/1" string in ScannerQWidgetSink::GstPipelineInit() and change it to your fps. You also have to change the renderTimer start timeout value in ::StartRender(). The decoding frequency is determined by renderCnt, which sets after how many preview frames the decoder is fed.

Main window size: the main window is fixed size; change mainwindow.ui, which is easy to do in the QtCreator Designer.

FAQ
Why not use a CSI camera in the demo? Honestly, I do not have a CSI camera module; it is also not populated (DNP) when you buy the board on NXP.com. So a widely available UVC camera is preferred; it also makes it easy to scan QR codes shown on your phone, your display panel, etc.

Why not use QCamera for preview and capture? The QCamera class in the QtMultimedia component uses the camerabin2 GStreamer plugin, which creates a very long pipeline for the different uses of viewfinder, image capture and video encoding. camerabin2 eats too much CPU and memory, and taking pictures and recording are very slow. The 30 fps preview ate about 70-80% CPU even after I hacked it to use imxvideoconvert_pxp instead of the software videoconvert. In the end I gave up implementing QRScanner on top of QCamera.

How to make sure only one instance of the QT app is running? We can use QSharedMemory to create a shared memory block with a unique KEY. When a second instance of the app starts, it checks whether shared memory with this KEY has already been created. If the shm is there, another instance is already running and the new one has to exit().
But as the QT documentation mentions, the QSharedMemory block cannot be destroyed correctly when the app crashes, which means we have to handle each termination signal and do the delete ourselves:

static QSharedMemory *gShm = NULL;

static void terminate(int signum)
{
   if (gShm) {
      delete gShm;
      gShm = NULL;
   }
   qDebug() << "Terminate with signal:" << signum;
   exit(128 + signum);
}

int main(int argc, char *argv[])
{
   QApplication a(argc, argv);

   // Handle any further termination signals to ensure the
   // QSharedMemory block is deleted even if the process crashes
   signal(SIGHUP,  terminate); // 1
   signal(SIGINT,  terminate); // 2
   signal(SIGQUIT, terminate); // 3
   signal(SIGILL,  terminate); // 4
   signal(SIGABRT, terminate); // 6
   signal(SIGFPE,  terminate); // 8
   signal(SIGBUS,  terminate); // 10
   signal(SIGSEGV, terminate); // 11
   signal(SIGSYS,  terminate); // 12
   signal(SIGPIPE, terminate); // 13
   signal(SIGALRM, terminate); // 14
   signal(SIGTERM, terminate); // 15
   signal(SIGXCPU, terminate); // 24
   signal(SIGXFSZ, terminate); // 25

   gShm = new QSharedMemory("QRScannerNXP");
   if (!gShm->create(4, QSharedMemory::ReadWrite)) {
      delete gShm;
      qDebug() << "Only allow one instance of QRScanner";
      exit(0);
   }
.....
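A hedged build-and-run sketch for the demo, assuming the repository ships a qmake project file and the target runs the Weston desktop (in practice you would cross-compile with the qmake from your Yocto SDK):

    git clone https://github.com/muddog/QRScanner
    cd QRScanner
    qmake && make
    ./QRScanner -platform wayland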
View full article
Check the new updated version for Morty here.

Step 1: Get the i.MX Yocto AVS setup environment
Review the steps under Chapter 3 of the i.MX_Yocto_Project_User's_Guide.pdf in the L4.X LINUX_DOCS to prepare your host machine, including at least the following essential Yocto packages:

$ sudo apt-get install gawk wget git-core diffstat unzip texinfo \
  gcc-multilib build-essential chrpath socat libsdl1.2-dev u-boot-tools

Install the i.MX NXP AVS repo. Create/move to a directory where you want to install the AVS Yocto build environment; let's call it <yocto_dir>:

$ cd <yocto_dir>
$ repo init -u https://source.codeaurora.org/external/imxsupport/meta-avs-demos -b master -m imx7d-pico-avs-sdk_4.1.15-1.0.0.xml

Download the AVS BSP build environment:

$ repo sync

Step 2: Set up Yocto for the Alexa_SDK image with the AVS-SETUP-DEMO script
Run the avs-setup-demo script as follows to set up your environment for the imx7d-pico board:

$ MACHINE=imx7d-pico DISTRO=fsl-imx-x11 source avs-setup-demo.sh -b <build_sdk>

where <build_sdk> is the name you will give to your build folder. After accepting the EULA, the script will prompt whether you want to enable the following:

Sound card selection. The following sound cards are supported by the build:
- SGTL (on-board audio codec of the PicoPi)
- 2-Mic Conexant
The script asks if you are going to use the Conexant card; if not, SGTL is assumed as your selection:
Are you going to use Conexant Sound Card [Y/N]?

Install the Alexa SDK. The next option selects whether to pre-install the AVS SDK software on the image:
Do you want to build/include the AVS_SDK package on this image(Y/N)?
If you select YES, your image will contain the AVS SDK ready to use (after authentication). Note this AVS_SDK will not have WakeWord detection support, but it can be added at runtime. If you select NO, you can always fetch and build the AVS_SDK manually at runtime; all the package dependencies will already be there, so only fetching the AVS_SDK source code and building it is required.

Finish the avs-image configuration. At the end you will see a summary matching the configuration you selected for your image build. The following example is for a pre-installed AVS_SDK with Conexant sound card support and WiFi/BT not enabled:

==========================================================
  AVS configuration is now ready at conf/local.conf
    - Sound Card = Conexant
    - AVS_SDK pre-installed
  You are ready to bitbake your AVS demo image now:
    bitbake avs-image
==========================================================

Step 3: Build the AVS image
Go to your <build_sdk> directory and start the build of avs-image. There are two options:

Regular build:
$ cd <yocto_dir>/<build_sdk>
$ bitbake avs-image

With QT5 support included:
$ cd <yocto_dir>/<build_sdk>
$ bitbake avs-image-qt5

The image with QT5 is useful if you want to add a GUI, for example to render DisplayCards.

Step 4: Deploy the built image to an SD/MMC card to boot the target board
After a build has successfully completed, the created image resides at <build_sdk>/tmp/deploy/images/imx7d-pico/. In this directory you will find the imx7d-pico-avs.sdcard or imx7d-pico-avs-qt5.sdcard image, depending on the build you chose in Step 3. To flash the .sdcard image into the eMMC device of your PicoPi board, follow the next steps:

Download the bootbomb flasher. Follow the instructions in Section 4 (Board Reflashing) of the Quick Start Guide for the AVS kit to set up your board in flashing mode. Then copy the built SDCARD file (replace /dev/sd<x> with your board's device node):

$ sudo dd if=imx7d-pico-avs.sdcard of=/dev/sd<x> bs=1M && sync
$ sync

Properly eject the pico-imx7d board:

$ sudo eject /dev/sd<x>

NXP Documentation
Refer to the Quick Start Guide for the AVS SDK to fully set up your PicoPi board with the Synaptics 2Mic and the PicoPi i.MX7D. For a more comprehensive understanding of Yocto, its features and setup, more image build and deployment options, and customization, please take a look at the i.MX_Yocto_Project_User's_Guide.pdf document from the Linux documents bundle mentioned at the beginning of this document. For a more detailed description of the Linux BSP and U-Boot use and configuration, please take a look at the i.MX_Linux_User's_Guide.pdf document from the same bundle.
View full article
Audio, from a file
gst-launch filesrc location=test.wav ! wavparse ! mfw_mp3encoder ! filesink location=output.mp3

Audio recording
gst-launch alsasrc num-buffers=$NUMBER blocksize=$SIZE ! mfw_mp3encoder ! filesink location=output.mp3
# where
#     duration = $NUMBER*$SIZE*8 / (samplerate * channels * bitwidth)
# Example: 60 seconds of recording
gst-launch alsasrc num-buffers=240 blocksize=44100 ! mfw_mp3encoder ! filesink location=output.mp3
#
# To verify that it is correct, do a normal audio playback
gst-launch filesrc location=output.mp3 typefind=true ! beepdec ! audioconvert ! 'audio/x-raw-int,channels=2' ! alsasink

Video, from a test source
gst-launch videotestsrc ! queue ! vpuenc ! matroskamux ! filesink location=./test.avi

Video, from a file
gst-launch filesrc location=sample.yuv blocksize=$BLOCK_SIZE ! 'video/x-raw-yuv,format=(fourcc)I420, width=$WIDTH, height=$HEIGHT, framerate=(fraction)30/1' ! vpuenc codec=$CODEC ! matroskamux ! filesink location=output.mkv sync=false
# where
#     BLOCK_SIZE = WIDTH * HEIGHT * 1.5
#     CODEC = 0 (MPEG4), 5 (H263), 6 (H264) or 12 (MJPG)
#
# For example, encoding a CIF raw file
gst-launch filesrc location=sample.yuv blocksize=152064 ! 'video/x-raw-yuv,format=(fourcc)I420, width=352, height=288, framerate=(fraction)30/1' ! vpuenc codec=0 ! matroskamux ! filesink location=sample.mkv sync=false

Video, from a web camera
# When the web cam is connected, the device node /dev/video0 should be present. To test the camera without encoding:
gst-launch v4l2src ! mfw_v4lsink
# To record, run:
gst-launch v4l2src num-buffers=-1 ! queue max-size-buffers=2 ! vpuenc codec=0 ! matroskamux ! filesink location=output.mkv sync=false
#
# where sync=false tells the filesink not to synchronize against the clock
#
# In case a specific width/height is needed, just add the filter caps
gst-launch v4l2src num-buffers=-1 ! 'video/x-raw-yuv,format=(fourcc)I420, width=352, height=288, framerate=(fraction)30/1' ! queue ! vpuenc codec=0 ! matroskamux ! filesink location=output.mkv sync=false
#
# In case you want to see on the screen what the camera is capturing, add a tee element
gst-launch v4l2src num-buffers=-1 ! tee name=t ! queue ! mfw_v4lsink t. ! queue ! vpuenc codec=0 ! matroskamux ! filesink location=output.mkv sync=false

Video, from a parallel/MIPI camera
# The camera driver needs to be loaded before executing the pipeline; refer to the BSP document to see which driver to load
# MIPI (J5 port):
modprobe ov5640_camera_mipi
modprobe mxc_v4l2_capture
# Parallel (J9 port):
modprobe ov5642_camera
modprobe mxc_v4l2_capture

gst-launch mfw_v4lsrc ! queue ! vpuenc codec=0 ! matroskamux ! filesink location=output.mkv sync=false

# Do a 'gst-inspect mfw_v4lsrc' or 'gst-inspect vpuenc' to see other possible settings (resolution, fps, codec, etc.)
View full article
If you want to use a USB camera (these types of cameras are also called 'web cameras') with GStreamer on i.MX6 devices (Linux kernel version >= 3.0.35), you need to either load the module dynamically or compile and link it statically by selecting (Y) the following config in the kernel configuration:

     Device Drivers -> Multimedia support -> Video capture adapters -> V4L USB devices -> <*> USB Video Class (UVC)

After the kernel image has been built, flash it to the target, plug in the web cam, then on a (target) terminal run

     gst-launch v4l2src ! mfw_v4lsink

You should see what the camera is capturing on the display. In case you need to encode the camera src data, place the encoder into the pipeline:

     gst-launch v4l2src num-buffers=100 ! queue ! vpuenc codec=0 ! matroskamux ! filesink location=output.mkv sync=false

We are using a certain codec (codec=0 means MPEG-4); check the options using 'gst-inspect vpuenc'.
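If UVC support was built as a module (M) instead of built-in, a minimal sketch of loading it at runtime (uvcvideo is the standard upstream module name):

     # load the USB Video Class driver, then confirm the capture node appears
     modprobe uvcvideo
     ls /dev/video*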
View full article
Video, bad performance
gst-launch filesrc location=test.mp4 typefind=true ! aiurdemux ! vpudec ! mfw_v4lsink

Video, better performance
gst-launch filesrc location=sample.mp4 typefind=true ! aiurdemux ! queue max-size-time=0 ! vpudec ! mfw_v4lsink
# typefind=true allows 'type finding' the source file before negotiating
# max-size-time=0 indicates to ignore possible blocking issues
# In case of ASF files
gst-launch filesrc location=sample.asf typefind=true ! aiurdemux ! queue max-size-time=0 ! mfw_wmvdecoder ! mfw_v4lsink

Audio
gst-launch filesrc location=sample.mp3 typefind=true ! beepdec ! audioconvert ! 'audio/x-raw-int, channels=2' ! alsasink

Audio with visualization
gst-launch filesrc location=sample.mp3 typefind=true ! beepdec ! tee name=t ! queue ! audioconvert ! 'audio/x-raw-int, channels=2' ! alsasink t. ! queue ! audioconvert ! goom ! autovideoconvert ! autovideosink

Video/Audio, long version
gst-launch filesrc location=sample.avi typefind=true ! aiurdemux name=demux demux. ! queue max-size-buffers=0 max-size-time=0 ! vpudec ! mfw_v4lsink demux. ! queue max-size-buffers=0 max-size-time=0 ! beepdec ! audioconvert ! 'audio/x-raw-int, channels=2' ! alsasink
# queue properties max-size-buffers=0 and max-size-time=0 allow a smoother playback; type 'gst-inspect queue' for more info

Video/Audio, short version (gplay)
gplay sample.avi

Video/Audio, short version (playbin2)
gst-launch playbin2 uri=file://<full path to sample file>
View full article
Test digital zoom with the IPU for camera preview.

Board: Sabre-SD (i.MX6DQ)
BSP: Android 13.4GA

In the flow shown above, one frame buffer is processed in four steps during camera preview. A step is added to modify the frame buffer before step 4; this added step zooms one preview frame. The figure below shows the crop function of the IPU library; we use this function to scale the frame.

Test result: preview at zoom level 0 and preview at maximum zoom level (see the attached thumbnails).

When taking pictures at 5M pixels with a zoom level above 1, the picture size is not 2592x1944 but 2016x1512. The underlying reason is that the IPU crop function only supports a maximum output of 2048x2048.
View full article