i.MX Processors Knowledge Base

This is based on L5.10.35 BSP where you have to install QT static build: Qt 5.15 static build: Assuming your sysroot is at "/sysroot-cross" and your toolchain is at "/Toolchain" your qt-source is at /Qt-5.15 PATH=/sysroot-cross/bin:/sysroot-cross/sbin:/Toolchain/bin mkdir /Qt-5.15/mkspecs/qws/linux-imx6-g++ create in this dir the textfile "qmake.conf" with this content: ####################### snip qmake.conf ############################## include(../../common/linux.conf) include(../../common/qws.conf) # modifications to g++.conf QMAKE_CC                = arm-linux-gnueabi-gcc QMAKE_CFLAGS            = -pipe -isystem /sysroot-cross/include -isystem /sysroot-cross/usr/include QMAKE_CXX               = arm-linux-gnueabi-g++ QMAKE_CXXFLAGS          = -pipe -isystem /sysroot-cross/include -isystem /sysroot-cross/usr/include QMAKE_INCDIR            = /sysroot-cross/include /sysroot-cross/usr/include QMAKE_LIBDIR            = /sysroot-cross/lib /sysroot-cross/usr/lib QMAKE_LINK              = arm-linux-gnueabi-g++ QMAKE_LINK_SHLIB        = arm-linux-gnueabi-g++ QMAKE_LFLAGS            = -L/sysroot-cross/lib -L/sysroot-cross/usr/lib -Wl,-rpath-link -Wl,/sysroot-cross/lib QMAKE_LFLAGS           += -Wl,-rpath-link -Wl,/sysroot-cross/usr/lib #Opengl QMAKE_INCDIR_OPENGL = /Vivante/include QMAKE_INCDIR_OPENGL += /Vivante/include/GL QMAKE_INCDIR_OPENGL += /Vivante/include/EGL QMAKE_INCDIR_OPENGL += /Vivante/include/GLES2 QMAKE_LIBDIR_OPENGL = /Vivante/lib QMAKE_INCDIR_OPENGL_ES1 = $$QMAKE_INCDIR_OPENGL QMAKE_LIBDIR_OPENGL_ES1 = $$QMAKE_LIBDIR_OPENGL QMAKE_INCDIR_OPENGL_ES1CL = $$QMAKE_INCDIR_OPENGL QMAKE_LIBDIR_OPENGL_ES1CL = $$QMAKE_LIBDIR_OPENGL QMAKE_INCDIR_OPENGL_ES2 = /Vivante/include QMAKE_INCDIR_OPENGL_ES2 += /Vivante/include/EGL QMAKE_INCDIR_OPENGL_ES2 += /Vivante/include/GLES2 QMAKE_LIBDIR_OPENGL_ES2 = $$QMAKE_LIBDIR_OPENGL QMAKE_INCDIR_EGL = $$QMAKE_INCDIR_OPENGL_ES2 QMAKE_LIBDIR_EGL = $$QMAKE_LIBDIR_OPENGL QMAKE_LIBS_EGL = -lEGL -lGAL -lGLESv2 -lGLES_CM QMAKE_LIBS_OPENGL_ES2 = -lEGL -lGAL -lGLESv2 -lGLES_CM QMAKE_LIBS_OPENGL = -lEGL -lGAL -lGLESv2 -lGLES_CM QMAKE_LIBS_OPENGL_QT = -lEGL -lGAL -lGLESv2 -lGLES_CM QMAKE_LIBS_OPENGL_ES1 = QMAKE_LIBS_OPENGL_ES1CL = # modifications to linux.conf QMAKE_AR                = arm-linux-gnueabi-ar cqs QMAKE_OBJCOPY           = arm-linux-gnueabi-objcopy QMAKE_STRIP             = arm-linux-gnueabi-strip QMAKE_CFLAGS_RELEASE   = -pipe -isystem /sysroot-cross/include -isystem /sysroot-cross/usr/include load(qt_config) ####################### snip qmake.conf ############################## create in the same dir the text file "qplatformdefs.h" ####################### snip qplatformdefs.h ############################## #include "../../linux-g++/qplatformdefs.h" ####################### snip qplatformdefs.h ############################## now goto dir /Qt-5.15 cd /Qt-5.15 call configure with ./configure -opensource -confirm-license -release -no-rpath -no-fast \     -no-sql-ibase -no-sql-mysql -no-sql-odbc -no-sql-psql -no-sql-sqlite2 \     -no-qt3support -no-mmx -no-3dnow -no-sse -no-sse2 -no-sse3 -no-ssse3 \     -no-sse4.1 -no-sse4.2 -no-avx -no-optimized-qmake -no-nis -no-cups -pch \     -reduce-relocations -force-pkg-config -prefix /usr -no-armfpa -make libs \     -nomake docs -little-endian -embedded armv6 -qt-decoration-styled \     -depths all -xplatform qws/linux-imx6-g++ -iconv -largefile -qt-gfx-linuxfb \     -qt-gfx-multiscreen -qt-mouse-pc -qt-mouse-linuxinput -qt-libpng \     -plugin-gfx-directfb -system-zlib -no-accessibility -no-gfx-transformed \     
-no-gfx-qvfb -no-gfx-vnc -no-kbd-tty -no-kbd-linuxinput -no-kbd-qvfb \     -no-mouse-linuxtp -no-mouse-tslib -no-mouse-qvfb -no-libmng -no-libtiff \     -no-gif -no-libjpeg -no-freetype -no-stl -no-glib -no-openssl -no-egl \     -no-xmlpatterns -no-exceptions -no-multimedia -no-audio-backend -no-phonon \     -no-phonon-backend -no-webkit -no-script -no-scripttools -no-svg -no-script \     -no-declarative -no-sql-sqlite -no-qdbus -no-opengl -static -nomake tools \     -nomake examples -nomake demos when configuring is finished call make after a looong time, when everything goes right, we have a staticly compiled Qt. DO NOT call "make install". We will install manually: copy from /Qt-5.15/bin the files moc, uic, rcc and qmake to somewhere in PATH, eg. /sysroot-cross/bin copy the contents of dir /Qt-5.15/mkspecs to /sysroot-cross/usr/mkspec copy the contents of dir /Qt-5.15/plugins to /sysroot-cross/usr/plugins copy the contents of dir /Qt-5.15/include to /sysroot-cross/usr/include copy the contents of dir /Qt-5.15/lib to /sysroot-cross/usr/lib Test application camtest: if you don't have/want directfb plugin remove from camtest.pro the lines LIBS += -L/sysroot-cross/usr/plugins/gfxdrivers QTPLUGIN += QDirectFBScreen and the lines from main.cpp #include <QtPlugin> Q_IMPORT_PLUGIN(qdirectfbscreen) generate makefile by typing /sysroot-cross/bin/qmake -spec /sysroot-cross/usr/mkspecs/qws/linux-imx6-g++ camtest.pro then make you should set and activate your framebuffers with this script ################# snip ################################ fbset -fb /dev/fb0 -g 1024 768 1024 2304 16 echo -n 0 > /sys/class/graphics/fb0/blank fbset -fb /dev/fb1 -g 1024 768 1024 1536 32 echo -n 0 > /sys/class/graphics/fb1/blank modprobe galcore modprobe uvcvideo modprobe mxc_v4l2_capture ################# snip ################################ if you use directfb then your /etc/directfbrc file should look like this: ######################## snip /etc/directfbrc ############# system=fbdev fbdev=/dev/fb1 mode=1024x768 depth=32 pixelformat=ARGB no-cursor window-surface-policy=systemonly ######################## snip /etc/directfbrc ############# to start the application with directfb: ./camtest -qws -display directfb without directfb using linuxfb: ./camtest -qws -display linuxfb:/dev/fb1 Notes about application: 1. The application shows 2 webcams in background-framebuffer (BG-FB). The foreground-framebuffer (FG-FB) shows the qt-gui. FG-FB is configured to be fully opaque and uses color-keying. On the BG-FB one cam is overlayed on the other cam using IPU. Optimization possibilities: the app copies the frames from the cams with memcpy. This wouldn't be necessary, when the kernel usb-webcam interface (uvc) would support V4L2_MEMORY_USERPTR method. through this way, you could pass the mapped IPU mmapped inbufs directly to v4l2 output buffers. If you get errors like NOSPC (-28) from uvc, this is a limitation of USB. My board is a MX6QSabre, where the two webcams are connected to the same usb-controller. With both webcams I had to limit the frame size to 320x250 and 160x120 at 25Hz. You might try higher res if you have other type of webcams (not usb). Have fun  
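The manual install step above can also be collected into a small shell script. This is only a sketch that assumes the same /Qt-5.15 and /sysroot-cross paths used in this article; note that the mkspecs destination must match the -spec path passed to qmake later (/sysroot-cross/usr/mkspecs):
####################### snip install-qt-static.sh ##############################
#!/bin/sh
# sketch: install the statically built Qt into the cross sysroot (paths assumed from this article)
QT=/Qt-5.15
SYSROOT=/sysroot-cross
# host tools needed to build Qt applications (must end up somewhere in PATH)
cp $QT/bin/moc $QT/bin/uic $QT/bin/rcc $QT/bin/qmake $SYSROOT/bin/
# mkspecs, plugins, headers and static libraries
mkdir -p $SYSROOT/usr/mkspecs $SYSROOT/usr/plugins $SYSROOT/usr/include $SYSROOT/usr/lib
cp -r $QT/mkspecs/. $SYSROOT/usr/mkspecs/
cp -r $QT/plugins/. $SYSROOT/usr/plugins/
cp -r $QT/include/. $SYSROOT/usr/include/
cp -r $QT/lib/.     $SYSROOT/usr/lib/
####################### snip install-qt-static.sh ##############################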
View full article
The i.MX 8QuadXPlus Multisensory Enablement Kit (MEK) is an NXP development platform based on Cortex-A35 + Cortex-M4 cores. Built with high-level integration to support graphics, video, image processing, audio, and voice functions, the i.MX 8X processor family is ideal for safety-certifiable and efficient performance requirements. This tutorial shows how to enable the Cortex-M4 using the MCUXpresso SDK package and load the binary from the network.
NOTE: It is also possible to load the Cortex-M4 image from the SCFW using the imx-mkimage utility, but here we are going to focus on MCUXpresso.
Setting up the machine
Install cmake on the host machine:
$ sudo apt-get install cmake
Download the armgcc toolchain and export its location as ARMGCC_DIR:
$ export ARMGCC_DIR=<your_path_to_arm_gcc>/gcc-arm-none-eabi-9-2020q2/
NOTE: The ARMGCC_DIR variable needs to be exported on the terminal used for compilation.
To set up the TFTP server on the host machine:
Configuring your Host PC for TFTP
The first step is to install all the prerequisite packages for TFTP:
$ sudo apt-get install xinetd tftpd tftp
Create a TFTP folder in your desired location with root owner and the "rwx" permission for all users:
$ sudo mkdir /tftpboot
$ sudo chmod -R 777 /tftpboot
$ sudo chown -R root /tftpboot
Create a configuration file for the TFTP service with the following content (the server_args parameter must match the folder created above):
$ cat /etc/xinetd.d/tftp
service tftp
{
protocol = udp
port = 69
socket_type = dgram
wait = yes
user = root
server = /usr/sbin/in.tftpd
server_args = -s /tftpboot
disable = no
}
Restart the xinetd service:
$ sudo /etc/init.d/xinetd restart
You can place any file in the TFTP folder and load it through U-Boot. You can also create symbolic links from your build directory to avoid copying your zImage and dtb files every time.
Configuring your Host PC for NFS
Install all the needed packages for NFS:
$ sudo apt-get install nfs-kernel-server
Create a folder for placing your rootfs:
$ mkdir /tftpboot/rfs
Add the following line at the end of your /etc/exports file:
/tftpboot/rfs *(rw,no_root_squash,no_subtree_check)
Restart the NFS service:
$ sudo service nfs-kernel-server restart
Place your rootfs or create a symbolic link for it in the NFS folder.
Downloading the SDK
Download the MCUXpresso SDK following these steps:
1. Click on "Select Development Board".
2. Select MEK-MIMX8QX under "Select a Device, Board, or Kit" and click on "Build MCUXpresso SDK" on the right.
3. Select "Host OS" as Linux and "Toolchain/IDE" as GCC ARM Embedded.
4. Add "FreeRTOS" and all the wanted middleware and hit "Request Build".
5. Wait for the SDK to build and download the package.
Building the image
All demos and code examples available in the SDK package are located in the directory <<SDK_dir>>/boards/mekmimx8qx/. This tutorial shows how to build and flash the hello_world demo, but similar procedures can be applied to any example (demo, driver, multicore, etc.) in the SDK.
To build the demo, enter the armgcc folder under the demo directory and make sure that the ARMGCC_DIR variable is set correctly:
$ cd ~/SDK_2.3.0_MEK-MIMX8QX/boards/mekmimx8qx/demo_apps/hello_world/armgcc
$ export ARMGCC_DIR=<your_path_to_arm_gcc>/gcc-arm-none-eabi-9-2020q2/
Run the build_release.sh script to build the code:
$ ./build_release.sh
NOTE: If needed, give the script execution permission by running chmod +x build_release.sh.
This generates the M4 binary (hello_world.bin) under the release folder.
Copy this image to the /tftpboot/ directory on the host PC.
NOTE: This procedure shows how to build the M4 image that runs on TCM. To run the image from DDR, use the build_ddr_release.sh script, which builds the binary under the ddr_release folder.
Flashing the image
Open two serial consoles: /dev/ttyUSB0 for the Cortex-A35 to boot Linux, and /dev/ttyUSB1 for the Cortex-M4 to run the SDK image. On the A35 console, boot from an SD card with U-Boot, stop the boot process, and enter the following commands to load the M4 binary into TCM:
=> dhcp
=> setenv serverip <ip_from_host_pc>
=> tftp 0x88000000 hello_world.bin
=> dcache flush
=> bootaux 0x88000000
The M4 core then runs the image and prints its output on the /dev/ttyUSB1 console.
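To avoid retyping this sequence on every boot, the commands can be stored in a U-Boot environment variable. A minimal sketch, assuming the same file name and 0x88000000 load address used above (the variable name m4boot is just an example):
=> setenv serverip <ip_from_host_pc>
=> setenv m4boot 'dhcp; tftp 0x88000000 hello_world.bin; dcache flush; bootaux 0x88000000'
=> saveenv
=> run m4boot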
View full article
SW Environment Setup:
1. Prepare the L5.10.35 Yocto build and build the image. The prebuilt image is also available and usable.
2. Flash the image to the SD card. Refer to the Yocto User Guide.
3. Compile flash.bin without M4 and flash it to the SD card (flash.bin as attachment):
make SOC=iMX8QM flash
sudo dd if=flash.bin of=/dev/sde bs=1k seek=32 conv=fsync
HW Environment Setup:
Prepare the i.MX8QM MEK CPU board, the base board, and a DB9 male cable, and connect it to the CAN0 and CAN1 female connectors on the base board (pin-to-pin connection).
Use Case:
1. Power on the board and configure the specific dtb file in U-Boot:
setenv fdt_file imx8qm_mek.dtb
2. Boot up and configure the bitrate for can0 and can1 in the kernel:
root@imx8qmmek:~# ip link set can0 up type can bitrate 500000
root@imx8qmmek:~# ip link set can1 up type can bitrate 500000
3. Check the CAN0 and CAN1 devices:
root@imx8qmmek:~# ifconfig
can0: flags=193<UP,RUNNING,NOARP>  mtu 16
      unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 10  (UNSPEC)
      RX packets 0  bytes 0 (0.0 B)
      RX errors 0  dropped 0  overruns 0  frame 0
      TX packets 0  bytes 0 (0.0 B)
      TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
      device interrupt 85
can1: flags=193<UP,RUNNING,NOARP>  mtu 16
      unspec 00-00-00-00-00-00-00-00-00-00-00-00-00-00-00-00  txqueuelen 10  (UNSPEC)
      RX packets 0  bytes 0 (0.0 B)
      RX errors 0  dropped 0  overruns 0  frame 0
      TX packets 0  bytes 0 (0.0 B)
      TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
      device interrupt 86
4. Run candump for CAN1:
root@imx8qmmek:~# candump can1 &
[1] 1215
[  65.624580] can: controller area network core
[  65.630225] NET: Registered protocol family 29
[  65.641158] can: raw protocol
5. Run cansend for CAN0:
root@imx8qmmek:~# cansend can0 5A1#11.2233.44556677.88
can1  5A1   [8]  11 22 33 44 55 66 77 88
The last line above is the frame received and printed by candump on CAN1.
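The cansend and candump utilities use the standard Linux SocketCAN API, so the same transfer can also be done from an application. Below is a minimal C sketch (not part of the original test case) that sends the same ID 0x5A1 frame with data 11 22 33 44 55 66 77 88 on can0; error handling is omitted for brevity:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/can.h>
#include <linux/can/raw.h>

int main(void)
{
    struct sockaddr_can addr = {0};
    struct ifreq ifr = {0};
    struct can_frame frame = {0};

    int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);        /* raw CAN socket */

    strcpy(ifr.ifr_name, "can0");                     /* resolve can0 to an interface index */
    ioctl(s, SIOCGIFINDEX, &ifr);

    addr.can_family  = AF_CAN;
    addr.can_ifindex = ifr.ifr_ifindex;
    bind(s, (struct sockaddr *)&addr, sizeof(addr));

    frame.can_id  = 0x5A1;                            /* standard 11-bit identifier */
    frame.can_dlc = 8;
    memcpy(frame.data, "\x11\x22\x33\x44\x55\x66\x77\x88", 8);

    write(s, &frame, sizeof(frame));                  /* equivalent to: cansend can0 5A1#1122334455667788 */
    close(s);
    return 0;
}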
View full article
This note shows how to use the open source gstreamer1.0-rtsp-server package on i.MX6QDS and i.MX8x to stream video files and cameras using the RTP protocol. Note that the i.MX 6ULL and i.MX 7 do not have a Video Processing Unit (VPU). The Real-time Transport Protocol (RTP) is a very common network protocol for delivering media over IP networks.
On the board, you will need a GStreamer pipeline that encodes the raw video, adds the RTP payload, and sends it over a network sink. A generic pipeline would look as follows:
video source ! video encoder ! RTP payload ! network sink
Video source: often a camera, but it can also be a video from a file or a test pattern, for example.
Video encoder: a video encoder such as H.264, H.265, VP8, JPEG and others.
RTP payload: an RTP payloader that matches the video encoder.
Network sink: a sink that streams over the network, often via UDP.
Prerequisites:
- An i.MX6x or i.MX8x board with the L5.10.35 BSP installed.
- A host PC with either GStreamer or VLC player installed.
Receiving an h.264/h.265 Encoded RTP Video Stream on a Host Machine Using GStreamer
GStreamer is a low-latency method for receiving RTP video. On your host machine, install GStreamer and run the following command:
$ gst-launch-1.0 -v udpsrc port=5000 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! autovideosink sync=false
Using Host PC: VLC Player
Optionally, you can use VLC player to receive the RTP video on a PC. First, on your PC, create an sdp file (stream.sdp) with the following content:
v=0
m=video 5000 RTP/AVP 96
c=IN IP4 127.0.0.1
a=rtpmap:96 H264/90000
After this, with the GStreamer pipeline on the device running, open this .sdp file with VLC Player on the host PC.
Sending an h.264 or h.265 Encoded RTP Video Stream
GStreamer provides a software h.264 encoding element named x264enc. Use this plugin if your board does not support h.264 encoding in hardware or if you want to use the same pipeline on different modules. Note that the video performance will be lower compared with the plugins whose encoding is accelerated by hardware.
# gst-launch-1.0 videotestsrc ! videoconvert ! x264enc ! rtph264pay config-interval=1 pt=96 ! udpsink host=<host-machine-ip> port=5000
Note: Replace <host-machine-ip> with the IP of the host machine. In all examples you can replace videotestsrc with the v4l2src element to collect a stream from a camera.
i.MX8X
# gst-launch-1.0 videotestsrc ! videoconvert ! v4l2h264enc ! rtph264pay config-interval=1 pt=96 ! udpsink host=<host-machine-ip> port=5000
i.MX 8M Mini Quad / 8M Plus
# gst-launch-1.0 videotestsrc ! videoconvert ! vpuenc_h264 ! rtph264pay config-interval=1 pt=96 ! udpsink host=<host-machine-ip> port=5000
i.MX6X
The i.MX6QDS does not support h.265, so use h.264:
# gst-launch-1.0 videotestsrc ! videoconvert ! vpuenc_h264 ! rtph264pay config-interval=1 pt=96 ! udpsink host=<host-machine-ip> port=5000
Using Other Video Encoders
While examples of streaming video with other encoders are not provided, you may try it yourself. Use the gst-inspect tool to find the available encoders and RTP payloaders on the board:
# gst-inspect-1.0 | grep -e "encoder"
# gst-inspect-1.0 | grep -e "rtp" -e " payloader"
Then browse the results and replace the elements in the original pipelines. On the receiving end, you will have to use a corresponding payloader. Inspect the payloader element to find the corresponding values.
For example:
# gst-inspect-1.0 rtph264pay
Installing gstreamer1.0-rtsp-server in Yocto
If your Yocto build differs from the L5.10.35 BSP, you can install gstreamer1.0-rtsp-server in any Yocto Project image by following the steps below:
Enable the meta-multimedia layer by adding the following to your build/conf/bblayers.conf:
BBLAYERS += "${BSPDIR}/sources/meta-openembedded/meta-multimedia"
Include gstreamer1.0-rtsp-server in the image by adding the following to your build/conf/local.conf:
IMAGE_INSTALL_append = " gstreamer1.0-rtsp-server"
Run bitbake and mount your SD card.
Copy the binaries:
Access the gstreamer1.0-rtsp-server examples folder:
$ cd /build/tmp/work/cortexa9hf-vfp-neon-poky-linux-gnueabi/gstreamer1.0-rtsp-server/$version/build/examples/.libs
Copy test-uri and test-launch to the rootfs /usr/bin folder:
$ sudo cp test-uri test-launch /media/USER/ROOTFS_PATH/usr/bin
Be sure that the IPs are correctly set:
SERVER: => ifconfig eth0 $SERVERIP
CLIENT: => ifconfig eth0 $CLIENTIP
Video file example
SERVER:
=> test-uri file:///home/root/video_file.mp4
CLIENT:
=> gst-launch-1.0 playbin uri=rtsp://$SERVERIP:8554/test
You can try to improve the framerate performance using manual pipelines on the CLIENT with the rtspsrc plugin instead of playbin. For example:
=> gst-launch-1.0 rtspsrc location=rtsp://$SERVERIP:8554/test caps = 'application/x-rtp' ! queue max-size-buffers=0 ! rtpjitterbuffer latency=100 ! queue max-size-buffers=0 ! rtph264depay ! queue max-size-buffers=0 ! decodebin ! queue max-size-buffers=0 ! imxv4l2sink sync=false
Camera example
SERVER:
=> test-launch "( imxv4l2src device=/dev/video0 ! capsfilter caps='video/x-raw, width=1280, height=720, framerate=30/1, mapping=/test' ! vpuenc_h264 ! rtph264pay name=pay0 pt=96 )"
CLIENT:
=> gst-launch-1.0 rtspsrc location=rtsp://$SERVERIP:8554/test ! decodebin ! autovideosink sync=false
The rtspsrc element has two properties that are very useful for RTSP streaming:
Latency: useful for low-latency RTSP stream playback (default 200 ms);
Buffer-mode: used to control the buffering mode. The slave mode is recommended for low-latency communication.
Using these properties, the example below gets 29 FPS without a sync=false property in the sink plugin. The key achievement here is the fact that there are no dropped frames:
=> gst-launch-1.0 rtspsrc location=rtsp://$SERVERIP:8554/test latency=100 buffer-mode=slave ! queue max-size-buffers=0 ! rtph264depay ! vpudec ! imxv4l2sink
View full article
Use case: an i.MX8QXP system can be a video input source to another system.
Hardware pins:
LCDIF_D00 ~ LCDIF_D07
LCDIF_CLK
LCDIF_VSYNC
LCDIF_HSYNC
LCDIF_EN
Reference patch:
It is based on the L5.4.70_2.3.0 GA BSP.
File: L5.4.70_2.3.0-iMX8QXP-LCDIF-add-YUV422-8-bits-output.patch
Customers can change the timing parameters in the file "panel-lcdif-yuv422.c" as needed; the default timing is a 1280x720 P30 mode:
static const struct display_timing yuv422_lcd_timing = {
    .pixelclock = { 74250000, 74250000, 74250000 },
    .hactive = { 1280, 1280, 1280 },
    .hfront_porch = { 220, 220, 220 },
    .hback_porch = { 110, 110, 110 },
    .hsync_len = { 40, 40, 40 },
    .vactive = { 720, 720, 720 },
    .vfront_porch = { 20, 20, 20 },
    .vback_porch = { 5, 5, 5 },
    .vsync_len = { 5, 5, 5 },
    .flags = DISPLAY_FLAGS_DE_HIGH,
};
Test applications:
drm_test_yuv.zip: it sets the framebuffer to UYVY mode; in this case no CSC is needed, and the data in framebuffer memory will be the same as the output on the display data interface.
drm_test_rgb.zip: it sets the framebuffer to RGBA mode; in this case an RGB-to-YUV CSC is needed. The application can draw RGB data into the framebuffer as normal, and the LCDIF will convert it to YUV422 format on the fly, then output the YUV data to the display interface.
View full article
This article describes one suggestion for an issue where the UART continuously generates RX interrupts and receives 0xFF even when the RX line is continuously high, in some cases on the i.MX6 series. Below I will explain using the i.MX6DL; some settings are only chosen to make the issue easier to reproduce.
BSP version: L5.4.70-2.3.0
Board HW: MCIMX6DL-SDB
When the issue happens
Configure i.MX6DL UART3 as a serial port at 1200 baud, 8-N-1 format. Keep the RX line high. Then make the RX line low and keep it low for a short time (360 usec - 370 usec).
Under this condition, you will find that the UART continuously generates RX interrupts and shows that it is receiving 0xFF, even after you return the RX line to high.
Why the issue happens
The low time is not in the correct range and is out of our spec. In the i.MX6DL AEC document there is a chapter named UART Receiver.
At 1200 baud, one valid bit time is 833 usec, and there is a definition that the receiver can "tolerate 1/(16 x Fbaud_rate) tolerance in each bit". That means that in the case of 1200 baud the valid range for one bit is 781 to 885 usec. But in the reproduction above, the low-level time is 360 usec. This out-of-range time confuses the UART state machine.
How to fix
Actually, the best way is to follow our spec. If such an unknown situation can occur in the customer's environment, then the following method can be regarded as a suggestion to fix the issue:
1. The interrupt handler checks USR1[AWAKE].
2. If AWAKE is asserted, clear it and proceed as usual (assume we have valid data); otherwise, check whether USR1[AGTIM] is asserted.
3. If AGTIM is asserted, clear it and proceed as usual; otherwise, perform a software reset (assume we have invalid data).
Checking AGTIM covers a race condition where the RX FIFO holds some characters (fewer than RXTL) but no more data is coming in. When following this procedure, the UART will perform a software reset when a block interrupt occurs.
Notes: From the customer report, the error could also be cleared if a valid start bit is detected on the RX line. This needs to be verified by the customer themselves. Test code has been included in the attachment.
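For reference, a C-style sketch of the suggested interrupt-handler logic is shown below. The USR1_AWAKE/USR1_AGTIM masks and the software-reset helper are placeholders; check the i.MX6DL reference manual and the attached test code for the real bit positions and implementation:

/* sketch of the suggested RX interrupt handling (see the attachment for the real patch) */
u32 usr1 = readl(port_base + USR1);

if (usr1 & USR1_AWAKE) {                        /* steps 1/2: falling edge seen on RX -> valid data expected */
        writel(USR1_AWAKE, port_base + USR1);   /* write-1-to-clear AWAKE, then proceed as usual */
} else if (usr1 & USR1_AGTIM) {                 /* step 3: ageing timer, FIFO holds < RXTL chars, no new data */
        writel(USR1_AGTIM, port_base + USR1);   /* clear AGTIM, then proceed as usual */
} else {
        /* RX interrupt with neither AWAKE nor AGTIM set: assume invalid data */
        uart_software_reset(port_base);         /* placeholder helper: software reset via UCR2[SRST] */
}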
View full article
Changing the A53 debug console consists of several major updates: RDC settings, pin mux, clocks, and ecosystem updates.
View full article
In some cases a GPIO needs to be configured as a power-off button. One solution is to use the "gpio-keys" driver to send the "KEY_POWER" event to the system. Working together with systemd, the system is then powered off.
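A minimal device tree sketch of such a power button (the GPIO bank/pin and polarity below are placeholders and must be adapted to the real board design):

#include <dt-bindings/input/input.h>
#include <dt-bindings/gpio/gpio.h>

gpio-keys {
        compatible = "gpio-keys";

        power {
                label = "Power Button";
                gpios = <&gpio1 3 GPIO_ACTIVE_LOW>;   /* placeholder: use the real GPIO here */
                linux,code = <KEY_POWER>;
                wakeup-source;
        };
};

With this node, pressing the button delivers KEY_POWER to user space; systemd-logind handles it (HandlePowerKey defaults to poweroff), so the system shuts down.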
View full article
Steps to replace the Wi-Fi/Bluetooth firmware on the i.MX 8M series on Linux
Applicable to versions L5.4.47, L5.4.70, L5.10.9
1. Download the newest firmware: you can download the attachment in this thread and unzip it.
2. Copy it to the EVK board.
3. Copy the firmware to /lib/firmware/nxp:
root@imx8mmevk: cp pcieuart8997_combo_v4.bin sdiouart8987_combo_v0.bin /lib/firmware/nxp
If the Linux version is L5.4.3, then in step 3 copy the firmware to /lib/firmware/mrvl/ instead:
root@imx8mmevk: cp pcieuart8997_combo_v4.bin sdiouart8987_combo_v0.bin /lib/firmware/mrvl
View full article
Purpose This is early communication to notify i.MX 8M Dual/8M QuadLite/8M Quad customers of a potential incorrect PCIe power supply configuration on certain NXP BSP Linux and Android versions. Description The PCIE_VPH power supply is selectable in software  between 1.8V and 3.3V. When the PCIE_VPH supply is configured to operate at 3.3V, the 1.8V internal regulator (disabled by default) must be enabled to prevent overstress conditions on the PCIe PHY. If the 1.8V internal regulator is left disabled when the PCIE_VPH supply is configured to operate at 3.3V, it could potentially impact the product lifetime of the device.   Impact •i.MX 8M Dual/8M QuadLite/8M Quad (other i.MX processors are not impacted) •Only Impacts Linux/Android kernel versions earlier than L5.4.70_2.3.2 or Linux 5.10.9_1.0.0 releases MITIGATION •When the PCIE_VPH supply is configured to operate at 3.3V users need to enable the internal regulator by setting the IOMUXC_GPR_GPR14 and IOMUXC_GPR_GPR16 registers - PCIE1_VREG_BYPASS and PCIE2_VREG_BYPASS bit to 0. •There are 3 software patches for each release. Software patch details in the Code Aurora Forum (CAF): •For L5.4.70_2.3.2 patch release, the git log references are: •MLK-25349-3 PCI: imx: clear vreg bypass when pcie vph voltage is 3v3 •MLK-25349-2 arm64: dts: imx8mq-evk: add one regulator used to power up pcie phy •MLK-25349-1 dt-bindings: imx6q-pcie: add one regulator used to power up pcie phy • •The L5.4.70_2.3.2, LF_5.10 Q2 and later BSP releases correctly configure and enable the internal regulator by setting the IOMUXC_GPR_GPR14 and IOMUXC_GPR_GPR16 registers The Patch MLK-25349 which correctly enables the internal regulator is already included in the L5.4.70_2.3.2 patch release and release versions after it. MITIGATION •The following branches of Linux/Android BSP releases contain the MLK-25349 patch. The patch is attached below for each respective release.   •Other branches which are not listed should try to apply the nearest Patch version patch. If a user encounters any conflicts in applying, they should back porting from below nearest patch release version below. imx_4.9.51_ga, imx_4.9.y_android_imx8m_ga_v2                           - Patch attached  imx_4.9.88_ga, imx_4.9.y_android_2.0.0_ga                                   - Patch attached  imx_4.14.y and imx_4.14.98_2.3.0, imx_4.14.98_2.3.0_android     - Patch attached  imx_4.19.y and imx_4.19.35_1.1.0, imx_4.19.35_1.1.0_android     - Patch attached  imx_5.4.y, imx_5.4.3_2.0.0, imx_5.4.3_2.0.0_android                     - Patch attached Documentation Change Description – 1 of 3 for Datasheet Updated Datasheets and Reference Manual will be published to nxp.com. Updated Hardware Design guide and Schematics have already been published on nxp.com.  
Updated the descriptions of PCIE_VPH in the Datasheet Table 8, "Operating ranges"     Documentation Change Description – 2 of 3 for Reference Manual (RM) Updated the description of field 12 "PCIE1_VREG_BYPASS" in 8.2.4.15 GPR14 General Purpose Register (IOMUXC_GPR_GPR14)           Documentation Change Description – 3 of 3 for RM Updated the description of field 12 "PCIE2_VREG_BYPASS" in 8.2.4.17 GPR16 General Purpose Register (IOMUXC_GPR_GPR16)   REFERENCES •i.MX 8M Dual / 8M QuadLite / 8M Quad Product Lifetime Usage  •i.MX 8M Dual / 8M QuadLite / 8M Quad Applications Processors Data Sheet for Industrial Products •i.MX 8M Dual / 8M QuadLite / 8M Quad Applications Processors Data Sheet for Consumer Products •i.MX 8MDQLQ Hardware Developer’s Guide  •i.MX 8M Dual/8M QuadLite/8M Quad Applications Processors Reference Manual  
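For a quick runtime check on a running system (before the kernel patches are applied), the two bits can also be inspected or cleared from user space. The sketch below is not part of the patch set; the IOMUXC_GPR base address (0x30340000) and the GPR14/GPR16 offsets (0x38/0x40) are assumptions that must be verified against the i.MX 8M Dual/8M QuadLite/8M Quad reference manual. The proper long-term fix remains applying the patches listed above.

/* sketch: clear PCIE1/PCIE2_VREG_BYPASS (bit 12) via /dev/mem - verify addresses in the RM */
#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

#define IOMUXC_GPR_BASE 0x30340000UL   /* assumed IOMUXC_GPR base address */
#define GPR14_OFFSET    0x38           /* assumed IOMUXC_GPR_GPR14 offset */
#define GPR16_OFFSET    0x40           /* assumed IOMUXC_GPR_GPR16 offset */
#define VREG_BYPASS     (1u << 12)     /* PCIEx_VREG_BYPASS field (see the RM description above) */

int main(void)
{
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    volatile uint32_t *gpr = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, fd, IOMUXC_GPR_BASE);

    gpr[GPR14_OFFSET / 4] &= ~VREG_BYPASS;   /* 0 = internal 1.8 V regulator enabled (PCIe1) */
    gpr[GPR16_OFFSET / 4] &= ~VREG_BYPASS;   /* 0 = internal 1.8 V regulator enabled (PCIe2) */

    printf("GPR14=0x%08x GPR16=0x%08x\n", gpr[GPR14_OFFSET / 4], gpr[GPR16_OFFSET / 4]);

    munmap((void *)gpr, 0x1000);
    close(fd);
    return 0;
}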
View full article
Software environment: L5.4.47_2.2.0
Hardware: i.MX8QXP C0 EVK board
In the UUU script we can see that the bootloader imx-boot-imx8qxpc0mek-sd.bin-flash is necessary. The default BSP build generated by the Yocto Project includes the SPL, and some customers are confused about how to build imx-boot-imx8qxpc0mek-sd.bin-flash. Here I describe both the manual compile way and how to generate it in Yocto; generating it in Yocto is more convenient than compiling it manually. Hope this helps.
View full article
The NOR flash on the i.MX8DXL DDR3L EVK board is an MT25QU512ABB8ESF-0SIT. This doc shows a reference FlexSPI configuration to boot from the MT25Q flash with QUAD pads and DDR mode.
HW: i.MX8DXL DDR3L EVK board
SW: Linux 5.4.70 BSP
RM section 5.9.3.2, "FlexSPI serial flash BOOT operation", describes the FlexSPI boot flow.
The FlexSPI configuration parameters can be thought of as two groups: parameters for the FlexSPI controller itself, and parameters related to the operations on the NOR flash. For the full parameter table, check i.MX8DXL RM Table 5-20, FlexSPI Configuration block. Also check the MT25Q data sheet for its features; note that our target is DDR mode (80 MHz) and QUAD pads.
Now let us change the FlexSPI configuration parameters:
1> readSampleClkSrc: set to 2, that is, loopback from the SCK pad. This field defaults to 0; booting with the default value fails in this use case, so change it to 2.
2> deviceModeCfgEnable: set to 1; deviceModeSeq.seqNum: set to 1; deviceModeSeq.seqId: set to 4; deviceModeArg: set to 0x5F. The i.MX8DXL must send a command to the flash to make the MT25Q enter DDR mode and QUAD mode, so deviceModeCfgEnable = 1. seqNum = 1 and seqId = 4 mean that index 4 of the LUT will store this sequence, and it costs one LUT entry (how to change the LUT entries is explained later). For deviceModeArg = 0x5F, check the MT25Q data sheet: its enhanced volatile configuration register can be written to configure the flash working mode.
3> controllerMiscOption: set to 0x40. This parameter is only for the FlexSPI controller itself and means "External device works using DDR commands".
4> deviceType = 1 (Serial NOR), sflashPadType = 4 (QUAD pads), serialClkFreq = 4 (80 MHz clock). These parameters are also only for the FlexSPI controller.
5> sflashA1Size: fill in the actual size, in bytes.
6> LUT entry changes: check i.MX8DXL RM Table 5-21. LUT entry 0 is the sequence for the Read command, entry 1 is the Read Status sequence, entry 3 is the Write Enable sequence, and entry 15 is the Dummy command sequence. The other LUT entries (for example 2, 4, 6, 7, 8, 10, 12, 13, 14) can be used to store sequences for other commands your flash device needs. We store the sequence that writes the MT25Q enhanced volatile configuration register as LUT entry 4.
Check the i.MX8DXL RM, Figure 15-6, LUT and sequence structure. Each LUT entry (sequence) uses 16 bytes; one sequence consists of up to 8 instructions, and each instruction uses 16 bits. Each instruction has the format opcode - num_pads - operand. Check RM 15.2.4.8, Programmable Sequence Engine, for the supported instructions.
The Write Enable sequence actually runs before the other sequences, because we will write the MT25Q volatile register and a Write Enable must be issued before that (check the MT25Q data sheet). This sequence only needs one instruction, 0x0406, still using SDR and single-pad mode: opcode CMD_SDR, one pad (0), operand 0x06.
LUT entry 1, the Read Status sequence, uses the READ STATUS REGISTER (05h) command of the MT25Q (check the data sheet). It uses two instructions:
0x0405: opcode CMD_SDR, one pad, operand 0x05 (READ STATUS REGISTER)
0x2404: opcode READ_SDR, one pad, operand 0x04 (byte count)
LUT entry 4 makes the MT25Q enter DDR mode and QUAD pads (from the MT25Q data sheet). It uses two instructions:
0x0461: opcode CMD_SDR, one pad (0), operand 0x61 (WRITE ENHANCED VOLATILE CONFIGURATION REGISTER)
0x2001: opcode WRITE_SDR, one pad (0), operand 0x01 (1 data byte)
The 0x5F will be sent out as the data byte.
Next, check the Read LUT entry; at this point the MT25Q has already entered QUAD pad and DDR mode. LUT entry 0, the Read sequence, performs a fast read from the MT25Q (from the data sheet) and uses four instructions:
0x86ED: opcode CMD_DDR, four pads, operand 0xED (fast read)
0x8A18: opcode RADDR_DDR, four pads, operand 0x18 (three address bytes)
0xB210: opcode DUMMY_DDR, four pads, operand 0x10 (dummy cycles)
0xA604: opcode READ_DDR, four pads, operand 0x04 (data bytes)
Reference:
1. i.MX8DXL Reference Manual
2. MT25Q data sheet
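Collecting the instruction words above, the four relevant LUT entries could be expressed as shown below. This is only a sketch based on the values derived in this article; the packing of two 16-bit instructions into each 32-bit LUT word (lower halfword executed first) and the exact configuration-block layout should be verified against the i.MX8DXL reference manual.

#include <stdint.h>

/* sketch: LUT entries derived above; each entry is 4 x 32-bit words,
 * each word holds two 16-bit instructions, unused words are 0 (STOP) */

/* entry 0 - Read (0xED fast read, 4 pads, DDR): 0x86ED, 0x8A18, 0xB210, 0xA604 */
static const uint32_t lut_read[4]    = { 0x8A1886ED, 0xA604B210, 0x00000000, 0x00000000 };

/* entry 1 - Read Status (0x05, 1 pad, SDR): 0x0405, 0x2404 */
static const uint32_t lut_rdsr[4]    = { 0x24040405, 0x00000000, 0x00000000, 0x00000000 };

/* entry 3 - Write Enable (0x06, 1 pad, SDR): 0x0406 */
static const uint32_t lut_wren[4]    = { 0x00000406, 0x00000000, 0x00000000, 0x00000000 };

/* entry 4 - Write Enhanced Volatile Configuration Register (0x61 + one data byte 0x5F): 0x0461, 0x2001 */
static const uint32_t lut_wr_evcr[4] = { 0x20010461, 0x00000000, 0x00000000, 0x00000000 };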
View full article
1.  Introduction 1.1.        Purpose This application note introduces a procedure of how to port AVB/TSN stack and run referring feature demos on i.MX8DXL board. This can help users who want to run AVB/TSN demos to quickly understand and customized their own codes. Since many of the standards are only for TSN switch/bridges and i.MX8DXL is design to be a TSN/AVB endpoint, the demos did not implement a full stack or full standards. They only demonstrated the basic end-to-end point (talker to listener) A/V streaming without bridge or switch. The software used for example in this documentation are based on the opensource such as gstreamer and alsa utils.   1.2.        Overview 1.2.1.     AVB/TSN AVB (Audio Video Bridging) is a common name for the set of technical standards which provide improved synchronization, low-latency, and reliability for switched ethernet network. AVB was initially developed by the IEEE Audio Video Bridging task group of the IEEE 802.1 standards committee. In November 2012, AVB group was renamed to TSN (Time-Sensitive Networking) task group to reflect the expanded scope of its work, which is to provide the specifications that will allow time-synchronized low latency streaming services through IEEE 802 networks. The referring standards shows as follows:     TSN protocol additions QoS components supported in HW TSN MAC + SW driver Managed Object components expose i/f to allow support of standardized network config protocols (local & remote) Transport API to allow other transport layer to use TSN QoS Stack extensions to map traffic priority to application task scheduling Real Time, gPTP based, Best Effort 1.2.2.     Demo introduction   The two streams are defined as below to grantee time sensitive (sub-microsecond synchronization), low latency and bandwidth on the ethernet: Stream A: SR class A, AVTP Audio Format, PCM 16-bit sample, 48 kHz, stereo, 12 frames per AVTPDU. Stream B: SR class B, AVTP Compressed Video Format, H.264 profile High, 1920x1080, 30 fps. The two TSN streams would be allocated into different TC (traffic control) class for egress. Different TC class would be mapped to different hardware queues with specific DMA channel which supported by ENET_OoS IP. The demos were built by follow blocks:   Linux Traffic Control: streams egress control Linux ptp: clock sync in network Libavtp: Time Sensitive Applications AV Transport protocol Gstreamer: avtp plugin uses the libavtp to transmit and receive AVTP audio/video (audio pcm, video h264).   1.2.3.     Traffic control Multiply queue qdiscs + CBS: The CBS class is actually handled by hardware IP to select which queue for transmitting.   CBS parameters come straight from the IEEE 802.1Q-2018 specification. They are the following: idleSlope: rate credits are accumulated when queue isn’t transmitting; sendSlope: rate credits are spent when queue is transmitting; hiCredit: maximum amount of credits the queue is allowed to have; loCredit: minimum amount of credits the queue is allowed to have;     2.  Build demo 2.1.        Build yocto $ DISTRO=fsl-imx-xwayland MACHINE=imx8dxlevk source imx-setup-release.sh -b ./xwayland $ bitbake imx-image-full Prepare a SD card and burn it with the built out images. 2.2.        Rebuild kernel Rebuild the kernel after applying the 0001-qenet-add-queue-avoid-panic.patch, and overwrite the Image and imx8dxl-evk.dtb on the boot partition of the SD card. 2.3.        
Install the toolchain $ bitbake -f fsl-image-validation-imx -c populate_sdk $ sh tmp/deploy/sdk/fsl-imx-xwayland-glibc-x86_64-fsl-image-validation-imx-aarch64-imx8dxlevk-toolchain-5.4-zeus.sh The toolchain would be installed into /opt/fsl-imx-xwayland/5.4-zeus   2.4.        Create a install folder $ mkdir <your install folder> Create a folder to install all of the shared libraries, binaries and configure files which built out manually in this doc. After built done, you should copy all of the contents in this folder to target board root.   2.5.        Build libavtp $ source /opt/fsl-imx-xwayland/5.4-zeus/environment-setup-aarch64-poky-linux $ git clone https://github.com/Avnu/libavtp.git $ cd libavtp $ meson build --prefix=<your install folder>/usr $ ninja -C build Copy the built out .so and .pc into the toolchain rootfs: $ sudo cp build/libavtp.so* /opt/fsl-imx-xwayland/5.4-zeus/sysroots/aarch64-poky-linux/usr/lib $ sudo cp build/meson-private/*.pc /opt/fsl-imx-xwayland/5.4-zeus/sysroots/aarch64-poky-linux/usr/lib/pkgconfig/ Copy the .so into the install folder: $ cp build/libavtp.so* <install folder>/usr/lib/ To make sure you have avtp package installed correctly:     $ pkg-config --list-all | grep avtp   2.6.        Build ALSA aaf plugin $ cd <yocto build>/tmp/work/aarch64-poky-linux/alsa-plugins/1.1.9-r0/alsa-plugins-1.1.9 $ ./configure --build=x86_64-linux --host=aarch64-poky-linux --target=aarch64-poky-linux --prefix=<install folder>/usr --disable-silent-rules --disable-dependency-tracking --with-libtool-sysroot=<yocto build>/xwayland/tmp/work/aarch64-poky-linux/alsa-plugins/1.1.9-r0/recipe-sysroot --disable-static --enable-aaf --disable-jack --disable-libav --disable-maemo-plugin --disable-maemo-resource-manager --enable-pulseaudio --enable-samplerate --with-speex=lib $ make $ make install   2.7.        Build Gstreamer AVTP plugins (1.17.x) 2.7.1.     Build Gstreamer core $ git clone https://gitlab.freedesktop.org/gstreamer/gstreamer.git $ patch -p1 < gstreamer-1.0-pass-build.patch $ meson build --prefix=<install folder>/usr $ ninja -C build $ sudo ninja -C build install After Gstreamer is installed into <your install folder>, please fix the “prefix” path in the .pc files by, and copy to the toolchain folders: $ cd <your install folder> $ grep -lR <your install folder> ./lib/pkgconfig/ | xargs sed -i 's/<your install folder>/\/usr/g' $ cp -rf ./usr/* /opt/fsl-imx-xwayland/5.4-zeus/sysroots/aarch64-poky-linux/usr/ 2.7.2.     Build gst-plugins-base $ git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-base.git $ cd gst-plugins-base $ patch -p1 < gst-plugins-base-pass-build.patch $ meson build --prefix=<your install folder>/usr $ ninja -C build $ sudo ninja -C build install   2.7.3.     Build gst-plugins-bad $ git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad.git $ cd gst-plugins-bad $ meson build --prefix=<your install folder>/usr $ ninja -C build $ sudo ninja -C build install   After gst-plugins-base and gst-plugins-bad installed into <your install folder>, please fix the “prefix” path in the .pc files and copy them into the toolchain folders: $ cd <your install folder> $ grep -lR <your install folder> ./lib/pkgconfig/ | xargs sed -i 's/<your install folder>/\/usr/g' $ cp -rf ./usr/* /opt/fsl-imx-xwayland/5.4-zeus/sysroots/aarch64-poky-linux/usr/   2.8.        Build H.264 SW plugins 2.8.1.     
Build x264 As the yocto actually has the x264 recipes, but not included in our bblayers, we need to copy the x264 source into our bblayers path under <yocto>/source to build: $ cp -rf ./poky/meta/recipes-multimedia/x264 ./meta-openembedded/meta-multimedia/recipes-multimedia/ $ vi ./meta-openembedded/meta-multimedia/recipes-multimedia/x264_git.bb Remove the LICENSE_FLAGS line $ bitbake -f x264 -c do_install $ sudo cp -rf tmp/work/aarch64-poky-linux/x264/r2917+gitAUTOINC+72db437770-r0/image/usr/* /opt/fsl-imx-xwayland/5.4-zeus/sysroots/aarch64-poky-linux/usr/ 2.8.2.     Build gst-plugins-ugly $ git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-ugly.git $ cd gst-plugins-ugly $ meson build --prefix=<your install folder>/usr $ ninja -C build $ sudo ninja -C build install   2.8.3.     Build libav $ cp -rf poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav meta-openembedded/meta-multimedia/recipes-multimedia/gstreamer-1.0/ Remove the LICENSE_FLAGS line $ vim ./poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav_1.16.2.bb $ bitbake -f gstreamer1.0-libav -c do_install $ cp /opt/samba/nxf39444/imx-yocto-bsp-i.mx8dxl/xwayland/tmp/work/aarch64-poky-linux/gstreamer1.0-libav/1.16.2-r0/image/usr/lib/gstreamer-1.0/libgstlibav.so <your install folder>/usr/lib/gstreamer-1.0   2.8.4.     Install binaries Final step is to copy all of your built out files from <your install folder> into your board / root, and boot up the board. $ export GST_PLUGIN_PATH=/usr/lib/gstreamer-1.0/ $ gst-inspect-1.0 To check if the above Gstreamer plugins we built out can be found by gst-instpect.   3.  System Setup 3.1.        VLAN The ENTE_QoS is assigned to eth0 instance. So create eth0.5 for vlan id 5: $ ip link add link eth0 name eth0.5 type vlan id 5 egress-qos-map 2:2 3:3 $ ip link set eth0.5 up   3.2.        Qdiscs The TSN control plane is implemented through the TC (Traffic Control) system. The transmission algorithms specified in the FQTSS (Forwarding and Queuing for Time-Sensitive Streams) chapter of IEEE 802.1Q-2018 are supported via TC Qdiscs (Queuing Discipline). 3.2.1.     MQPRIO qdisc $ tc qdisc add dev eth0 parent root handle 100 mqprio num_tc 3 map 0 0 2 1 0 0 0 0 0 0 0 0 0 0 0 0 queues 1@0 1@1 1@2 hw 1 3.2.2.     CBS qdisc Q1 CBS for audio, Q2 CBS for video: $ tc qdisc replace dev eth0 parent 100:3 handle 888 cbs idleslope 3648 sendslope -996352 hicredit 12 locredit -113 offload 1 $ tc qdisc replace dev eth0 parent 100:2 handle 777 cbs idleslope 98688 sendslope -901312 hicredit 153 locredit -1389 offload 1 3.2.3.     TimeSync Run the ptp4l and phc2sys in background, and use check_clocks to check the ptp sync works. $ ptp4l -i eth0 -f ./gPTP.cfg --step_threshold=1 & $ pmc -u -b 0 -t 1 "SET GRANDMASTER_SETTINGS_NP clockClass 248 clockAccuracy 0xfe offsetScaledLogVariance 0xffff currentUtcOffset 37 leap61 0 leap59 0 currentUtcOffsetValid 1 ptpTimescale 1 timeTraceable 1 frequencyTraceable 0 timeSource 0xa0" $ ./check_clocks -d eth0 4.  Run demo 4.1.        
ALSA AAF audio To run the alsa AAF demo, please add aaf0 and converter0 plugin device into /etc/asound.conf: pcm.aaf0 {    type aaf    ifname eth0.5    addr 01:AA:AA:AA:AA:AA    prio 2    streamid AA:BB:CC:DD:EE:FF:000B    mtt 50000    time_uncertainty 1000    frames_per_pdu 12    ptime_tolerance 100 } pcm.converter0 {    type linear    slave {                  pcm "hw:0,0"                  format S16_LE    } } The “aaf0” plugin device defines the ethernet interface which AAF runs on, the socket priority which mapping to Traffic Class in kernel TC, the stream-id for the aaf streaming. The “converter0” plugin device is used for convert the S16_BE format to S16_LE for the wm8960 PCM audio.   Select one device as AVB talker, and run: $ speaker-test -p 25000 -F S16_BE -c 2 -r 48000 -D aaf0   Select one device as AVB listener, and run: $ arecord -F 25000 -t raw -f S16_BE -c 2 -r 48000 -D aaf0 | aplay -F 25000 -t raw -f S16_BE -c 2 -r 48000 -D converter0   You can hear the sound on the listener device.   You can also check which qdisc queue is used for AAF by: $ tc -s qdisc   4.2.        Gstreamer AAF audio Select one device as AVB talker, and run: $ gst-launch-1.0 clockselect. \( clock-id=realtime audiotestsrc samplesperbuffer=12 is-live=true ! audio/x-raw,format=S16BE,channels=2,rate=48000 ! avtpaafpay mtt=50000000 tu=1000000 streamid=0xAABBCCDDEEFF000B processing-deadline=0 ! avtpsink ifname=eth0.5 address=06:98:c0:22:df:35 priority=3 processing-deadline=0 \)   Select one device as AVB listener, and run: $ gst-launch-1.0 clockselect. \( clock-id=realtime avtpsrc ifname=eth0.5 ! avtpaafdepay streamid=0xAABBCCDDEEFF000B ! queue max-size-bytes=0 max-size-buffers=0 max-size-time=0 ! audioconvert ! audioresample !  alsasink device="hw:0,0" \)   5.  Packet sniffer Use tcpdump on board to dump the L2 ethernet packet: $ tcpdump -i eth0 ether proto 0x22f0 -w dump.pcap The AVTP ether protocol code is 0x22f0 embedded inside the ether frame, or you can use "vlan 5" VLAN id for tcpdump parameters to dump. Then open this dump.pcap in the windows/Linux PC by the wireshark tool, it will automatically show the protocol inside the package, it can also parser the IEEE1722 (AVTP) CVF/AFF package header as below:   To measure the package latency from transmit port (talker) to receive port (listener), you can use the tcpdump on both end-points. And compare the Epoch Time the packet dumped: "Epoch Time: 1596252905.688243000 seconds". The delta of the epoch time of the same packet is around 100us~500us. This latency actually includes the AF_PACKET clone cost in kernel netfilter, also the tcpdump application schedule latency.   6.  Revision history summarizes the changes done to this document since the initial release. Table2. Revision history Revision number Date Substantive changes 1 5/2021 Initial release    
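As an appendix, the system-setup commands from Section 3 above can be collected into a single script that is run on both boards after boot. This is only a convenience sketch using exactly the interface name, VLAN id and qdisc parameters shown in this article:

#!/bin/sh
# sketch: TSN system setup from Section 3 (eth0, VLAN id 5, values as used in this article)

# 3.1 VLAN with PCP-to-queue egress mapping
ip link add link eth0 name eth0.5 type vlan id 5 egress-qos-map 2:2 3:3
ip link set eth0.5 up

# 3.2.1 MQPRIO: 3 traffic classes mapped to 3 hardware queues
tc qdisc add dev eth0 parent root handle 100 mqprio num_tc 3 \
   map 0 0 2 1 0 0 0 0 0 0 0 0 0 0 0 0 queues 1@0 1@1 1@2 hw 1

# 3.2.2 CBS shapers on the two SR queues
tc qdisc replace dev eth0 parent 100:3 handle 888 cbs \
   idleslope 3648 sendslope -996352 hicredit 12 locredit -113 offload 1
tc qdisc replace dev eth0 parent 100:2 handle 777 cbs \
   idleslope 98688 sendslope -901312 hicredit 153 locredit -1389 offload 1

# 3.2.3 gPTP time sync
ptp4l -i eth0 -f ./gPTP.cfg --step_threshold=1 &
pmc -u -b 0 -t 1 "SET GRANDMASTER_SETTINGS_NP clockClass 248 clockAccuracy 0xfe offsetScaledLogVariance 0xffff currentUtcOffset 37 leap61 0 leap59 0 currentUtcOffsetValid 1 ptpTimescale 1 timeTraceable 1 frequencyTraceable 0 timeSource 0xa0"
./check_clocks -d eth0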
View full article
This doc share one OpenGL ES sample code, it is running on i.MX8 MEK board with QNX SDP7.1. HW: i.MX8 MEK board, HDMI display SW: QNX SDP7.1, i.MX8 MEK board BSP, and this sample code   This sample code will draw 3D object model, and with some animation. Reference: https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors/i-mx-8-family-arm-cortex-a53-cortex-a72-virtualization-vision-3d-graphics-4k-video:i.MX8 https://github.com/NXPmicro/gtec-demo-framework https://github.com/syoyo/tinyobjloader-c https://github.com/nothings/stb https://3dhaupt.com/futuristic-car-game-ready-download/ https://wallpapersafari.com/w/Y5JZNh https://www.pngwing.com/en/free-png-ysaus https://www.shadertoy.com/view/Ms2SWW#
View full article
Important: If you have any questions or would like to report any issues with the DDR tools or supporting documents please create a support ticket in the i.MX community. Please note that any private messages or direct emails are not monitored and will not receive a response. i.MX 6/7 Series Family DDR Tools Overview This page contains the latest releases for the i.MX 6/7 series DDR Tools. The tools described on this page cover the following i.MX 6/7 series SoCs: i.MX 6DQP (Dual/Quad Plus) i.MX 6DQ (Dual/Quad) i.MX 6DL/S (Dual Lite/Solo) i.MX 6SoloX i.MX 6SL i.MX 6SLL i.MX 6UL i.MX 6ULL/ULZ i.MX 7D/S i.MX 7ULP The purpose of the i.MX 6/7 series DDR Tools is to enable users to generate and test a custom DRAM initialization based on their device configuration (density, number of chip selects, etc.) and board layout (data bus bit swizzling, etc.). This process equips the user to then proceed with the bring-up of a boot loader and an OS. Once the OS is brought up, it is recommended to run an OS-based memory test (like Linux memtester) to further verify and test the DDR memory interface. The i.MX 6/7 series DDR Tools consist of: DDR Register Programming Aid (RPA) DDR Stress test _________________________________________________________ i.MX 6/7 Series DDR Stress Test The i.MX 6/7 Series DDR stress test tool is a Windows-based software tool that is used as a mechanism to verify that the DDR initialization is operational prior for use in u-boot and OS bring-up. The DDR Stress Test tool can be found here: i.MX 6/7 DDR Stress Test Tool Note that the DDR Stress test tool supports all of the above i.MX SoCs, however, some of the supported i.MX SoCs named in the tool support multiple i.MX SoCs as follows: MX6DQ – when selected, this supports both i.MX 6DQ and i.MX 6DQP (Plus) MX6DL – when selected, this supports both i.MX 6DL and i.MX 6S (i.MX 6DLS family) MX6ULL – when selected, this supports both i.MX 6ULL and i.MX6 ULZ MX7D – when selected, this supports both i.MX 7D and i.MX 7S _____________________________________________________________________________ i.MX 6/7 Series DDR Register Programming Aid (RPA) The i.MX 6/7 series DDR RPA (or simply RPA) is an Excel spreadsheet tool used to develop DDR initialization for a user’s specific DDR configuration (DDR device type, density, etc.). The RPA generates the DDR initialization script for use with the DDR Stress Test tool. For a history of the previous versions of an RPA, refer to the Revision History tab of the respective RPA. To obtain the latest RPAs, please refer to the following links: i.MX 6DQP i.MX6DQP Register Programming Aids i.MX 6DQ i.MX6DQ Register Programming Aids i.MX 6DL/S i.MX6DL Register Programming Aids i.MX 6SoloX i.MX6SX Register Programming Aids i.MX 6SL i.MX6SL Register Programming Aids  i.MX6SLL i.MX6SLL Register Programming Aids i.MX 6UL/ULL/ULZ i.MX6UL/ULL/ULZ DRAM Register Programming Aids i.MX7D i.MX7D DRAM Register Programming Aids i.MX 7ULP i.MX7ULP DRAM Register Programming Aids _____________________________________________________________________________ DRAM Register Programming Aids FAQ    
View full article
  Question: How can we generate an ARM DS5 DStream format DDR initialization script using the DRAM Register Programming Aid?  Answer: Some RPAs include a  "DStream .ds file" tab for the ARM DS5 debugger specific commands. The i.MX6UL/ULL/ULZ DRAM Register Programming Aids for example already has this supported. However, the user can easily create  the .ds format from the existing .inc format. The basic steps to convert .inc files to .ds format are as follows: 1)  Replace the one instance of setmem /16 with mem set 2)  In that same line, replace 0x020bc000 = with 0x020bc000 16 3)  Use a Replace All command to change setmem /32 with mem set 4)  Use a Replace All command to change = with 32 5)  Use a Replace All command to change // with # 6)  Save as a .ds file.   Question: When using a 528MHz DRAM Controller interface with a DDR memory of a faster speed bin, which speed bin timing options should one use? Answer: For example, let’s assume our MX6DQ design is using a DDR3 memory from a DDR3-1600 speed bin.  However, the maximum speed of the MMDC interface for the MX6DQ using DDR3 is 528MHz.  Should we use the 1600 speed bin (800MHz clock speed) or the 1066 speed bin (533MHz clock speed)?  In short, the user should use the timings rated for the maximum speed (frequency) with which you are running, in this case DDR3-1066 (533MHz).  In some cases, like when using the MX6DL, the maximum DDR frequency is 400MHz.  In this case, you would want to try and use 800 timings found in the AC timing parameters table.  However, most DDR3 devices have speed bin tables that may go only as low as 1066, in which case you would use the closest speed bin to your operational frequency (i.e. the 1066 speed bin table).     Question: Some timing parameters may specify a min and max number, which should I use? Answer: In most cases, you will want to choose the minimum timings.  Some DRAM controllers may have a tRAS_MAX timing parameter, in which case you would obviously use the maximum tRAS parameter given in the DRAM data sheet. Also, for timing parameters tAONPD and tAOFPD, we also want to use the maximum values given in the DDR3 data sheet. These represent the maximum amount of time the DDR3 device takes to turn on or off the RTT (termination), therefore, we should wait at least this amount of time before issuing any commands or accesses.   Question: Some timing parameters state things like “Greater of 3CK or 7.5ns”; which should I use? Answer: This depends on your clock speed.  Say you are running at 533MHz.  At 533MHz, 7.5ns equates to 4CKs.  In this case, 7.5ns at 533MHz is GREATER than 3CK, so we would use the 7.5ns number, or 4CKs. At 400MHz, 7.5ns equates to 3CKs.  In this case, we’d simply use 3CKs.   Question: I have a design that will throttle the DDR frequency (dynamic frequency scaling).  At full speed, I plan to run at 533MHz, and then I plan to throttle down to say 400MHz whenever possible.  Do I need to re-calculate my 400 MHz timing parameters that were initially set for 533MHz? Answer: It is not necessary to re-calculate timing parameters for 400MHz, and you can re-use the ones for 533MHz.  The timings at 533 MHz are much tighter than 400 MHz, and the key here is to NOT violate timings.  Also, it may be a bit of a hassle maintaining two sets of timing parameters, especially if later in the design, you swap DDR vendors that might require you to re-calculate some timing parameters.  
It’s easier to do it once and to come up with a combined worse-case timing parameters for 533MHz, which you know will work at 400MHz.  But, if you don’t mind maintaining two sets of timing parameters, and really want to optimize timings down to the last pico-second for 400MHz, then knock yourself out.   Question: Can I use these Register programming aids for both Fly by and T- Topology ? Answer Yes The DDR register programming aid is agnostic to the DDR layout. The same spreadsheet works for both topologies. We recommend running write leveling calibration for both topologies and the values returned by the Write Leveling routine from the Freescale DDR stress test should be incorporated back to the customer specific initialization script. The DDR stress test also has a feature whereby it evaluates the write leveling values returned from calibration and increments WALAT to 1 if the values exceed a defined limit. The DDR stress test informs the user when the Write Additional latency (WALAT) exceeds the limit and should be increased by 1, and reminds the user to add it back in the customer specific initialization script if required.   WALAT - 0 00000000 WALAT: Write Additional latency. Recommend to clear these bits. Proper board design should ensure that the DDR3 devices are placed close enough to the MMDC to ensure the skew between CLK and DQS is less than 1 cycle.     Question: Can I use the DEFAULT Register programming aid values for MDOR when using an Internal OSC instead of the recommended 32.768 KHZ XTAL ? Answer No, NXP recommends reprogramming these values based on the worse case frequency (Max clock) of the internal OSC of the device to guarantee JEDEC timings are met. Please refer to Internal Oscillator Accuracy considerations for the i.MX 6 Series for more details  
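The .inc-to-.ds conversion steps described in the first question of this FAQ can also be scripted. A minimal sketch (the script name and exact sed patterns are examples, and assume the usual spacing of the DDR Stress Test .inc scripts):

#!/bin/sh
# sketch: convert a DDR Stress Test .inc init script to an ARM DS-5 .ds script
# usage example: ./inc2ds.sh ddr_init.inc > ddr_init.ds
sed -e 's|setmem /16 \(0x020bc000\) =|mem set \1 16|' \
    -e 's|setmem /32|mem set|' \
    -e 's| = | 32 |' \
    -e 's|//|#|' \
    "$1"
# notes: the first expression handles steps 1 and 2 (the single /16 line);
# the remaining expressions apply steps 3-5 to every other line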
View full article
Overview AVB/TSN Wikipedia: Audio Video Bridging (AVB) is a common name for the set of technical standards which provide improved synchronization, low-latency, and reliability for switched Ethernet networks. AVB was initially developed by the Institute of Electrical and Electronics Engineers (IEEE) Audio Video Bridging task group of the IEEE 802.1 standards committee. In November 2012, Audio Video Bridging task group was renamed to Time-Sensitive Networking (TSN) task group to reflect the expanded scope of its work, which is to "provide the specifications that will allow time-synchronized low latency streaming services through IEEE 802 networks". Further standardization efforts are ongoing in IEEE 802.1 TSN task group.   AVB and TSN technologies and standards: Standard Area of Definition Title of Standard AVB/TSN IEEE 802.1AS Timing and synchronization Timing and Synchronization for Time-Sensitive Applications (gPTP) AVB TSN IEEE 802.1Qav Forwarding and queuing Forwarding and Queuing for Time-Sensitive Streams (FQTSS) AVB IEEE 802.1Qat Path control and reservation Stream Reservation Protocol (SRP) AVB IEEE 802.1BA Bridging Audio Video Bridging (AVB) Systems AVB IEEE 1722 AVTP AV transport Layer 2 Transport Protocol for Time Sensitive Applications AVB IEEE 1722 AVDECC Device manage and control Device Discovery, Enumeration, Connection Management and Control Protocol AVB IEEE 802.1Qbu and IEEE 802.3br Forwarding and queuing Frame preemption TSN IEEE 802.1Qbv Forwarding and queuing Enhancements for scheduled traffic TSN IEEE 802.1Qca Path control and reservation Path control and reservation TSN IEEE 802.1Qcc Central configuration method Enhancements and performance improvements TSN IEEE 802.1Qci Time-based ingress policing Per-stream filtering and policing TSN IEEE 802.1CB Seamless redundancy Frame replication and elimination for reliability TSN   Since many of the standards are only for TSN switch/bridges and i.MX8MP is design to be a TSN/AVB endpoint, this demo does not implement a full stack or full standards. It only demonstrates the basic end-to-end point (talker to listener) A/V streaming without bridge or switch.   Below table shows what this demo supports: Standard Software Hardware IEEE 802.1AS Linuxptp ENET_QoS IP IEEE 802.1Qav/Qbu/Qbv TC qdisc (taprio, mqprio, etf, cbs) ENET_QoS IP (multi queue + EST, FPE, CBS) IEEE 1722 AVTP Libavtp N/A   Demo introduction Two i.MX8MP EVK boards are used for this demo, one act as a AVB talker to send A/V streams, the other one act as a AVB listener to receive A/V streams who can be playback to audio codec and sink video to screen. The two boards are connected by a RJ45 ethernet cable on each ENET2 port (only ENET2 port has TSN features). Three streams’ type and SR (Stream Reservation) class are defined as below to grantee time sensitive (sub-microsecond synchronization), low latency and bandwidth on the ethernet: Stream A: SR class A, AVTP Compressed Video Format, H.264 profile High, 1920x1080, 30 fps. Stream B: SR class B, AVTP Audio Format, PCM 16-bit sample, 48 kHz, stereo, 12 frames per AVTPDU. Other stream: Best-effort streams These three TSN streams would be allocated into different traffic control (TC) class for egress in Linux. Different TC class would be mapped to different hardware queues with dedicated DMA channel, thanks to the multi-queue support by ENET_QoS IP. 
Then the traffic control apply different scheduling policy on each queues for different types of streams egress, to make sure highest priority or critical packet can be sent in the correct time slot. The TSN streams are transmitted over Virtual LANs (VLANs). Bridges use the VLAN priority information (PCP) to identify Stream Reservation (SR) traffic classes which are handled according to the Forward and Queuing Enhancements for Time-Sensitive Streams (FQTSS) mechanisms described in Chapter 34 of the IEEE 802.1Q standard.   The demo is built up by following blocks: Linux TC (traffic control): streams egress control to meet AVB/TSN requirements, which take advantage of the i.MX8MP TSN ENET IP. Linux PTP: clock sync in network, which take advantage of the i.MX8MP TSN ENET IP. Libavtp: Time Sensitive Applications AV Transport protocol. ALSA: AVTP audio format plugin uses the libavtp to transmit and receive AVTP audio PCM streams. Gstreamer: avtp plugin uses the libavtp to transmit and receive AVTP audio/video streams (video should be encoded as H264, audio PCM). x264enc and libav is a software H.264 video encoder/decoder, which alternative to the on chip VPU acceleration plugin, due to the VPU plugin is not supported in the latest Gstreamer version (1.17.x).     Traffic control This demo can use two different traffic control qdisc settings: mqprio + cbs: use CBS features taprio + pfifo: use EST and FPE features (802.1Qbu/bv). The pfifo is the default traffic control class, which use FIFO schedule policy for egress packets. The CBS class is actually handled by hardware IP to select which queue for transmit in a certain time slot.   Credit Base Shaper (CBS) CBS parameters come straight from the IEEE 802.1Q-2018 specification. They are the following: idleSlope: rate credits are accumulated when queue isn’t transmitting; sendSlope: rate credits are spent when queue is transmitting; hiCredit: maximum amount of credits the queue is allowed to have; loCredit: minimum amount of credits the queue is allowed to have;     Enhancements to Scheduled Traffic (EST) The IEEE 802.1Qbv defines the schedule for each of the queues on every egress port which makes the implementation aware of traffic arrival schedule. This information can be used to block the lower priority traffic from transmission in this time window/slot. This ensures that scheduled traffic is forwarded from sender to receiver through all the network nodes with a deterministic delay. The i.MX8MP uses the gate control list with configurable time slot and frame preempt (IEEE 802.1Qbu) features to support EST. Other than the CBS, the gate control list control the egress transmission by a fixed open interval for that queue.   Build demo Build L5.4.24_2.1.0 $MACHINE=imx8mpevk DISTRO=fsl-imx-xwayland source imx-setup-release.sh -b build-8mp $bitbake  fsl-image-validation-imx Prepare a SD card and burn it with the built out images. To add tc/tcpdump command in iproute2 package, please add package into IMAGE_INSTALL_append variable into conf/local.conf: IMAGE_INSTALL_append = " iproute2 iproute2-tc tcpdump Rebuild kernel Rebuild the kernel after applying the 0001-qenet-add-queue-avoid-panic.patch (attached), and overwrite the Image and imx8mp-evk.dtb on the boot partition of the SD card.   
Install the toolchain
$bitbake -f fsl-image-validation-imx -c populate_sdk
$sh tmp/deploy/sdk/fsl-imx-xwayland-glibc-x86_64-fsl-image-validation-imx-aarch64-imx8mpevk-toolchain-5.4-zeus.sh
The toolchain is installed into /opt/fsl-imx-xwayland/5.4-zeus/.

Create an install folder
$mkdir <your install folder>
This folder collects all of the shared libraries, binaries and configuration files that are built manually in this document. After the builds are done, copy all of the contents of this folder to the target board root filesystem.

Build libavtp
$source /opt/fsl-imx-xwayland/5.4-zeus/environment-setup-aarch64-poky-linux
$git clone https://github.com/Avnu/libavtp.git
$cd libavtp
$meson build
$ninja -C build
Copy the built .so and .pc files into the toolchain sysroot:
$sudo cp build/libavtp.so* /opt/fsl-imx-xwayland/5.4-zeus/sysroots/aarch64-poky-linux/usr/lib/
$sudo cp build/meson-private/*.pc /opt/fsl-imx-xwayland/5.4-zeus/sysroots/aarch64-poky-linux/usr/lib/pkgconfig/
To make sure the avtp package is installed correctly:
$pkg-config --list-all | grep avtp
Copy the .so into the install folder:
$cp build/libavtp.so* <install folder>/usr/lib/
Copy the header files:
$cp libavtp/include/* <toolchain_path>/sysroots/aarch64-poky-linux/usr/include/

Build ALSA aaf plugin
The AAF (ALSA AVTP Audio Format) plugin is a PCM plugin that uses the Audio Video Transport Protocol (AVTP) to transmit/receive audio samples through a Time-Sensitive Networking (TSN) capable network. The plugin enables media applications to easily implement AVTP Talker and Listener functionality.

$cd <yocto build>/tmp/work/aarch64-mx8mp-poky-linux/alsa-plugins/1.1.9-r0/alsa-plugins-1.1.9
$./configure --build=x86_64-linux --host=aarch64-poky-linux --target=aarch64-poky-linux --prefix=<your install folder>/usr --disable-silent-rules --disable-dependency-tracking --with-libtool-sysroot=/opt/samba/nxa23059/jailhouse/yocto-5.x/build-8mp/tmp/work/aarch64-mx8mp-poky-linux/alsa-plugins/1.1.9-r0/recipe-sysroot --disable-static --enable-aaf --disable-jack --disable-libav --disable-maemo-plugin --disable-maemo-resource-manager --enable-pulseaudio --enable-samplerate --with-speex=lib
$make
$make install
NOTE: Adjust the --with-libtool-sysroot path to the recipe-sysroot under your own <yocto build> directory.

Build Gstreamer AVTP plugins
Build Gstreamer 1.17.x (commit e4f7cdb0df7b):
$git clone https://gitlab.freedesktop.org/gstreamer/gstreamer.git
$cd gstreamer
$patch -p1 < gstreamer-1.0-pass-build.patch
$meson build --prefix=<your install folder>/usr
$ninja -C build
$ninja -C build install
After Gstreamer is installed into <your install folder>, fix the "prefix" path in the .pc files and copy them to the toolchain folders:
$cd <your install folder>
$grep -lR <your install folder> ./lib/pkgconfig/ | xargs sed -i 's/<your install folder>/\/usr/g'
NOTE: Make sure the '/' characters of your install folder path are escaped as '\/' in the sed expression.
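For example, if <your install folder> were /home/user/avb-install (a hypothetical path used only for illustration), the command would be:
$grep -lR /home/user/avb-install ./lib/pkgconfig/ | xargs sed -i 's/\/home\/user\/avb-install/\/usr/g'
Using a different sed delimiter, e.g. sed -i 's|/home/user/avb-install|/usr|g', avoids the escaping altogether.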
$cp -rf ./usr/* /opt/fsl-imx-xwayland/5.4-zeus/sysroots/aarch64-poky-linux/usr/

Build gst-plugins-base (commit 22827e8f)
$git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-base.git
$cd gst-plugins-base
$patch -p1 < gst-plugins-base-pass-build.patch
$meson build --prefix=<your install folder>/usr
$ninja -C build
$ninja -C build install

Build gst-plugins-bad (commit ed14e0d5a)
$git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-bad.git
$cd gst-plugins-bad
$meson build --prefix=<your install folder>/usr
$ninja -C build
$ninja -C build install
If you hit a gdbus-codegen issue, remove the "--c-generate-autocleanup" and "--output-directory" parameters in build/build.ninja.
If you hit an include-file error such as "…/gst/gl/gstglapi.h:24:10: fatal error: gst/gl/gstglconfig.h: No such file or directory.", modify build/build.ninja to correct the -I include path of the build.
After gst-plugins-base and gst-plugins-bad are installed into <your install folder>, fix the "prefix" path in the .pc files and copy them into the toolchain folders:
$cd <your install folder>
$grep -lR <your install folder> ./lib/pkgconfig/ | xargs sed -i 's/<your install folder>/\/usr/g'
NOTE: Make sure the '/' characters of your install folder path are escaped as '\/' in the sed expression.
$cp -rf ./usr/* /opt/fsl-imx-xwayland/5.4-zeus/sysroots/aarch64-poky-linux/usr/

Build H.264 SW plugins
AVTPDUs use H.264 as the payload for video streaming, which requires H.264 encoder/decoder plugins, either software or hardware accelerated. Since Gstreamer and its plugins are upgraded to a newer version, the VPU plugins cannot be used anymore, so software H.264 plugins are required: x264 for the encoder, libav for the decoder.

Build x264
Yocto already provides an x264 recipe, but it is not included in our bblayers, so copy the recipe into a bblayers path under <yocto>/source to build it:
$cp -rf ./poky/meta/recipes-multimedia/x264 ./meta-openembedded/meta-multimedia/recipes-multimedia/
$vi ./meta-openembedded/meta-multimedia/recipes-multimedia/x264/x264_git.bb
Remove the LICENSE_FLAGS line.
$bitbake -f x264 -c do_install
$sudo cp -rf tmp/work/aarch64-poky-linux/x264/r2917+gitAUTOINC+72db437770-r0/image/usr/* /opt/fsl-imx-xwayland/5.4-zeus/sysroots/aarch64-poky-linux/usr/

Build gst-plugins-ugly (commit 995a135d)
$git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-ugly.git
$cd gst-plugins-ugly
$meson build --prefix=<your install folder>/usr
$ninja -C build
$ninja -C build install
If you hit a build issue, remove the "if have_cxx" condition from meson.build.

Build libav
Yocto also provides the gstreamer1.0-libav recipe, but it is not included in our bblayers, so copy it into a bblayers path under <yocto>/source:
$cp -rf ./poky/meta/recipes-multimedia/gstreamer/gstreamer1.0-libav ./meta-openembedded/meta-multimedia/recipes-multimedia/gstreamer
$vi ./meta-openembedded/meta-multimedia/recipes-multimedia/gstreamer/gstreamer1.0-libav_1.16.1.bb
Remove the LICENSE_FLAGS line.
$bitbake -f gstreamer1.0-libav -c do_install
$cp tmp/work/aarch64-mx8mp-poky-linux/gstreamer1.0-libav/1.14.0-r0/image/usr/lib/gstreamer-1.0/libgstlibav.so <your install folder>/usr/lib/gstreamer-1.0
Now you have all of the Gstreamer plugins required for the AVB/TSN audio/video demo:
avtpsrc, avtpsink, avtpaafpay, avtpaafdepay, avtpcvfpay, avtpcvfdepay
x264enc (encoder), avdec_h264 (decoder)

Install binaries
The final step is to copy all of the files built into <your install folder> onto the board's root filesystem, and boot up the board.
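One possible way to do this (a sketch only; it assumes the board is reachable over SSH at the placeholder address 192.168.0.100 — copying directly onto the mounted rootfs partition of the SD card works just as well):
$scp -r <your install folder>/* root@192.168.0.100:/
This merges the usr/ tree built above into /usr on the board.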
Verify that the Gstreamer plugins are installed correctly:
$export GST_PLUGIN_PATH=/usr/lib/gstreamer-1.0/
$gst-inspect-1.0
Check that the Gstreamer plugins built above can be found by gst-inspect.

System Setup
VLAN
The ENET_QoS controller is assigned to the eth1 instance, so create eth1.5 for VLAN id 5:
$ip link add link eth1 name eth1.5 type vlan id 5 \
        egress-qos-map 2:2 3:3
$ip link set eth1.5 up

Qdiscs
The TSN control plane is implemented through the Linux Traffic Control (TC) system. The transmission algorithms specified in the Forwarding and Queuing for Time-Sensitive Streams (FQTSS) chapter of IEEE 802.1Q-2018 are supported via TC queuing disciplines (qdiscs). Linux currently provides the following qdiscs relating to TSN:
Multi-queue qdiscs (these two qdiscs are mutually exclusive, select one for your demo):
MQPRIO: maps the TC classes to the different hardware queues of the IP.
TAPRIO: implements the Enhancements for Scheduled Traffic introduced by IEEE 802.1Qbv/Qbu. This is supported by the i.MX8MP ENET_QoS IP through the EST features: Qbv – time-aware shaper, Qbu – frame preemption.
CBS qdisc: implements the Credit-Based Shaper introduced by IEEE 802.1Qav. This is supported by the i.MX8MP ENET_QoS IP through its hardware CBS feature. It works together with the mqprio qdisc.
ETF qdisc: while not an FQTSS feature, Linux also provides the Earliest TxTime First (ETF) qdisc, which enables the LaunchTime feature. Since the i.MX8MP ENET_QoS IP does not support LaunchTime, this qdisc configuration is ignored.

MQPRIO qdisc
$tc qdisc add dev eth1 parent root handle 100 mqprio \
        num_tc 3 \
        map 0 0 2 1 0 0 0 0 0 0 0 0 0 0 0 0 \
        queues 1@0 1@1 1@2 \
        hw 1
NOTE: since ENET_QoS Q0 does not support hardware CBS, we have to avoid using Q0 for AVB streaming. Here is the mapping:
socket SO_PRIORITY 2 -> TC CLASS 2 -> Q2
socket SO_PRIORITY 3 -> TC CLASS 1 -> Q1
other socket priority -> TC CLASS 0 -> Q0

TAPRIO qdisc
Create a root qdisc handle to map the different CLASS streams to hardware queues with a GCL (gate control list). This root handle maps the CLASS A stream to queue Q1, the CLASS B stream to Q2, and everything else to Q0 through the "map" and "queues" parameters, the same as mqprio above. The Q1/Q2 (CLASS 1/2) gates are open during the first 300 us interval, then only the Q1 (CLASS 1) gate is open for the next 300 us while all other classes are gated. The last sched-entry opens all of the class gates for the next 200 us, after which the schedule loops back to the first sched-entry. Please note that in this demo only the 802.1Qbv features are enabled; Qbu (frame preemption) requires additional patches for iproute2.
$tc qdisc add dev eth1 parent root handle 100 taprio \
        num_tc 3 \
        map 0 0 2 1 0 0 0 0 0 0 0 0 0 0 0 0 \
        queues 1@0 1@1 1@2 \
        base-time 1000000000 \
        sched-entry S 0x6 300000 \
        sched-entry S 0x2 300000 \
        sched-entry S 0x7 200000 \
        flags 0x2

CBS qdisc
Q1 CBS for video, Q2 CBS for audio:
$tc qdisc replace dev eth1 parent 100:2 handle 777 cbs \
        idleslope 98688 sendslope -901312 hicredit 153 locredit -1389 \
        offload 1
$tc qdisc replace dev eth1 parent 100:3 handle 888 cbs \
        idleslope 3648 sendslope -996352 hicredit 12 locredit -113 \
        offload 1
NOTE: by default, Q0 is given a pfifo qdisc class if we do not define one.

ETF qdisc
As the ENET_QoS IP does not support a hardware launch time, the ETF qdisc is not used here.
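For reference, the CBS parameter values used above can be approximated with the credit formulas from IEEE 802.1Q (see also the tc-cbs(8) man page and the TSN documentation referenced at the end of this article). This is a back-of-the-envelope sketch only; it assumes a 1 Gbit/s port rate and a 1542-byte worst-case interfering frame (1500-byte payload plus Ethernet/VLAN overhead, preamble and inter-frame gap):
idleslope = 98688 kbit/s (bandwidth reserved for the class A/video queue)
sendslope = idleslope - port rate = 98688 - 1000000 = -901312 kbit/s
hicredit ≈ maxInterferenceSize * idleslope / port rate = 1542 * 98688 / 1000000 ≈ 153 bytes
locredit ≈ maxFrameSize * sendslope / port rate = 1542 * (-901312) / 1000000 ≈ -1389 bytes
The class B (audio) queue values are derived in the same way from the AAF stream's much smaller reserved bandwidth and frame sizes.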
TimeSync
Run ptp4l and phc2sys in the background, and use check_clocks to verify that PTP synchronization works:
$ptp4l -i eth1 -f ./gPTP.cfg --step_threshold=1 &
$pmc -u -b 0 -t 1 "SET GRANDMASTER_SETTINGS_NP clockClass 248 \
        clockAccuracy 0xfe offsetScaledLogVariance 0xffff \
        currentUtcOffset 37 leap61 0 leap59 0 currentUtcOffsetValid 1 \
        ptpTimescale 1 timeTraceable 1 frequencyTraceable 0 \
        timeSource 0xa0"
$phc2sys -s eth1 -c CLOCK_REALTIME --step_threshold=1 \
        --transportSpecific=1 -w &
$./check_clocks -d eth1
The PTP Hardware Clock (PHC) is considered synchronized between PTP master and slave when the RMS offset between the PHC and the GM (grandmaster clock) is below 100 ns. The PHC and the system clock (CLOCK_REALTIME) are considered synchronized when their offset is below 100 ns.
NOTE: The check_clocks source code can be downloaded from here.
The gPTP.cfg used above is shown below. If you want to control which device is master and which is slave, add "masterOnly 1" or "slaveOnly 1" to this configuration file.
#
# 802.1AS example configuration containing those attributes which
# differ from the defaults. See the file, default.cfg, for the
# complete list of available options.
#
[global]
gmCapable               1
priority1               248
priority2               248
logAnnounceInterval     0
logSyncInterval         -3
syncReceiptTimeout      3
neighborPropDelayThresh 800
min_neighbor_prop_delay -20000000
assume_two_step         1
path_trace_enabled      1
follow_up_info          1
transportSpecific       0x1
ptp_dst_mac             01:80:C2:00:00:0E
network_transport       L2
delay_mechanism         P2P

Run demo
ALSA AAF audio
To run the ALSA AAF demo, add the aaf0 and converter0 plugin devices to /etc/asound.conf:
pcm.aaf0 {
        type aaf
        ifname eth1.5
        addr 01:AA:AA:AA:AA:AA
        prio 2
        streamid AA:BB:CC:DD:EE:FF:000B
        mtt 50000
        time_uncertainty 1000
        frames_per_pdu 12
        ptime_tolerance 100
}

pcm.converter0 {
        type linear
        slave {
                pcm "hw:2,0"
                format S16_LE
        }
}
The "aaf0" plugin device defines the Ethernet interface that AAF runs on, the socket priority (which maps to a traffic class in the kernel TC configuration) and the stream ID for the AAF streaming. The "converter0" plugin device converts the S16_BE samples to S16_LE for the wm8960 PCM audio device.
Select one device as the AVB talker, and run:
$speaker-test -p 25000 -F S16_BE -c 2 -r 48000 -D aaf0
Select the other device as the AVB listener, and run:
$arecord -F 25000 -t raw -f S16_BE -c 2 -r 48000 -D aaf0 | aplay -F 25000 -t raw -f S16_BE -c 2 -r 48000 -D converter0
You can hear the sound on the listener device.
You can also check which qdisc queue is used for AAF with:
$tc -s qdisc

Gstreamer AAF audio
Select one device as the AVB talker, and run:
$gst-launch-1.0 clockselect. \( clock-id=realtime \
    audiotestsrc samplesperbuffer=12 is-live=true ! \
    audio/x-raw,format=S16BE,channels=2,rate=48000 ! \
    avtpaafpay mtt=50000000 tu=1000000 streamid=0xAABBCCDDEEFF000B processing-deadline=0 ! \
    avtpsink ifname=eth1.5 address=01:AA:AA:AA:AA:AA priority=2 processing-deadline=0 \)
Select the other device as the AVB listener, and run:
$gst-launch-1.0 clockselect. \( clock-id=realtime \
    avtpsrc ifname=eth1.5 ! avtpaafdepay streamid=0xAABBCCDDEEFF000B ! \
    queue max-size-bytes=0 max-size-buffers=0 max-size-time=0 ! \
    audioconvert ! audioresample ! alsasink device="hw:2,0" \)

Gstreamer CVF video
Select one device as the AVB talker, and run:
$gst-launch-1.0 clockselect. \( clock-id=realtime \
    videotestsrc is-live=true ! video/x-raw,width=480,height=320,framerate=15/1 ! \
    clockoverlay ! x264enc bframes=0 key-int-max=1 speed-preset=1 tune=4 ! h264parse config-interval=-1 ! \
    avtpcvfpay processing-deadline=20000000 mtu=1400 mtt=2000000 tu=125000 streamid=0xAABBCCDDEEFF000A ! \
    avtpsink ifname=eth1.5 priority=3 processing-deadline=20000000 \)
NOTE: To keep the H.264 software encoding/decoding overhead at an acceptable latency for this demo, several parameters are used for the x264enc element:
bframes: x264enc and avdec_h264 together were found to have issues; removing B-frames from the stream helps.
key-int-max: the decoder can only decode a frame when a keyframe is present in the stream, so to let the decoder work as fast as possible, the distance between two keyframes is set to the minimum.
speed-preset: to lower the CPU load of encoding, the fastest preset is used (=1 means ultrafast).
tune: 4 means zero latency.
The 'mtu=1400' parameter for the avtpcvfpay element is very important: with the default mtu=1500 the listener cannot receive the AVTPDU packets correctly from the VLAN. The reason is not known yet.

Select the other device as the AVB listener, and run:
$gst-launch-1.0 clockselect. \( clock-id=realtime \
    avtpsrc ifname=eth1.5 ! avtpcvfdepay streamid=0xAABBCCDDEEFF000A ! \
    queue max-size-bytes=0 max-size-buffers=0 max-size-time=0 ! \
    avdec_h264 ! videoconvert ! clockoverlay halignment=right ! waylandsink \)

In the demo screenshot below, two clocks are shown on the videotestsrc stream: the left one is the timestamp recorded before x264enc encoding on the AVB talker side, the right one is the timestamp recorded after avdec_h264 decoding and video conversion to YUV frames on the AVB listener side. You can see that the timestamps are in sync at second-level precision.

Deep dive
Packet sniffer
Use tcpdump on the board to dump the L2 Ethernet packets:
$./tcpdump -i eth1 ether proto 0x22f0 -w dump.pcap
The AVTP EtherType 0x22f0 is embedded inside the Ethernet frame; alternatively you can filter on "vlan 5" in the tcpdump parameters to dump by VLAN id. Then open dump.pcap on a Windows/Linux PC with the Wireshark tool; it automatically shows the protocol inside the packet and can also parse the IEEE 1722 (AVTP) CVF/AAF packet headers, as shown below:

Precise latency measurement
The clockoverlay element used in the talker/listener pipelines above only has second-level precision, which cannot reflect the latency from the talker's videotestsrc to the end of avdec_h264 decoding on the listener. A small hack to the clockoverlay element in gst-plugins-base is needed to get millisecond precision.
The patch is attached (gst-plugins-base-clockoverlay-us.patch); apply it, rebuild gst-plugins-base, then replace libgstpango.so in /usr/lib/gstreamer-1.0/ on the board. When running the CVF demo, you can take a picture of the screen and check the difference between the two clocks. During my test the latency was about 50 ms, which includes all of the cost of encoding, AVTP packing, streaming, Ethernet transmit, Ethernet receive, AVTP unpacking and frame decoding.
To measure the packet latency from the transmit port (talker) to the receive port (listener), you can run tcpdump on both endpoints and compare the epoch times at which the same packet was captured, e.g. "Epoch Time: 1596252905.688243000 seconds". The delta of the epoch times for the same packet is around 100 us to 500 us. This latency also includes the AF_PACKET clone cost in the kernel network stack as well as the tcpdump application scheduling latency.

References
Getting started with AVB on Linux: https://tsn.readthedocs.io/avb.html
TSN vs AVB: https://dornerworks.com/blog/avb-vs-tsn-choose-the-best-deterministic-ethernet-solution-for-your-networked-devices/
ALSA aaf: https://fossies.org/linux/alsa-plugins/doc/aaf.txt
Gstreamer avtp: https://gstreamer.freedesktop.org/documentation/avtp/index.html
AVTP: https://avnu.org/wp-content/uploads/2014/05/AVnu-AAA2C_Audio-Video-Transport-Protocol-AVTP_Dave-Olsen.pdf
i.MX8MP Reference Manual