i.MX Processors Knowledge Base

This guide assumes that the developer is familiar with the V4L2 API and has worked with sensor drivers and their operation within the Linux kernel. It does not cover the details of developing the sensor driver you want to port; it is assumed that you already have a working driver for your sensor before starting the port. The ISP sources used in this guide correspond to the 6.6.36 Linux BSP. If a different version is used, it is the developer's responsibility to review the API documentation for that version, since there may be changes that affect what is described in this guide.

To port the camera sensor, the following steps must be taken, as described in the following sections:

1. Define sensor attributes and create instances.
2. ISS Driver and ISP Media Server.
3. Sensor Calibration Files.
4. VVCAM Driver Creation.
5. Device Tree Modifications.

Define Sensor Attributes and Create Instances

The following three steps are already implemented in CamDevice and are included for reference only.

Step 1: Define the sensor attributes in the IsiSensor_s data structure.
Step 2: Define the IsiSensorInstanceConfig_t configuration structure that will be used to create a new sensor instance.
Step 3: Call the IsiCreateSensorIss() function to create a new sensor instance.

ISS Driver and ISP Media Server

Step 0 - Use a driver template as base code: Drivers can be found in $ISP_SOURCES_TOP/units/isi/drv/. For example, the ISP sources come with the OV4656 and OS08a20 drivers. $ISP_SOURCES_TOP indicates the path of your working directory, where the respective sources are located.

Step 1 - Add your <SENSOR> ISS driver: Create the driver entry for your sensor in the path $ISP_SOURCES_TOP/units/isi/drv/<SENSOR>/source/<SENSOR>.c. Change all occurrences of the respective sensor name within the code, for instance OV4656 -> <SENSOR>, respecting capital letters where applicable.

Step 2 - Check the information in the IsiCamDrvConfig_s data structure: Data members defined in this data structure include the sensor ID (CameraDriverID) and the function pointer to the IsiSensor data structure. By using the address of the IsiCamDrvConfig_s structure, the driver can then access the sensor API attached to the function pointer. The following is an example of the structure:

/*****************************************************************************
 * Each sensor driver needs to declare this struct for ISI load
 *****************************************************************************/
IsiCamDrvConfig_t IsiCamDrvConfig = {
    .CameraDriverID = 0x0000,
    .pIsiHalQuerySensor = <SENSOR>_IsiHalQuerySensorIss,
    .pfIsiGetSensorIss = <SENSOR>_IsiGetSensorIss,
};

Important note: Modify the CameraDriverID according to the chip ID of your sensor. Apply this change to any chip ID occurrence within the code.

Step 3 - Check sensor macro definitions: If there is any macro definition in the ISS driver code that involves specific properties of the sensor, modify it according to your requirements. For example:

#define <SENSOR>_MIN_GAIN_STEP         (1.0f/16.0f)

Step 4 - Modify the ISP Media Server build tools. Changes required in this step include:

- Add a CMakeLists.txt file in $ISP_SOURCES_TOP/units/isi/drv/<SENSOR>/ that builds your sensor module.
- Modify the CMakeLists.txt located at $ISP_SOURCES_TOP/units/isi/drv/CMakeLists.txt to include and reference your sensor directory.
- Modify the $ISP_SOURCES_TOP/appshell/ and $ISP_SOURCES_TOP/mediacontrol/ build tools. By default they refer to the build of a particular sensor, for example the OV4656, so it is necessary to change the name to your corresponding sensor.
- Modify the $ISP_SOURCES_TOP/build-all-isp.sh script to reference the sensor modules and generate the corresponding binaries when building the ISP Media Server instance.

Step 5 - ISP Media Server run script: You need to add the operating modes defined for your sensor in the script. Each operating mode is associated with an order (mode 0, mode 1 ... mode N), a name used to execute the command in the terminal (e.g. <sensor>_custom_mode_1), a resolution, and a specific calibration file for the sensor. The script is located at $ISP_SOURCES_TOP/imx/run.sh.

Step 6 - Sensor<X> config: At $ISP_SOURCES_TOP/units/isi/drv/ you can find the files that configure each sensor entry to the ISP, called Sensor0_Entry.cfg and Sensor1_Entry.cfg. There, the associated calibration files are indicated for each sensor operating mode, including the calibration files in XML format and the Dewarp Unit configuration files in JSON format. In addition, the .drv file generated for your sensor is referenced, creating the association between the respective /dev/video<X> node and the sensor driver module produced by the ISP Media Server build. If you are using only one ISP channel, just modify Sensor0_Entry.cfg. If you require both instances of the ISP, you will need to modify both files.

Sensor Calibration Files

Using the ISP requires a calibration file in XML format, specific to the sensor you are using and matching the resolution and working mode. To obtain the calibration files in XML format, there are three options:

- Use the NXP ISP tuning tool; for this you will need to ask for access or sign an NDA document.
- Pay NXP professional services to do the tuning.
- Pay a third-party vendor to do the tuning.

VVCAM Driver Creation

The changes indicated below assume that there is a functional sensor driver in its base form and that it is compatible with the V4L2 API. From this point on, we focus on applying the changes suggested in the NXP documentation, specifically to establish the communication between the VVCAM driver (kernel side) and the ISI layer.

Step 0 - Create the sensor driver entry: Developers must add the driver code to the file located at $ISP_SOURCES_TOP/vvcam/v4l2/sensor/<sensor>/<sensor>_xxxx.c, along with a Makefile for the sensor driver module. As indicated in the ISS Driver section, you can refer to one of the sample drivers included as part of the ISP sources to review details of the driver implementation and the structure of the required Makefile.

Step 1 - Add the VVCAM mode info data structure array: This array stores the information of all the supported modes for your sensor. The ISI layer can get all the modes with the VVSENSORIOC_QUERY command. The following is an example of the structure; please fill in the information using the attributes of your sensor and the modes it supports.

#include "vvsensor.h"
...

static struct vvcam_mode_info_s <sensor>_mode_info[] = {
        {
        .index = 0,
        .width = ... ,
        .height = ... ,
        .hdr_mode = ... ,
        .bit_width = ... ,
        .data_compress.enable = ... ,
        .bayer_pattern = ... ,
        .ae_info = {
                       ...
                       },
        .mipi_info = {
                       .mipi_lane = ... ,
                       },
        },
        {
        .index = 1,
        ...
        },
};

Step 2 - Define the sensor client-to-i2c macro: Define the client_to_<sensor> macro (in case you do not have one already) and check the segments of the driver code that require this macro.

#define client_to_<sensor>(client)\
        container_of(i2c_get_clientdata(client), struct <sensor>, subdev)

Step 3 - Define the V4L2 subdev IOCTL function: Define and implement <sensor>_priv_ioctl, which is used to receive the commands and parameters passed down from user space through ioctl() and control the sensor.

long <sensor>_priv_ioctl(struct v4l2_subdev *subdev, unsigned int cmd, void *arg)
{
        struct i2c_client *client = v4l2_get_subdevdata(subdev);
        struct <sensor> *sensor = client_to_<sensor>(client);
        struct vvcam_sccb_data_s reg;
        uint32_t value = 0;
        long ret = 0;

        if (!sensor) {
               return -EINVAL;
        }

        switch (cmd) {
        case VVSENSORIOC_G_CLK: {
               ret = custom_implementation();
               break;
        }
        case VIDIOC_QUERYCAP: {
               ret = custom_implementation();
               break;
        }
        case VVSENSORIOC_QUERY: {
               ret = custom_implementation();
               break;
        }
        case VVSENSORIOC_G_CHIP_ID: {
               ret = custom_implementation();
               break;
        }
        case VVSENSORIOC_G_RESERVE_ID: {
               ret = custom_implementation();
               break;
        }
        case VVSENSORIOC_G_SENSOR_MODE: {
               ret = custom_implementation();
               break;
        }
        case VVSENSORIOC_S_SENSOR_MODE: {
               ret = custom_implementation();
               break;
        }
        case VVSENSORIOC_S_STREAM: {
               ret = custom_implementation();
               break;
        }
        case VVSENSORIOC_WRITE_REG: {
               ret = custom_implementation();
               break;
        }
        case VVSENSORIOC_READ_REG: {
               ret = custom_implementation();
               break;
        }
        case VVSENSORIOC_S_EXP: {
               ret = custom_implementation();
               break;
        }
        case VVSENSORIOC_S_POWER:
        case VVSENSORIOC_S_CLK:
        case VVSENSORIOC_RESET:
        case VVSENSORIOC_S_FPS:
        case VVSENSORIOC_G_FPS:
        case VVSENSORIOC_S_LONG_GAIN:
        case VVSENSORIOC_S_GAIN:
        case VVSENSORIOC_S_VSGAIN:
        case VVSENSORIOC_S_LONG_EXP:
        case VVSENSORIOC_S_VSEXP:
        case VVSENSORIOC_S_WB:
        case VVSENSORIOC_S_BLC:
        case VVSENSORIOC_G_EXPAND_CURVE:
               break;
        default:
               break;
        }

        return ret;
}

As you can see in the example, some cases are implemented and others are not. Developers are free to implement the features they consider necessary, as long as a minimum base of operation of the driver is guaranteed (query commands, read and write registers, among others). It is the developer's responsibility to implement each custom function for each case or scenario that may arise when interacting with the sensor. In addition to what was shown previously, a link must be created to make the ioctl connection with the driver in question.
Link your priv_ioctl function in the v4l2_subdev_core_ops struct, as in the example below:

static const struct v4l2_subdev_core_ops <sensor>_core_ops = {
        .s_power = v4l2_s_power,
        .subscribe_event = v4l2_ctrl_subdev_subscribe_event,
        .unsubscribe_event = v4l2_event_subdev_unsubscribe,
        // IOCTL link
        .ioctl = <sensor>_priv_ioctl,
};

Step 4 - Verify your sensor's private data structure: After performing the suggested modifications, it is good practice to double-check your sensor's private data structure properties, in case one is missing, and also to check that the properties are initialized correctly in the driver's probe.

Step 5 - Modify the VVCAM V4L2 sensor Makefile: At $ISP_SOURCES_TOP/vvcam/v4l2/sensor/Makefile, include your sensor object as follows:

...
obj-m += <sensor>/
...

Important note: There is a very common issue that appears when working with camera sensor drivers on i.MX 8M Plus platforms. The kernel log shows a message similar to the following:

mxc-mipi-csi2.<X>: is_entity_link_setup, No remote pad found!

The link setup callback is required by the Media Controller when linking the media entities involved in the camera capture process. Normally, this callback is triggered by the imx8-media-dev driver included as part of the kernel sources. To make sure that the problem is not related to your sensor driver, verify that the link setup callback is already created in the code; if it is not, you can add the following template:

/* Function needed by i.MX 8M Plus */
static int <sensor>_camera_link_setup(struct media_entity *entity,
                                   const struct media_pad *local,
                                   const struct media_pad *remote, u32 flags)
{
        /* Always return zero */
        return 0;
}

/* Add the link setup callback to the media entity operations struct */
static const struct media_entity_operations <sensor>_camera_subdev_media_ops = {
        .link_setup = <sensor>_camera_link_setup,
};

/* Verify the initialization of the media entity ops in the sensor driver's probe function */
static int <sensor>_probe(struct i2c_client *client, ...)
{
        /* Initialize subdev */
        sd = &<sensor>->subdev;
        sd->dev = &client->dev;
        <sensor>->subdev.internal_ops = ...
        <sensor>->subdev.flags |= ...
        <sensor>->subdev.entity.function = ...
        /* Entity ops initialization */
        <sensor>->subdev.entity.ops = &<sensor>_camera_subdev_media_ops;
}

In most cases, adding the link setup function solves the media controller issue, or at least rules out problems on the driver side.

Device Tree Modifications

On the device tree side, it is necessary to enable the ISP channels that will be used. Likewise, it is necessary to disable the ISI channels, which are normally the ones that connect to the MIPI_CSI2 ports to extract raw data from the sensor (when the ISP is not used). A MIPI_CSI2 port can be mapped to either an ISI channel or an ISP channel, but not both simultaneously. In this guide we focus on using the ISP, so any other custom configuration that you want to implement may vary from what is shown. In the code below, ISP channel 0 is enabled and connected to the port where the sensor is attached (mipi_csi_0).
&mipi_csi_0 {
        status = "okay";
        port@0 {
               // Example endpoint to <sensor>_ep
               mipi0_sensor_ep: endpoint@1 {
                       remote-endpoint = <&<sensor>_ep>;
               };
        };
};

&cameradev {
        status = "okay";
};

&isi_0 {
        status = "disabled";
};

&isi_1 {
        status = "disabled";
};

&isp_0 {
        status = "okay";
};

&isp_1 {
        status = "disabled";
};

&dewarp {
        status = "okay";
};

What is shown above does not represent a complete device tree file; it is only a general skeleton of the points you should pay attention to when working with ISP channels. For simplicity, we omitted all the attributes that are normally defined when working with camera sensor drivers and their respective configurations on the I2C port of the hardware.

Note: Due to hardware restrictions when using ISP channels, it is recommended to use the isp_0 channel when working with only one sensor. If you need to use two sensors, you can enable both channels, taking into account the limitations on output resolutions and clock frequency when both channels work simultaneously. Using the isp_1 channel with a single sensor is not recommended.

References

ISP Independent Sensor Interface (ISI) API reference, i.MX 8M Plus Camera Sensor Porting User Guide: https://www.nxp.com/webapp/Download?colCode=IMX8MPCSPUG
Sensor Calibration Tool: https://www.nxp.com/webapp/Download?colCode=AN13565
i.MX 8M Plus Reference Manual: https://www.nxp.com/webapp/Download?colCode=IMX8MPRM
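Once the board boots with the modified device tree and the ISP Media Server is running, a quick sanity check from user space can confirm that the ISP video nodes and media links were created. This is only a suggestion beyond the steps above, and it assumes the v4l-utils tools (v4l2-ctl, media-ctl) are present in the image:

v4l2-ctl --list-devices
media-ctl -p

If the ISP video device appears in the list and the "No remote pad found" message is gone from the kernel log, the driver and device tree changes are in place.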
View full article
P3T1755 Demo   In this space I want to show you the things that you can create using our products. In this demo I demonstrate a use case by creating a GUI for a temperature sensor. We can create modern GUIs and more with LVGL combined with our powerful processors.   CPU usage: as we can see, the CPU usage for this demo is around 2%.   Pictures   This demo is based on the previously published articles listed below.   References: https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/Adding-support-to-P3T1755-on-Linux/ta-p/1855874 https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/How-to-run-LGVL-on-iMX-using-framebuffer/ta-p/1853768  
View full article
This guide is a continuation from our latest Debian 12 Installation Guide for iMX8MM, iMX8MP, iMX8MN and iMX93. Here we will describe the process to install the multimedia and hardware acceleration packages, specifically GPU, VPU and Gstreamer on i.MX8M Mini, i.MX8M Plus and i.MX8M Nano. The guide is based on the one provided by our colleague Build Ubuntu For i.MX8 Series Platform - NXP Community, which requires to previously build an image using Yocto Project with the following distro and image name. Distro name - fsl-imx-wayland Image name – imx-image-multimedia For more information please check our BSP documentation i.MX Yocto Project User’s Guide.   Hardware Requirements Linux Host Computer (Ubuntu 20.04 or later) USB Card reader or Micro SD to SD adapter SD Card Evaluation Kit Board for the i.MX8M Nano, i.MX8M Mini, i.MX8M Plus   Software Requirements Linux Ubuntu (20.04 tested) or Debian for Host Computer BSP version 6.1.55 built with Yocto Project   After built the image we can start the installation by following the steps below:   GPU Installation The GPU Installation consists of copy the files from packages imx-gpu-g2d, imx-gpu-viv, libdrm to the Debian system. As our latest installation guide, we will continue naming “mountpoint” to the directory where Debian system is mounted on our host machine. Regarding the path provided on each step, we put labels <build-path> and <machine> that you will need to change based on your environment. These are the paths that Yocto Project uses to save the packages. However, this could change on your environment and you can find the work directory from each package using the following command: bitbake -e <package-name> | grep ^WORKDIR= This command will show you the absolute path of the package work directory. 1. Install GPU Packages $ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/imx-gpu-g2d/6.4.11.p2.2-r0/image/* mountpoint $ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/imx-gpu-viv/1_6.4.11.p2.2-aarch64-r0/image/* mountpoint $ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/libdrm/2.4.115.imx-r0/image/* mountpoint   2. Install Linux IMX Headers and IMX Parser $ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/linux-imx-headers/6.1-r0/image/* mountpoint $ sudo cp -Pra <build-path>/tmp/work/armv8a-poky-linux/imx-parser/4.8.2-r0/image/* mountpoint   3. Use chroot $ sudo LANG=C.UTF-8 chroot mountpoint/ qemu-aarch64-static /bin/bash   4. Install Dependencies $ apt install libudev-dev libinput-dev libxkbcommon-dev libpam0g-dev libx11-xcb-dev libxcb-xfixes0-dev libxcb-composite0-dev libxcursor-dev libxcb-shape0-dev libdbus-1-dev libdbus-glib-1-dev libsystemd-dev libpixman-1-dev libcairo2-dev libffi-dev libxml2-dev kbd libexpat1-dev autoconf automake libtool meson cmake ssh net-tools network-manager iputils-ping rsyslog bash-completion htop resolvconf dialog vim udhcpc udhcpd git v4l-utils alsa-utils git gcc less autoconf autopoint libtool bison flex gtk-doc-tools libglib2.0-dev libpango1.0-dev libatk1.0-dev kmod pciutils libjpeg-dev   5. Create a folder for Multimedia Installation. Here we will clone all the multimedia repositories.  $ mkdir multimedia_packages $ cd multimedia_packages   6. Build Wayland $ git clone https://gitlab.freedesktop.org/wayland/wayland.git $ cd wayland $ git checkout 1.22.0 $ meson setup build --prefix=/usr -Ddocumentation=false -Ddtd_validation=true $ cd build $ ninja install   7. 
Build Wayland Protocols IMX $ git clone https://github.com/nxp-imx/wayland-protocols-imx.git $ cd wayland-protocols-imx $ git checkout wayland-protocols-imx-1.32 $ meson setup build --prefix=/usr -Dtests=false $ cd build $ ninja install   8. Build Weston $ git clone https://github.com/nxp-imx/weston-imx.git $ cd weston-imx $ git checkout weston-imx-11.0.3 $ meson setup build --prefix=/usr -Dpipewire=false -Dsimple-clients=all -Ddemo-clients=true -Ddeprecated-color-management-colord=false -Drenderer-gl=true -Dbackend-headless=false -Dimage-jpeg=true -Drenderer-g2d=true -Dbackend-drm=true -Dlauncher-libseat=false -Dcolor-management-lcms=false -Dbackend-rdp=false -Dremoting=false -Dscreenshare=true -Dshell-desktop=true -Dshell-fullscreen=true -Dshell-ivi=true -Dshell-kiosk=true -Dsystemd=true -Dlauncher-logind=true -Dbackend-drm-screencast-vaapi=false -Dbackend-wayland=false -Dimage-webp=false -Dbackend-x11=false -Dxwayland=false $ cd build $ ninja install   VPU Installation To install VPU and Gstreamer please follow the steps below: 1. Install firmware-imx $ sudo cp -Pra <build-path>/tmp/work/all-poky-linux/firmware-imx/1_8.22-r0/image/lib/* mountpoint/lib/   2. Install VPU Driver $ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/imx-vpu-hantro/1.31.0-r0/image/* mountpoint $ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/imx-vpuwrap/git-r0/image/* mountpoint   3. Use chroot $ sudo LANG=C.UTF-8 chroot mountpoint/ qemu-aarch64-static /bin/bash   4. Install dependencies for Gstreamer Plugins $ apt install libgirepository1.0-dev gettext liborc-0.4-dev libasound2-dev libogg-dev libtheora-dev libvorbis-dev libbz2-dev libflac-dev libgdk-pixbuf-2.0-dev libmp3lame-dev libmpg123-dev libpulse-dev libspeex-dev libtag1-dev libbluetooth-dev libusb-1.0-0-dev libcurl4-openssl-dev libssl-dev librsvg2-dev libsbc-dev libsndfile1-dev   5. Change directory to multimedia packages. $ cd multimedia-packages   6. Build gstreamer $ git clone https://github.com/nxp-imx/gstreamer -b lf-6.1.55-2.2.0 $ cd gstreamer $ meson setup build --prefix=/usr -Dintrospection=enabled -Ddoc=disabled -Dexamples=disabled -Ddbghelp=disabled -Dnls=enabled -Dbash-completion=disabled -Dcheck=enabled -Dcoretracers=disabled -Dgst_debug=true -Dlibdw=disabled -Dtests=enabled -Dtools=enabled -Dtracer_hooks=true -Dlibunwind=disabled -Dc_args=-I/usr/include/imx $ cd build $ ninja install   7. Build gst-plugins-base $ git clone https://github.com/nxp-imx/gst-plugins-base -b lf-6.1.55-2.2.0 $ cd gst-plugins-base $ meson setup build --prefix=/usr -Dalsa=enabled -Dcdparanoia=disabled -Dgl-graphene=disabled -Dgl-jpeg=disabled -Dopus=disabled -Dogg=enabled -Dorc=enabled -Dpango=enabled -Dgl-png=enabled -Dqt5=disabled -Dtheora=enabled -Dtremor=disabled -Dvorbis=enabled -Dlibvisual=disabled -Dx11=disabled -Dxvideo=disabled -Dxshm=disabled -Dc_args=-I/usr/include/imx $ cd build $ ninja install   8. 
Build gst-plugins-good $ git clone https://github.com/nxp-imx/gst-plugins-good -b lf-6.1.55-2.2.0 $ cd gst-plugins-good $ meson setup build --prefix=/usr -Dexamples=disabled -Dnls=enabled -Ddoc=disabled -Daalib=disabled -Ddirectsound=disabled -Ddv=disabled -Dlibcaca=disabled -Doss=enabled -Doss4=disabled -Dosxaudio=disabled -Dosxvideo=disabled -Dshout2=disabled -Dtwolame=disabled -Dwaveform=disabled -Dasm=disabled -Dbz2=enabled -Dcairo=enabled -Ddv1394=disabled -Dflac=enabled -Dgdk-pixbuf=enabled -Dgtk3=disabled -Dv4l2-gudev=enabled -Djack=disabled -Djpeg=enabled -Dlame=enabled -Dpng=enabled -Dv4l2-libv4l2=disabled -Dmpg123=enabled -Dorc=enabled -Dpulse=enabled -Dqt5=disabled -Drpicamsrc=disabled -Dsoup=enabled -Dspeex=enabled -Dtaglib=enabled -Dv4l2=enabled -Dv4l2-probe=true -Dvpx=disabled -Dwavpack=disabled -Dximagesrc=disabled -Dximagesrc-xshm=disabled -Dximagesrc-xfixes=disabled -Dximagesrc-xdamage=disabled -Dc_args=-I/usr/include/imx $ cd build $ ninja install   9. Build gst-plugins-bad $ git clone https://github.com/nxp-imx/gst-plugins-bad -b lf-6.1.55-2.2.0 $ cd gst-plugins-bad $ meson setup build --prefix=/usr -Dintrospection=enabled -Dexamples=disabled -Dnls=enabled -Dgpl=disabled -Ddoc=disabled -Daes=enabled -Dcodecalpha=enabled -Ddecklink=enabled -Ddvb=enabled -Dfbdev=enabled -Dipcpipeline=enabled -Dshm=enabled -Dtranscode=enabled -Dandroidmedia=disabled -Dapplemedia=disabled -Dasio=disabled -Dbs2b=disabled -Dchromaprint=disabled -Dd3dvideosink=disabled -Dd3d11=disabled -Ddirectsound=disabled -Ddts=disabled -Dfdkaac=disabled -Dflite=disabled -Dgme=disabled -Dgs=disabled -Dgsm=disabled -Diqa=disabled -Dkate=disabled -Dladspa=disabled -Dldac=disabled -Dlv2=disabled -Dmagicleap=disabled -Dmediafoundation=disabled -Dmicrodns=disabled -Dmpeg2enc=disabled -Dmplex=disabled -Dmusepack=disabled -Dnvcodec=disabled -Dopenexr=disabled -Dopenni2=disabled -Dopenaptx=disabled -Dopensles=disabled -Donnx=disabled -Dqroverlay=disabled -Dsoundtouch=disabled -Dspandsp=disabled -Dsvthevcenc=disabled -Dteletext=disabled -Dwasapi=disabled -Dwasapi2=disabled -Dwildmidi=disabled -Dwinks=disabled -Dwinscreencap=disabled -Dwpe=disabled -Dzxing=disabled -Daom=disabled -Dassrender=disabled -Davtp=disabled -Dbluez=enabled -Dbz2=enabled -Dclosedcaption=enabled -Dcurl=enabled -Ddash=enabled -Ddc1394=disabled -Ddirectfb=disabled -Ddtls=disabled -Dfaac=disabled -Dfaad=disabled -Dfluidsynth=disabled -Dgl=enabled -Dhls=enabled -Dkms=enabled -Dcolormanagement=disabled -Dlibde265=disabled -Dcurl-ssh2=disabled -Dmodplug=disabled -Dmsdk=disabled -Dneon=disabled -Dopenal=disabled -Dopencv=disabled -Dopenh264=disabled -Dopenjpeg=disabled -Dopenmpt=disabled -Dhls-crypto=openssl -Dopus=disabled -Dorc=enabled -Dresindvd=disabled -Drsvg=enabled -Drtmp=disabled -Dsbc=enabled -Dsctp=disabled -Dsmoothstreaming=enabled -Dsndfile=enabled -Dsrt=disabled -Dsrtp=disabled -Dtinyalsa=disabled -Dtinycompress=enabled -Dttml=enabled -Duvch264=enabled -Dv4l2codecs=disabled -Dva=disabled -Dvoaacenc=disabled -Dvoamrwbenc=disabled -Dvulkan=disabled -Dwayland=enabled -Dwebp=enabled -Dwebrtc=disabled -Dwebrtcdsp=disabled -Dx11=disabled -Dx265=disabled -Dzbar=disabled -Dc_args=-I/usr/include/imx $ cd build $ ninja install   10. Build imx-gst1.0-plugin $ git clone https://github.com/nxp-imx/imx-gst1.0-plugin -b lf-6.1.55-2.2.0 $ cd imx-gst1.0-plugin $ meson setup build --prefix=/usr -Dplatform=MX8 -Dc_args=-I/usr/include/imx $ cd build $ ninja install   11. 
Exit chroot $ exit   Verify Installation For verification process, boot your target from the SD Card. (Review your specific target documentation) 1. Verify Weston For this verification you will need to be root user. # export XDG_RUNTIME_DIR=/run/user/0 # weston   2. Verify VPU and Gstreamer Use the following Gstreamer pipeline for Hardware Accelerated VPU Encode. # gst-launch-1.0 videotestsrc ! video/x-raw, format=I420, width=640, height=480 ! vpuenc_h264 ! filesink location=test.mp4   Then you can reproduce the file with this command: # gplay-1.0 test.mp4   Finally, you have installed and verified the GPU, VPU and Multimedia packages. Now, you can start testing audio and video applications.
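As an additional check of the VPU decode path (this is not part of the original guide; it assumes the test.mp4 H.264 elementary stream recorded above and the GStreamer plugins built in the previous steps), the stream can be decoded by the VPU and rendered on Weston:
# gst-launch-1.0 filesrc location=test.mp4 ! h264parse ! vpudec ! queue ! waylandsink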
View full article
Hello, here Jorge. On this post I will explain how to enable MQS1 on i.MX8ULP. As background about how to setup the environment to build the image using Yocto, please take a look on our i.MX Yocto Project User's Guide: Requirements: i.MX 8ULP EVK. Serial console emulator (Tera Term, Putty, etc.). USB Type-C cable. Micro USB cable. Headphones/speakers. Linux PC. Build done in Linux 6.6.23_2.0.0. i.MX8ULP audio subsystem. i.MX 8ULP extends audio capabilities on i.MX 7ULP by adding dedicated DSP cores for voice trigger and audio processing, enabling lower latency and power efficiency to support variety of audio applications. Some of hardware blocks implemented on 8ULP to support audio use cases are the next: Cadence Fusion F1 DSP processor. Cadence HiFi4 DSP processor. PowerQuad hardware accelerator with fixed and floating + FFT. Digital Microphone interface with support of up-to 8 PDM channels. Up-to 8 independent SAI instances. Up-to 2 Medium Quality Sound (MQS). Sony/Philips Digital interface (SPDIF). As is described before, MQS0 and MQS1 are part of real time domain and application domain respectively. I’m going to focus this post on how to enable MQS1 on application domain. Medium Quality Sound (MQS)  This module is basically generates a PWM from PCM audio data. For the major part of typical audio applications will require an external CODEC to deliver the audio quality but, sometimes where the application does not demand this quality, MQS can provide a medium quality audio via GPIO pin that can directly drive the audio output to a speaker or headphone via inexpensive external amplifier/buffer instead of CODEC. The design of the MQS can be described as follows: Input the PCM audio data (from SAI) into a 16-bit register. Up-sample data to match PWM switching frequency. Perform a simple 2nd order Sigma-Delta smooth on the current data versus previous data. Convert the PCM register into a 6-bit PWM width register and output through a GPIO pin.   How to enable it? By default, our BSP does not enable clock for MQS1. This clock is controlled on CGC1 (AD), specifically on MQS1CLK (Multiplexer to select the audio clock connected to the MQS clock input). So, it is needed to modify imx8ulp-clock.h and clk-imx8ulp.c. Please take a look on patch attached at the end of this post to see the modification in drivers easily. These drivers have the definition/configuration for MQS1_SEL in CGC1 and needs to be added as follows: MQS1_SEL definition needs to bed added in imx8ulp-clock.h: #define IMX8ULP_CLK_MQS1_SEL 56 #define IMX8ULP_CLK_CGC1_END 57 MQS1_SEL configuration needs to be added in imx8ulp_clk_cgc1_init of clk-imx8ulp.c: clks[IMX8ULP_CLK_MQS1_SEL] = imx_clk_hw_mux2("mqs1_sel", base + 0x90c, 0, 2, sai45_sels, ARRAY_SIZE(sai45_sels)); Also, it is necessary to configure MQS1 on device tree of i.MX8ULP. 
Add this in soc: soc@0 of imx8ulp.dtsi: mqs1: mqs@0x29290064 { reg = <0x29290064 0x4>; compatible = "fsl,imx8qm-mqs"; assigned-clocks = <&cgc1 IMX8ULP_CLK_MQS1_SEL>; assigned-clock-parents = <&cgc1 IMX8ULP_CLK_SPLL3_PFD1_DIV1>; clocks = <&cgc1 IMX8ULP_CLK_MQS1_SEL>, <&cgc1 IMX8ULP_CLK_MQS1_SEL>; clock-names = "core", "mclk"; status = "disabled"; }; And create a new device tree, in this case is going to be named imx8ulp-evk-mqs.dts and is as follows: #include "imx8ulp-evk.dts" / { sound-simple-mqs { compatible = "simple-audio-card"; simple-audio-card,name = "imx-simple-mqs"; simple-audio-card,frame-master = <&sndcpu>; simple-audio-card,bitclock-master = <&sndcpu>; simple-audio-card,dai-link@0 { format = "left_j"; sndcpu: cpu { sound-dai = <&sai4>; }; codec { sound-dai = <&mqs1>; }; }; }; }; &cgc1 { assigned-clock-rates = <24576000>; }; &iomuxc1 { pinctrl_mqs1: mqs1grp { fsl,pins = < MX8ULP_PAD_PTF7__MQS1_LEFT 0x43 >; }; }; &mqs1 { #sound-dai-cells = <0>; pinctrl-names = "default"; pinctrl-0 = <&pinctrl_mqs1>; status = "okay"; }; &sai4 { #sound-dai-cells = <0>; assigned-clocks = <&cgc1 IMX8ULP_CLK_SAI4_SEL>; assigned-clock-parents = <&cgc1 IMX8ULP_CLK_SPLL3_PFD1_DIV1>; status = "okay"; }; Let’s apply these changes on our BSP, in my case I’m going to create a new layer in Yocto to add these modifications with a patch that can be found at the end on this post, here the steps: Install essential Yocto Project host packages: $ sudo apt install gawk wget git diffstat unzip texinfo gcc build-essential chrpath socat cpio python3 python3-pip python3-pexpect xz-utils debianutils iputils-ping python3-git python3-jinja2 python3-subunit zstd liblz4-tool file locales libacl1 Install the “repo” utility: $ mkdir ~/bin $ curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo $ chmod a+x ~/bin/repo $ export PATH=~/bin:$PATH Set up Git: $ git config --global user.name "Your Name" $ git config --global user.email "Your Email" $ git config –list Download the i.MX Yocto Project Community BSP recipe layers and create build folder: $ mkdir imx-yocto-bsp $ cd imx-yocto-bsp $ repo init -u https://github.com/nxp-imx/imx-manifest -b imx-linux-scarthgap -m imx-6.6.23-2.0.0.xml $ repo sync $ DISTRO=fsl-imx-wayland MACHINE=imx8ulp-lpddr4-evk source imx-setup-release.sh -b 8ulp_build Create the new layer: $ cd ~/imx-yocto-bsp/sources $ bibake-layers create-layer meta-mqs $ cd meta-mqs conf/layer.conf should be as follows: BBPATH .= ":${LAYERDIR}" BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \ ${LAYERDIR}/recipes-*/*/*.bbappend" BBFILE_COLLECTIONS += "meta-mqs" BBFILE_PATTERN_meta-mqs = "^${LAYERDIR}/" BBFILE_PRIORITY_meta-mqs = "6" LAYERSERIES_COMPAT_meta-mqs = "nanbield" Let’s change the recipe: $ sudo rm -r recipes-example $ mkdir -p recipes-kernel/linux/files 0001-8ULP-MQS-Enable.patch should be copied to ~/imx-yocto-bsp/sources/meta-mqs/recipes-kernel/linux/files Add an append (on this case is called “linux-imx_%.bbappend”)to change the recipe with next content: FILESEXTRAPATHS:prepend := "${THISDIR}/files:" SRC_URI += "file:// 0001-8ULP-MQS-Enable.patch " addtask copy_dts after do_unpack before do_prepare_recipe_sysroot do_copy_dts () { if [ -n "${DTS_FILE}" ]; then if [ -f ${DTS_FILE} ]; then echo "do_copy_dts: copying ${DTS_FILE} in ${S}/arch/arm64/boot/dts/freescale" cp ${DTS_FILE} ${S}/arch/arm64/boot/dts/freescale/ fi fi } The next step is add the layer and build the image: $ cd ~/imx-yocto-bsp/8ulp_build $ bitbake-layers add-layer ~/imx-yocto-bsp/sources/meta-mqs Confirm that the layer has been 
added: $ bitbake-layers show-layers Build the image: $ bitbake imx-image-multimedia i.MX8ULP EVK limitations The i.MX8ULP has the next MQS1 pins available: But, in the EVK board, the mayor part of these pins are used for other functions such as: - Push button: - MIPI DSI:  - Etc… So, take the output signal of MQS1 pins of EVK board is difficult, in this article, I’m going to configure PTF7 only (MQS1_left) for practicality. If you are working with this board and you need to use these pins for MQS function you will need to manipulate the traces and take the required signals. If you are designing a custom board, planning is essential to avoid this issue. Flash the board. One the build has been finished, we will have the necessary files to flash the board and test it. If you are not too familiarized with this process I suggest you take a look on this post. First, put the board in serial download mode changing the boot configuration switches on the board:   The next step is connecting the power cable, micro-USB cable on the debug port and USB-C type cable to USB0 connector on the board. Then, turn-on the board and run the next command in terminal of build directory: uuu -b emmc_all imx-boot-imx8ulpevk-sd.bin-flash_singleboot_m33 imx-image-multimedia-imx8ulpevk.wic Now, power-off the board, change the boot mode to single boot-eMMC and power it on to test it. Test MQS1 in i.MX8ULP. To test MQS1 it is needed to change the device tree we created, we can do it with the next commands in U-boot: u-boot=> setenv fdtfile imx8ulp-evk-mqs.dtb u-boot=> saveenv u-boot=> boot Now we can test MQS1 on i.MX8ULP EVK, let's confirm that the clock is active in MQS module with the next command: $ cat /sys/kernel/debug/clk/clk_summary -n As you can see mqs1_sel is active and running at 24576000 Hz: And the card appears if we run the next command: $ aplay -l To play audio through MQS we can do it as any sound card: $ speaker-test -D sysdefault:CARD=imxsimplemqs -c 2 -f 48000 -F S16_LE -t pink -P 3 The signal should look like this in the pin output: And like this after a filter, for example the filter used in i.MX93 EVK.   With this post we have been able check the general operation of MQS, configure and compile the image with the required changes to enable MQS1 on EVK board and measure the output on the board. There is a considerable limitation on EVK board since we cannot test left and right outputs without intervene the base board, but this can be helpful as a reference to who would like to use this audio output on i.MX8ULP processor. Best regards. References. Yocto Project customization guide - NXP Community How to add a new layer and a new recipe in Yocto - NXP Community Flashing Linux BSP using UUU - NXP.  i.MX8ULP reference manual. Embedded Linux Projects Using Yocto Project Cookbook.
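As a complementary check beyond the speaker-test shown above (the card name comes from the simple-audio-card defined in the device tree, and the WAV file is assumed to be 48 kHz, 16-bit), a regular WAV file can be played through MQS1 like any other ALSA card:
$ aplay -D sysdefault:CARD=imxsimplemqs sample.wav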
View full article
In this tutorial we will review the implementation of Flutter on the i.MX8MP using the Linux Desktop Image. Please find more information about Flutter in the following link: Flutter: Option to create GUIs for Embedded System... - NXP Community

Requirements:
Evaluation Kit for the i.MX 8M Plus Applications Processor. (i.MX 8M Plus Evaluation Kit | NXP Semiconductors)
NXP Desktop Image for i.MX 8M Plus (GitHub - nxp-imx/meta-nxp-desktop at lf-6.1.1-1.0.0-langdale)

Note: This tutorial is based on the NXP Desktop Image with Yocto version 6.1.1 – Langdale.

Steps:
1. First, run the commands to update packages.
$ sudo apt update
$ sudo apt upgrade

2. Install Flutter for Linux using the following command.
$ sudo snap install flutter --classic

3. Run the following command to verify the installation.
$ flutter doctor
With this command you will find information about the installation. The important part for our purpose is the entry "Linux toolchain - develop for Linux desktop".

4. Run the command "flutter create ." to create a Flutter project. The framework will create the different folders and files used to develop the application.
$ cd Documents
$ mkdir flutter_hello
$ cd flutter_hello
$ flutter create .

5. Finally, you can run the "hello world" application using:
$ flutter run
Verify the program behavior by incrementing the number displayed on the window.  
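As an optional extra step (not covered in the original tutorial), you can also build a release bundle and launch the generated binary directly. The output path below assumes an arm64 build and the flutter_hello project name used above, so adapt it to your setup:
$ flutter build linux --release
$ ./build/linux/arm64/release/bundle/flutter_hello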
View full article
Hello, on this post I will explain how to record separated audio channels using an 8MIC-RPI-MX8 Board. As background about how to setup the board to record and play audio using i.MX boards, I suggest you take a look on the next post: How to configure, record and play audio using an 8MIC-RPI-MX8 Board. Requirements: I.MX 8M Mini EVK. Linux Binary Demo Files - i.MX 8MMini EVK. 8MIC-RPI-MX8 Board. Serial console emulator (Tera Term, Putty, etc.). Headphones/speakers. Waveform Audio Format WAV, known for WAVE (Waveform Audio File Format), is a subset of Microsoft’s Resource Interchange File Format (RIFF) specification for storing digital audio files. This format does not apply compression to the information and stores the audio with different sampling rates and bitrates. WAV files are larger in size compared to other formats such as MP3 which uses compression to reduce the file size while maintaining a good audio quality but, there is always some lose on quality since audio information is too random to be compressed with conventional methods, the main advantage of this format is provide an audio file without losses that is also widely used on studio. This files starts with a file header with data chunks. A WAV file consists of two sub-chunks: fmt chunk: data format. data chunk: sample data. So, is structured by a metadata that is called WAV file header and the actual audio information. The header of a WAV (RIFF) file is 44 bytes long and has the following format: How to separate the channels? To separate each audio channel from the recording we need to use the next command that will record raw data of each channel. arecord -D plughw:<audio device> -c<number of chanels> -f <format> -r <sample rate> -d <duration of the recording> --separate-channels <output file name>.wav arecord -D plughw:2,0 -c8 -f s16_le -r 48000 -d 10 --separate-channels sample.wav This command will output raw data of recorded channels as is showed below. This raw data cannot be used as a “normal” .wav file because the header information is missing. It is possible to confirm it if import raw data to a DAW and play recorded samples: So, to use this information we need to create the header for each file using WAVE library on python. Here the script that I used: import wave import os name = input("Enter the name of the audio file: ") os.system("arecord -D plughw:2,0 -c8 -f s16_le -r 48000 -d 10 --separate-channels " + name + ".wav") for i in range (0,8): with open(name + ".wav." + str(i), "rb") as in_file: data = in_file.read() with wave.open(name + "_channel_" + str(i) +".wav", "wb") as out_file: out_file.setnchannels(1) out_file.setsampwidth(2) out_file.setframerate(48000) out_file.writeframesraw(data) os.system("mkdir output_files") os.system("mv " + name + "_channel_" + "* " + "output_files") os.system("rm " + name + ".wav.*") If we run the script, will generate a directory with the eight audio channels in .wav format. Now, we will be able to play each channel individually using an audio player. References IBM, Microsoft Corporation. (1991). Multimedia Programming Interface and Data Specifications 1.0. Microsoft Corporation. (1994). New Multimedia Data Types and Data Techniques. Standford University. (2024, January 30). Retrieved from WAVE PCM sound file format: http://hummer.stanford.edu/sig/doc/classes/SoundHeader/WaveFormat/
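As a quick check of the split files (assuming the recording above was named sample, so the script wrote sample_channel_0.wav through sample_channel_7.wav into output_files), each mono channel can be played back individually, selecting your playback device with -D:
$ aplay -D plughw:<playback device> output_files/sample_channel_0.wav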
View full article
The user interface has limited the use of the tool GUI Guider. Getting an interaction only through a mouse or touchscreen can be enough for some use cases. However, sometimes the use case requires to go beyond its limitations. This video/appnote explores the possibility of integrating voice by creating a bridge between a speech recognition technology, such as VIT, and the interface creator GUI Guider. It uses a universal way to link all the voice recognition commands and a wakeword to any interaction created by GUI Guider. The following video shows the steps necessary to create that connection by creating the voice recognition using VIT voice commands and wakewords, create an interface of GUI Guider using a template, how to connect between them using the board i.MX 93 evk and testing it. For more information consult the following links AppNote HTML: https://docs.nxp.com/bundle/AN14270/page/topics/abstract.html?_gl=1*1glzg9k*_ga*NDczMzk4MDYuMTcxNjkyMDI0OA..*_ga_WM5LE0KMSH*MTcxNjkyMDI0OC4xLjEuMTcxNjkyMDcyMy4wLjAuMA AppNote PDF: https://www.nxp.com/docs/en/application-note/AN14270.pdf Associated File: AN14270SW  
View full article
Information about the transition from the NXP Demo Experience to GoPoint for i.MX Application Processors.
View full article
One of the most popular use cases for embedded systems are projects destinated to show information and interact with users. These views are called GUI or Graphic User Interface which are designed to be intuitive, attractive, consistent, and clear. There are many tools that we can use to achieve great GUIs, mostly implemented for platforms such as Web, Android, and iOS. Here, we will need to introduce the concept of framework, basically, it is a set of tools and rules that provides a minimal structure to start with your development. Frameworks usually comes with configuration files, code snippets, files and folders organization helping us to save time and effort. Also, it is important to review the concept of SDK or Software Development Kit which is a set of tools that allows to build software for specific platforms. Usually supplies debugging tools, documentation, libraries, API’s, emulators, and sample code. Flutter is an open-source UI software development kit by Google that help us to create applications with great GUIs on different platforms from a single codebase. Depends on the reference, you can find Flutter defined as a framework or SDK and both are correct, however, an SDK could be a best definition thanks to Flutter supplies a wide and complete package to create an application in which framework is also included. This article is aimed at those that are in a prototyping stage looking for a different tool to develop projects. Also, this article pretends to be a theoretical introduction explaining the most important concepts. However, is a good practice to learn more about reviewing the official documentation from Flutter. (Flutter documentation | Flutter) Here is the structure used throughout this article: What is Flutter? Flutter details Platforms Programming language Official documentation Flutter for embedded systems What is Flutter? Flutter was officially released by Google in December 2018 with a main aim, to give developers a tool to create applications natively compiled for mobile (Android, iOS), web and desktop (Windows, Linux) from a single codebase. It means that as a developer, Flutter will create a structure with minimal code, configuration files, build files for each operating system, manifests, etc. in which we will add our custom code and finally build this code for our preferred OS. For example, we can create an application to review fruit and vegetable information and compile for Android and iOS with the same code. A basic Flutter development process based on my experience looks like the following diagram: Flutter has the following key features: Cross-platform development. Flutter allows the developer to create applications for different platforms using a single codebase. It means that you will not need to recreate the application for each platform you want to support.   Hot-reload. This feature allows the developer to see changes in real time without restarting the whole application, this results in time savings for your project.   High Performance Flutter apps achieve high performance due to the app code is compiled to native ARM code. With this tool no interpreters are involved.   UI Widgets Flutter supplies a set of widgets (UI components such as boxes, inputs text, buttons, etc.) predefined by UI systems guidelines Material on Android and Cupertino for iOS. Source: Material 3 Design Kit | Figma Community Source: Design - Apple Developer   Great community support. 
This feature could be subjective but, it is useful when we are developing our project find solutions to known issues or report new ones. Because of Flutter is an open source and is widely implemented in the industry this tool owns a big community, with events, forums, and documentation. Flutter Details Supported Platforms With Flutter you can create applications for: Android iOS Linux Debian Linux Ubuntu macOS web Chrome, Firefox, Safari, Edge Windows Supported deployment platforms | Flutter Programming Language Flutter use Dart, a programming language is an open-source language supported by Google optimized to use on the creation of user interfaces. Dart key features: Statically typed. This feature helps catching errors making the code robust ensuring that the variable’s value always match with the declared variable’s type. Null safety. All variables on Dart are non-nullable which means that every variable must have a non-null value avoiding errors at execution time. This feature also, make the code robust and secure. Async/Await. Dart is client-optimized which means that this language was specially created to ensure the best performance as a client application. Async/Await is a feature part of this optimization making easier to manage network requests and other asynchronous operations. Object oriented. Dart is an object-oriented language with classes and mixin. This is especially useful to use on Flutter with the usage of widgets. Compiler support of Just-In-Time (JIT) and Ahead-of-Time (AOT) JIT provides the support that enables the Hot Reload Flutter feature that I mentioned before. It is a complex mechanism, but Dart “detects” changes in your code and execute only these changes avoiding recompiling all the code. AOT compiler produces efficient ARM code improving start up time and performance. Official documentation Flutter has a rich community and documentation that goes from UI guidelines to an Architectural Overview. You can find the official documentation at the following links: Flutter Official Documentation: Flutter documentation | Flutter Flutter Community: Community (flutter.dev) Dart Official Documentation: Dart documentation | Dart Flutter for embedded systems So far, we know all the excellent features and platforms that Flutter can support. But, what about the embedded systems? On the official documentation we can find that Flutter may be used for embedded systems but in fact there is no an official supported platform. This SDK has been supported by their community, specially there is one repository on GitHub supported by Sony that provides documentation and Yocto recipes to support Flutter on embedded Linux. To understand the reason to differentiate between Flutter for Linux Desktop with official support and to create a specific Flutter support for embedded Linux is important to describe the basics of Flutter architecture. Based on the Flutter documentation the system is designed using layers that can be illustrated as follows:   Source: Flutter architectural overview | Flutter We can see as a top level “Framework” which is a high-level layer that includes widgets, tools and libraries that are in contact with developers. Below “Framework,” the layer “Engine” is responsible of drawing the widgets specified in the previous layer and provides the connection between high-level and low-level code. This layer is mostly written in C++ for this reason Flutter can achieve high performance running applications. 
Specifically for graphics rendering Flutter implements Impeller for iOS and Skia for the rest of platforms. The bottom layer is “Embedder” which is specific for each target and operating system this layer allows Flutter application to run as a native app providing the access to interact with different services managed by the operating systems such as input, rendering surfaces and accessibility. This layer for Linux Desktop uses GTK/GDK and X11 as backend that is highly dependent of unnecessary libraries and expensive for embedded systems which have constrained resources for computation and memory. The work around founded by Sony’s Flutter for Embedded Linux repository is to change this backend using a widely implemented backend for embedded systems Wayland. The following image illustrates the difference between Flutter for Linux Desktop and Flutter for Embedded Linux.   Source: What's the difference between Linux desktop and Embedded Linux · sony/flutter-embedded-linux Wiki · GitHub   Source: What's the difference between Linux desktop and Embedded Linux · sony/flutter-embedded-linux Wiki · GitHub Here is the link to the mentioned repository: GitHub - sony/flutter-elinux: Flutter tools for embedded Linux (eLinux) Finally, I would like to encourage you to read the official Flutter documentation and consider this tool as a great option compared to widely used tools on embedded devices such as Qt or Chromium. Also, please have a look to a great article written by Payam Zahedi delving into the implementation of Flutter for Embedded Linux measuring performance and giving conclusions about the usage of Flutter in embedded systems. (Flutter on Embedded Devices. Learn how to run Flutter on embedded… | by Payam Zahedi | Snapp Embedded | Medium).    
View full article
GUI Guider version: 1.6.0
LVGL version: v8.3.5
Host software requirements: Ubuntu 20.04, Ubuntu 22.04 or Debian 12
Hardware requirements: Evaluation Kit for the i.MX 93 Applications Processor. (i.MX 93 Evaluation Kit | NXP Semiconductors)

In this guide we will use the IMX-MIPI-HDMI accessory board to connect the i.MX 93 to an HDMI monitor. (IMX-MIPI-HDMI Product Information|NXP) This board is usually provided with the i.MX 8M Mini and the i.MX 8M Nano.

Steps:
1. Copy your project from the folder GUI-Guider-Projects to your Linux PC.
2. Build an image for the i.MX 93 using the Yocto Project.
   a. Based on the i.MX Yocto Project User's Guide, set up the directories and download the repo:
$ mkdir imx-bsp-6.1.1-1.0.0
$ cd imx-bsp-6.1.1-1.0.0
$ repo init -u https://github.com/nxp-imx/imx-manifest -b imx-linux-langdale -m imx-6.1.1-1.0.0.xml
$ repo sync
Use the distro fsl-imx-xwayland, select the machine imx93evk, and use this command with a build folder name:
$ MACHINE=imx93evk DISTRO=fsl-imx-xwayland source ./imx-setup-release.sh -b bld-imx93evk
   b. Use the bitbake command to start the build process. Also, add -c populate_sdk to get the toolchain.
$ bitbake imx-image-multimedia -c populate_sdk
   c. Install the Yocto toolchain located in <build-folder>/tmp/deploy/sdk/.
$ sudo sh ./fsl-imx-xwayland-glibc-x86_64-imx-image-multimedia-armv8a-imx93evk-toolchain-6.1-langdale.sh
   d. Install the ninja utility on the build host.
$ sudo apt install ninja-build
   e. For Ubuntu 20.04 and Ubuntu 22.04, copy the lv_conf.h file from lvgl-simulator to lvgl.
$ cp lvgl-simulator/lv_conf.h lvgl/
   f. Change the interpreter in build.sh from #!/bin/sh to #!/bin/bash. This is an important step!
   g. Then, enter the linux folder and use the following commands to make build.sh executable:
$ dos2unix build.sh
$ chmod +x build.sh
   h. Execute build.sh:
$ ./build.sh
   i. Copy the binary to the i.MX 93 using a USB drive or scp (see the example at the end of this article).
3. On the target i.MX 93, follow these steps.
   a. In U-Boot, use fatls <interface> <device[:partition]> (here device 0, partition 1) to list the device tree files:
=> fatls mmc 0:1
   b. Select imx93-11x11-evk-rm67199.dtb and use the command editenv fdtfile:
=> editenv fdtfile
Output example
edit: imx93-11x11-evk-rm67199.dtb
   c. In the edit command line, enter the selected device tree (.dtb).
   d. Use the saveenv command to save the environment and continue with the boot process.
   e. Finally, run the GUI application:
$ ./gui_guider&

I hope this article will be helpful. Best regards, Brian.
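For step i, if the binary is copied over the network instead of a USB drive, scp can be used from the host PC. This is just a generic example: the gui_guider binary name comes from the build above, while the board IP address and destination folder are placeholders you should adapt:
$ scp ./gui_guider root@<board-ip>:/home/root/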
View full article
vpuwrapper can handle VPU decoding and encoding. If your use case is simple, for example you just need to encode a YUV stream to H.264 or decode an H.264 stream to YUV, there is no need to use the more complex GStreamer or V4L2 frameworks; you can use vpuwrapper directly.

Platform: i.MX8MP + L5.4.70.2.3.0

Build procedure:
mkdir vpu
cd vpu
git clone https://github.com/nxp-imx/imx-vpuwrap
cd imx-vpuwrap/
git tag -l
git switch -c rel_imx_5.4.70_2.3.0
source ../../.././5.4.70.2.3.0/sdk/environment-setup-aarch64-poky-linux
make -f Makefile_8mp

Test on the i.MX8MP EVK board:
Please find the attached test logs for decode and encode.
If busChromaU in the YUV file is null, encoding will fail; please apply the patch "vpuwraper patch for L5.4.70.2.3.0.patch" to fix it.
If the YUV file is in interleaved format, you need to add the interleave parameter: -interleave 1
./test_enc_arm_elinux -i test.yuv -o aaa.h264 -f 2 -w 176 -h 96 -interleave 1

Thanks,
Lambert
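As an optional check of the encoded output (this is not part of the original post; it assumes a BSP image with GStreamer and the NXP vpudec plugin on the board), the generated stream can be decoded and displayed with:
gst-launch-1.0 filesrc location=aaa.h264 ! h264parse ! vpudec ! waylandsink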
View full article
In this article, I will explain how to set up the iMX8M Plus to use the 4K Dart BCON Basler Camera module. Requirements: Evaluation Kit for the i.MX 8M Plus Applications Processor. (i.MX 8M Plus Evaluation Kit | NXP Semiconductors) Basler Camera for i.MX 8M Plus (4K dart BCON for MIPI camera module for i.MX 8M Plus | NXP Semiconductors). Embedded Linux for i.MX Applications Processors (Embedded Linux for i.MX Applications Processors | NXP Semiconductors) (For this example we will use BSP version Linux 5.15.71_2.2.0) Serial Console Emulator Basler Camera Specifications and Manuals: Basler Camera Specifications at this link: Embedded Vision Kits daA3840-30mc-IMX8MP-EVK - Embedded Vision Kits (baslerweb.com). Basler Manual to identify and setting up the hardware at this link: daA3840-30mc-IMX8MP-EVK | Basler Product Documentation (baslerweb.com) Basler Camera Module out-of-box with i.MX 8M Plus Applications Processor. (Video: Basler Camera Module out-of-box with i.MX 8M Plus Applications Processor | NXP Semiconductors) Steps After setting up the hardware we will need to turn on the iMX8M Plus and follow these steps: 1. Stop the boot process on Uboot by pressing any key. 2. Use the following command to list interfaces. => mmc list Output example => FSL_SDHC: 1 (SD) => FSL_SDHC: 2 The above command will show you the device number in this example for SD, the device number is 1. 3. Then use fatls <interface> <device[:partition]> [<directory>] fatls mmc 1:1 (Device 1 : Partition 1) With this command, we will be able to list device tree files. => fatls mmc 1:1 4. Select imx8mp-evk-basler.dtb or imx8mp-evk-dual-basler.dtb and use the command editenv fdtfile.  => editenv fdtfile Output example edit: imx8mp-evk-basler.dtb 5. In edit command line put the selected device tree (*.dtb). 6. Use saveenv command to save environment and continue with the boot process. 7. Using the terminal and go to /opt/imx8-isp/bin and execute the script run.sh. $ ./run.sh -c basler_1080p60 -lm 8. Use the command gst-device-monitor-1.0 to list devices. Here you will find the path to the camera device. $ gst-device-monitor-1.0 Output example Device found: name : VIV class : Video/Source caps : video/x-raw, format=YUY2, width=[ 176, 4096, 16 ], height=[ 144, 3072, 8 ], pixel-aspect-ratio=1/1, framerate={ (fraction)30/1, (fraction)29/1, (fraction)28/1, (fraction)27/1, (fraction)26/1, (fraction)25/1, (fraction)24/1, (fraction)23/1, (fraction)22/1, (fraction)21/1, (fraction)20/1, (fraction)19/1, (fraction)18/1, (fraction)17/1, (fraction)16/1, (fraction)15/1, (fraction)14/1, (fraction)13/1, (fraction)12/1, (fraction)11/1, (fraction)10/1, (fraction)9/1, (fraction)8/1, (fraction)7/1, (fraction)6/1, (fraction)5/1, (fraction)4/1, (fraction)3/1, (fraction)2/1, (fraction)1/1 } ... properties: udev-probed = true device.bus_path = platform-vvcam-video.0 sysfs.path = /sys/devices/platform/vvcam-video.0/video4linux/video2 device.subsystem = video4linux device.product.name = VIV device.capabilities = :capture: device.api = v4l2 device.path = /dev/video2 v4l2.device.driver = viv_v4l2_device v4l2.device.card = VIV v4l2.device.bus_info = platform:viv0 v4l2.device.version = 393473 (0x00060101) v4l2.device.capabilities = 2216693761 (0x84201001) v4l2.device.device_caps = 69206017 (0x04200001) gst-launch-1.0 v4l2src device=/dev/video2 ! ... 9. Finally, use gstreamer to verify proper operation. (With this gstreamer pipeline you will see a new window with the camera output. 
Then, just rotate the lens to acquire the correct focus)
$ gst-launch-1.0 -v v4l2src device=/dev/video2 ! "video/x-raw,format=YUY2,width=1920,height=1080" ! queue ! imxvideoconvert_g2d ! waylandsink
Basic description of the GStreamer pipeline
gst-launch-1.0 -v: The -v option enables verbose mode to get detailed information about the process.
v4l2src device=/dev/video2: Selects the input device; in this case the camera is on path /dev/video2, as reported by gst-device-monitor-1.0 above.
"video/x-raw,format=YUY2,width=1920,height=1080": Format received from the camera.
queue: This element is a buffer between the camera capture process and the following image processing; it decouples the two processes and prevents blocking when they run at different speeds, in other words, when a process A is faster than a process B.
imxvideoconvert_g2d: This proprietary plugin uses hardware acceleration to perform rotation, scaling, and color space conversion on video frames.
waylandsink: This element creates its own window and renders the previously processed frames.
10. Result
I hope this article will be helpful. Best regards, Brian.
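If you also want to capture the Basler stream to a file instead of only displaying it, a recording pipeline along the following lines can be used. This is a sketch that assumes the camera enumerates as /dev/video2 as in the output above and that the BSP's vpuenc_h264 hardware encoder plugin is installed; the output file name is arbitrary:
$ gst-launch-1.0 -e v4l2src device=/dev/video2 num-buffers=600 ! "video/x-raw,format=YUY2,width=1920,height=1080" ! queue ! videoconvert ! video/x-raw,format=NV12 ! vpuenc_h264 ! h264parse ! mp4mux ! filesink location=basler_test.mp4
The -e option makes gst-launch-1.0 send an EOS when interrupted so the MP4 file is finalized correctly, and num-buffers=600 limits the capture to roughly 20 seconds at 30 fps.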
View full article
Hello everyone, We have recently migrated our source code from CAF (Codeaurora) to GitHub, so the old NXP i.MX recipes/manifests that point to Codeaurora will eventually be modified to point correctly to GitHub, to avoid any issues while fetching with Yocto. Also, all repo init commands for old releases should be changed from:
$ repo init -u https://source.codeaurora.org/external/imx/imx-manifest -b <branch name> [ -m <release manifest>]
To:
$ repo init -u https://github.com/nxp-imx/imx-manifest -b <branch name> [ -m <release manifest>]
This also applies to all source code that was stored in Codeaurora; the new repository for all NXP i.MX source code is: https://github.com/nxp-imx For any issues regarding this, please create a community thread and/or a support ticket. Regards, Aldo.
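As a concrete illustration, a filled-in command for one of the GitHub-hosted releases looks like the following; the branch and manifest names are examples only, so check the release notes or the i.MX Yocto Project User's Guide for the exact ones matching your BSP:
$ repo init -u https://github.com/nxp-imx/imx-manifest -b imx-linux-kirkstone -m imx-5.15.71-2.2.0.xml
$ repo sync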
View full article
The i.MX8 series contains an internal HiFi4 DSP, targeted at audio-related signal processing. SOF (Sound Open Firmware) is an open-source audio DSP firmware, driver and SDK. This document introduces the basic theory of IIR/FIR digital filters, how to design IIR/FIR digital filters, and the equalizer filter implementation in SOF. After that, the document also describes how the HiFi4 DSP MAC engine accelerates the EQ filter calculation.
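For context, equalizer filters of this kind are typically built from second-order IIR (biquad) sections; this is standard filter theory rather than anything specific to the attached document. In Direct Form I, each section computes per sample:
y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]
The b and a coefficients set the response of each EQ band, a complete equalizer cascades several such sections, and the multiply-accumulate operations per coefficient are exactly what a DSP MAC engine is designed to accelerate.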
View full article
Hello, Jorge here. In this post I will explain how to configure, record and play audio using an i.MX 8MIC-RPI-MX8 board.
Requirements:
i.MX 8M Mini EVK
Linux Binary Demo Files - i.MX 8M Mini EVK (L5.15.52_2.1.0)
i.MX 8MIC-RPI-MX8 Board
Serial console emulator (Tera Term, Putty, etc.)
Headphones/speakers
The 8MIC-RPI-MX8 accessory board is designed for voice-enabled application prototyping and development on the i.MX 8M family. The board plugs directly into the 40-pin expansion connector on the i.MX 8M Mini and Nano EVKs. Some features of this board are:
8 PDM microphones
8 monochrome LEDs
4 multi-color LEDs
2 status LEDs
4 pushbuttons
Microphone mute switch
Microphone geometry switch
Connecting the i.MX 8MIC-RPI-MX8 Board. The i.MX 8MIC-RPI-MX8 board has a 40-pin expansion connector that plugs directly into the EVK board. Ensure that pin 1 of the 8MIC-RPI-MX8 is aligned with pin 1 on the EVK J1001, as shown in the next figure:
Selecting the device tree on the board. Once the pre-compiled image is flashed on the board (Flashing Linux BSP using UUU) and the 8MIC-RPI-MX8 is connected, it is necessary to select the correct device tree for the 8MIC board. In U-Boot, check the available .dtb files on the BSP using the following command:
u-boot=> fatls mmc 2:1
And you will get the corresponding list of .dtb files. In this case we are working with an i.MX 8M Mini EVK and the corresponding .dtb file is: imx8mm-evk-8mic-revE.dtb
To select it you need to set the environment variable and save it with:
u-boot=> setenv fdtfile imx8mm-evk-8mic-revE.dtb
u-boot=> saveenv
Double-check it using:
u-boot=> printenv fdtfile
Now it is time to boot Linux using the following command:
u-boot=> boot
Recording audio with the i.MX 8MIC-RPI-MX8 Board. The Advanced Linux Sound Architecture (ALSA) provides audio and MIDI functionality to the Linux operating system. ALSA has the following significant features:
Efficient support for all types of audio interfaces, from consumer sound cards to professional multichannel audio interfaces.
Fully modularized sound drivers.
SMP and thread-safe design.
User space library (alsa-lib) to simplify application programming and provide higher level functionality.
Support for the older Open Sound System (OSS) API, providing binary compatibility for most OSS programs.
Once we are in Linux, we can check the audio codecs detected on the board using:
arecord -l
Now, to record audio we need to use the ALSA arecord command on i.MX8 boards; there are different options that you can check at the next link. In this case we are going to use the following:
arecord -D hw:imxaudiomicfil -c8 -f s16_le -r48000 -d10 sample.wav
-D: selects the device.
-c: selects the number of channels for the recording.
-f: selects the format.
-r: selects the sample rate.
-d: sets the recording duration in seconds.
sample.wav: Is the name of the resulting audio file.
Running the last command, we start to record audio. It is time to make some noise and record it!
Playing audio from i.MX8 boards. Now it is time to connect our headphones or speakers to the jack. Also, as with the arecord command, you can check the devices available to play audio from the board using the following command:
aplay -l
And you will get all the codecs to play audio. To play our recordings we need to use the ALSA aplay command; it is important to select the correct audio codec to hear the audio from the jack on the board:
aplay -Dplughw:3,0 sample.wav
-D: selects the device.
sample.wav: The name of the audio file to play.
Hope this will be helpful for people who want to record audio using the PDM microphones and play audio from i.MX8 boards. Best regards.
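If you prefer GStreamer over the plain ALSA utilities, an equivalent capture can be built with alsasrc. This is a sketch assuming the same card naming (hw:imxaudiomicfil) reported by arecord -l above and the standard wavenc element from the BSP's GStreamer packages:
gst-launch-1.0 -e alsasrc device=hw:imxaudiomicfil ! "audio/x-raw,format=S16LE,rate=48000,channels=8,channel-mask=(bitmask)0x0" ! wavenc ! filesink location=sample_gst.wav
Stop the pipeline with CTRL + C; the -e option makes sure the WAV header is finalized before the pipeline exits.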
View full article
The i.MX8 VPU hardware decoder supports the video codecs below:
H.265 HEVC Main Profile 4Kp60 Level 5.1
H.264 AVC Constrained Baseline, Main and High profile
H.264 MVC
WMV9 / VC-1 Simple, Main and Advanced Profile
MPEG 1 and 2 Main Profile at High Level
AVS Jizhun Profile (JP)
MJPEG4.2 ASP, H.263, Sorenson Spark
Divx 3.11, with Global Motion Compensation (GMC)
ON2/Google VP6/VP8
RealVideo 8/9/10
JPEG and MJPEG A/B Baseline
The i.MX8 VPU Linux driver is implemented based on the V4L2 standard. Besides software video decoding, Chromium also supports hardware video decoders (VideoDecodeAccelerator); there are several kinds of VideoDecodeAccelerator, and one of them is V4L2VDA. Please note that V4L2VDA uses the V4L2 API, so it is possible to change V4L2VDA to enable Chromium hardware video playback on i.MX8.
This doc shares patches to add Chromium video decode acceleration using the i.MX8QM/i.MX8QXP VPU. It supports Chromium H.264, H.265 and VP8 hardware video decode. H.264 and H.265 need to use an mp4 container; VP8 uses a webm container.
HW: i.MX8QM/i.MX8QXP MEK board, 1080P HDMI display, mouse, keyboard
SW: i.MX8 5.10.72_2.2.2 Yocto BSP release (which includes Chromium 91.0), and the patches in this doc
Patch description:
imx8-5.10.72-vpudrv-update.diff: updates the i.MX8 5.10.72_2.2.2 kernel VPU driver to https://source.codeaurora.org/external/imx/linux-imx/commit/drivers/mxc/vpu_malone?h=lf-5.15.y&id=fa7c67e2c9ed4fb8392fa258f931d6996339a17a
chromium-ozone-wayland_91.0.4472.114.bb.diff: changes meta-browser/meta-chromium/recipes-browser/chromium/chromium-ozone-wayland_91.0.4472.114.bb to add some compile flags, etc.
5.10.72-merge.patch: this patch changes the Chromium source code to add video decode acceleration using the i.MX8 VPU.
Build steps:
1> Download the i.MX8 5.10.72_2.2.2 Yocto release from nxp.com
2> Apply chromium-ozone-wayland_91.0.4472.114.bb.diff to change meta-browser/meta-chromium/recipes-browser/chromium/chromium-ozone-wayland_91.0.4472.114.bb
3> Put 5.10.72-merge.patch into the folder path_of_yocto-5.10.72-2.2.2/sources/meta-browser/meta-chromium/recipes-browser/chromium/files/
4> Apply imx8-5.10.72-vpudrv-update.diff to the i.MX8 5.10.72_2.2.2 kernel
5> Under the Yocto image build folder, add CORE_IMAGE_EXTRA_INSTALL += "chromium-ozone-wayland" to the file path_of_yocto-5.10.72-2.2.2/folder-of-bld/conf/local.conf
6> Run bitbake to build the rootfs image
Test steps:
After the system boots up, put some video clips under /home/root/video, then run the command below (do not run chromium without any parameters, as that will start Chromium with some other settings; you can check /usr/lib/chromium/chromium-wrapper):
"/usr/lib/chromium/chromium-bin   --no-sandbox --ozone-platform=wayland --enable-features=VaapiVideoDecoder  --enable-accelerated-video-decode   --enable-clear-hevc-for-testing --ignore-gpu-blacklist --window-size=1920,1180  /home/root/video"
Then use the mouse to click a video clip and playback will start.
Reference:
https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors:IMX8-SERIES
https://www.nxp.com/design/software/embedded-software/i-mx-software/embedded-linux-for-i-mx-applications-processors:IMXLINUX
https://www.chromium.org/audio-video/#:~:text=codec%20and%20container%20support
https://github.com/igel-oss/meta-browser-hwdecode/blob/master/recipes-chromium/chromium/files/0001-Add-support-for-V4L2VDA-on-Linux.patch
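Before debugging inside Chromium itself, it can help to confirm that the VPU decode path works from plain user space. A minimal GStreamer check, assuming a test clip at /home/root/video/test.mp4 (the file name is illustrative) and the BSP's vpudec plugin, could look like this:
gst-launch-1.0 filesrc location=/home/root/video/test.mp4 ! qtdemux ! h264parse ! vpudec ! waylandsink
If this plays correctly, the kernel VPU driver and firmware are in good shape and any remaining issue is on the Chromium/V4L2VDA side.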
View full article
On behalf of Gopise Yuan. A collection of several GST debugging tips and know-how.
When you need to play onto a DRM layer/plane directly without going through the compositor, kmssink should be a good choice:
// kmssink, with scale and adjusted alpha property (opaque) and zpos (this requires kmssink>=1.16):
gst-launch-1.0 filesrc location=/media/AVC-AAC-720P-3M_Alan.mov ! decodebin ! imxvideoconvert_g2d ! kmssink plane-id=37 render-rectangle="<100,100,720,480>" can-scale=false plane-properties=s,alpha=65535,zpos=2
When using playbin, you can still customize the pipeline besides the sink plugin, e.g. add a converter plugin:
// Playbin with additional customization on the converter before the sink:
gst-launch-1.0 playbin uri=file:///mnt/MP4_H264_AAC_1920x1080.mp4 video-sink="imxvideoconvert_g2d ! video/x-raw,format=BGRA,width=1920,height=1080 ! kmssink plane-id=44"
GST can generate a pipeline graph for analyzing the pipeline in an intuitive manner:
// Generate pipeline graph:
1. Export GST_DEBUG_DUMP_DOT_DIR=<dump-folder>, GST_DEBUG=4
2. Run the pipeline with gst-launch or others.
3. Copy all dump files (.dot) from <dump-folder>. Note: one dump file will be created for each state transition. Normally, what we need is READY_PAUSED or PAUSED_READY, after which the pipeline has been set up.
4. Convert the .dot file to PDF with Graphviz: dot -Tpdf 0.00.03.685443250-gst-launch.PAUSED_READY.dot > pipeline_PAUSED_READY.pdf
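As a concrete end-to-end run of the pipeline-graph tip above (the dump folder and the clip name reused from the first tip are just placeholders for illustration):
export GST_DEBUG_DUMP_DOT_DIR=/tmp/dots
export GST_DEBUG=4
mkdir -p /tmp/dots
gst-launch-1.0 filesrc location=/media/AVC-AAC-720P-3M_Alan.mov ! decodebin ! imxvideoconvert_g2d ! kmssink plane-id=37
# one .dot file per state transition will appear in /tmp/dots; convert the one of interest:
dot -Tpdf /tmp/dots/*PAUSED_READY.dot > pipeline_PAUSED_READY.pdf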
View full article
1. Intro
This document contains instructions to run the SAI low power audio demo on the i.MX 8M Plus EVK. Here, RPMSG allows audio to be passed from the A53 cluster running Linux to the M7 core. The latter controls the on-board WM8960 audio codec, which is connected to a 3.5 mm audio jack that allows us to play music using headphones. I will show the necessary steps to make the demo work and will add some GStreamer examples to demonstrate the demo's capabilities.
TBD: update this with a nice diagram that depicts the A53 and M7 RPMSG channel.
2. Requirements
Hardware
i.MX 8M Plus EVK
Headphones with 3.5 mm audio jack
Type-C power supply for i.MX 8M Plus EVK
Micro USB to USB adapter cable
Software
A recent prebuilt Linux BSP image from NXP.com (we tested this on the 5.15.35 and 5.15.5 releases)
Windows 10 or Ubuntu 20.04 workstation
MCUXpresso SDK for i.MX 8M Plus (available from: Welcome | MCUXpresso SDK Builder (nxp.com))
3. Reference documentation for this example
MCUXpresso SDK
[1] Getting Started with MCUXpresso SDK for EVK-MIMX8MP. Available within the MCUXpresso SDK package: \{INSTALL PATH}\SDK_X_X_X_EVK-MIMX8MP\docs
[2] SAI low power audio README file. Contains instructions for the SAI Low Power Audio Demo. Available within the MCUXpresso SDK package: \{INSTALL PATH}\SDK_X_X_X_EVK-MIMX8MP\boards\evkmimx8mp\demo_apps\sai_low_power_audio
4. Downloading a pre-built Linux BSP image for the i.MX 8M Plus
I will make use of the prebuilt Linux image for the i.MX 8M Plus EVK to demonstrate that the demo works. At the time of writing, I used the 5.15.32 release, although there are older releases like 5.10.5 that I tested and proved to work with no issues. This SAI Low Power Audio Demo should also work for other processors in the i.MX 8M family, although specific instructions (e.g. the load address for the M-core binary) might require some adaptation. For the M-core load address, please refer to the specific MCUXpresso SDK documentation for each processor. The prebuilt Linux image (5.15.32) for the i.MX 8M Plus EVK can be downloaded from here: https://www.nxp.com/webapp/Download?colCode=L5.15.32_2.0.0_MX8MP&appType=license You can download other releases from here: Embedded Linux for i.MX Applications Processors | NXP Semiconductors. Select a version and a board and select download.
5. Flashing the BSP image
If you are using an Ubuntu 20.04 workstation, I recommend flashing the image using dd. For this, you can refer to the i.MX Linux User's Guide, Section 4.3.2 Copying the full SD card image - https://www.nxp.com/docs/en/user-guide/IMX_LINUX_USERS_GUIDE.pdf
sudo dd if=.wic of=/dev/sdx bs=1M && sync
NOTE: when using dd, ALWAYS double-check the of= device that you are about to write to. Messing up with another location or partition will harm your system.
If you are following this document on a Windows machine: you can use the Universal Update Utility (UUU) to flash your image to either the board's eMMC or SD card. The document named UUU.pdf serves as your reference guide for further instructions and flashing examples.
It is available along with the UUU binary here: https://github.com/NXPmicro/mfgtools/releases
Two examples are shown below for your convenience:
SD card flash: uuu -b sd_all bootloader rootfs.sdcard.bz2
eMMC flash: uuu -b emmc_all bootloader rootfs.sdcard.bz2
uuu uuu.auto
NOTE: UUU is also compatible with Ubuntu.
NOTE: there are other engineers who like to use BalenaEtcher for flashing their BSP images. I have tested it and it works on both Ubuntu and Windows 10 machines.
6. Preparing the BSP and booting up the M7 core using U-Boot
I am writing this based on the instructions contained in the README file for the low power audio example [2]. Instructions ready to copy and paste follow:
Instruct U-Boot to pass the rpmsg device tree to the kernel to enable communication between the A53 cluster and the M7:
u-boot=>setenv fdtfile imx8mp-evk-rpmsg.dtb
u-boot=>saveenv
Load the M7 example:
u-boot=>setenv mmcargs 'setenv bootargs ${jh_clk} console=${console} root=${mmcroot} snd_pcm.max_alloc_per_card=134217728'
u-boot=>saveenv
Now, we need to load the M7 with the demo. Refer to [1] for further information. If running the BSP on an SD card, make sure the example binary is listed on the boot partition as follows:
fatls mmc 1:1
You shall see something similar to this:
imx8mp_m7_TCM_sai_low_power_audio.bin
Open the serial terminal emulator for the M7. Out of the four ports listed when we plug the i.MX 8M Plus serial debug cable into the PC, the M7 is typically the last one listed.
All the serial ports available to the workstation when the i.MX 8M Plus serial cable is connected to it.
NOTE: you may need to install additional COM drivers if you are running on Windows.
I like doing the previous step so I can see the result of the next commands issued in U-Boot to load the M7 image.
fatload mmc 1:1 0x48000000 imx8mp_m7_TCM_sai_low_power_audio.bin; cp.b 0x48000000 0x7e0000 20000; bootaux 0x7e0000
Here is a screenshot that shows how U-Boot's response should look: U-Boot response when loading the SAI low power audio example to the Cortex-M7
That should have prompted the following message on the M7 terminal: M7-core is up!
Now, let's move to user space!
u-boot=> boot
7. Testing the example using a simple GStreamer pipeline
As soon as the OS finishes booting, we can see that the M7 terminal prompts the following: M7 is now in STOP mode; waiting for some audio to beat the room!
Confirm that the WM8960 is listed as an audio card as follows:
cat /proc/asound/cards
(listing of the available audio cards; the WM8960 should be present). Make note of the list. The wm8960 is listed as the third sound card.
This is where I like to differ a bit from [2], and I suggest a quicker test in case you do not have an audio file ready. We simply use GStreamer to play an audio test source. Please make sure to plug your headphones into the board's 3.5 mm jack first.
The following GStreamer pipeline uses the WM8960 as an audio sink:
gst-launch-1.0 audiotestsrc ! alsasink device=hw:3
NOTE: please be cautious and do not put the headphones directly on your head at the first attempt. The sound can be too loud for some people.
This is what you should see on the M7 side. Stop the GStreamer pipeline by issuing CTRL + C; the M7 will warn you about that.
NOTE: you can use the aplay command to play audio as shown in [2].
However, I consider using a test source much quicker and more flexible for a quick test.
8. Additional information
Feel free to go ahead and tweak the GStreamer pipeline to change the audiotestsrc properties. This command will show you the available options:
gst-inspect-1.0 audiotestsrc
NOTE: you can navigate through the displayed list using the "d" key. Press "q" to quit.
For example, I am reproducing sound using a different setup based on the list above:
gst-launch-1.0 audiotestsrc freq=4000 volume=0.8 wave=8 ! alsasink device=hw:3
9. Errata and future updates
TBD:
Add an example on how to define the default audio card and play the audio either using gst-play or building the pipeline using filesrc
Comment on the limitations of the M7 core regarding sample rate and audio formats
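In the meantime, here is a rough sketch of one way to approach the first TBD item: making the WM8960 the default ALSA card and playing a file from the A53 side. The card index 3 matches the listing on my setup and may differ on yours, and the file name is only an example:
Create /etc/asound.conf with:
defaults.pcm.card 3
defaults.ctl.card 3
Then play a WAV file through the codec, either via the default card or by pointing at it explicitly:
gst-launch-1.0 filesrc location=/home/root/test.wav ! wavparse ! audioconvert ! audioresample ! alsasink device=hw:3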
View full article
This is based on L5.10.35 BSP where you have to install QT static build: Qt 5.15 static build: Assuming your sysroot is at "/sysroot-cross" and your toolchain is at "/Toolchain" your qt-source is at /Qt-5.15 PATH=/sysroot-cross/bin:/sysroot-cross/sbin:/Toolchain/bin mkdir /Qt-5.15/mkspecs/qws/linux-imx6-g++ create in this dir the textfile "qmake.conf" with this content: ####################### snip qmake.conf ############################## include(../../common/linux.conf) include(../../common/qws.conf) # modifications to g++.conf QMAKE_CC                = arm-linux-gnueabi-gcc QMAKE_CFLAGS            = -pipe -isystem /sysroot-cross/include -isystem /sysroot-cross/usr/include QMAKE_CXX               = arm-linux-gnueabi-g++ QMAKE_CXXFLAGS          = -pipe -isystem /sysroot-cross/include -isystem /sysroot-cross/usr/include QMAKE_INCDIR            = /sysroot-cross/include /sysroot-cross/usr/include QMAKE_LIBDIR            = /sysroot-cross/lib /sysroot-cross/usr/lib QMAKE_LINK              = arm-linux-gnueabi-g++ QMAKE_LINK_SHLIB        = arm-linux-gnueabi-g++ QMAKE_LFLAGS            = -L/sysroot-cross/lib -L/sysroot-cross/usr/lib -Wl,-rpath-link -Wl,/sysroot-cross/lib QMAKE_LFLAGS           += -Wl,-rpath-link -Wl,/sysroot-cross/usr/lib #Opengl QMAKE_INCDIR_OPENGL = /Vivante/include QMAKE_INCDIR_OPENGL += /Vivante/include/GL QMAKE_INCDIR_OPENGL += /Vivante/include/EGL QMAKE_INCDIR_OPENGL += /Vivante/include/GLES2 QMAKE_LIBDIR_OPENGL = /Vivante/lib QMAKE_INCDIR_OPENGL_ES1 = $$QMAKE_INCDIR_OPENGL QMAKE_LIBDIR_OPENGL_ES1 = $$QMAKE_LIBDIR_OPENGL QMAKE_INCDIR_OPENGL_ES1CL = $$QMAKE_INCDIR_OPENGL QMAKE_LIBDIR_OPENGL_ES1CL = $$QMAKE_LIBDIR_OPENGL QMAKE_INCDIR_OPENGL_ES2 = /Vivante/include QMAKE_INCDIR_OPENGL_ES2 += /Vivante/include/EGL QMAKE_INCDIR_OPENGL_ES2 += /Vivante/include/GLES2 QMAKE_LIBDIR_OPENGL_ES2 = $$QMAKE_LIBDIR_OPENGL QMAKE_INCDIR_EGL = $$QMAKE_INCDIR_OPENGL_ES2 QMAKE_LIBDIR_EGL = $$QMAKE_LIBDIR_OPENGL QMAKE_LIBS_EGL = -lEGL -lGAL -lGLESv2 -lGLES_CM QMAKE_LIBS_OPENGL_ES2 = -lEGL -lGAL -lGLESv2 -lGLES_CM QMAKE_LIBS_OPENGL = -lEGL -lGAL -lGLESv2 -lGLES_CM QMAKE_LIBS_OPENGL_QT = -lEGL -lGAL -lGLESv2 -lGLES_CM QMAKE_LIBS_OPENGL_ES1 = QMAKE_LIBS_OPENGL_ES1CL = # modifications to linux.conf QMAKE_AR                = arm-linux-gnueabi-ar cqs QMAKE_OBJCOPY           = arm-linux-gnueabi-objcopy QMAKE_STRIP             = arm-linux-gnueabi-strip QMAKE_CFLAGS_RELEASE   = -pipe -isystem /sysroot-cross/include -isystem /sysroot-cross/usr/include load(qt_config) ####################### snip qmake.conf ############################## create in the same dir the text file "qplatformdefs.h" ####################### snip qplatformdefs.h ############################## #include "../../linux-g++/qplatformdefs.h" ####################### snip qplatformdefs.h ############################## now goto dir /Qt-5.15 cd /Qt-5.15 call configure with ./configure -opensource -confirm-license -release -no-rpath -no-fast \     -no-sql-ibase -no-sql-mysql -no-sql-odbc -no-sql-psql -no-sql-sqlite2 \     -no-qt3support -no-mmx -no-3dnow -no-sse -no-sse2 -no-sse3 -no-ssse3 \     -no-sse4.1 -no-sse4.2 -no-avx -no-optimized-qmake -no-nis -no-cups -pch \     -reduce-relocations -force-pkg-config -prefix /usr -no-armfpa -make libs \     -nomake docs -little-endian -embedded armv6 -qt-decoration-styled \     -depths all -xplatform qws/linux-imx6-g++ -iconv -largefile -qt-gfx-linuxfb \     -qt-gfx-multiscreen -qt-mouse-pc -qt-mouse-linuxinput -qt-libpng \     -plugin-gfx-directfb -system-zlib -no-accessibility -no-gfx-transformed \     
-no-gfx-qvfb -no-gfx-vnc -no-kbd-tty -no-kbd-linuxinput -no-kbd-qvfb \     -no-mouse-linuxtp -no-mouse-tslib -no-mouse-qvfb -no-libmng -no-libtiff \     -no-gif -no-libjpeg -no-freetype -no-stl -no-glib -no-openssl -no-egl \     -no-xmlpatterns -no-exceptions -no-multimedia -no-audio-backend -no-phonon \     -no-phonon-backend -no-webkit -no-script -no-scripttools -no-svg -no-script \     -no-declarative -no-sql-sqlite -no-qdbus -no-opengl -static -nomake tools \     -nomake examples -nomake demos when configuring is finished call make after a looong time, when everything goes right, we have a staticly compiled Qt. DO NOT call "make install". We will install manually: copy from /Qt-5.15/bin the files moc, uic, rcc and qmake to somewhere in PATH, eg. /sysroot-cross/bin copy the contents of dir /Qt-5.15/mkspecs to /sysroot-cross/usr/mkspec copy the contents of dir /Qt-5.15/plugins to /sysroot-cross/usr/plugins copy the contents of dir /Qt-5.15/include to /sysroot-cross/usr/include copy the contents of dir /Qt-5.15/lib to /sysroot-cross/usr/lib Test application camtest: if you don't have/want directfb plugin remove from camtest.pro the lines LIBS += -L/sysroot-cross/usr/plugins/gfxdrivers QTPLUGIN += QDirectFBScreen and the lines from main.cpp #include <QtPlugin> Q_IMPORT_PLUGIN(qdirectfbscreen) generate makefile by typing /sysroot-cross/bin/qmake -spec /sysroot-cross/usr/mkspecs/qws/linux-imx6-g++ camtest.pro then make you should set and activate your framebuffers with this script ################# snip ################################ fbset -fb /dev/fb0 -g 1024 768 1024 2304 16 echo -n 0 > /sys/class/graphics/fb0/blank fbset -fb /dev/fb1 -g 1024 768 1024 1536 32 echo -n 0 > /sys/class/graphics/fb1/blank modprobe galcore modprobe uvcvideo modprobe mxc_v4l2_capture ################# snip ################################ if you use directfb then your /etc/directfbrc file should look like this: ######################## snip /etc/directfbrc ############# system=fbdev fbdev=/dev/fb1 mode=1024x768 depth=32 pixelformat=ARGB no-cursor window-surface-policy=systemonly ######################## snip /etc/directfbrc ############# to start the application with directfb: ./camtest -qws -display directfb without directfb using linuxfb: ./camtest -qws -display linuxfb:/dev/fb1 Notes about application: 1. The application shows 2 webcams in background-framebuffer (BG-FB). The foreground-framebuffer (FG-FB) shows the qt-gui. FG-FB is configured to be fully opaque and uses color-keying. On the BG-FB one cam is overlayed on the other cam using IPU. Optimization possibilities: the app copies the frames from the cams with memcpy. This wouldn't be necessary, when the kernel usb-webcam interface (uvc) would support V4L2_MEMORY_USERPTR method. through this way, you could pass the mapped IPU mmapped inbufs directly to v4l2 output buffers. If you get errors like NOSPC (-28) from uvc, this is a limitation of USB. My board is a MX6QSabre, where the two webcams are connected to the same usb-controller. With both webcams I had to limit the frame size to 320x250 and 160x120 at 25Hz. You might try higher res if you have other type of webcams (not usb). Have fun  
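To sanity-check that camtest really linked Qt statically rather than picking up shared Qt libraries from the sysroot, you can inspect the dynamic section of the resulting binary; this is a generic ELF check, not something specific to this guide:
arm-linux-gnueabi-readelf -d camtest | grep NEEDED
Only system and graphics libraries (libc, libm, pthread, and the Vivante/directfb libraries if enabled) should appear as NEEDED entries; no libQt* entries should be listed.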
View full article
This note shows how to use the open source gstreamer1.0-rtsp-server package on i.MX6QDS and i.MX8x to stream video files and cameras using the RTP protocol. Note that the i.MX 6ULL and i.MX 7 do not have a Video Processing Unit (VPU). The Real-time Transport Protocol (RTP) is a very common network protocol for delivering media over IP networks. On the board, you will need a GStreamer pipeline that encodes the raw video, adds the RTP payload, and sends it over a network sink. A generic pipeline would look as follows:
video source ! video encoder ! RTP payload ! network sink
Video source: often it is a camera, but it can be a video from a file or a test pattern, for example.
Video encoder: a video encoder such as H.264, H.265, VP8, JPEG and others.
RTP payload: an RTP payloader that matches the video encoder.
Network sink: a video sink that streams over the network, often via UDP.
Prerequisites: an i.MX6x or i.MX8x board with the L5.10.35 BSP installed, and a host PC with either GStreamer or VLC player installed.
Receiving an h.264/h.265 Encoded RTP Video Stream on a Host Machine Using GStreamer
GStreamer is a low-latency method for receiving RTP video. On your host machine, install GStreamer and run the following command:
$ gst-launch-1.0 -v udpsrc port=5000 caps = "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! decodebin ! videoconvert ! autovideosink sync=false
Using Host PC: VLC Player
Optionally, you can use VLC player to receive the RTP video on a PC. First, on your PC, create an sdp file with the following content:
stream.sdp
v=0
m=video 5000 RTP/AVP 96
c=IN IP4 127.0.0.1
a=rtpmap:96 H264/90000
After this, with the GStreamer pipeline on the device running, open this .sdp file with VLC Player on the host PC.
Sending an h.264 or h.265 Encoded RTP Video Stream
GStreamer provides a software h.264 encoding element named x264enc. Use this plugin if your board does not support h.264 encoding by hardware or if you want to use the same pipeline on different modules. Note that the video performance will be lower compared with the plugins with encoding accelerated by hardware.
# gst-launch-1.0 videotestsrc ! videoconvert ! x264enc ! rtph264pay config-interval=1 pt=96 ! udpsink host=<host-machine-ip> port=5000
Note: Replace <host-machine-ip> with the IP of the host machine. In all examples you can replace videotestsrc with the v4l2src element to collect a stream from a camera.
i.MX8X
# gst-launch-1.0 videotestsrc ! videoconvert ! v4l2h264enc ! rtph264pay config-interval=1 pt=96 ! udpsink host=<host-machine-ip> port=5000
i.MX 8M Mini Quad / 8M Plus
# gst-launch-1.0 videotestsrc ! videoconvert ! vpuenc_h264 ! rtph264pay config-interval=1 pt=96 ! udpsink host=<host-machine-ip> port=5000
i.MX6X
The i.MX6QDS does not support h.265, so h.264 can be used:
# gst-launch-1.0 videotestsrc ! videoconvert ! vpuenc_h264 ! rtph264pay config-interval=1 pt=96 ! udpsink host=<host-machine-ip> port=5000
Using Other Video Encoders
While examples of streaming video with other encoders are not provided, you may try it yourself. Use the gst-inspect tool to find the available encoders and RTP payloaders on the board:
# gst-inspect-1.0 | grep -e "encoder"
# gst-inspect-1.0 | grep -e "rtp" -e " payloader"
Then browse the results and replace the elements in the original pipelines. On the receiving end, you will have to use the corresponding depayloader and payload values. Inspect the payloader element to find the corresponding values.
For example:
# gst-inspect-1.0 rtph264pay
To install gstreamer1.0-rtsp-server in your Yocto image (this applies to BSPs other than L5.10.35 as well), please follow the steps below:
Enable the meta-multimedia layer. Add the following to your build/conf/bblayers.conf:
BBLAYERS += "${BSPDIR}/sources/meta-openembedded/meta-multimedia"
Include gstreamer1.0-rtsp-server in the image. Add the following to your build/conf/local.conf:
IMAGE_INSTALL_append += "gstreamer1.0-rtsp-server"
Run bitbake and mount your sdcard.
Copy the binaries: access the gstreamer1.0-rtsp-server examples folder:
$ cd /build/tmp/work/cortexa9hf-vfp-neon-poky-linux-gnueabi/gstreamer1.0-rtsp-server/$version/build/examples/.libs
Copy test-uri and test-launch to the rootfs /usr/bin folder:
$ sudo cp test-uri test-launch /media/USER/ROOTFS_PATH/usr/bin
Be sure that the IPs are correctly set:
SERVER: => ifconfig eth0 $SERVERIP
CLIENT: => ifconfig eth0 $CLIENTIP
Video file example
SERVER: => test-uri file:///home/root/video_file.mp4
CLIENT: => gst-launch-1.0 playbin uri=rtsp://$SERVERIP:8554/test
You can try to improve the framerate performance using manual pipelines on the CLIENT with the rtspsrc plugin instead of playbin. For example:
=> gst-launch-1.0 rtspsrc location=rtsp://$SERVERIP:8554/test caps = 'application/x-rtp'  ! queue max-size-buffers=0 ! rtpjitterbuffer latency=100 ! queue max-size-buffers=0 ! rtph264depay ! queue max-size-buffers=0 ! decodebin ! queue max-size-buffers=0 ! imxv4l2sink sync=false
Camera example
SERVER: => test-launch "( imxv4l2src device=/dev/video0 ! capsfilter caps='video/x-raw, width=1280, height=720, framerate=30/1, mapping=/test' ! vpuenc_h264 ! rtph264pay name=pay0 pt=96 )"
CLIENT: => gst-launch-1.0 rtspsrc location=rtsp://$SERVERIP:8554/test ! decodebin ! autovideosink sync=false
The rtspsrc element has two properties that are very useful for RTSP streaming:
Latency: useful for low-latency RTSP stream playback (default 200 ms);
Buffer-mode: used to control the buffering mode. Slave mode is recommended for low-latency communications.
Using these properties, the example below gets 29 FPS without a sync=false property in the sink plugin. The key achievement here is the fact that there are no dropped frames:
=> gst-launch-1.0 rtspsrc location=rtsp://$SERVERIP:8554/test latency=100 buffer-mode=slave ! queue max-size-buffers=0 ! rtph264depay ! vpudec ! imxv4l2sink
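As an alternative client, VLC on the host PC can also open the RTSP stream directly; this assumes a stock VLC installation and the same server address and mount point used above:
vlc rtsp://$SERVERIP:8554/test
If you need lower latency in VLC, the network cache can be reduced, for example: vlc --network-caching=100 rtsp://$SERVERIP:8554/test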
View full article