i.MX Processors Knowledge Base

[Chinese translation] See the attachment. Original article: Add a new shared memory region on Android Auto P9.0.0_GA2.1.0 BSP
View full article
This guide is a continuation of our latest Debian 12 Installation Guide for iMX8MM, iMX8MP, iMX8MN and iMX93. Here we describe how to install the multimedia and hardware acceleration packages, specifically GPU, VPU and GStreamer, on i.MX8M Mini, i.MX8M Plus and i.MX8M Nano. The guide is based on the one provided by our colleague, Build Ubuntu For i.MX8 Series Platform - NXP Community, and requires an image previously built with the Yocto Project using the following distro and image name:

Distro name: fsl-imx-wayland
Image name: imx-image-multimedia

For more information, please check our BSP documentation, the i.MX Yocto Project User's Guide.

Hardware Requirements
- Linux Host Computer (Ubuntu 20.04 or later)
- USB Card reader or Micro SD to SD adapter
- SD Card
- Evaluation Kit Board for the i.MX8M Nano, i.MX8M Mini or i.MX8M Plus

Software Requirements
- Linux Ubuntu (20.04 tested) or Debian for the Host Computer
- BSP version 6.1.55 built with the Yocto Project

After building the image, we can start the installation by following the steps below.

GPU Installation
The GPU installation consists of copying the files from the imx-gpu-g2d, imx-gpu-viv and libdrm packages to the Debian system. As in our latest installation guide, we will keep calling "mountpoint" the directory where the Debian system is mounted on the host machine. In the paths provided in each step, the placeholders <build-path> and <machine> must be changed according to your environment. These are the paths the Yocto Project uses to store the packages; they may differ in your environment, and you can find the work directory of each package with the following command, which prints the absolute path of the package work directory:

$ bitbake -e <package-name> | grep ^WORKDIR=

1. Install GPU Packages
$ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/imx-gpu-g2d/6.4.11.p2.2-r0/image/* mountpoint
$ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/imx-gpu-viv/1_6.4.11.p2.2-aarch64-r0/image/* mountpoint
$ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/libdrm/2.4.115.imx-r0/image/* mountpoint

2. Install Linux IMX Headers and IMX Parser
$ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/linux-imx-headers/6.1-r0/image/* mountpoint
$ sudo cp -Pra <build-path>/tmp/work/armv8a-poky-linux/imx-parser/4.8.2-r0/image/* mountpoint

3. Use chroot
$ sudo LANG=C.UTF-8 chroot mountpoint/ qemu-aarch64-static /bin/bash

4. Install Dependencies
$ apt install libudev-dev libinput-dev libxkbcommon-dev libpam0g-dev libx11-xcb-dev libxcb-xfixes0-dev libxcb-composite0-dev libxcursor-dev libxcb-shape0-dev libdbus-1-dev libdbus-glib-1-dev libsystemd-dev libpixman-1-dev libcairo2-dev libffi-dev libxml2-dev kbd libexpat1-dev autoconf automake libtool meson cmake ssh net-tools network-manager iputils-ping rsyslog bash-completion htop resolvconf dialog vim udhcpc udhcpd git v4l-utils alsa-utils git gcc less autoconf autopoint libtool bison flex gtk-doc-tools libglib2.0-dev libpango1.0-dev libatk1.0-dev kmod pciutils libjpeg-dev

5. Create a folder for the multimedia installation. Here we will clone all the multimedia repositories.
$ mkdir multimedia_packages
$ cd multimedia_packages

6. Build Wayland
$ git clone https://gitlab.freedesktop.org/wayland/wayland.git
$ cd wayland
$ git checkout 1.22.0
$ meson setup build --prefix=/usr -Ddocumentation=false -Ddtd_validation=true
$ cd build
$ ninja install
7. Build Wayland Protocols IMX
$ git clone https://github.com/nxp-imx/wayland-protocols-imx.git
$ cd wayland-protocols-imx
$ git checkout wayland-protocols-imx-1.32
$ meson setup build --prefix=/usr -Dtests=false
$ cd build
$ ninja install

8. Build Weston
$ git clone https://github.com/nxp-imx/weston-imx.git
$ cd weston-imx
$ git checkout weston-imx-11.0.3
$ meson setup build --prefix=/usr -Dpipewire=false -Dsimple-clients=all -Ddemo-clients=true -Ddeprecated-color-management-colord=false -Drenderer-gl=true -Dbackend-headless=false -Dimage-jpeg=true -Drenderer-g2d=true -Dbackend-drm=true -Dlauncher-libseat=false -Dcolor-management-lcms=false -Dbackend-rdp=false -Dremoting=false -Dscreenshare=true -Dshell-desktop=true -Dshell-fullscreen=true -Dshell-ivi=true -Dshell-kiosk=true -Dsystemd=true -Dlauncher-logind=true -Dbackend-drm-screencast-vaapi=false -Dbackend-wayland=false -Dimage-webp=false -Dbackend-x11=false -Dxwayland=false
$ cd build
$ ninja install

VPU Installation
To install the VPU and GStreamer packages, follow the steps below:

1. Install firmware-imx
$ sudo cp -Pra <build-path>/tmp/work/all-poky-linux/firmware-imx/1_8.22-r0/image/lib/* mountpoint/lib/

2. Install the VPU Driver
$ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/imx-vpu-hantro/1.31.0-r0/image/* mountpoint
$ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/imx-vpuwrap/git-r0/image/* mountpoint

3. Use chroot
$ sudo LANG=C.UTF-8 chroot mountpoint/ qemu-aarch64-static /bin/bash

4. Install dependencies for the GStreamer plugins
$ apt install libgirepository1.0-dev gettext liborc-0.4-dev libasound2-dev libogg-dev libtheora-dev libvorbis-dev libbz2-dev libflac-dev libgdk-pixbuf-2.0-dev libmp3lame-dev libmpg123-dev libpulse-dev libspeex-dev libtag1-dev libbluetooth-dev libusb-1.0-0-dev libcurl4-openssl-dev libssl-dev librsvg2-dev libsbc-dev libsndfile1-dev

5. Change directory to the multimedia packages folder created earlier.
$ cd multimedia_packages

6. Build gstreamer
$ git clone https://github.com/nxp-imx/gstreamer -b lf-6.1.55-2.2.0
$ cd gstreamer
$ meson setup build --prefix=/usr -Dintrospection=enabled -Ddoc=disabled -Dexamples=disabled -Ddbghelp=disabled -Dnls=enabled -Dbash-completion=disabled -Dcheck=enabled -Dcoretracers=disabled -Dgst_debug=true -Dlibdw=disabled -Dtests=enabled -Dtools=enabled -Dtracer_hooks=true -Dlibunwind=disabled -Dc_args=-I/usr/include/imx
$ cd build
$ ninja install

7. Build gst-plugins-base
$ git clone https://github.com/nxp-imx/gst-plugins-base -b lf-6.1.55-2.2.0
$ cd gst-plugins-base
$ meson setup build --prefix=/usr -Dalsa=enabled -Dcdparanoia=disabled -Dgl-graphene=disabled -Dgl-jpeg=disabled -Dopus=disabled -Dogg=enabled -Dorc=enabled -Dpango=enabled -Dgl-png=enabled -Dqt5=disabled -Dtheora=enabled -Dtremor=disabled -Dvorbis=enabled -Dlibvisual=disabled -Dx11=disabled -Dxvideo=disabled -Dxshm=disabled -Dc_args=-I/usr/include/imx
$ cd build
$ ninja install
8. Build gst-plugins-good
$ git clone https://github.com/nxp-imx/gst-plugins-good -b lf-6.1.55-2.2.0
$ cd gst-plugins-good
$ meson setup build --prefix=/usr -Dexamples=disabled -Dnls=enabled -Ddoc=disabled -Daalib=disabled -Ddirectsound=disabled -Ddv=disabled -Dlibcaca=disabled -Doss=enabled -Doss4=disabled -Dosxaudio=disabled -Dosxvideo=disabled -Dshout2=disabled -Dtwolame=disabled -Dwaveform=disabled -Dasm=disabled -Dbz2=enabled -Dcairo=enabled -Ddv1394=disabled -Dflac=enabled -Dgdk-pixbuf=enabled -Dgtk3=disabled -Dv4l2-gudev=enabled -Djack=disabled -Djpeg=enabled -Dlame=enabled -Dpng=enabled -Dv4l2-libv4l2=disabled -Dmpg123=enabled -Dorc=enabled -Dpulse=enabled -Dqt5=disabled -Drpicamsrc=disabled -Dsoup=enabled -Dspeex=enabled -Dtaglib=enabled -Dv4l2=enabled -Dv4l2-probe=true -Dvpx=disabled -Dwavpack=disabled -Dximagesrc=disabled -Dximagesrc-xshm=disabled -Dximagesrc-xfixes=disabled -Dximagesrc-xdamage=disabled -Dc_args=-I/usr/include/imx
$ cd build
$ ninja install

9. Build gst-plugins-bad
$ git clone https://github.com/nxp-imx/gst-plugins-bad -b lf-6.1.55-2.2.0
$ cd gst-plugins-bad
$ meson setup build --prefix=/usr -Dintrospection=enabled -Dexamples=disabled -Dnls=enabled -Dgpl=disabled -Ddoc=disabled -Daes=enabled -Dcodecalpha=enabled -Ddecklink=enabled -Ddvb=enabled -Dfbdev=enabled -Dipcpipeline=enabled -Dshm=enabled -Dtranscode=enabled -Dandroidmedia=disabled -Dapplemedia=disabled -Dasio=disabled -Dbs2b=disabled -Dchromaprint=disabled -Dd3dvideosink=disabled -Dd3d11=disabled -Ddirectsound=disabled -Ddts=disabled -Dfdkaac=disabled -Dflite=disabled -Dgme=disabled -Dgs=disabled -Dgsm=disabled -Diqa=disabled -Dkate=disabled -Dladspa=disabled -Dldac=disabled -Dlv2=disabled -Dmagicleap=disabled -Dmediafoundation=disabled -Dmicrodns=disabled -Dmpeg2enc=disabled -Dmplex=disabled -Dmusepack=disabled -Dnvcodec=disabled -Dopenexr=disabled -Dopenni2=disabled -Dopenaptx=disabled -Dopensles=disabled -Donnx=disabled -Dqroverlay=disabled -Dsoundtouch=disabled -Dspandsp=disabled -Dsvthevcenc=disabled -Dteletext=disabled -Dwasapi=disabled -Dwasapi2=disabled -Dwildmidi=disabled -Dwinks=disabled -Dwinscreencap=disabled -Dwpe=disabled -Dzxing=disabled -Daom=disabled -Dassrender=disabled -Davtp=disabled -Dbluez=enabled -Dbz2=enabled -Dclosedcaption=enabled -Dcurl=enabled -Ddash=enabled -Ddc1394=disabled -Ddirectfb=disabled -Ddtls=disabled -Dfaac=disabled -Dfaad=disabled -Dfluidsynth=disabled -Dgl=enabled -Dhls=enabled -Dkms=enabled -Dcolormanagement=disabled -Dlibde265=disabled -Dcurl-ssh2=disabled -Dmodplug=disabled -Dmsdk=disabled -Dneon=disabled -Dopenal=disabled -Dopencv=disabled -Dopenh264=disabled -Dopenjpeg=disabled -Dopenmpt=disabled -Dhls-crypto=openssl -Dopus=disabled -Dorc=enabled -Dresindvd=disabled -Drsvg=enabled -Drtmp=disabled -Dsbc=enabled -Dsctp=disabled -Dsmoothstreaming=enabled -Dsndfile=enabled -Dsrt=disabled -Dsrtp=disabled -Dtinyalsa=disabled -Dtinycompress=enabled -Dttml=enabled -Duvch264=enabled -Dv4l2codecs=disabled -Dva=disabled -Dvoaacenc=disabled -Dvoamrwbenc=disabled -Dvulkan=disabled -Dwayland=enabled -Dwebp=enabled -Dwebrtc=disabled -Dwebrtcdsp=disabled -Dx11=disabled -Dx265=disabled -Dzbar=disabled -Dc_args=-I/usr/include/imx
$ cd build
$ ninja install

10. Build imx-gst1.0-plugin
$ git clone https://github.com/nxp-imx/imx-gst1.0-plugin -b lf-6.1.55-2.2.0
$ cd imx-gst1.0-plugin
$ meson setup build --prefix=/usr -Dplatform=MX8 -Dc_args=-I/usr/include/imx
$ cd build
$ ninja install
11. Exit chroot
$ exit

Verify Installation
For the verification process, boot your target from the SD card (review your specific target documentation).

1. Verify Weston
For this verification you need to be the root user.
# export XDG_RUNTIME_DIR=/run/user/0
# weston

2. Verify the VPU and GStreamer
Use the following GStreamer pipeline for hardware-accelerated VPU encoding:
# gst-launch-1.0 videotestsrc ! video/x-raw, format=I420, width=640, height=480 ! vpuenc_h264 ! filesink location=test.mp4

Then you can play the file back with this command:
# gplay-1.0 test.mp4

With that, you have installed and verified the GPU, VPU and multimedia packages, and you can start testing audio and video applications.
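As an extra, optional check that is not part of the original guide, you can confirm that the NXP GStreamer elements were registered and decode the file recorded above with the hardware decoder. This assumes vpudec (from imx-gst1.0-plugin) and waylandsink (from gst-plugins-bad, built with -Dwayland=enabled) are available, and that Weston is running so the sink has a display:

# gst-inspect-1.0 | grep -i vpu
# gst-inspect-1.0 vpuenc_h264
# gst-launch-1.0 filesrc location=test.mp4 ! h264parse ! vpudec ! waylandsink

Note that the file written by the encode pipeline above is a raw H.264 elementary stream (there is no muxer in the pipeline), which is why h264parse is placed in front of the decoder.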
View full article
This document describes how to build an image with Yocto in which the “su” command line is disabled, so that normal users cannot use it.
View full article
The P3T1755DP is a ±0.5 °C accurate temperature-to-digital converter with a -40 °C to +125 °C range. It uses an on-chip band gap temperature sensor and an A-to-D conversion technique with overtemperature detection. The temperature register always stores 12-bit two's complement data, giving a temperature resolution of 0.0625 °C. The P3T1755DP can be configured for different operating conditions: continuous conversion, one-shot mode, or shutdown mode.

The device has very good features but, unfortunately, it is not supported by Linux yet.

The P3T1755 works very similarly to the LM75, PCT2075, and other compatible devices. We can therefore add support for the P3T1755 in lm75.c, because the process to communicate with the device is the same as for the LM75 and its equivalents.

https://github.com/nxp-imx/linux-imx/blob/lf-6.1.55-2.2.0/drivers/hwmon/lm75.c
Path: drivers/hwmon/lm75.c

The required modifications are the following:

1. Add the configurations to the kernel in the imx_v8_defconfig file:
 CONFIG_SENSORS_ARM_SCMI=y
 CONFIG_SENSORS_ARM_SCPI=y
 CONFIG_SENSORS_FP9931=y
+CONFIG_SENSORS_LM75=m
+CONFIG_HWMON=y
+CONFIG_I2C=y
+CONFIG_REGMAP_I2C=y
 CONFIG_SENSORS_LM90=m
 CONFIG_SENSORS_PWM_FAN=m
 CONFIG_SENSORS_SL28CPLD=m

2. Add the part to the list of devices supported by lm75.c:
 enum lm75_type {	/* keep sorted in alphabetical order */
 	max6626,
 	max31725,
 	mcp980x,
+	p3t1755,
 	pct2075,
 	stds75,
 	stlm75,

3. Add the configuration to the lm75_params device_params[] structure:
 		.default_resolution = 9,
 		.default_sample_time = MSEC_PER_SEC / 18,
 	},
+	[p3t1755] = {
+		.default_resolution = 12,
+		.default_sample_time = MSEC_PER_SEC / 10,
+	},
 	[pct2075] = {
 		.default_resolution = 11,
 		.default_sample_time = MSEC_PER_SEC / 10,

Note: you can change the configuration of the device using .set_mask and .clear_mask; see lm75.c lines 57 to 78 for details.

4. Add the ID to the lists in the i2c_device_id lm75_ids and of_device_id __maybe_unused lm75_of_match structures:
 	{ "max31725", max31725, },
 	{ "max31726", max31725, },
 	{ "mcp980x", mcp980x, },
+	{ "p3t1755", p3t1755, },
 	{ "pct2075", pct2075, },
 	{ "stds75", stds75, },
 	{ "stlm75", stlm75, },

+	{
+		.compatible = "nxp,p3t1755",
+		.data = (void *)p3t1755
+	},

5. In addition to these modifications, modify the device tree of the i.MX8MP-EVK to connect the sensor to I2C3 of the board.
https://github.com/nxp-imx/linux-imx/blob/lf-6.1.55-2.2.0/arch/arm64/boot/dts/freescale/imx8mp-evk.dts
 	};
 };
+
+	p3t1755: p3t1755@48 {
+		compatible = "nxp,p3t1755";
+		reg = <0x48>;
+	};
+
 };

Connections: We will use the expansion connector (J21) of the iMX8MP-EVK and J9 of the P3T1755DP-ARD board.

P3T1755DP-ARD (J9) ----> iMX8MP-EVK (J21)
+3V3 (Pin 9) ---> +3V3 (Pin 1)
GND (Pin 7) ---> GND (Pin 9)
SCL (Pin 4) ---> SCL (Pin 5)
SDA (Pin 3) ---> SDA (Pin 3)

Reading the Sensor
We can read the sensor using the following commands:

Read the temperature:
$ cat /sys/class/hwmon/hwmon1/temp1_input
Read the maximum temperature:
$ cat /sys/class/hwmon/hwmon1/temp1_max
Read the hysteresis:
$ cat /sys/class/hwmon/hwmon1/temp1_max_hyst

https://www.nxp.com/design/design-center/development-boards-and-designs/analog-toolbox/arduino-shields-solutions/p3t1755dp-arduino-shield-evaluation-board:P3T1755DP-ARD
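As a quick, optional sanity check after booting the patched kernel (not part of the original article), you can probe the bus and confirm that the driver bound. The I2C bus index is an assumption (I2C3 usually enumerates as i2c-2 on the i.MX8MP-EVK) and the hwmon index may differ on your system:

$ i2cdetect -y 2                      # the device should appear at 0x48 (shown as UU once the driver has claimed it)
$ dmesg | grep -i lm75                # check that the lm75 driver probed the nxp,p3t1755 node
$ cat /sys/class/hwmon/hwmon1/name    # confirm which chip this hwmon instance belongs to before reading temp1_input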
View full article
Sometimes we need to use an SPI bus to communicate with sensors or other devices. Unfortunately, ECSPI2 is disabled by default on the iMX8MN-EVK in our BSP.

We can use that peripheral in Linux by enabling it in the device tree. To enable ECSPI2, add the following to imx8mn-evk.dtsi:

 	status = "okay";
 };
+&ecspi2 {
+	#address-cells = <1>;
+	#size-cells = <0>;
+	fsl,spi-num-chipselects = <1>;
+	pinctrl-names = "default";
+	pinctrl-0 = <&pinctrl_ecspi2 &pinctrl_ecspi2_cs>;
+	cs-gpios = <&gpio5 13 GPIO_ACTIVE_LOW>;
+	status = "okay";
+
+	spidev0: spi@0 {
+		reg = <0>;
+		compatible = "rohm,dh2228fv";
+		spi-max-frequency = <500000>;
+	};
+};
+
 &fec1 {
 	pinctrl-names = "default";
 	pinctrl-0 = <&pinctrl_fec1>;

On the iomuxc node:

+	pinctrl_ecspi2: ecspi2grp {
+		fsl,pins = <
+			MX8MN_IOMUXC_ECSPI2_SCLK_ECSPI2_SCLK	0x82
+			MX8MN_IOMUXC_ECSPI2_MOSI_ECSPI2_MOSI	0x82
+			MX8MN_IOMUXC_ECSPI2_MISO_ECSPI2_MISO	0x82
+		>;
+	};
+
+	pinctrl_ecspi2_cs: ecspi2cs {
+		fsl,pins = <
+			MX8MN_IOMUXC_ECSPI2_SS0_GPIO5_IO13	0x40000
+		>;
+	};
+
 	pinctrl_ir_recv: ir-recv {
 		fsl,pins = <
 			MX8MN_IOMUXC_GPIO1_IO13_GPIO1_IO13	0x4f

After modifying and compiling the device tree, you can see the device active (the spidev node is created).

Test:
$ spidev_test -D /dev/spidev1.0 -v

You can use the devshell of Yocto to make the changes:
https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/How-to-use-Devshell-to-compile-device-tree-files/ta-p/1727428
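As an optional loopback check that is not in the original article: with ECSPI2 MOSI shorted to MISO, the data transmitted by spidev_test should be received back unchanged. The -p option is present in recent versions of the kernel's spidev_test tool; if yours lacks it, the default test pattern is used instead:

$ ls /dev/spidev*                                  # the enabled controller should expose /dev/spidev1.0
$ spidev_test -D /dev/spidev1.0 -v -p "loopback"   # the TX and RX dumps should match while MOSI and MISO are shorted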
View full article
Symptoms
On the i.MX8MP, when a square wave with 80% duty cycle, 0.4 V-1.8 V, 3 kHz is input, we observed that the system may hang. We also tested the i.MX8MN and i.MX8MM and observed the same behavior. The i.MX8MN Reference Manual contains a note about this in the GPC chapter.

We believe that the issue described in that note exists not only in the i.MX8MN, but also in the i.MX8MP and i.MX8MM. In addition, the issue affects not only power down, but also wait mode.

Diagnosis
During debugging, we found that avoiding accesses to the LPCR_A53_AD register in imx_set_cluster_powerdown fixes the issue. We therefore think that, due to frequent power up/down of the cores, a core can occasionally fail to power up. Because an IRQ is an asynchronous event and can arrive at any time, when the IRQ behavior becomes more complex and wait mode is enabled, the GPC internal LPM state machine can, in some corner cases, run into a problem and lead to a system failure.

Solution
1. A workaround patch that bypasses the wait mode setting during cpuidle; see the patch attached.
2. The note "SCU power down should not be enabled in wait mode" will be added to the i.MX8MP and i.MX8MM Reference Manuals.
3. The issue will be captured in the errata document, ticket TKT0632147.
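As a quick diagnostic (my own suggestion, not part of the original analysis), you can keep the cores out of the deeper cpuidle states from user space and check whether the hang still reproduces; the state index that corresponds to the problematic low-power mode is an assumption and may differ on your kernel:

# cat /sys/devices/system/cpu/cpu0/cpuidle/state*/name                                          # list the available idle states
# for cpu in /sys/devices/system/cpu/cpu[0-9]*; do echo 1 > $cpu/cpuidle/state2/disable; done   # disable the deepest state on every core

If the hang no longer reproduces with the deeper states disabled, that is consistent with the GPC LPM corner case described above.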
View full article
i.MX93 eMMC Secondary Boot: i.MX93 eMMC Secondary Boot.zip
i.MX8MP eMMC Secondary Boot: i.MX8MP eMMC Secondary Boot.zip
i.MX8MM SDCARD Secondary Boot Demo: https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/i-MX8MM-SDCARD-Secondary-Boot-Demo/ta-p/1500011
i.MX8QXP eMMC Secondary Boot: https://community.nxp.com/t5/i-MX-Community-Articles/i-MX8QXP-eMMC-Secondary-Boot/ba-p/1257704#M45
i.MX6 SDCARD Secondary Boot Demo: i.MX6_SDCARD_Secondary_Boot_Demo.pdf
View full article
The document covers three parts:
1. A brief introduction to the RSA algorithm
2. How to compile a boot image including OP-TEE OS for QSPI boot media
3. The steps to sign and verify the image
The SoC used for this experiment is the i.MX8MP-EVK.
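To illustrate only the RSA sign/verify principle, here is a generic OpenSSL sketch; it is not the i.MX-specific signing flow (which uses the NXP Code Signing Tool, as covered in the attached document), and flash.bin is just a placeholder file name:

$ openssl genrsa -out private.pem 2048                                          # generate an RSA key pair
$ openssl rsa -in private.pem -pubout -out public.pem                           # extract the public key
$ openssl dgst -sha256 -sign private.pem -out flash.bin.sig flash.bin           # sign the image digest with the private key
$ openssl dgst -sha256 -verify public.pem -signature flash.bin.sig flash.bin    # verification succeeds only if the image is unmodified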
View full article
Sometimes you need to compile a device tree standalone, without building the whole kernel. Only the Linux headers and the device tree directory are needed.
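A minimal sketch of such a standalone build, assuming the kernel source tree is available at <kernel> and using imx8mp-evk.dts as an example: the DTS is first run through the C preprocessor to resolve its #include directives against the kernel device tree headers, and the result is then compiled with dtc.

$ cpp -nostdinc -I <kernel>/include -I <kernel>/arch/arm64/boot/dts -undef -x assembler-with-cpp \
      <kernel>/arch/arm64/boot/dts/freescale/imx8mp-evk.dts imx8mp-evk.dts.tmp
$ dtc -I dts -O dtb -o imx8mp-evk.dtb imx8mp-evk.dts.tmp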
View full article
-- DTS for GPIO wakeup

// SPDX-License-Identifier: (GPL-2.0+ OR MIT)
/*
 * Copyright 2022 NXP
 */

#include "imx93-11x11-evk.dts"

/ {
	gpio-keys {
		compatible = "gpio-keys";
		pinctrl-names = "default";
		pinctrl-0 = <&pinctrl_gpio_keys>;

		power {
			label = "GPIO Key Power";
			linux,code = <KEY_POWER>;
			gpios = <&gpio2 7 GPIO_ACTIVE_LOW>;
			wakeup-source;
			debounce-interval = <20>;
			interrupt-parent = <&gpio2>;
			interrupts = <7 IRQ_TYPE_LEVEL_LOW>;
		};
	};
};

&iomuxc {
	pinctrl_gpio_keys: gpio_keys_grp {
		fsl,pins = <
			MX93_PAD_GPIO_IO07__GPIO2_IO07	0x31e
		>;
	};
};

-- Testing the switch GPIO
First, check whether your GPIO device tree configuration makes the pin act as a switch. Run 'evtest /dev/input/event1', then trigger an interrupt by connecting GPIO2_IO07 to GND; as soon as you do, you will receive event logs. This shows that your device tree configuration for the GPIO works.

-- Verify the interrupt

-- Go to sleep, then connect the GPIO to GND to trigger a wakeup; in the logs we see that the kernel exits suspend mode. (A command sketch for these checks follows below.)
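A minimal command sketch for these checks (my own commands, not from the original post; the interrupt name, the gpio-keys platform device path and the input event index depend on your device tree and may differ):

# evtest /dev/input/event1                          # pull GPIO2_IO07 to GND and watch for KEY_POWER events
# grep -i "gpio key" /proc/interrupts               # the interrupt count should increase on each press
# cat /sys/devices/platform/gpio-keys/power/wakeup  # should report "enabled" because of the wakeup-source property
# echo mem > /sys/power/state                       # suspend; pulling the pin low should wake the system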
View full article
Hello, in this post I will explain how to record separate audio channels using an 8MIC-RPI-MX8 board. As background on how to set up the board to record and play audio using i.MX boards, I suggest you take a look at the following post: How to configure, record and play audio using an 8MIC-RPI-MX8 Board.

Requirements:
- i.MX 8M Mini EVK.
- Linux Binary Demo Files - i.MX 8M Mini EVK.
- 8MIC-RPI-MX8 board.
- Serial console emulator (Tera Term, PuTTY, etc.).
- Headphones/speakers.

Waveform Audio Format
WAV, also known as WAVE (Waveform Audio File Format), is a subset of Microsoft's Resource Interchange File Format (RIFF) specification for storing digital audio files. This format does not compress the information and stores the audio at different sampling rates and bit rates. WAV files are larger than formats such as MP3, which uses compression to reduce the file size while maintaining good audio quality; however, compression always loses some quality, since audio information is too random to be compressed well with conventional methods. The main advantage of the WAV format is that it provides a lossless audio file that is also widely used in studios. These files start with a file header followed by data chunks. A WAV file consists of two sub-chunks:
- fmt chunk: data format.
- data chunk: sample data.
So it is structured as metadata, called the WAV file header, plus the actual audio information. The header of a WAV (RIFF) file is 44 bytes long.

How to separate the channels?
To separate each audio channel from the recording, we use the following command, which records the raw data of each channel:

arecord -D plughw:<audio device> -c<number of channels> -f <format> -r <sample rate> -d <duration of the recording> --separate-channels <output file name>.wav
arecord -D plughw:2,0 -c8 -f s16_le -r 48000 -d 10 --separate-channels sample.wav

This command outputs the raw data of the recorded channels, one file per channel (sample.wav.0 ... sample.wav.7). This raw data cannot be used as a "normal" .wav file because the header information is missing. You can confirm this by importing the raw data into a DAW and playing the recorded samples.

So, to use this information, we need to create the header for each file using the wave library in Python. Here is the script that I used:

import wave
import os

# Record 10 seconds from the 8MIC board, producing one raw file per channel
name = input("Enter the name of the audio file: ")
os.system("arecord -D plughw:2,0 -c8 -f s16_le -r 48000 -d 10 --separate-channels " + name + ".wav")

# Wrap each raw channel (name.wav.0 ... name.wav.7) into a mono, 16-bit, 48 kHz WAV file
for i in range(0, 8):
    with open(name + ".wav." + str(i), "rb") as in_file:
        data = in_file.read()
        with wave.open(name + "_channel_" + str(i) + ".wav", "wb") as out_file:
            out_file.setnchannels(1)
            out_file.setsampwidth(2)
            out_file.setframerate(48000)
            out_file.writeframesraw(data)

# Collect the per-channel WAV files and remove the raw captures
os.system("mkdir output_files")
os.system("mv " + name + "_channel_" + "* " + "output_files")
os.system("rm " + name + ".wav.*")

Running the script generates a directory with the eight audio channels in .wav format. Now we can play each channel individually using an audio player.

References
IBM, Microsoft Corporation. (1991). Multimedia Programming Interface and Data Specifications 1.0.
Microsoft Corporation. (1994). New Multimedia Data Types and Data Techniques.
Stanford University. (2024, January 30). WAVE PCM sound file format: http://hummer.stanford.edu/sig/doc/classes/SoundHeader/WaveFormat/
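A quick way to confirm that the generated headers are correct (my own check, not part of the original post; the file names assume the script was run with "sample" as the name):

$ file output_files/sample_channel_0.wav     # should be identified as RIFF/WAVE audio, mono, 16-bit, 48000 Hz
$ aplay output_files/sample_channel_0.wav    # play a single channel back on the default playback device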
View full article
P3T1755 Demo

In this space I want to show you the things that you can create using our products. In this demo I demonstrate a use case by creating a GUI for a temperature sensor. We can create modern GUIs and more with LVGL combined with our powerful processors.

CPU usage
As we can see, the CPU usage for this demo is around 2%.

Pictures

This demo is based on the previously published articles.

References:
https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/Adding-support-to-P3T1755-on-Linux/ta-p/1855874
https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/How-to-run-LGVL-on-iMX-using-framebuffer/ta-p/1853768
View full article
This guide assumes that the developer has knowledge of the V4L2 API and has worked with, or is familiar with, sensor drivers and their operation within the Linux kernel. This guide does not focus on the details of developing the sensor driver that you want to port; it is assumed that you already have a working driver for your sensor before starting the port. The ISP version used corresponds to the 6.6.36 Linux BSP. If a different version is used, it is the developer's responsibility to review the API documentation for that version, since there may be changes that affect what is indicated in this guide.

To port a camera sensor, the following steps must be taken, as described in the following sections:
1. Define sensor attributes and create instances.
2. ISS driver and ISP Media Server.
3. Sensor calibration files.
4. VVCAM driver creation.
5. Device tree modifications.

Define Sensor Attributes and Create Instances
The following three steps are already implemented in CamDevice and are included for reference only.
Step 1: Define the sensor attributes in the IsiSensor_s data structure.
Step 2: Define the IsiSensorInstanceConfig_t configuration structure that will be used to create a new sensor instance.
Step 3: Call the IsiCreateSensorIss() function to create a new sensor instance.

ISS Driver and ISP Media Server
Step 0 - Use a driver template as base code: Drivers can be found in $ISP_SOURCES_TOP/units/isi/drv/. For example, the ISP sources come with the OV4656 and OS08a20 drivers. $ISP_SOURCES_TOP indicates the path of your working directory, where the respective sources are located.

Step 1 - Add your <SENSOR> ISS driver: Create the driver entry for your sensor at $ISP_SOURCES_TOP/units/isi/drv/<SENSOR>/source/<SENSOR>.c. Change all occurrences of the reference sensor name within the code (for instance, OV4656 -> <SENSOR>), respecting capital letters where applicable.

Step 2 - Check the information in the IsiCamDrvConfig_s data structure: Data members defined in this data structure include the sensor ID (CameraDriverID) and the function pointer to the IsiSensor data structure. By using the address of the IsiCamDrvConfig_s structure, the driver can then access the sensor API attached to the function pointer. The following is an example of the structure:

/*****************************************************************************
 * Each sensor driver needs to declare this struct for ISI load
 *****************************************************************************/
IsiCamDrvConfig_t IsiCamDrvConfig = {
    .CameraDriverID = 0x0000,
    .pIsiHalQuerySensor = <SENSOR>_IsiHalQuerySensorIss,
    .pfIsiGetSensorIss = <SENSOR>_IsiGetSensorIss,
};

Important note: Modify the CameraDriverID according to the chip ID of your sensor. Apply this change to any chip ID occurrence within the code.

Step 3 - Check the sensor macro definitions: If any macro definition in the ISS driver code involves specific properties of the sensor, modify it according to your requirements. For example:

#define <SENSOR>_MIN_GAIN_STEP         (1.0f/16.0f)

Step 4 - Modify the ISP Media Server build tools: Changes required in this step include:
- Add a CMakeLists.txt file in $ISP_SOURCES_TOP/units/isi/drv/<SENSOR>/ that builds your sensor module.
- Modify the CMakeLists.txt located at $ISP_SOURCES_TOP/units/isi/drv/CMakeLists.txt to include and reference your sensor directory.
- Modify the $ISP_SOURCES_TOP/appshell/ and $ISP_SOURCES_TOP/mediacontrol/ build tools, since by default they refer to the build of a particular sensor (for example, the OV4656), so the corresponding sensor name needs to be changed.
- Modify the $ISP_SOURCES_TOP/build-all-isp.sh script to reference the sensor modules and generate the corresponding binaries when building the ISP Media Server instance.

Step 5 - ISP Media Server run script: Add the operating modes defined for your sensor to the script. Each operating mode is associated with an order (mode 0, mode 1 ... mode N), a name used to execute the command in the terminal (e.g. <sensor>_custom_mode_1), a resolution, and a specific calibration file for the sensor. The script is located at $ISP_SOURCES_TOP/imx/run.sh.

Step 6 - Sensor<X> config: In $ISP_SOURCES_TOP/units/isi/drv/ you can find the files that configure each sensor entry for the ISP, called Sensor0_Entry.cfg and Sensor1_Entry.cfg. There, the associated calibration files are indicated for each sensor operating mode, including the calibration files in XML format and the Dewarp unit configuration files in JSON format. In addition, the .drv file generated for your sensor is referenced, creating the association between the respective /dev/video<X> node and the sensor driver module produced by the ISP Media Server. If you are using only one ISP channel, just modify Sensor0_Entry.cfg; if you require both ISP instances, modify both files.

Sensor Calibration Files
Using the ISP requires a calibration file in XML format, specific to the sensor you are using and matching the resolution and working mode. To obtain the calibration files in XML format, there are three options:
1. Use the NXP ISP tuning tool; for this you will need to request access or sign an NDA.
2. Pay NXP professional services to do the tuning.
3. Pay a third-party vendor to do the tuning.

VVCAM Driver Creation
The changes indicated below assume that there is a functional sensor driver in its base form and that it is compatible with the V4L2 API. From here on, we focus on applying the changes suggested in the NXP documentation, specifically to establish the communication between the VVCAM driver (kernel side) and the ISI layer.

Step 0 - Create the sensor driver entry: Add the driver code to the file located at $ISP_SOURCES_TOP/vvcam/v4l2/sensor/<sensor>/<sensor>_xxxx.c, along with a Makefile for the sensor driver module. As indicated in the ISS driver section, you can refer to one of the sample drivers included with the ISP sources to review the implementation details and the structure of the required Makefile.

Step 1 - Add the VVCAM mode info data structure array: This array stores the information of all the modes supported by your sensor. The ISI layer can get all the modes with the VVSENSORIOC_QUERY command. The following is an example of the structure; fill in the information using the attributes of your sensor and the modes it supports.

#include "vvsensor.h"
.
.
.

static struct vvcam_mode_info_s <sensor>_mode_info[] = {
	{
		.index = 0,
		.width = ... ,
		.height = ... ,
		.hdr_mode = ... ,
		.bit_width = ... ,
		.data_compress.enable = ... ,
		.bayer_pattern = ... ,
		.ae_info = {
			.
			.
			.
		},
		.mipi_info = {
			.mipi_lane = ... ,
		},
	},
	{
		.index = 1,
		.
		.
		.
	},
};

Step 2 - Define the sensor client-to-i2c macro: Define the client_to_sensor macro (in case you do not have one already) and check the segments of the driver code that require it.

#define client_to_<sensor>(client)\
	container_of(i2c_get_clientdata(client), struct <sensor>, subdev)

Step 3 - Define the V4L2-subdev IOCTL function: Define and implement <sensor>_priv_ioctl, which is used to receive the commands and parameters passed down from user space through ioctl() and to control the sensor.

long <sensor>_priv_ioctl(struct v4l2_subdev *subdev, unsigned int cmd, void *arg)
{
	struct i2c_client *client = v4l2_get_subdevdata(subdev);
	struct <sensor> *sensor = client_to_<sensor>(client);
	struct vvcam_sccb_data_s reg;
	uint32_t value = 0;
	long ret = 0;

	if (!sensor) {
		return -EINVAL;
	}

	switch (cmd) {
	case VVSENSORIOC_G_CLK: {
		ret = custom_implementation();
		break;
	}
	case VIDIOC_QUERYCAP: {
		ret = custom_implementation();
		break;
	}
	case VVSENSORIOC_QUERY: {
		ret = custom_implementation();
		break;
	}
	case VVSENSORIOC_G_CHIP_ID: {
		ret = custom_implementation();
		break;
	}
	case VVSENSORIOC_G_RESERVE_ID: {
		ret = custom_implementation();
		break;
	}
	case VVSENSORIOC_G_SENSOR_MODE: {
		ret = custom_implementation();
		break;
	}
	case VVSENSORIOC_S_SENSOR_MODE: {
		ret = custom_implementation();
		break;
	}
	case VVSENSORIOC_S_STREAM: {
		ret = custom_implementation();
		break;
	}
	case VVSENSORIOC_WRITE_REG: {
		ret = custom_implementation();
		break;
	}
	case VVSENSORIOC_READ_REG: {
		ret = custom_implementation();
		break;
	}
	case VVSENSORIOC_S_EXP: {
		ret = custom_implementation();
		break;
	}
	case VVSENSORIOC_S_POWER:
	case VVSENSORIOC_S_CLK:
	case VVSENSORIOC_RESET:
	case VVSENSORIOC_S_FPS:
	case VVSENSORIOC_G_FPS:
	case VVSENSORIOC_S_LONG_GAIN:
	case VVSENSORIOC_S_GAIN:
	case VVSENSORIOC_S_VSGAIN:
	case VVSENSORIOC_S_LONG_EXP:
	case VVSENSORIOC_S_VSEXP:
	case VVSENSORIOC_S_WB:
	case VVSENSORIOC_S_BLC:
	case VVSENSORIOC_G_EXPAND_CURVE:
		break;
	default:
		break;
	}

	return ret;
}

As you can see in the example, some cases are implemented and others are not. Developers are free to implement the features they consider necessary, as long as a minimum operational base of the driver is guaranteed (query commands, read and write registers, among others). It is the developer's responsibility to implement each custom function for every case or scenario that may arise when interacting with the sensor. In addition to what was shown previously, a link must be created to make the ioctl connection with the driver in question.
Link your priv_ioctl function in the v4l2_subdev_core_ops struct, as in the example below:

static const struct v4l2_subdev_core_ops <sensor>_core_ops = {
	.s_power = v4l2_s_power,
	.subscribe_event = v4l2_ctrl_subdev_subscribe_event,
	.unsubscribe_event = v4l2_event_subdev_unsubscribe,
	// IOCTL link
	.ioctl = <sensor>_priv_ioctl,
};

Step 4 - Verify your sensor's private data structure: After performing the suggested modifications, it is good practice to double-check your sensor's private data structure properties, in case one is missing, and also check that the properties are initialized correctly in the driver's probe.

Step 5 - Modify the VVCAM V4L2 sensor Makefile: In $ISP_SOURCES_TOP/vvcam/v4l2/sensor/Makefile, include your sensor object as follows:

...
obj-m += <sensor>/
...

Important note: There is a very common issue that appears when working with camera sensor drivers on i.MX8MP platforms. The kernel log shows something similar to the following:

mxc-mipi-csi2.<X>: is_entity_link_setup, No remote pad found!

The link setup callback is required by the Media Controller when linking the media entities involved in the camera capture path. Normally, this callback is triggered by the imx8-media-dev driver included as part of the kernel sources. To make sure that the problem is not related to your sensor driver, verify that the link setup callback is already created in the code; if it is not, you can add the following template:

/* Function needed by i.MX8MP */
static int <sensor>_camera_link_setup(struct media_entity *entity,
				      const struct media_pad *local,
				      const struct media_pad *remote, u32 flags)
{
	/* Return always zero */
	return 0;
}

/* Add the link setup callback to the media entity operations struct */
static const struct media_entity_operations <sensor>_camera_subdev_media_ops = {
	.link_setup = <sensor>_camera_link_setup,
};

/* Verify the initialization process of the media entity ops in the sensor driver's probe function */
static int <sensor>_probe(struct i2c_client *client, ...)
{
	/* Initialize subdev */
	sd = &<sensor>->subdev;
	sd->dev = &client->dev;
	<sensor>->subdev.internal_ops = ...
	<sensor>->subdev.flags |= ...
	<sensor>->subdev.entity.function = ...
	/* Entity ops initialization */
	<sensor>->subdev.entity.ops = &<sensor>_camera_subdev_media_ops;
}

In most cases, adding the link setup function will solve the Media Controller issue, or at least rule out problems on the driver side.

Device Tree Modifications
On the device tree side, it is necessary to enable the ISP channels that will be used. Likewise, it is necessary to disable the ISI channels, which normally connect to the MIPI_CSI2 ports to extract raw data from the sensor when the ISP is not used. A MIPI_CSI2 port can be mapped to either an ISI channel or an ISP channel, but not both simultaneously. In this guide we focus on using the ISP, so any other custom configuration you want to implement may differ from what is shown. In the code below, ISP channel 0 is enabled and connected to the port where the sensor is attached (mipi_csi_0).
&mipi_csi_0 {
	status = "okay";

	port@0 {
		// Example endpoint to <sensor>_ep
		mipi0_sensor_ep: endpoint@1 {
			remote-endpoint = <&<sensor>_ep>;
		};
	};
};

&cameradev {
	status = "okay";
};

&isi_0 {
	status = "disabled";
};

&isi_1 {
	status = "disabled";
};

&isp_0 {
	status = "okay";
};

&isp_1 {
	status = "disabled";
};

&dewarp {
	status = "okay";
};

What is shown above does not represent a complete device tree file; it is only a general skeleton of the points you should pay attention to when working with ISP channels. For simplicity, we omitted all the attributes that are normally defined when working with camera sensor drivers and their respective configuration on the hardware's I2C port.

Note: Due to hardware restrictions when using ISP channels, it is recommended to use the isp_0 channel when working with only one sensor. If you need two sensors, you can enable both channels, taking into account the limitations on output resolutions and clock frequency when both channels work simultaneously. Using the isp_1 channel alone with a single sensor is not recommended.

References
ISP Independent Sensor Interface (ISI) API reference, i.MX8M Plus Camera Sensor Porting User Guide: https://www.nxp.com/webapp/Download?colCode=IMX8MPCSPUG
Sensor Calibration Tool: https://www.nxp.com/webapp/Download?colCode=AN13565
i.MX8M Plus Reference Manual: https://www.nxp.com/webapp/Download?colCode=IMX8MPRM
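Once the port is in place, a quick functional check (my own suggestion, not part of the guide above) is to confirm that the ISP media server started and that the video node streams. The service name and the /dev/video index are assumptions and may differ per BSP release:

# systemctl status imx8-isp.service                            # ISP media server launched through run.sh with your Sensor0_Entry.cfg
# v4l2-ctl --list-devices                                      # the ISP output should be listed as a /dev/video<X> node
# media-ctl -p                                                 # inspect the media graph and the links of your sensor entity
# gst-launch-1.0 -v v4l2src device=/dev/video2 ! waylandsink   # /dev/video2 is only an example index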
View full article