i.MX Processors Knowledge Base

Check the new updated version with Morty here.

Step 1: Get the i.MX Yocto AVS setup environment
Review the steps under Chapter 3 of the i.MX_Yocto_Project_User's_Guide.pdf on the L4.X LINUX_DOCS to prepare your host machine, including at least the following essential Yocto packages:
$ sudo apt-get install gawk wget git-core diffstat unzip texinfo \
  gcc-multilib build-essential chrpath socat libsdl1.2-dev u-boot-tools
Install the i.MX NXP AVS repo. Create/move to a directory where you want to install the AVS Yocto build environment; let's call it <yocto_dir>:
$ cd <yocto_dir>
$ repo init -u https://source.codeaurora.org/external/imxsupport/meta-avs-demos -b master -m imx7d-pico-avs-sdk_4.1.15-1.0.0.xml
Download the AVS BSP build environment:
$ repo sync

Step 2: Set up Yocto for the Alexa SDK image with the AVS-SETUP-DEMO script
Run the avs-setup-demo script as follows to set up your environment for the imx7d-pico board:
$ MACHINE=imx7d-pico DISTRO=fsl-imx-x11 source avs-setup-demo.sh -b <build_sdk>
where <build_sdk> is the name you will give to your build folder. After accepting the EULA, the script will prompt you for the following options.
Sound card selection. The following sound cards are supported on the build:
- SGTL (on-board audio codec for PicoPi)
- 2-Mic Conexant
The script asks whether you are going to use the Conexant card; if not, SGTL is assumed as your selection:
Are you going to use Conexant Sound Card [Y/N]?
Install Alexa SDK. The next option selects whether to pre-install the AVS SDK software on the image:
Do you want to build/include the AVS_SDK package on this image(Y/N)?
If you select YES, your image will contain the AVS SDK ready to use (after authentication). Note that this AVS_SDK will not have WakeWord detection support, but it can be added at runtime. If your selection was NO, you can always fetch and build the AVS_SDK manually at runtime; all the package dependencies will already be there, so only fetching the AVS_SDK source code and building it is required.
Finish the avs-image configuration. At the end you will see a summary of the configuration you selected for your image build. The following is an example for a pre-installed AVS_SDK with Conexant sound card support and WiFi/BT not enabled:
==========================================================
  AVS configuration is now ready at conf/local.conf
    - Sound Card = Conexant
    - AVS_SDK pre-installed
  You are ready to bitbake your AVS demo image now:
    bitbake avs-image
==========================================================

Step 3: Build the AVS image
Go to your <build_sdk> directory and start the build of the avs-image. There are two options.
Regular build:
$ cd <yocto_dir>/<build_sdk>
$ bitbake avs-image
With QT5 support included:
$ cd <yocto_dir>/<build_sdk>
$ bitbake avs-image-qt5
The image with QT5 is useful if you want to add a GUI, for example to render Display Cards.

Step 4: Deploy the built image to an SD/MMC card to boot the target board
After a build has successfully completed, the created image resides at <build_sdk>/tmp/deploy/images/imx7d-pico/. In this directory you will find the imx7d-pico-avs.sdcard or imx7d-pico-avs-qt5.sdcard image, depending on the build you chose in Step 3. To flash the .sdcard image into the eMMC device of your PicoPi board, follow the next steps:
Download the bootbomb flasher. Follow the instructions in Section 4, Board Reflashing, of the Quick Start Guide for the AVS kit to set up your board in flashing mode.
Copy the built SD card file (see the note on the device node after this article):
$ sudo dd if=imx7d-pico-avs.sdcard of=/dev/sd bs=1M && sync
$ sync
Properly eject the pico-imx7d board:
$ sudo eject /dev/sd

NXP Documentation
Refer to the Quick Start Guide for the AVS SDK to fully set up your PicoPi board with the Synaptics 2Mic and the PicoPi i.MX7D. For a more comprehensive understanding of Yocto, its features and setup, more image build and deployment options, and customization, please take a look at the i.MX_Yocto_Project_User's_Guide.pdf document from the Linux documents bundle mentioned at the beginning of this document. For a more detailed description of the Linux BSP and U-Boot use and configuration, please take a look at the i.MX_Linux_User's_Guide.pdf document from the same bundle.
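A note on the dd step above: the target device node (left as /dev/sd in the original) differs per host, so identify it first. A minimal sketch, where sd<X> is a placeholder you must replace with the node lsblk reports for the PicoPi in flashing mode:
$ lsblk   # identify the flashing-mode device by its size, e.g. sdb or sdc
$ sudo dd if=imx7d-pico-avs.sdcard of=/dev/sd<X> bs=1M && sync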
View full article
Information about the transition from the NXP Demo Experience to GoPoint for i.MX Application Processors.
View full article
The i.MX8 VPU hardware decoder supports the following video codecs:
- H.265 HEVC Main Profile 4Kp60 Level 5.1
- H.264 AVC Constrained Baseline, Main and High Profile
- H.264 MVC
- WMV9 / VC-1 Simple, Main and Advanced Profile
- MPEG-1 and MPEG-2 Main Profile at High Level
- AVS Jizhun Profile (JP)
- MPEG-4.2 ASP, H.263, Sorenson Spark
- DivX 3.11, with Global Motion Compensation (GMC)
- ON2/Google VP6/VP8
- RealVideo 8/9/10
- JPEG and MJPEG A/B Baseline

The i.MX8 VPU Linux driver is implemented based on the V4L2 standard. Besides software video decoding, Chromium also supports hardware video decoding through a VideoDecodeAccelerator. There are several kinds of VideoDecodeAccelerator; one of them is V4L2VDA. Note that V4L2VDA uses the V4L2 API, so it is possible to change V4L2VDA to enable Chromium hardware video playback on i.MX8.

This document shares patches that add Chromium video decode acceleration using the i.MX8QM/i.MX8QXP VPU. They add Chromium H.264, H.265 and VP8 hardware video decoding. H.264 and H.265 need to use an mp4 container; VP8 uses a webm container.

HW: i.MX8QM/i.MX8QXP MEK board, 1080p HDMI display, mouse, keyboard
SW: i.MX8 5.10.72_2.2.2 Yocto BSP release (which includes Chromium 91.0), plus the patches in this document

Patch description:
- imx8-5.10.72-vpudrv-update.diff: updates the i.MX8 5.10.72_2.2.2 kernel VPU driver to https://source.codeaurora.org/external/imx/linux-imx/commit/drivers/mxc/vpu_malone?h=lf-5.15.y&id=fa7c67e2c9ed4fb8392fa258f931d6996339a17a
- chromium-ozone-wayland_91.0.4472.114.bb.diff: changes meta-browser/meta-chromium/recipes-browser/chromium/chromium-ozone-wayland_91.0.4472.114.bb to add some compile flags, etc.
- 5.10.72-merge.patch: changes the Chromium source code to add video decode acceleration using the i.MX8 VPU.

Build steps:
1> Download the i.MX8 5.10.72_2.2.2 Yocto release from nxp.com
2> Apply chromium-ozone-wayland_91.0.4472.114.bb.diff to meta-browser/meta-chromium/recipes-browser/chromium/chromium-ozone-wayland_91.0.4472.114.bb
3> Put 5.10.72-merge.patch into folder path_of_yocto-5.10.72-2.2.2/sources/meta-browser/meta-chromium/recipes-browser/chromium/files/
4> Apply imx8-5.10.72-vpudrv-update.diff to the i.MX8 5.10.72_2.2.2 kernel
5> Under the Yocto image build folder, add CORE_IMAGE_EXTRA_INSTALL += "chromium-ozone-wayland" to path_of_yocto-5.10.72-2.2.2/folder-of-bld/conf/local.conf
6> Run bitbake to build the rootfs image

Test steps:
After the system boots, put some video clips under /home/root/video, then run the command below (do not run chromium without any parameters, as that will start Chromium with other settings; you can check /usr/lib/chromium/chromium-wrapper). See also the sanity check after the references.
"/usr/lib/chromium/chromium-bin --no-sandbox --ozone-platform=wayland --enable-features=VaapiVideoDecoder --enable-accelerated-video-decode --enable-clear-hevc-for-testing --ignore-gpu-blacklist --window-size=1920,1180 /home/root/video"
Then use the mouse to click a video clip and playback will start.

Reference:
https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-8-processors:IMX8-SERIES
https://www.nxp.com/design/software/embedded-software/i-mx-software/embedded-linux-for-i-mx-applications-processors:IMXLINUX
https://www.chromium.org/audio-video/#:~:text=codec%20and%20container%20support
https://github.com/igel-oss/meta-browser-hwdecode/blob/master/recipes-chromium/chromium/files/0001-Add-support-for-V4L2VDA-on-Linux.patch
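Before testing inside Chromium, it can help to confirm that the VPU decode path itself works. A minimal sketch, assuming the BSP's standard GStreamer stack is installed and using a hypothetical clip name:
$ # optional sanity check that V4L2/VPU decode works outside Chromium
$ gst-launch-1.0 playbin uri=file:///home/root/video/clip.mp4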
View full article
Overview: This document is written for Freescale customers who have the Freescale AC3 release packages (excluded package). (If you do not have the AC3 release package, you can disregard this document.) The Freescale OMX Player in the Android release supports audio track selection when playing files with multiple audio tracks. However, most customers don't use this enhanced API to select the audio track, even when the current audio codec is not supported. To avoid soundless output when only some of the audio tracks can be played, this document provides a method to automatically select a playable audio track. The patch in this document is not included in our current release because it does not match our track selection rule (play the first track). If you have any ideas about this issue, feel free to add comments to this document.

Issue description:
Software: R13.4-GA or R13.4.1 Android releases
Hardware: MX6Dual/Quad SabreSD board
Test source: 1.mkv
Test steps:
1. Launch Gallery from the main menu.
2. Play the video.
You can watch the video, but without any sound.

Root cause: The file has two audio tracks, DTS and AC3: track 1 is DTS and track 2 is AC3. OMX Player chooses the first audio track as the default track, which is DTS audio. However, the software only supports the AC3 audio codec, so it cannot set up an audio decoder for the DTS track. If we choose to play the AC3 track instead, sound can be heard.

How to fix: The audio track index is set in GMPlayer::LoadParser(). You can get the audio format of each track to check whether it is supported by a decoder. Please see the patch audio_track_selection.diff.
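The selection rule the patch implements can be sketched independently of the OMX sources. A minimal Python model with hypothetical track metadata (the real patch inspects the parser's audio format inside GMPlayer::LoadParser(), not this structure):

# Model of "play the first track a decoder actually supports"
SUPPORTED_CODECS = {"ac3"}  # assumption: AC3-only build, no DTS decoder

def select_audio_track(track_codecs):
    # Return the index of the first decodable track, or None if none can play.
    for index, codec in enumerate(track_codecs):
        if codec in SUPPORTED_CODECS:
            return index
    return None

print(select_audio_track(["dts", "ac3"]))  # -> 1, skipping the unsupported DTS track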
View full article
The i.MX 8QXP MEK no longer allows OV5640/LVDS/LCD usage only by changing the device tree. This occurs because the M4 owns the I2C resources, so the A core must use RPMsg to enable virtual drivers. Because of this, if the user changes the device tree, for instance to *ov5640.dtb, the kernel won't boot, entering the following loop:
[    8.603353] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[    8.610025] [drm] No driver support for vblank timestamp query.
[    8.616077] imx-drm display-subsystem: bound imx-drm-dpu-bliteng.2 (ops dpu_)
[    8.624978] imx-drm display-subsystem: bound imx-dpu-crtc.0 (ops dpu_crtc_op)
[    8.632526] imx-drm display-subsystem: bound imx-dpu-crtc.1 (ops dpu_crtc_op)
[    8.639833] imx-drm display-subsystem: failed to bind ldb@562210e0 (ops imx_7
[    8.648428] imx-drm display-subsystem: master bind failed: -517
With the approach provided in this post, it is possible to make this change manually, by replacing the flash.bin in U-Boot with a non-M4 one. To make the changes to the flash.bin file, the following files are needed:
- u-boot.bin from the internal U-Boot provided by NXP
- scfw_tcm.bin from the SCFW porting kit
- bl31.bin from the ARM Trusted Firmware
- SECO firmware container image
Disclaimer: The procedures described in this document target GNU/Linux (Ubuntu 20.04 LTS) and are focused on i.MX8QXP B0 + BSP L4.19.35_1.1.0.
Required packages
1 - Install the ARM64 toolchain:
1.1 - Install the ARM64 GCC and G++ cross-compilers:
# apt install gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
2 - Install the ARM32 GCC6 toolchain:
2.1 - Download the ARM32 GCC6 toolchain and install it:
$ mkdir ~/gcc_toolchain
$ cp ~/Downloads/gcc-arm-none-eabi-6-2017-q2-update-linux.tar.bz2 ~/gcc_toolchain/
$ cd ~/gcc_toolchain/
$ tar xvjf gcc-arm-none-eabi-6-2017-q2-update-linux.tar.bz2
# apt-get update
# apt-get install srecord
3 - Download imx-mkimage
3.1 - Create a new directory for the packages:
$ mkdir flash_build
$ cd flash_build
3.2 - Clone imx-mkimage:
$ git clone https://source.codeaurora.org/external/imx/imx-mkimage -b imx_4.19.35_1.1.0
4 - U-Boot build
4.1 - Clone the U-Boot:
$ git clone https://source.codeaurora.org/external/imx/uboot-imx -b imx_v2019.04_4.19.35_1.1.0
$ cd uboot-imx
4.2 - Export the ARM64 toolchain:
$ export ARCH=arm64
$ export CROSS_COMPILE=/usr/bin/aarch64-linux-gnu-
4.3 - Build it:
$ unset LDFLAGS
$ make -j4 imx8qxp_mek_defconfig
$ make
4.4 - Copy the binary files to the imx-mkimage/iMX8QX directory:
$ cp spl/u-boot-spl.bin ../imx-mkimage/iMX8QX/
$ cp u-boot-nodtb.bin ../imx-mkimage/iMX8QX/
$ cd ..
5 - ARM Trusted Firmware
5.1 - Clone imx-atf:
$ git clone https://source.codeaurora.org/external/imx/imx-atf -b imx_4.19.35_1.1.0
$ cd imx-atf
5.2 - Build it:
$ unset LDFLAGS
$ make PLAT=imx8qx bl31
5.3 - Copy the binary file to the imx-mkimage/iMX8QX directory:
$ cp build/imx8qx/release/bl31.bin ../imx-mkimage/iMX8QX/
$ cd ..
6 - SCFW
6.1 - Export the ARM32 GCC6 toolchain:
$ export TOOLS=~/gcc_toolchain/
6.2 - Download the BSP L4.19.35_1.1.0 SCFW porting kit and copy it to the flash_build directory:
$ cp ~/Downloads/imx-scfw-porting-kit-1.2.7.1.tar.gz .
$ tar xvzf imx-scfw-porting-kit-1.2.7.1.tar.gz
$ cd packages/
$ chmod a+x imx-scfw-porting-kit-1.2.7.1.bin
$ ./imx-scfw-porting-kit-1.2.7.1.bin
6.3 - Build it for the i.MX 8QXP MEK B0:
$ cd imx-scfw-porting-kit-1.2.7.1/src/
$ tar xvzf scfw_export_mx8qx_b0.tar.gz
$ cd scfw_export_mx8qx_b0/
$ make qx R=B0 B=mek
6.4 - Copy the binary file to the imx-mkimage/iMX8QX directory:
$ cp build_mx8qx_b0/scfw_tcm.bin ../../../../imx-mkimage/iMX8QX/
7 - SECO firmware container image
7.1 - Download the SECO firmware binaries and copy them to the flash_build directory:
$ cp ~/Downloads/firmware-imx-7.9.bin .
$ chmod a+x firmware-imx-7.9.bin
$ ./firmware-imx-7.9.bin
7.2 - Copy the binary file to the imx-mkimage/iMX8QX directory:
$ cp firmware-imx-7.9/firmware/seco/mx8qx-ahab-container.img imx-mkimage/iMX8QX/
8 - Build flash.bin
8.1 - In a new terminal, open the imx-mkimage directory:
$ cd flash_build/imx-mkimage
8.2 - Build it:
$ make SOC=iMX8QX flash
8.3 - Deploy it to the SD card:
$ sudo dd if=iMX8QX/flash.bin of=/dev/sdX bs=1k seek=32 && sync
Now you are able to use any non-rpmsg .dtb without kernel errors.
Author: Pedro Jardim: pedro.jardim@nxp.com
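Addendum: before running the build in step 8.2, it can help to verify that every input file from steps 4.4, 5.3, 6.4 and 7.2 actually landed in the mkimage tree (file names taken from the steps above):
$ ls flash_build/imx-mkimage/iMX8QX/
# expected to include: u-boot-spl.bin u-boot-nodtb.bin bl31.bin scfw_tcm.bin mx8qx-ahab-container.img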
View full article
Video decoding:
gst-launch filesrc location=sample.mp4 ! qtdemux ! ffdec_h264 ! mfw_v4lsink
Notes: On the LTIB BSP 3.0.35_4.0.0, prep the package, apply the attached patch on top, then build. On Yocto, the easy way to add the gst-ffmpeg package is by adding these two lines to the conf/local.conf file:
IMAGE_INSTALL_append = " gst-ffmpeg"
LICENSE_FLAGS_WHITELIST = 'commercial'
View full article
This document describes the i.MX 8QXP MEK mini-SAS connector features for Linux and Android use cases, covering the supported daughter cards, the process to change Device Tree (DTS) files or boot images, and how to enable these different display options on the board.
View full article
GUI Guider version: 1.6.0
LVGL version: v8.3.5
Host software requirements: Ubuntu 20.04, Ubuntu 22.04 or Debian 12
Hardware requirements: Evaluation Kit for the i.MX 93 Applications Processor (i.MX 93 Evaluation Kit | NXP Semiconductors). In this guide we will use the IMX-MIPI-HDMI accessory board to connect the i.MX93 to an HDMI monitor (IMX-MIPI-HDMI Product Information | NXP). This board is usually provided with the i.MX8M Mini and the i.MX8M Nano.
Steps:
1. Copy your project from the folder GUI-Guider-Projects to your Linux PC.
2. Build an image for the i.MX93 using the Yocto Project.
a. Based on the i.MX Yocto Project User's Guide, set up the directories and download the repo:
$ mkdir imx-bsp-6.1.1-1.0.0
$ cd imx-bsp-6.1.1-1.0.0
$ repo init -u https://github.com/nxp-imx/imx-manifest -b imx-linux-langdale -m imx-6.1.1-1.0.0.xml
$ repo sync
Use the distro fsl-imx-xwayland, select the machine imx93evk, and use this command with a build folder name:
$ MACHINE=imx93evk DISTRO=fsl-imx-xwayland source ./imx-setup-release.sh -b bld-imx93evk
b. Use the bitbake command to start the build process. Also add -c populate_sdk to get the toolchain:
$ bitbake imx-image-multimedia -c populate_sdk
c. Install the Yocto toolchain located at <build-folder>/tmp/deploy/sdk/:
$ sudo sh ./fsl-imx-xwayland-glibc-x86_64-imx-image-multimedia-armv8a-imx93evk-toolchain-6.1-langdale.sh
d. Install the ninja utility on the build host:
$ sudo apt install ninja-build
e. For Ubuntu 20.04 and Ubuntu 22.04, copy the lv_conf.h file from lvgl-simulator to lvgl:
$ cp lvgl-simulator/lv_conf.h lvgl/
f. Change the interpreter in build.sh from #!/bin/sh to #!/bin/bash. This is an important step!
g. Then enter the linux folder and use the following commands to make build.sh executable:
$ dos2unix build.sh
$ chmod +x build.sh
h. Execute build.sh:
$ ./build.sh
i. Copy the binary to the i.MX93 using a USB drive or SCP (see the SCP example after this article).
3. On the target i.MX93, follow these steps:
a. In U-Boot, use fatls <interface> <device[:partition]> to list the device tree files:
=> fatls mmc 0:1 (device 0 : partition 1)
b. Select imx93-11x11-evk-rm67199.dtb and use the command editenv fdtfile:
=> editenv fdtfile
Output example: edit: imx93-11x11-evk-rm67199.dtb
c. In the edit command line, put the selected device tree (.dtb).
d. Use the saveenv command to save the environment and continue with the boot process.
e. Finally, run the GUI application:
$ ./gui_guider&
I hope this article will be helpful. Best regards, Brian.
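A note on step 2.i: a minimal SCP sketch for deploying the binary, where the binary path and the board IP are placeholders to adapt to your build and network:
$ scp gui_guider root@192.168.0.100:/home/root/   # binary name per step 3.e; IP is hypothetical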
View full article
Common GStreamer pipelines:
Video rendering:
gst-launch videotestsrc ! mfw_v4lsink
Audio rendering:
gst-launch audiotestsrc ! alsasink
WAV audio rendering:
gst-launch filesrc location=test.wav ! wavparse ! alsasink
Video rendering selecting caps (note: the capsfilter property is caps):
gst-launch videotestsrc ! capsfilter caps='video/x-raw-yuv,format=(fourcc)I420' ! mfw_v4lsink
gst-launch videotestsrc ! 'video/x-raw-yuv,format=(fourcc)I420' ! mfw_v4lsink
View full article
Freescale does not provide a specific GStreamer element for JPEG encoding, so the standard 'jpegenc' element should be used.
Image capture with a web camera:
gst-launch v4l2src num-buffers=1 ! jpegenc ! filesink location=sample.jpeg
With an embedded camera:
gst-launch mfw_v4lsrc num-buffers=1 ! jpegenc ! filesink location=sample.jpeg
More pipelines on GStreamer i.MX6 Pipelines.
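To check a captured file on the target, the standard 'jpegdec' element can decode it back to the display. A sketch, assuming the imagefreeze element from gst-plugins-good is available on the BSP to keep the still frame on screen:
gst-launch filesrc location=sample.jpeg ! jpegdec ! imagefreeze ! mfw_v4lsink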
View full article
(DEPRECATED. Please check this document for Real Time Streaming.) A server can be streaming video while a client, in this case an i.MX6 target, receives and decodes it. For example, a server with GStreamer and a web camera connected can stream with the following command:
$ # Pipeline 1
$ gst-launch v4l2src ! 'video/x-raw-yuv, format=(fourcc)I420, width=(int)1280, height=(int)800' ! ffenc_mpeg4 ! tcpserversink host=$CLIENT_IP port=$PORT
and on the target, the client receives, decodes and displays it with:
$ # Pipeline 2
$ gst-launch tcpclientsrc host=$SERVER_IP port=$PORT ! 'video/mpeg, width=(int)1280, height=(int)800, framerate=(fraction)10/1, mpegversion=(int)4, systemstream=(boolean)false' ! vpudec ! mfw_isink
The filter caps between tcpclientsrc and the decoder (vpudec) depend on the sink caps coming from the server encoder (ffenc_mpeg4), so these may change depending on your needs. Running the above pipelines requires the environment variables SERVER_IP, CLIENT_IP and PORT. In case you want the i.MX6 to act as a server, just change the video source (either mfw_v4lsrc or v4l2src) and the encoder (vpuenc):
$ # Pipeline 3
$ gst-launch v4l2src ! 'video/x-raw-yuv, format=(fourcc)I420, width=(int)640, height=(int)480, interlaced=(boolean)false, framerate=(fraction)10/1' ! vpuenc ! tcpserversink host=$CLIENT_IP port=$PORT
For testing purposes, set SERVER_IP=127.0.0.1, CLIENT_IP=127.0.0.1 and PORT=500, and run pipelines 3 and 2 in two different consoles. Check the CPU usage with 'top' and see that the VPU is actually doing most of the work.
View full article
The document includes the following contents:
(1) How to port the ov5645 to Android JB4.2.2
(2) ov5645 driver for Linux 3.0.35
(3) ov5645 schematic based on i.MX6Q/DL
(4) ov5645 for the Android camera HAL
[Note:] P5V29A-0JG is a camera module based on the OV5645, and PAO532-0JG is based on the OV5640, both manufactured by NINGBO SUNNY OPOTECH CO.LTD (China). If customers want to use them on the i.MX6 platform, they can send me an email to ask for the datasheets of the P5V29A & PAO532, or discuss the corresponding porting questions here.
Email: weidong.sun@freescale.com
View full article
GUI Guider's user interface options are limited: interaction only through a mouse or touchscreen can be enough for some use cases, but sometimes the use case requires going beyond those limitations. This video/appnote explores the possibility of integrating voice by creating a bridge between a speech recognition technology, such as VIT, and the interface creator GUI Guider. It uses a universal way to link all the voice recognition commands and a wake word to any interaction created in GUI Guider. The following video shows the steps necessary to create that connection: creating the voice recognition using VIT voice commands and wake words, creating a GUI Guider interface from a template, connecting the two on the i.MX 93 EVK board, and testing it. For more information, consult the following links:
AppNote HTML: https://docs.nxp.com/bundle/AN14270/page/topics/abstract.html?_gl=1*1glzg9k*_ga*NDczMzk4MDYuMTcxNjkyMDI0OA..*_ga_WM5LE0KMSH*MTcxNjkyMDI0OC4xLjEuMTcxNjkyMDcyMy4wLjAuMA
AppNote PDF: https://www.nxp.com/docs/en/application-note/AN14270.pdf
Associated File: AN14270SW
View full article
Hello, in this post I will explain how to record separate audio channels using an 8MIC-RPI-MX8 board. As background on how to set up the board to record and play audio using i.MX boards, I suggest you take a look at the next post: How to configure, record and play audio using an 8MIC-RPI-MX8 Board.
Requirements:
- i.MX 8M Mini EVK.
- Linux Binary Demo Files - i.MX 8M Mini EVK.
- 8MIC-RPI-MX8 board.
- Serial console emulator (Tera Term, PuTTY, etc.).
- Headphones/speakers.
Waveform Audio Format
WAV, known as WAVE (Waveform Audio File Format), is a subset of Microsoft's Resource Interchange File Format (RIFF) specification for storing digital audio files. This format does not apply compression to the information and stores the audio with different sampling rates and bitrates. WAV files are larger than formats such as MP3, which uses compression to reduce the file size while maintaining good audio quality; however, compression always loses some quality, since audio information is too random to be compressed well with conventional methods. The main advantage of this format is that it provides an audio file without losses, which is also widely used in studios. These files start with a file header followed by data chunks. A WAV file consists of two sub-chunks:
- fmt chunk: data format.
- data chunk: sample data.
So it is structured as metadata, called the WAV file header, plus the actual audio information. The header of a WAV (RIFF) file is 44 bytes long and covers the RIFF/WAVE identifiers, the fmt sub-chunk (audio format, number of channels, sample rate, byte rate, block align, bits per sample) and the data sub-chunk header.
How to separate the channels?
To separate each audio channel from the recording, we need to use the following command, which records the raw data of each channel:
arecord -D plughw:<audio device> -c<number of channels> -f <format> -r <sample rate> -d <duration of the recording> --separate-channels <output file name>.wav
arecord -D plughw:2,0 -c8 -f s16_le -r 48000 -d 10 --separate-channels sample.wav
This command outputs the raw data of the recorded channels. This raw data cannot be used as a "normal" .wav file because the header information is missing; you can confirm this by importing the raw data into a DAW and playing the recorded samples. So, to use this information, we need to create the header for each file using the wave library in Python. Here is the script that I used:

import wave
import os

name = input("Enter the name of the audio file: ")
os.system("arecord -D plughw:2,0 -c8 -f s16_le -r 48000 -d 10 --separate-channels " + name + ".wav")

for i in range(0, 8):
    with open(name + ".wav." + str(i), "rb") as in_file:
        data = in_file.read()
        with wave.open(name + "_channel_" + str(i) + ".wav", "wb") as out_file:
            out_file.setnchannels(1)
            out_file.setsampwidth(2)
            out_file.setframerate(48000)
            out_file.writeframesraw(data)

os.system("mkdir output_files")
os.system("mv " + name + "_channel_" + "* " + "output_files")
os.system("rm " + name + ".wav.*")

If we run the script, it will generate a directory with the eight audio channels in .wav format. Now we will be able to play each channel individually using an audio player.
References
IBM, Microsoft Corporation. (1991). Multimedia Programming Interface and Data Specifications 1.0.
Microsoft Corporation. (1994). New Multimedia Data Types and Data Techniques.
Stanford University. (2024, January 30). Retrieved from WAVE PCM sound file format: http://hummer.stanford.edu/sig/doc/classes/SoundHeader/WaveFormat/
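As a quick check of the script's output, Python's wave module can read back one of the generated files. A minimal sketch, assuming the name "sample" was entered when the script ran:

import wave

with wave.open("output_files/sample_channel_0.wav", "rb") as w:
    # expect 1 channel, 2-byte (16-bit) samples, 48000 Hz
    print(w.getnchannels(), w.getsampwidth(), w.getframerate(), w.getnframes())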
View full article
In this tutorial we will review the implementation of Flutter on the i.MX8MP using the Linux Desktop Image. Please find more information about Flutter at the following link: Flutter: Option to create GUIs for Embedded System... - NXP Community
Requirements:
- Evaluation Kit for the i.MX 8M Plus Applications Processor (i.MX 8M Plus Evaluation Kit | NXP Semiconductors)
- NXP Desktop Image for i.MX 8M Plus (GitHub - nxp-imx/meta-nxp-desktop at lf-6.1.1-1.0.0-langdale)
Note: This tutorial is based on the NXP Desktop Image Ubuntu 22.04 with Yocto version 6.1.1 – Langdale.
Steps:
1. First, run these commands to update packages:
$ sudo apt update
$ sudo apt upgrade
2. Install Flutter for Linux using the following command:
$ sudo snap install flutter --classic
3. Run this command to verify the correct installation:
$ flutter doctor
With this command you will find information about the installation. The important part for our purpose is the parameter "Linux toolchain - develop for Linux desktop".
4. Run the command "flutter create ." to create a Flutter project; the framework will create the folders and files used to develop the application:
$ cd Documents
$ mkdir flutter_hello
$ cd flutter_hello
$ flutter create .
5. Finally, you can run the "hello world" application using:
$ flutter run
Verify the program behavior by incrementing the number displayed on the window.
View full article
One of the most popular use cases for embedded systems is projects intended to show information and interact with users. These views are called GUIs, or Graphical User Interfaces, and are designed to be intuitive, attractive, consistent, and clear. There are many tools we can use to achieve great GUIs, mostly implemented for platforms such as Web, Android, and iOS. Here we need to introduce the concept of a framework: basically, a set of tools and rules that provides a minimal structure to start your development. Frameworks usually come with configuration files, code snippets, and a file and folder organization, helping us save time and effort. It is also important to review the concept of an SDK, or Software Development Kit, which is a set of tools for building software for specific platforms, usually supplying debugging tools, documentation, libraries, APIs, emulators, and sample code. Flutter is an open-source UI software development kit by Google that helps us create applications with great GUIs for different platforms from a single codebase. Depending on the reference, you can find Flutter defined as a framework or as an SDK, and both are correct; however, SDK may be the better definition, since Flutter supplies a wide and complete package for creating an application, in which a framework is also included. This article is aimed at those who are in a prototyping stage looking for a different tool to develop projects. It is intended as a theoretical introduction explaining the most important concepts; still, it is good practice to learn more by reviewing the official Flutter documentation (Flutter documentation | Flutter). Here is the structure used throughout this article:
- What is Flutter?
- Flutter details
- Platforms
- Programming language
- Official documentation
- Flutter for embedded systems
What is Flutter?
Flutter was officially released by Google in December 2018 with one main aim: to give developers a tool to create applications natively compiled for mobile (Android, iOS), web and desktop (Windows, Linux) from a single codebase. It means that, as a developer, Flutter will create a structure with minimal code, configuration files, build files for each operating system, manifests, etc., to which we add our custom code and finally build for our preferred OS. For example, we can create an application to review fruit and vegetable information and compile it for Android and iOS from the same code. A basic Flutter development process, based on my experience, looks like the following diagram:
Flutter has the following key features:
- Cross-platform development. Flutter allows the developer to create applications for different platforms using a single codebase, meaning you will not need to recreate the application for each platform you want to support.
- Hot reload. This feature allows the developer to see changes in real time without restarting the whole application, which saves time during the project.
- High performance. Flutter apps achieve high performance because the app code is compiled to native ARM code; no interpreters are involved.
- UI widgets. Flutter supplies a set of widgets (UI components such as boxes, text inputs, buttons, etc.) predefined by the UI system guidelines: Material on Android and Cupertino on iOS.
Source: Material 3 Design Kit | Figma Community
Source: Design - Apple Developer
- Great community support.
This feature may be subjective but, when developing a project, it is useful for finding solutions to known issues or reporting new ones. Because Flutter is open source and widely used in industry, it has a big community, with events, forums, and documentation.
Flutter Details
Supported Platforms
With Flutter you can create applications for:
- Android
- iOS
- Linux Debian, Linux Ubuntu
- macOS
- Web: Chrome, Firefox, Safari, Edge
- Windows
Supported deployment platforms | Flutter
Programming Language
Flutter uses Dart, an open-source programming language supported by Google and optimized for creating user interfaces. Dart key features:
- Statically typed. This helps catch errors and makes the code robust by ensuring that a variable's value always matches the declared variable's type.
- Null safety. Variables in Dart are non-nullable, which means that every variable must have a non-null value, avoiding errors at execution time. This also makes the code robust and secure.
- Async/await. Dart is client-optimized, meaning the language was specially created to ensure the best performance for client applications. Async/await is part of this optimization, making it easier to manage network requests and other asynchronous operations.
- Object oriented. Dart is an object-oriented language with classes and mixins, which is especially useful in Flutter with the usage of widgets.
- Compiler support for Just-In-Time (JIT) and Ahead-Of-Time (AOT). JIT provides the support that enables the hot reload feature mentioned before; it is a complex mechanism, but in short Dart "detects" changes in your code and executes only those changes, avoiding recompiling all the code. The AOT compiler produces efficient ARM code, improving startup time and performance.
Official documentation
Flutter has a rich community and documentation that goes from UI guidelines to an architectural overview. You can find the official documentation at the following links:
Flutter Official Documentation: Flutter documentation | Flutter
Flutter Community: Community (flutter.dev)
Dart Official Documentation: Dart documentation | Dart
Flutter for embedded systems
So far, we know all the excellent features and platforms that Flutter can support. But what about embedded systems? In the official documentation we can find that Flutter may be used for embedded systems, but in fact there is no officially supported platform. This SDK has been supported by the community; in particular, there is a repository on GitHub maintained by Sony that provides documentation and Yocto recipes to support Flutter on embedded Linux. To understand why Flutter for Linux desktop is officially supported while embedded Linux needs its own specific support, it is important to describe the basics of the Flutter architecture. Based on the Flutter documentation, the system is designed in layers that can be illustrated as follows:
Source: Flutter architectural overview | Flutter
The top layer, "Framework", is a high-level layer that includes the widgets, tools and libraries that are in contact with developers. Below the Framework, the "Engine" layer is responsible for drawing the widgets specified in the previous layer and provides the connection between high-level and low-level code. This layer is mostly written in C++, which is how Flutter achieves high performance when running applications. Specifically for graphics rendering, Flutter implements Impeller on iOS and Skia on the rest of the platforms. The bottom layer is the "Embedder", which is specific to each target and operating system; it allows the Flutter application to run as a native app, providing access to services managed by the operating system such as input, rendering surfaces and accessibility. On Linux desktop this layer uses GTK/GDK and X11 as a backend, which depends on unnecessary libraries and is expensive for embedded systems with constrained computation and memory resources. The workaround adopted by Sony's Flutter for Embedded Linux repository is to replace this backend with Wayland, a backend widely used on embedded systems. The following image illustrates the difference between Flutter for Linux desktop and Flutter for embedded Linux:
Source: What's the difference between Linux desktop and Embedded Linux · sony/flutter-embedded-linux Wiki · GitHub
Here is the link to the mentioned repository: GitHub - sony/flutter-elinux: Flutter tools for embedded Linux (eLinux)
Finally, I would like to encourage you to read the official Flutter documentation and consider this tool as a great option compared to widely used tools on embedded devices such as Qt or Chromium. Also, please have a look at a great article written by Payam Zahedi delving into the implementation of Flutter for Embedded Linux, measuring performance and drawing conclusions about the usage of Flutter in embedded systems (Flutter on Embedded Devices. Learn how to run Flutter on embedded… | by Payam Zahedi | Snapp Embedded | Medium).
View full article
vpuwrapper can drive the VPU decoder/encoder directly. If the customer's use case is simple, for example just encoding a YUV stream to H.264 or decoding an H.264 stream to YUV, there is no need to use the complex GStreamer or V4L2 frameworks; you can use vpuwrapper.
Platform: i.MX8MP + L5.4.70.2.3.0
Build procedure:
mkdir vpu
cd vpu
git clone https://github.com/nxp-imx/imx-vpuwrap
cd imx-vpuwrap/
git tag -l
git switch -c rel_imx_5.4.70_2.3.0
source ../../.././5.4.70.2.3.0/sdk/environment-setup-aarch64-poky-linux
make -f Makefile_8mp
Test on the i.MX8MP EVK board. Please find the attached test logs for decode and encode.
If busChromaU in the YUV file is null, encoding will fail; please apply the patch "vpuwraper patch for L5.4.70.2.3.0.patch" to fix it.
If the YUV file is in interleaved format, you need to add the interleave parameter -interleave 1:
./test_enc_arm_elinux -i test.yuv -o aaa.h264 -f 2 -w 176 -h 96 -interleave 1
Thanks, Lambert
View full article
In this article, I will explain how to set up the i.MX8M Plus to use the 4K dart BCON Basler camera module.
Requirements:
- Evaluation Kit for the i.MX 8M Plus Applications Processor (i.MX 8M Plus Evaluation Kit | NXP Semiconductors).
- Basler Camera for i.MX 8M Plus (4K dart BCON for MIPI camera module for i.MX 8M Plus | NXP Semiconductors).
- Embedded Linux for i.MX Applications Processors (Embedded Linux for i.MX Applications Processors | NXP Semiconductors) (for this example we will use BSP version Linux 5.15.71_2.2.0).
- Serial console emulator.
Basler Camera Specifications and Manuals:
- Basler camera specifications at this link: Embedded Vision Kits daA3840-30mc-IMX8MP-EVK - Embedded Vision Kits (baslerweb.com).
- Basler manual to identify and set up the hardware at this link: daA3840-30mc-IMX8MP-EVK | Basler Product Documentation (baslerweb.com)
- Basler Camera Module out-of-box with i.MX 8M Plus Applications Processor (Video: Basler Camera Module out-of-box with i.MX 8M Plus Applications Processor | NXP Semiconductors)
Steps
After setting up the hardware, turn on the i.MX8M Plus and follow these steps:
1. Stop the boot process in U-Boot by pressing any key.
2. Use the following command to list the interfaces:
=> mmc list
Output example:
FSL_SDHC: 1 (SD)
FSL_SDHC: 2
The above command shows the device number; in this example, for SD, the device number is 1.
3. Then use fatls <interface> <device[:partition]> [<directory>]:
=> fatls mmc 1:1 (device 1 : partition 1)
With this command, we will be able to list the device tree files.
4. Select imx8mp-evk-basler.dtb or imx8mp-evk-dual-basler.dtb and use the command editenv fdtfile:
=> editenv fdtfile
Output example: edit: imx8mp-evk-basler.dtb
5. In the edit command line, put the selected device tree (*.dtb).
6. Use the saveenv command to save the environment and continue with the boot process.
7. Using the terminal, go to /opt/imx8-isp/bin and execute the script run.sh:
$ ./run.sh -c basler_1080p60 -lm
8. Use the command gst-device-monitor-1.0 to list devices; here you will find the path to the camera device:
$ gst-device-monitor-1.0
Output example:
Device found:
  name  : VIV
  class : Video/Source
  caps  : video/x-raw, format=YUY2, width=[ 176, 4096, 16 ], height=[ 144, 3072, 8 ], pixel-aspect-ratio=1/1, framerate={ (fraction)30/1, (fraction)29/1, (fraction)28/1, (fraction)27/1, (fraction)26/1, (fraction)25/1, (fraction)24/1, (fraction)23/1, (fraction)22/1, (fraction)21/1, (fraction)20/1, (fraction)19/1, (fraction)18/1, (fraction)17/1, (fraction)16/1, (fraction)15/1, (fraction)14/1, (fraction)13/1, (fraction)12/1, (fraction)11/1, (fraction)10/1, (fraction)9/1, (fraction)8/1, (fraction)7/1, (fraction)6/1, (fraction)5/1, (fraction)4/1, (fraction)3/1, (fraction)2/1, (fraction)1/1 }
  properties:
    udev-probed = true
    device.bus_path = platform-vvcam-video.0
    sysfs.path = /sys/devices/platform/vvcam-video.0/video4linux/video2
    device.subsystem = video4linux
    device.product.name = VIV
    device.capabilities = :capture:
    device.api = v4l2
    device.path = /dev/video2
    v4l2.device.driver = viv_v4l2_device
    v4l2.device.card = VIV
    v4l2.device.bus_info = platform:viv0
    v4l2.device.version = 393473 (0x00060101)
    v4l2.device.capabilities = 2216693761 (0x84201001)
    v4l2.device.device_caps = 69206017 (0x04200001)
  gst-launch-1.0 v4l2src device=/dev/video2 ! ...
9. Finally, use GStreamer to verify proper operation. (With this GStreamer pipeline you will see a new window with the camera output. Then just rotate the lens to acquire the correct focus.)
$ gst-launch-1.0 -v v4l2src device=/dev/video2 ! "video/x-raw,format=YUY2,width=1920,height=1080" ! queue ! imxvideoconvert_g2d ! waylandsink
Basic description of the GStreamer pipeline:
- gst-launch-1.0 -v: the -v option enables verbose mode to get detailed information about the process.
- v4l2src device=/dev/video2: selects the input device; in this case the camera is at /dev/video2.
- "video/x-raw,format=YUY2,width=1920,height=1080": the format received from the camera.
- queue: a buffer between the camera capture process and the following image processing; it decouples the two stages and prevents blocking when they run at different speeds, in other words, when a process A is faster than a process B.
- imxvideoconvert_g2d: this proprietary plugin uses hardware acceleration to perform rotation, scaling, and color space conversion on video frames.
- waylandsink: creates its own window and renders the previously processed frames.
10. Result
I hope this article will be helpful. Best regards, Brian.
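Addendum: a one-shot capture to a JPEG file can also confirm the camera path without a display. A sketch, assuming the same /dev/video2 node and the standard videoconvert and jpegenc elements:
$ gst-launch-1.0 v4l2src device=/dev/video2 num-buffers=1 ! "video/x-raw,format=YUY2,width=1920,height=1080" ! videoconvert ! jpegenc ! filesink location=capture.jpg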
View full article
Hello everyone, we have recently migrated our source code from CAF (Codeaurora) to GitHub, so old i.MX NXP recipes/manifests that point to Codeaurora will eventually be modified to point correctly to GitHub, to avoid issues while fetching with Yocto. Also, all repo init commands for old releases should be changed from:
$ repo init -u https://source.codeaurora.org/external/imx/imx-manifest -b <branch name> [ -m <release manifest>]
to:
$ repo init -u https://github.com/nxp-imx/imx-manifest -b <branch name> [ -m <release manifest>]
This also applies to all source code that was stored in Codeaurora; the new repository for all i.MX NXP source code is: https://github.com/nxp-imx
For any issues regarding this, please create a community thread and/or a support ticket. Regards, Aldo.
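For locally cached layers or manifests that still reference CAF, a bulk rewrite can be sketched as below. The one-to-one URL mapping is an assumption; verify each rewritten entry against https://github.com/nxp-imx before fetching:
$ grep -rl "source.codeaurora.org/external/imx" sources/ \
    | xargs sed -i 's|source.codeaurora.org/external/imx|github.com/nxp-imx|g'
# review the changed SRC_URI/manifest entries afterwards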
View full article
The i.MX8 series contains an internal HiFi4 DSP, targeted at audio-related signal processing. SOF (Sound Open Firmware) is open-source audio DSP firmware, driver and SDK. This document introduces basic theory about IIR/FIR digital filters, how to design IIR/FIR digital filters, and the equalizer filter implementation in SOF. After that, the document also describes how the HiFi4 DSP MAC engine accelerates the EQ filter calculation.
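For context on the filter math the document covers: an IIR equalizer stage is typically a cascade of second-order (biquad) sections, each computing the standard difference equation below (textbook form, not quoted from the document):

y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]

Every output sample is a handful of multiply-accumulate operations, and an FIR filter is the same pattern without the feedback (a) terms, which is why a DSP MAC engine that pipelines multiply-accumulates speeds up both.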
View full article