i.MX Processors Knowledge Base
The document includes the following contents: (1) how to port the OV5645 to Android JB4.2.2; (2) an OV5645 driver for Linux 3.0.35; (3) an OV5645 schematic based on i.MX6Q/DL; (4) OV5645 support for the Android camera HAL. [Note:] P5V29A-0JG is a camera module based on the OV5645, and PAO532-0JG is based on the OV5640; both are manufactured by NINGBO SUNNY OPOTECH CO., LTD (China). Customers who want to use them on the i.MX6 platform can email me for the P5V29A and PAO532 datasheets, or to discuss porting questions. Email: weidong.sun@freescale.com
OpenCV is a computer vision library originally developed by Intel. It is free for commercial and research use under the open-source BSD license. The library is cross-platform and focuses mainly on real-time image processing; if it finds Intel's Integrated Performance Primitives on the system, it will use these commercially optimized routines to accelerate itself.

Application

OpenCV's application areas include:
* 2D and 3D feature toolkits
* Egomotion estimation
* Face recognition
* Gesture recognition
* Human-computer interaction (HCI)
* Mobile robotics
* Motion understanding
* Object identification
* Segmentation and recognition
* Stereopsis (stereo vision): depth perception from 2 cameras
* Structure from motion (SFM)
* Motion tracking

To support some of the above areas, OpenCV includes a statistical machine learning library that contains:
* Boosting
* Decision trees
* Expectation maximization
* k-nearest neighbor algorithm
* Naive Bayes classifier
* Artificial neural networks
* Random forest
* Support vector machines

Installing OpenCV on an i.MX 51 EVK board running Ubuntu Linux

Assuming that you already have Ubuntu Linux running on your board, you can use this wiki page as a guide to get your USB camera working on your system in order to use the real-time image processing features of this library. In a brand-new installation of Ubuntu some libraries are not installed by default, so you need to install them yourself (use synaptic to do that). Here is the list of these libraries:
libgtk2.0-dev
libjpeg62-dev
zlib1g-dev
libpng12-dev
libtiff4-dev
libjasper-dev
libgst-dev
libgstreamer0.10-dev
If you already have some of those libraries installed, make sure they are the -dev versions. After installing those libraries you can download the stable OpenCV version here.
Install it following the procedure below:

1 - Untar the OpenCV package:
tar -xvzf opencv-1.1pre1.tar.gz
2 - Change to the OpenCV folder:
cd opencv-1.1.0
3 - Configure the installation, enabling GStreamer and leaving the demo apps to be compiled later:
./configure --with-gstreamer --disable-apps
You will get the following results:
General configuration ================================================
    Compiler:                 g++
    CXXFLAGS:
    DEF_CXXFLAGS:             -Wall -fno-rtti -pipe -O3 -fomit-frame-pointer
    PY_CXXFLAGS:              -Wall -pipe -O3 -fomit-frame-pointer
    OCT_CXXFLAGS:             -fno-strict-aliasing -Wall -Wno-uninitialized -pipe -O3 -fomit-frame-pointer
    Install path:             /usr/local
HighGUI configuration ================================================
    Windowing system --------------
    Use Carbon / Mac OS X:    no
    Use gtk+ 2.x:             yes
    Use gthread:              yes
    Image I/O ---------------------
    Use ImageIO / Mac OS X:   no
    Use libjpeg:              yes
    Use zlib:                 yes
    Use libpng:               yes
    Use libtiff:              yes
    Use libjasper:            yes
    Use libIlmImf:            no
    Video I/O ---------------------
    Use QuickTime / Mac OS X: no
    Use xine:                 no
    Use gstreamer:            yes
    Use ffmpeg:               no
    Use dc1394 & raw1394:     no
    Use v4l:                  yes
    Use v4l2:                 yes
    Use unicap:               no
Wrappers for other languages =========================================
    SWIG Python               no
    Octave                    no
Additional build settings ============================================
    Build demo apps           no
Now run make...
4 - Build OpenCV:
make
5 - Install OpenCV:
sudo make install
If all steps above were executed properly, you can now compile the sample applications:
1 - Change to the samples/c directory:
cd samples/c
2 - Make the build_all script executable:
chmod +x build_all.sh
3 - Run the script:
./build_all.sh
(A sketch for compiling a single sample follows at the end of this post.) Now you can test. The results below were taken from the Laplacian filter sample processing real-time images grabbed from a USB camera: Laplacian filter with USB camera capture device. Also, you can see its performance in a three-window application performing color conversion and Canny edge detection at the same time: http://www.youtube.com/watch?v=w9yQgdABT7c EOF !
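If you prefer to try a single sample before building them all, each one can also be compiled on its own with pkg-config. A minimal sketch, assuming the install step above placed opencv.pc where pkg-config already searches and that laplace.c ships with this release of the samples:

$ cd samples/c
# Compile one sample against the installed OpenCV
$ gcc laplace.c -o laplace `pkg-config --cflags --libs opencv`
# Run it; it should open the first V4L camera (/dev/video0)
$ ./laplace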
MIPI can support video streaming over 1, 2, 3, or 4 lanes. On i.MX6 Sabre boards, the OV5640 camera supports 1 or 2 lanes, and the NXP Linux kernel uses 2 lanes by default. In order to use only one lane, follow the steps below:

1 - Change the board device tree in the Linux kernel. In the file <linux kernel folder>/arch/arm/boot/dts/imx6qdl-sabresd.dtsi, find the entry "&mipi_csi" and change lanes from 2 to 1.
2 - Configure the OV5640 to use only one lane instead of two. In the file <linux kernel folder>/drivers/media/platform/mxc/capture/ov5640_mipi.c, change the register 0x300e value from 0x45 to 0x05. This register setup is located in struct ov5640_init_setting_30fps_VGA.
3 - Build the kernel and device tree files.
4 - Test the camera. The unit test can be used to test the video capture:
/unit_tests/mxc_v4l2_overlay.out -di /dev/video1 -ow 1024 -oh 768 -m 1
5 - Check that it is really using one lane. The MIPI_CSI_PHY_STATE register (address 0x021D_C014) provides the status of all data and clock lanes. During video streaming using 2 lanes, the register value constantly changes between 0x0000_0300 and 0x0000_0330. When using only one lane, it constantly changes between 0x0000_0300 and 0x0000_0310. To read the register value during the stream, run the video test with &:
/unit_tests/mxc_v4l2_overlay.out -di /dev/video1 -ow 1024 -oh 768 -m 1 &
Now run the memtool:
/unit_tests/memtool -32 0x021dc014 1
i.MX6DL running mxc_v4l2_overlay.out with only one lane:
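Putting the last two commands together, here is a quick way to sample the lane status several times while the overlay test runs in the background. A sketch assuming the standard /unit_tests location from the NXP BSP:

/unit_tests/mxc_v4l2_overlay.out -di /dev/video1 -ow 1024 -oh 768 -m 1 &
# Read MIPI_CSI_PHY_STATE a few times; with one lane, expect values toggling
# between 0x0000_0300 and 0x0000_0310
for i in 1 2 3 4 5; do /unit_tests/memtool -32 0x021dc014 1; sleep 1; done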
One of the most popular use cases for embedded systems is a project intended to show information and interact with users. These views are called GUIs, or Graphical User Interfaces, and they are designed to be intuitive, attractive, consistent, and clear. There are many tools we can use to achieve great GUIs, mostly implemented for platforms such as Web, Android, and iOS. Here we need to introduce the concept of a framework: basically, a set of tools and rules that provides a minimal structure to start your development with. Frameworks usually come with configuration files, code snippets, and a file and folder organization, helping us save time and effort. It is also important to review the concept of an SDK, or Software Development Kit, which is a set of tools that allows you to build software for specific platforms; it usually supplies debugging tools, documentation, libraries, APIs, emulators, and sample code.

Flutter is an open-source UI software development kit by Google that helps us create applications with great GUIs on different platforms from a single codebase. Depending on the reference, you can find Flutter defined as a framework or as an SDK, and both are correct; however, SDK is the better description, since Flutter supplies a wide and complete package for creating an application, in which a framework is also included. This article is aimed at those who are in a prototyping stage looking for a different tool to develop projects. It is intended as a theoretical introduction explaining the most important concepts; even so, it is good practice to learn more by reviewing the official Flutter documentation. (Flutter documentation | Flutter)

Here is the structure used throughout this article:
What is Flutter?
Flutter details
Platforms
Programming language
Official documentation
Flutter for embedded systems

What is Flutter?
Flutter was officially released by Google in December 2018 with one main aim: to give developers a tool to create applications natively compiled for mobile (Android, iOS), web, and desktop (Windows, Linux) from a single codebase. It means that, as a developer, Flutter will create a structure with minimal code, configuration files, build files for each operating system, manifests, etc., to which we add our custom code and finally build for our preferred OS. For example, we can create an application to review fruit and vegetable information and compile it for Android and iOS from the same code. A basic Flutter development process, based on my experience, looks like the following diagram:

Flutter has the following key features:
Cross-platform development. Flutter allows the developer to create applications for different platforms using a single codebase, so you will not need to recreate the application for each platform you want to support.
Hot reload. This feature allows the developer to see changes in real time without restarting the whole application, which saves time on your project.
High performance. Flutter apps achieve high performance because the app code is compiled to native ARM code; no interpreters are involved.
UI widgets. Flutter supplies a set of widgets (UI components such as boxes, text inputs, buttons, etc.) predefined by the UI system guidelines: Material on Android and Cupertino for iOS. Source: Material 3 Design Kit | Figma Community. Source: Design - Apple Developer
Great community support. This point may be subjective, but it is useful, while developing our project, to find solutions to known issues or report new ones. Because Flutter is open source and widely adopted in industry, it has a big community, with events, forums, and documentation.

Flutter Details

Supported Platforms
With Flutter you can create applications for:
Android
iOS
Linux Debian, Linux Ubuntu
macOS
Web: Chrome, Firefox, Safari, Edge
Windows
Supported deployment platforms | Flutter

Programming Language
Flutter uses Dart, an open-source programming language supported by Google and optimized for building user interfaces. Dart key features:
Statically typed. This helps catch errors and makes the code robust by ensuring that a variable's value always matches the declared variable's type.
Null safety. Variables in Dart are non-nullable by default, which means that every variable must have a non-null value, avoiding errors at execution time. This also makes the code robust and secure.
Async/await. Dart is client-optimized: the language was specially created to ensure the best performance in client applications. Async/await is part of this optimization, making it easier to manage network requests and other asynchronous operations.
Object-oriented. Dart is an object-oriented language with classes and mixins. This is especially useful in Flutter, with its use of widgets.
Compiler support for Just-In-Time (JIT) and Ahead-Of-Time (AOT) compilation. JIT enables the hot reload feature mentioned before; it is a complex mechanism, but in short Dart detects changes in your code and executes only those changes, avoiding recompiling all the code. The AOT compiler produces efficient ARM code, improving startup time and performance.

Official documentation
Flutter has a rich community and documentation that goes from UI guidelines to an architectural overview. You can find the official documentation at the following links:
Flutter Official Documentation: Flutter documentation | Flutter
Flutter Community: Community (flutter.dev)
Dart Official Documentation: Dart documentation | Dart

Flutter for embedded systems
So far, we know all the excellent features and platforms that Flutter can support. But what about embedded systems? The official documentation says Flutter may be used for embedded systems, but in fact there is no officially supported embedded platform. Support has come from the community; in particular, there is a GitHub repository maintained by Sony that provides documentation and Yocto recipes to support Flutter on embedded Linux. To understand why Flutter for Linux desktop is officially supported while embedded Linux needs its own port, it is important to describe the basics of the Flutter architecture. Based on the Flutter documentation, the system is designed in layers that can be illustrated as follows:

Source: Flutter architectural overview | Flutter

At the top level is the "Framework," a high-level layer that includes the widgets, tools, and libraries that developers interact with. Below the Framework, the "Engine" layer is responsible for drawing the widgets specified in the previous layer and provides the connection between high-level and low-level code. This layer is mostly written in C++, which is why Flutter can achieve high performance running applications. Specifically for graphics rendering, Flutter implements Impeller for iOS and Skia for the rest of the platforms. The bottom layer is the "Embedder," which is specific to each target and operating system. This layer allows a Flutter application to run as a native app, providing access to services managed by the operating system such as input, rendering surfaces, and accessibility. For Linux desktop, this layer uses GTK/GDK with X11 as the backend, which depends on libraries that are unnecessary and expensive for embedded systems with constrained computation and memory resources. The workaround found in Sony's Flutter for Embedded Linux repository is to replace this backend with Wayland, a backend widely used in embedded systems. The following image illustrates the difference between Flutter for Linux desktop and Flutter for embedded Linux.

Source: What's the difference between Linux desktop and Embedded Linux · sony/flutter-embedded-linux Wiki · GitHub

Here is the link to the mentioned repository: GitHub - sony/flutter-elinux: Flutter tools for embedded Linux (eLinux)

Finally, I would like to encourage you to read the official Flutter documentation and consider this tool as a great option compared to widely used tools on embedded devices such as Qt or Chromium. Also, please have a look at a great article written by Payam Zahedi delving into the implementation of Flutter for Embedded Linux, measuring performance and drawing conclusions about the usage of Flutter in embedded systems. (Flutter on Embedded Devices. Learn how to run Flutter on embedded… | by Payam Zahedi | Snapp Embedded | Medium)
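As a taste of the embedded workflow, the Sony tooling mirrors the regular Flutter CLI. A minimal sketch, assuming flutter-elinux is already installed on the build host as described in the sony/flutter-elinux README (project name and target device are placeholders; your setup may expose different devices):

$ flutter-elinux create hello_elinux
$ cd hello_elinux
# List the eLinux targets the tool can see, then run on the Wayland backend
$ flutter-elinux devices
$ flutter-elinux run -d elinux-wayland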
meta-avs-demos Yocto layer

meta-avs-demos is a Yocto meta layer (complementary to the NXP BSP release for i.MX) published on CodeAurora that includes the additional packages required to support Amazon's Alexa Voice Service SDK (AVS_SDK) applications. The build procedure is described in the README.md of the corresponding branch. We have 2 functional branches now:
imx-alexa-sdk: support for Morty-based i.MX releases
imx7d-pico-avs-sdk_4.1.15-1.0.0: legacy support for Jethro releases
The master branch is only used to collect manifest files, which, used with the repo init/sync commands, will fetch the whole environment for the 2 specially supported boards: the i.MX7D Pico Pi and the i.MX8M EVK. However, meta-avs-demos can be used with any i.MX board as well.

Recipes to include Amazon's Alexa Voice Service in your applications

meta-avs-demos provides the required recipes to build an i.MX image with support for running the Alexa SDK. The imx-alexa-sdk branch is based on Morty and kernel 4.9.x, and it supports the following builds:
i.MX7D Pico Pi
i.MX8M EVK
Generic i.MX board
For the i.MX7D Pico Pi and i.MX8M EVK there is extended support for additional (external) sound cards like:
TechNexion VoiceHat: 2-mic array board with DSP Concepts SW support
Synaptics Card: 2 mics with Sensory WakeWord support
The Generic i.MX build is for any other regular i.MX board supported in the official NXP BSP releases. Only the default (embedded) sound card on the board is supported, and the Sensory WakeWord is currently only enabled for boards with the ARMv7 architecture. Supporting an external board like the VoiceHat or the Synaptics card is up to the user, including the additional patches/changes required.

Build Instructions

Follow the corresponding README file for the steps to build an image with Alexa SDK support:
README-IMX7D-PICOPI.md
README-IMX8M-EVK.md
README-IMX-GENERIC.md
The i.MX 8QXP MEK does not allow the OV5640/LVDS/LCD usage just by changing the device tree anymore. This occurs because the M4 owns the I2C resources, so the A core must use RPMsg to enable virtual drivers. Because of this, if the user changes the device tree, for instance to the *ov5640.dtb, the kernel won't boot, entering the following loop:
[    8.603353] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).      [    8.610025] [drm] No driver support for vblank timestamp query.              [    8.616077] imx-drm display-subsystem: bound imx-drm-dpu-bliteng.2 (ops dpu_) [    8.624978] imx-drm display-subsystem: bound imx-dpu-crtc.0 (ops dpu_crtc_op) [    8.632526] imx-drm display-subsystem: bound imx-dpu-crtc.1 (ops dpu_crtc_op) [    8.639833] imx-drm display-subsystem: failed to bind ldb@562210e0 (ops imx_7 [    8.648428] imx-drm display-subsystem: master bind failed: -517
With the approach provided in this post, it is possible to make this change manually, only by exchanging the flash.bin used by U-Boot for a non-M4 one. In order to make the changes to the flash.bin file, you need to obtain the following files:
- u-boot.bin from the internal U-Boot provided by NXP
- scfw_tcm.bin from the SCFW porting kit
- bl31.bin from the ARM Trusted Firmware
- the SECO firmware container image

Disclaimer: the procedures described in this document target a GNU/Linux host (Ubuntu 20.04 LTS) and are focused on the i.MX 8QXP B0 + BSP L4.19.35_1.1.0.

Required packages

1 - Install the ARM64 toolchain:
1.1 - Install the ARM64 GCC and G++ cross-compilers:
# apt install gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
2 - Install the ARM32 GCC6 toolchain:
2.1 - Download the ARM32 GCC6 toolchain and install it:
$ mkdir ~/gcc_toolchain
$ cp ~/Downloads/gcc-arm-none-eabi-6-2017-q2-update-linux.tar.bz2 ~/gcc_toolchain/
$ cd ~/gcc_toolchain/
$ tar xvjf gcc-arm-none-eabi-6-2017-q2-update-linux.tar.bz2
# apt-get update
# apt-get install srecord
3 - Download mkimage
3.1 - Create a new directory for the packages:
$ mkdir flash_build
$ cd flash_build
3.2 - Clone imx-mkimage:
$ git clone https://source.codeaurora.org/external/imx/imx-mkimage -b imx_4.19.35_1.1.0
4 - U-Boot build
4.1 - Clone the U-Boot:
$ git clone https://source.codeaurora.org/external/imx/uboot-imx -b imx_v2019.04_4.19.35_1.1.0
$ cd uboot-imx
4.2 - Export the ARM64 toolchain:
$ export ARCH=arm64
$ export CROSS_COMPILE=/usr/bin/aarch64-linux-gnu-
4.3 - Build it:
$ unset LDFLAGS
$ make -j4 imx8qxp_mek_defconfig
$ make
4.4 - Copy the binary files to the imx-mkimage/iMX8QX directory:
$ cp spl/u-boot-spl.bin ../imx-mkimage/iMX8QX/
$ cp u-boot-nodtb.bin ../imx-mkimage/iMX8QX/
$ cd ..
5 - ARM Trusted Firmware
5.1 - Clone imx-atf:
$ git clone https://source.codeaurora.org/external/imx/imx-atf -b imx_4.19.35_1.1.0
$ cd imx-atf
5.2 - Build it:
$ unset LDFLAGS
$ make PLAT=imx8qx bl31
5.3 - Copy the binary file to the imx-mkimage/iMX8QX directory:
$ cp build/imx8qx/release/bl31.bin ../imx-mkimage/iMX8QX/
$ cd ..
6 - SCFW
6.1 - Export the ARM32 GCC6 toolchain:
$ export TOOLS=~/gcc_toolchain/
6.2 - Download the BSP L4.19.35_1.1.0 SCFW porting kit, copy it to the flash_build directory, and extract it (the kit is a self-extracting .bin):
$ cp ~/Downloads/imx-scfw-porting-kit-1.2.7.1.bin .
$ chmod a+x imx-scfw-porting-kit-1.2.7.1.bin
$ ./imx-scfw-porting-kit-1.2.7.1.bin
6.3 - Build it for the i.MX 8QXP MEK B0:
$ cd imx-scfw-porting-kit-1.2.7.1/src/
$ tar xvzf scfw_export_mx8qx_b0.tar.gz
$ cd scfw_export_mx8qx_b0/
$ make qx R=B0 B=mek
6.4 - Copy the binary file to the imx-mkimage/iMX8QX directory:
$ cp build_mx8qx_b0/scfw_tcm.bin ../../../../imx-mkimage/iMX8QX/
7 - SECO firmware container image
7.1 - Download the SECO firmware binaries, copy the package to the flash_build directory, and extract it:
$ cp ~/Downloads/firmware-imx-7.9.bin .
$ chmod a+x firmware-imx-7.9.bin
$ ./firmware-imx-7.9.bin
7.2 - Copy the binary file to the imx-mkimage/iMX8QX directory:
$ cp firmware-imx-7.9/firmware/seco/mx8qx-ahab-container.img ../imx-mkimage/iMX8QX/
8 - Build flash.bin
8.1 - In a new terminal, open the imx-mkimage directory:
$ cd flash_build/imx-mkimage
8.2 - Build it:
$ make SOC=iMX8QX flash
8.3 - Deploy it to the SD card:
$ sudo dd if=iMX8QX/flash.bin of=/dev/sdX bs=1k seek=32 && sync
Now you are able to use any non-rpmsg .dtb without kernel errors.
Author: Pedro Jardim: pedro.jardim@nxp.com
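With the non-M4 flash.bin deployed, selecting an alternate device tree goes back to the usual U-Boot flow. A minimal sketch; the exact .dtb file name below is an assumption, so list the boot partition first to confirm what your image actually ships:

u-boot=> fatls mmc 1:1
u-boot=> setenv fdtfile imx8qxp-mek-ov5640.dtb
u-boot=> saveenv
u-boot=> boot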
This document describes the i.MX 8MM EVK mini-SAS connector features for Linux and Android use cases, covering the supported daughter cards, the process to change Device Tree (DTS) files or boot images, and how to enable these different display options on the board.
Hello, here Jorge. In this post I will explain how to configure, record, and play audio using an i.MX 8MIC-RPI-MX8 board.

Requirements:
i.MX 8M Mini EVK
Linux Binary Demo Files - i.MX 8M Mini EVK (L5.15.52_2.1.0)
i.MX 8MIC-RPI-MX8 board
Serial console emulator (Tera Term, PuTTY, etc.)
Headphones/speakers

The 8MIC-RPI-MX8 accessory board is designed for voice-enabled application prototyping and development on the i.MX 8M family. The board plugs directly into the 40-pin expansion connector on the i.MX 8M Mini and Nano EVKs. Some features of this board are:
8 PDM microphones
8 monochrome LEDs
4 multi-color LEDs
2 status LEDs
4 pushbuttons
Microphone mute switch
Microphone geometry switch

Connecting the i.MX 8MIC-RPI-MX8 board.
The i.MX 8MIC-RPI-MX8 board has a 40-pin expansion connector that plugs directly into the EVK board. Ensure that pin 1 of the 8MIC-RPI-MX8 is aligned with pin 1 of the EVK's J1001, as shown in the next figure:

Selecting the device tree on the board.
Once the pre-compiled image is flashed on the board (Flashing Linux BSP using UUU) and the 8MIC-RPI-MX8 is connected, it is necessary to select the correct device tree to handle the 8MIC board. In U-Boot, check the available .dtb files in the BSP using the next command:
u-boot=> fatls mmc 2:1
And you will get the corresponding list of .dtb files. In this case we are working with an i.MX 8M Mini EVK, and the corresponding .dtb file is imx8mm-evk-8mic-revE.dtb. To select it, set the environment variable and save it with:
u-boot=> setenv fdtfile imx8mm-evk-8mic-revE.dtb
u-boot=> saveenv
Double-check it using:
u-boot=> printenv fdtfile
Now it is time to boot Linux using the next command:
u-boot=> boot

Recording audio with the i.MX 8MIC-RPI-MX8 board.
The Advanced Linux Sound Architecture (ALSA) provides audio and MIDI functionality to the Linux operating system. ALSA has the following significant features:
Efficient support for all types of audio interfaces, from consumer sound cards to professional multichannel audio interfaces.
Fully modularized sound drivers.
SMP and thread-safe design.
User-space library (alsa-lib) to simplify application programming and provide higher-level functionality.
Support for the older Open Sound System (OSS) API, providing binary compatibility for most OSS programs.
Once we are in Linux, we can check the audio codecs detected on the board using:
arecord -l
Now, to record audio, we use the ALSA arecord command; there are different options that you can check at the next link. In this case we are going to use the following:
arecord -D hw:imxaudiomicfil -c8 -f s16_le -r48000 -d10 sample.wav
-D: selects the device.
-c: selects the number of channels for the recording.
-f: selects the format.
-r: selects the sample rate.
-d: sets the duration of the recording in seconds.
sample.wav: the name of the resulting audio file.
Running the last command, we start recording audio. It is time to make some noise and record it!

Playing audio from i.MX8 boards.
Now it is time to connect our headphones or speakers to the jack. Also, as with the arecord command, you can check the devices where you can play audio from the board using the next command:
aplay -l
And you will get all the codecs to play audio. To play our recordings we use the ALSA aplay command; it is important to select the correct audio codec to hear the audio from the jack on the board (see the quick playback check at the end of this post):
aplay -Dplughw:3,0 sample.wav
-D: selects the device.
sample.wav: the name of the audio file to play.

Hope this is helpful for people who want to record audio using PDM microphones and play audio from i.MX8 boards. Best regards.
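One more tip: if you want to sanity-check the playback path before recording anything, alsa-utils also ships speaker-test. A small sketch; the card/device index (3,0) is taken from the aplay -l listing above and may differ on your setup:

# Play the built-in WAV test tones on both channels through the jack
speaker-test -D plughw:3,0 -c 2 -t wav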
GUI Guider version: 1.6.0
LVGL version: v8.3.5
Host software requirements: Ubuntu 20.04, Ubuntu 22.04, or Debian 12
Hardware requirements: Evaluation Kit for the i.MX 93 Applications Processor (i.MX 93 Evaluation Kit | NXP Semiconductors)

In this guide we will use the IMX-MIPI-HDMI accessory board to connect the i.MX93 to an HDMI monitor (IMX-MIPI-HDMI Product Information | NXP). This board is usually provided with the i.MX8M Mini and the i.MX8M Nano.

Steps:
1. Copy your project from the folder GUI-Guider-Projects to your Linux PC.
2. Build an image for the i.MX93 using the Yocto Project.
a. Based on the i.MX Yocto Project User's Guide, set up the directories and download the repo:
$ mkdir imx-bsp-6.1.1-1.0.0
$ cd imx-bsp-6.1.1-1.0.0
$ repo init -u https://github.com/nxp-imx/imx-manifest -b imx-linux-langdale -m imx-6.1.1-1.0.0.xml
$ repo sync
Use the distro fsl-imx-xwayland, select the machine imx93evk, and use this command with a build folder name:
$ MACHINE=imx93evk DISTRO=fsl-imx-xwayland source ./imx-setup-release.sh -b bld-imx93evk
b. Use the bitbake command to start the build process. Also, add -c populate_sdk to get the toolchain:
$ bitbake imx-image-multimedia -c populate_sdk
c. Install the Yocto toolchain located in <build-folder>/tmp/deploy/sdk/:
$ sudo sh ./fsl-imx-xwayland-glibc-x86_64-imx-image-multimedia-armv8a-imx93evk-toolchain-6.1-langdale.sh
d. Install the ninja utility on the build host:
$ sudo apt install ninja-build
e. For Ubuntu 20.04 and Ubuntu 22.04, copy the lv_conf.h file from lvgl-simulator to lvgl:
$ cp lvgl-simulator/lv_conf.h lvgl/
f. Change the interpreter in build.sh from #!/bin/sh to #!/bin/bash. This is an important step!
g. Then enter the linux folder and use the following commands to make build.sh executable:
$ dos2unix build.sh
$ chmod +x build.sh
h. Execute build.sh:
$ ./build.sh
i. Copy the binary to the i.MX93 using a USB drive or SCP (see the SCP sketch at the end of this post).
3. On the target i.MX93, follow these steps:
a. In U-Boot, use fatls <interface> <device[:partition]> to list the device tree files:
=> fatls mmc 0:1
(Device 0 : Partition 1)
b. Select imx93-11x11-evk-rm67199.dtb and use the command editenv fdtfile:
=> editenv fdtfile
Output example:
edit: imx93-11x11-evk-rm67199.dtb
c. In the edit command line, put the selected device tree (.dtb).
d. Use the saveenv command to save the environment and continue with the boot process.
e. Finally, run the GUI application:
$ ./gui_guider&

I hope this article will be helpful.
Best regards, Brian.
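For step 2.i, SCP is usually the quickest path when the board is on the network. A sketch; the board IP address, the binary path, and the binary name gui_guider are assumptions to adjust to your own build output and setup:

$ scp linux/build/gui_guider root@192.168.0.10:/home/root/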
This document describes the i.MX 8MQ EVK HDMI output and mini-SAS connector features for Linux and Android use cases, covering the supported daughter board, the process to change Device Tree (DTS) files or boot images, and how to enable these different display options on the board.
When streaming, if you want to play a streaming URL, it can be inconvenient if the browser cannot recognize the URL as a media stream and downloads the content rather than using Gallery to play it. To handle this kind of media streaming, you would normally need to write an APK that uses VideoView to play the URL/media stream. Instead, here is how to play a media file or network stream from the console.

Gingerbread:
am start -n com.cooliris.media/com.cooliris.media.MovieView -d "<URL>"
The URL can be a file location or a network stream URL. For example, you can play a local file with:
am start -n com.cooliris.media/com.cooliris.media.MovieView -d "/mnt/sdcard/test.mp4"
You can also play an HTTP stream with:
am start -n com.cooliris.media/com.cooliris.media.MovieView -d "http://v.iask.com/v_play_ipad.php?vid=76710932"
Or play an RTSP stream with:
am start -n com.cooliris.media/com.cooliris.media.MovieView -d "rtsp://10.0.2.1:554/stream"

ICS:
am start -n com.android.gallery3d/com.android.gallery3d.app.MovieActivity -d "<URL>"
The URL has the same definition as for Gingerbread.
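These commands run in a shell on the device; from a development host they can equally be issued through adb. A sketch assuming adb is already connected to the board over USB or the network:

$ adb shell am start -n com.android.gallery3d/com.android.gallery3d.app.MovieActivity -d "/mnt/sdcard/test.mp4"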
Information about the transition from the NXP Demo Experience to GoPoint for i.MX Application Processors.
Description
This doc explains how to develop an audio card driver based on the i.MX6 platform. It covers basic knowledge of the ASoC architecture and then gives some samples for audio driver development, such as:
1: NXP SGTL5000: the NXP i.MX BSP supports it by default on the Sabre Lite board.
2: Wolfson WM8524.
   A: 3.0.35 BSP support: the i.MX6 set-top box BSP supports it (the link is in the older FSL community and out of date).
   B: 3.14.28 BSP support: please check the attachment.
3: Wolfson WM8960, which includes how to add the Android middle layer and driver; please check the attachment.
4: TI TLV320AIC3120, which includes how to add the Android middle layer and driver; please check the attachment.
5: TI TLV320AIC3X.

Products
Product Category: MPU
NXP Part Number: i.MX6 Family
URL: https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-6-processors:IMX6X_SERIES

Tools
NXP Development Board: i.MX6 SabreSDP
URL: https://www.nxp.com/design/development-boards:EVDEBRDSSYS#/collection=softwaretools&start=0&max=25&query=typeTax%3E%3Et633::archived%3E%3E0::Sub_Asset_Type%3E%3ETSP::deviceTax%3E%3Ec731_c380_c127_c126&sorting=Buy%2FSpecifications.desc&language=en&siblings=false

The attachment includes a doc, MX6X_ASOC_V5-20191115.pdf, and related driver sample codes.
In order to get USB cameras (webcams) working on an i.MX 51 EVK board running Ubuntu, a few steps must be followed:
Enable the USB camera drivers in the kernel
Test it using GStreamer or another compatible software (such as Cheese)

Kernel Driver
USB cameras (web cameras) on Linux work over the GSPCA driver. To enable this driver you need to go to:
./ltib -c
[*] Configure the kernel
    Device Drivers -->
        Multimedia Devices -->
            [*] Video Capture Adapters -->
                [*] V4L USB Devices -->
                    <*> USB Video Class (UVC)
                    [*]   UVC input events device support
                    <*> GSPCA Based WebCams -->
From this point, you need to choose your specific driver. If you don't know which one, you can select all of those options as built-in ("<*>"); that will work. GSPCA Drivers

USB Camera Detection
Connect your USB camera to the USB host port on the i.MX 51 EVK board, then type "dmesg" and also check whether there is a video0 device using:
ubuntu@ubuntu-desktop:~$ ls /dev/video0
/dev/video0
USB Camera Detection

GStreamer Command Line
In order to test your USB camera using the GStreamer plugin, use the following command line:
ubuntu@ubuntu-desktop:~$ gst-launch-0.10 v4l2src ! ffmpegcolorspace ! ximagesink
and the results: Hi there !!! EOF !
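If you also want to capture to a file rather than just display, the same GStreamer 0.10 stock elements can mux MJPEG frames into an AVI. A sketch; the buffer count and file name are arbitrary choices:

# Grab ~300 frames from the camera, JPEG-encode them, and mux into an AVI file
ubuntu@ubuntu-desktop:~$ gst-launch-0.10 v4l2src num-buffers=300 ! ffmpegcolorspace ! jpegenc ! avimux ! filesink location=capture.avi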
What is HTML5 Video?
HTML5 video is an element for playing videos or movies in the HTML5 specification. HTML5 video is intended by its creators to become the new standard way to show video on the web without plugins. The video is shown inside the web page, like Flash.

HTML5 Video Web Page
<video> element example:
<video src="movie.mp4" poster="movie.jpg" controls>
</video>
HTML5 video page source example:
<html>
  <head>
  </head>
  <body>
    <video src="http://10.192.225.226/movie.mp4" width="640" height="480" controls="true">
    </video>
  </body>
</html>

HTML5 Video Rendering Path
Performance data on i.MX6Q with Android ICS:
With an LVDS display, H264@1080p@20Mbps can reach 30 fps.
With an HDMI 1080p display, H264@1080p@10Mbps can reach 25 fps.

HTML5 Video Websites
Some HTML5 video websites to try when accessing from an Android platform:
www.youtube.com
www.iqiyi.com

HTML5 Reference Documents
SPEC: http://dev.w3.org/html5/spec/single-page.html?utm_source=dlvr.it&utm_medium=feed
Wikipedia page: http://en.wikipedia.org/wiki/HTML5
BSP version: 11.03
Multimedia Package version: 11.03

1. Install the BSP and Multimedia package (11.03 release).
2. Avoid display timeout: append the following line to rootfs/etc/oprofile:
echo -e -n '\033[9]' > /dev/tty0
3. Set the VGA port as the primary display in the kernel command line:
video=mxcdi1fb:GBR24,VGA-XGA di1_primary vga
4. Connect a VGA monitor and the WVGA display to the MX53 Quick Start.
5. Boot Linux on the MX53 Quick Start board (NFS is used in this example).
6. Unblank the WVGA display (fb1):
$ echo 0 > /sys/class/graphics/fb1/blank
7. On the target, enter the /dev/shm directory. If the files vss_lock and vss_shmem are present, delete them.
8. On your host, edit ltib/rootfs/usr/share/vssconfig as follows:
vss device definition: Master=VGA, Slave=WVGA
master display:
[VGA]
type = framebuffer
format = RGBP
fb_num = 2
main_fb_num = 0
vs_max = 4
slave display:
[WVGA]
type = framebuffer
format = RGBP
fb_num = 1
vs_max = 4
9. Run the GStreamer pipeline below:
gst-launch filesrc location=file.mp4 ! qtdemux ! mfw_vpudecoder ! mfw_isink display=VGA display-1=WVGA
Video is played on the VGA and WVGA panels. A 720p file can be played at the same time, as sketched below.
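To exercise the simultaneous-playback claim above, a second pipeline can be started alongside the first, with each sink pointed at one display. A sketch under assumptions: file720p.mp4 is a placeholder name, and the i.MX-specific mfw_* plugins from the 11.03 multimedia package accept a single display property per instance:

gst-launch filesrc location=file720p.mp4 ! qtdemux ! mfw_vpudecoder ! mfw_isink display=VGA &
gst-launch filesrc location=file.mp4 ! qtdemux ! mfw_vpudecoder ! mfw_isink display=WVGA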
1. Intro

This document contains instructions to run the SAI low power audio demo on the i.MX 8M Plus EVK. Here, RPMSG allows audio to be passed from the A53 cluster running Linux to the M7 core. The latter controls the on-board WM8960 audio codec, which is connected to a 3.5 mm audio jack that allows us to play music using headphones. I will show the necessary steps to make the demo work and will add some GStreamer examples to demonstrate the demo's capabilities.

TBD: update this with a nice diagram that depicts the A53 and M7 RPMSG channel.

2. Requirements

Hardware:
i.MX 8M Plus EVK
Headphones with 3.5 mm audio jack
Type-C power supply for the i.MX 8M Plus EVK
Micro USB to USB adapter cable

Software:
A recent prebuilt Linux BSP image from NXP.com (we tested this on the 5.15.35 and 5.15.5 releases)
Windows 10 or Ubuntu 20.04 workstation
MCUXpresso SDK for i.MX 8M Plus (available from: Welcome | MCUXpresso SDK Builder (nxp.com))

3. Reference documentation for this example

MCUXpresso SDK:
[1] Getting Started with MCUXpresso SDK for EVK-MIMX8MP. Available within the MCUXpresso SDK package: \{INSTALL PATH}\SDK_X_X_X_EVK-MIMX8MP\docs
[2] SAI low power audio README file. Contains instructions for the SAI Low Power Audio Demo. Available within the MCUXpresso SDK package: \{INSTALL PATH}\SDK_X_X_X_EVK-MIMX8MP\boards\evkmimx8mp\demo_apps\sai_low_power_audio

4. Downloading a prebuilt Linux BSP image for the i.MX 8M Plus

I will make use of the prebuilt Linux image for the i.MX 8M Plus EVK to demonstrate that the demo works. At the moment of writing, I used the 5.15.32 release, although there are older releases, like 5.10.5, that I tested and proved to work with no issues. This SAI low power audio demo shall work for other processors in the i.MX 8M family, although specific instructions (e.g., the load address for the M-core binary) might require some adaptation; for the M-core load address, please refer to the specific MCUXpresso SDK documentation for each processor. The prebuilt Linux image (5.15.32) for the i.MX 8M Plus EVK can be downloaded from here: https://www.nxp.com/webapp/Download?colCode=L5.15.32_2.0.0_MX8MP&appType=license
You can download other releases from here: Embedded Linux for i.MX Applications Processors | NXP Semiconductors. Select a version and a board and select download.

5. Flashing the BSP image

If you are using an Ubuntu 20.04 workstation, I recommend you flash the image using dd. For this, you can refer to the i.MX Linux User's Guide, Section 4.3.2, "Copying the full SD card image": https://www.nxp.com/docs/en/user-guide/IMX_LINUX_USERS_GUIDE.pdf
sudo dd if=<image>.wic of=/dev/sdx bs=1M && sync
NOTE: when using dd, ALWAYS double-check the "of" device you are about to write to. Messing up with another location or partition will harm your system.

If you are following this document on a Windows machine, you can use the Universal Update Utility (UUU) to flash your image on either the board's eMMC or SD card. The document named UUU.pdf shall serve as your reference guide for further instructions and flashing examples. It is available along with the UUU binary here: https://github.com/NXPmicro/mfgtools/releases
Two examples are shown below for your convenience:
SD card flash:
uuu -b sd_all bootloader rootfs.sdcard.bz2
eMMC flash:
uuu -b emmc_all bootloader rootfs.sdcard.bz2
NOTE: UUU is also compatible with Ubuntu.
NOTE: other engineers like to use BalenaEtcher for flashing their BSP images. I have tested it and it works on both Ubuntu and Windows 10 machines.

6. Preparing the BSP and booting up the M7 core using U-Boot

I am writing this based upon the instructions contained in the README file for the low power audio example [2]. Instructions ready to copy and paste follow.

Instruct U-Boot to pass the kernel the rpmsg device tree to enable communication between the A53 cluster and the M7 one:
u-boot=> setenv fdtfile imx8mp-evk-rpmsg.dtb
u-boot=> saveenv
Load the M7 example:
u-boot=> setenv mmcargs 'setenv bootargs ${jh_clk} console=${console} root=${mmcroot} snd_pcm.max_alloc_per_card=134217728'
u-boot=> saveenv
Now we need to load the M7 with the demo. Refer to [1] for further information. If running the BSP on an SD card, make sure the example binary is listed on the boot partition as follows:
u-boot=> fatls mmc 1:1
You shall see something similar to this:
imx8mp_m7_TCM_sai_low_power_audio.bin
Open the serial terminal emulator for the M7. Of the four ports listed when we plug the i.MX 8M Plus serial debug cable into the PC, the M7 is typically the last one listed.
NOTE: you may need to install additional COM drivers if you are running on Windows.
I like doing the previous step so I can see the result of the next commands issued in U-Boot to load the M7 image:
u-boot=> fatload mmc 1:1 0x48000000 imx8mp_m7_TCM_sai_low_power_audio.bin; cp.b 0x48000000 0x7e0000 20000; bootaux 0x7e0000
Here is a screenshot that shows how U-Boot's response should look: U-Boot response when loading the SAI low power audio example to the Cortex-M7.
That should have prompted the following message on the M7 terminal:
M7-core is up!
Now, let's move to user space!
u-boot=> boot

7. Testing the example using a simple GStreamer pipeline

As soon as the OS finishes booting, we can see that the M7 terminal prompts the following:
M7 is now in STOP mode; waiting for some audio to beat the room!
Confirm that the WM8960 is listed as an audio card as follows:
cat /proc/asound/cards
Make note of the list; the wm8960 is listed as the third sound card. This is where I like to differ a bit from [2], and I suggest a quicker test in case you do not have an audio file ready: simply use GStreamer to play an audiotestsrc. Please make sure to plug your headphones into the board's 3.5 mm jack first. The following GStreamer pipeline uses the WM8960 as the audio sink:
gst-launch-1.0 audiotestsrc ! alsasink device=hw:3
NOTE: please be cautious and do not put the headphones directly on your head at the first attempt. The sound can be too loud for some people.
This is what you should see on the M7 side. Stop the GStreamer pipeline by issuing CTRL + C; the M7 shall warn you about that.
NOTE: you can use the aplay command to play audio as shown in [2]. However, I consider using a test source much quicker and more flexible for a quick test.

8. Additional information

Feel free to go ahead and tweak the GStreamer pipeline to change the audiotestsrc properties. This command will let you know the available options:
gst-inspect-1.0 audiotestsrc
NOTE: you can navigate through the displayed list using the "d" key. Press "q" to quit.
For example, I can reproduce sound using a different setup based on the list above:
gst-launch-1.0 audiotestsrc freq=4000 volume=0.8 wave=8 ! alsasink device=hw:3

9. Errata and future updates

TBD:
Add an example on how to define the default audio card and play the audio either using gst-play or building the pipeline using filesrc (a first sketch follows below)
Comment on the limitations of the M7 core regarding sample rate and audio formats
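On the first TBD item, here is a minimal sketch of one common approach: making the codec the ALSA default so players need no device name. It assumes the WM8960 stays enumerated as card 3 as in the listing above (indices can change between boots and DTBs, so verify with aplay -l first), and the WAV path is a placeholder:

# Make card 3 the ALSA default, then play a file without naming the device
cat << 'EOF' > /etc/asound.conf
defaults.pcm.card 3
defaults.ctl.card 3
EOF
gst-play-1.0 /home/root/sample.wav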
In this article, I will explain how to set up the i.MX8M Plus to use the 4K dart BCON Basler camera module.

Requirements:
Evaluation Kit for the i.MX 8M Plus Applications Processor (i.MX 8M Plus Evaluation Kit | NXP Semiconductors)
Basler Camera for i.MX 8M Plus (4K dart BCON for MIPI camera module for i.MX 8M Plus | NXP Semiconductors)
Embedded Linux for i.MX Applications Processors (Embedded Linux for i.MX Applications Processors | NXP Semiconductors) (for this example we will use BSP version Linux 5.15.71_2.2.0)
Serial console emulator

Basler camera specifications and manuals:
Basler camera specifications at this link: Embedded Vision Kits daA3840-30mc-IMX8MP-EVK - Embedded Vision Kits (baslerweb.com)
Basler manual to identify and set up the hardware at this link: daA3840-30mc-IMX8MP-EVK | Basler Product Documentation (baslerweb.com)
Basler camera module out-of-box with i.MX 8M Plus Applications Processor (video: Basler Camera Module out-of-box with i.MX 8M Plus Applications Processor | NXP Semiconductors)

Steps
After setting up the hardware, we turn on the i.MX8M Plus and follow these steps:
1. Stop the boot process in U-Boot by pressing any key.
2. Use the following command to list interfaces:
=> mmc list
Output example:
FSL_SDHC: 1 (SD)
FSL_SDHC: 2
The above command shows the device numbers; in this example, for SD, the device number is 1.
3. Then use fatls <interface> <device[:partition]> [<directory>] to list the device tree files:
=> fatls mmc 1:1
(Device 1 : Partition 1)
4. Select imx8mp-evk-basler.dtb or imx8mp-evk-dual-basler.dtb and use the command editenv fdtfile:
=> editenv fdtfile
Output example:
edit: imx8mp-evk-basler.dtb
5. In the edit command line, put the selected device tree (*.dtb).
6. Use the saveenv command to save the environment and continue with the boot process.
7. Using the terminal, go to /opt/imx8-isp/bin and execute the script run.sh:
$ ./run.sh -c basler_1080p60 -lm
8. Use the command gst-device-monitor-1.0 to list devices; here you will find the path to the camera device:
$ gst-device-monitor-1.0
Output example:
Device found:
name : VIV
class : Video/Source
caps : video/x-raw, format=YUY2, width=[ 176, 4096, 16 ], height=[ 144, 3072, 8 ], pixel-aspect-ratio=1/1, framerate={ (fraction)30/1, (fraction)29/1, (fraction)28/1, (fraction)27/1, (fraction)26/1, (fraction)25/1, (fraction)24/1, (fraction)23/1, (fraction)22/1, (fraction)21/1, (fraction)20/1, (fraction)19/1, (fraction)18/1, (fraction)17/1, (fraction)16/1, (fraction)15/1, (fraction)14/1, (fraction)13/1, (fraction)12/1, (fraction)11/1, (fraction)10/1, (fraction)9/1, (fraction)8/1, (fraction)7/1, (fraction)6/1, (fraction)5/1, (fraction)4/1, (fraction)3/1, (fraction)2/1, (fraction)1/1 } ...
properties:
udev-probed = true
device.bus_path = platform-vvcam-video.0
sysfs.path = /sys/devices/platform/vvcam-video.0/video4linux/video2
device.subsystem = video4linux
device.product.name = VIV
device.capabilities = :capture:
device.api = v4l2
device.path = /dev/video2
v4l2.device.driver = viv_v4l2_device
v4l2.device.card = VIV
v4l2.device.bus_info = platform:viv0
v4l2.device.version = 393473 (0x00060101)
v4l2.device.capabilities = 2216693761 (0x84201001)
v4l2.device.device_caps = 69206017 (0x04200001)
gst-launch-1.0 v4l2src device=/dev/video2 ! ...
9. Finally, use GStreamer to verify proper operation. (With this GStreamer pipeline you will see a new window with the camera output. Then just rotate the lens to acquire the correct focus.)
$ gst-launch-1.0 -v v4l2src device=/dev/video2 ! "video/x-raw,format=YUY2,width=1920,height=1080" ! queue ! imxvideoconvert_g2d ! waylandsink
Basic description of the GStreamer pipeline:
gst-launch-1.0 -v: the -v option enables verbose mode to get detailed information about the process.
v4l2src device=/dev/video2: selects the input device; in this case the camera is at the path /dev/video2.
"video/x-raw,format=YUY2,width=1920,height=1080": the format received from the camera.
queue: a buffer between the camera capture process and the following image processing; it decouples the two and prevents blocking when the processes run at different speeds, in other words, when a process A is faster than a process B.
imxvideoconvert_g2d: this proprietary plugin uses hardware acceleration to perform rotation, scaling, and color space conversion on video frames.
waylandsink: creates its own window and renders the decoded frames processed previously.
10. Result

I hope this article will be helpful.
Best regards, Brian.
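As an addendum, to grab a still image instead of rendering to the screen, the same source can feed a software JPEG encoder. A sketch using stock GStreamer elements; the file name is arbitrary, and videoconvert is inserted so jpegenc is guaranteed a format it accepts:

$ gst-launch-1.0 -e v4l2src device=/dev/video2 num-buffers=1 ! "video/x-raw,format=YUY2,width=1920,height=1080" ! videoconvert ! jpegenc ! filesink location=snapshot.jpg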
In this tutorial we will review the implementation of Flutter on the i.MX8MP using the Linux Desktop Image. Please find more information about Flutter at the following link: Flutter: Option to create GUIs for Embedded System... - NXP Community

Requirements:
Evaluation Kit for the i.MX 8M Plus Applications Processor (i.MX 8M Plus Evaluation Kit | NXP Semiconductors)
NXP Desktop Image for i.MX 8M Plus (GitHub - nxp-imx/meta-nxp-desktop at lf-6.1.1-1.0.0-langdale)
Note: This tutorial is based on the NXP Desktop Image with Yocto version 6.1.1 – Langdale.

Steps:
1. First, run these commands to update packages:
$ sudo apt update
$ sudo apt upgrade
2. Install Flutter for Linux using the following command:
$ sudo snap install flutter --classic
3. Run this command to verify the correct installation:
$ flutter doctor
With this command you will find information about the installation. The important part for our purpose is the parameter "Linux toolchain - develop for Linux desktop".
4. Run the command "flutter create ." to create a Flutter project; the framework will create the different folders and files used to develop the application:
$ cd Documents
$ mkdir flutter_hello
$ cd flutter_hello
$ flutter create .
5. Finally, you can run the "hello world" application using:
$ flutter run
Verify the program's behavior by incrementing the number displayed in the window.
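When the app behaves as expected under flutter run, a release build can be produced on the same image. A sketch, assuming the Linux desktop target reported by flutter doctor above; the bundle path and binary name are assumptions that may vary by Flutter version and architecture:

$ flutter build linux --release
# The self-contained bundle typically lands under build/linux/<arch>/release/bundle
$ ./build/linux/arm64/release/bundle/flutter_hello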
[Chinese translated version] See the attachment. Original link: https://community.nxp.com/docs/DOC-342059