i.MX Processors Knowledge Base



Building the Linux Kernel

Contents: Building Using LTIB; Building Outside LTIB (downloading and installing the GNU toolchain and git, building the kernel from the Freescale git repository, building the mainline kernel); About Linux.

Building Using LTIB
The Linux kernel can easily be built using LTIB. On the LTIB menu, select:
[*] Configure the Kernel
When you exit this menu, LTIB shows the kernel menuconfig, where you can configure kernel options and drivers. After you exit menuconfig, the kernel is built and stored at:
<Ltib directory>/rootfs/boot

Building Outside LTIB

Downloading and installing the GNU toolchain and git
When you install LTIB, a GNU toolchain is automatically installed under /opt/freescale/usr/local/. Kernel releases newer than 2.6.34 do not build with toolchain 4.1.2; use 4.4.1 or later. Check /opt/freescale/usr/local/ on your host for the currently installed toolchain. Next, install git on the host. On Ubuntu machines, use:
sudo apt-get install git-core

Building the kernel from the Freescale git repository
Freescale provides access to its own git kernel repository, which can be browsed at Freescale Public GIT. To download the kernel source code, create a new folder and use one of:
git clone git://git.freescale.com/imx/linux-2.6-imx.git
git clone http://git.freescale.com/git/cgit.cgi/imx/linux-2.6-imx.git
After some minutes, a folder called linux-2.6-imx is created containing the Linux kernel. Create a local git branch from the remote branch you want to use; taking origin/imx_3.0.15 as an example:
cd linux-2.6-imx
git checkout -b localbranch origin/imx_3.0.15
To list all available remote branches, use:
git branch -r
Export the architecture, cross compiler, and toolchain path:
export ARCH=arm
export CROSS_COMPILE=arm-none-linux-gnueabi-
If using toolchain 4.1.2:
export PATH="$PATH:/opt/freescale/usr/local/gcc-4.1.2-glibc-2.5-nptl-3/arm-none-linux-gnueabi/bin/"
Or, if using toolchain 4.4.4:
export PATH="$PATH:/opt/freescale/usr/local/gcc-4.4.4-glibc-2.11.1-multilib-1.0/arm-fsl-linux-gnueabi/bin/"
Copy the config file for the wanted platform into the Linux folder, for example:
cp arch/arm/configs/imx6_defconfig .config
All platform config files are located at <linux directory>/arch/arm/configs/. Call menuconfig and change the configuration if needed:
make menuconfig
Now it is ready to be built:
make uImage
The zImage and uImage will be located in the arch/arm/boot/ folder.

Building the mainline kernel
The mainline kernel can be browsed at https://git.kernel.org/cgit/linux/kernel/git/stable/linux-stable.git. To download the kernel source code, create a new folder and use one of:
git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
git clone http://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
git clone https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
After some minutes, a folder called linux is created containing the Linux kernel. Create a local git branch from the remote branch you want to use; taking origin/linux-3.8.y as an example:
cd linux
git checkout -b localbranch origin/linux-3.8.y
To list all available remote branches, use:
git branch -r
Export the architecture, cross compiler, and toolchain path:
export ARCH=arm
export CROSS_COMPILE=arm-none-linux-gnueabi-
If using toolchain 4.4.4:
export PATH="$PATH:/opt/freescale/usr/local/gcc-4.4.4-glibc-2.11.1-multilib-1.0/arm-fsl-linux-gnueabi/bin/"
Configure for the platform you want to build. For the i.MX family, use imx_v6_v7_defconfig:
make imx_v6_v7_defconfig
All platform config files are located at <linux directory>/arch/arm/configs/. Call menuconfig and change the configuration (only if needed; this is an optional step):
make menuconfig
Now it is ready to be built:
make -j4 uImage LOADADDR=0x70008000
- The -j4 option speeds up the build if your PC has 4 cores; it is optional.
- IMPORTANT: use the correct load address for each processor. You can check the correct value in linux/arch/arm/mach-imx/Makefile.boot.
After building the uImage, build the dtb file (device tree binary). For the i.MX53 QSB, use:
make imx53-qsb.dtb
The uImage will be located in linux/arch/arm/boot/ and the dtb binary in linux/arch/arm/boot/dts.

About Linux
For general Linux information, see About Linux.
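The mainline cross-build steps above can be collected into one small script. This is a hedged sketch: the board-to-LOADADDR table below is illustrative only, and the authoritative values live in linux/arch/arm/mach-imx/Makefile.boot, as noted above.

```shell
#!/bin/sh
# Sketch of the mainline build flow described above.
# The LOADADDR table is illustrative -- confirm each value against
# linux/arch/arm/mach-imx/Makefile.boot before use.
loadaddr_for() {
    case "$1" in
        imx53*) echo 0x70008000 ;;  # i.MX53: DDR starts at 0x70000000
        imx6*)  echo 0x10008000 ;;  # i.MX6:  DDR starts at 0x10000000
        *) echo "unknown board: $1" >&2; return 1 ;;
    esac
}

build_imx_kernel() {
    board="$1"                       # e.g. imx53-qsb
    export ARCH=arm
    export CROSS_COMPILE=arm-none-linux-gnueabi-
    make imx_v6_v7_defconfig
    make -j4 uImage "LOADADDR=$(loadaddr_for "$board")"
    make "$board.dtb"                # device tree blob for the board
}
```

Usage would be, for example, `build_imx_kernel imx53-qsb` from the top of the kernel tree with the toolchain already on PATH.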
View full article
This is a HW design checklist for customer reference. Please read it and fill it in carefully before requesting a schematic review.
View full article
Recipes to include Amazon's Alexa Voice Service in your applications.

Step 1: Get the i.MX Yocto AVS setup environment
Review the steps under Chapter 3 of the i.MX_Yocto_Project_User's_Guide.pdf in the L4.X LINUX_DOCS to prepare your host machine, including at least the following essential Yocto packages:
$ sudo apt-get install gawk wget git-core diffstat unzip texinfo \
  gcc-multilib build-essential chrpath socat libsdl1.2-dev u-boot-tools
Install the i.MX NXP AVS repo. Create or move to a directory where you want to install the AVS Yocto build environment; let's call it <yocto_dir>:
$ cd <yocto_dir>
$ repo init -u https://source.codeaurora.org/external/imxsupport/meta-avs-demos -b master -m imx-alexa-sdk-4.9.51-8mq_ga.xml
Download the AVS BSP build environment:
$ repo sync

Step 2: Set up Yocto for the Alexa SDK image with the AVS-SETUP-DEMO script
Run the avs-setup-demo script as follows to set up your environment for the imx8mqevk board:
$ MACHINE=imx8mqevk DISTRO=fsl-imx-xwayland source avs-setup-demo.sh -b <build_sdk>
where <build_sdk> is the name you will give to your build folder. After accepting the EULA, the script will prompt you to configure:

Sound card selection
The following sound cards are supported on the build:
- 2-Mic Synaptics/Conexant
- 2-Mic TechNexion Voice Hat (with DSPConcepts SW)
The script will prompt you to select the sound card you will be using:
Which Sound Card are you going to use?
Synaptics/Conexant .................... 1
VoiceHat (for DSPConcepts SW) ......... 2
Type the number of your selection and press Enter...

Install the Alexa SDK
The next option is to select whether you want to pre-install the AVS SDK software on the image:
Do you want to build/include the AVS_SDK package on this image(Y/N)?
If you select YES, your image will contain the AVS SDK ready to use (after authentication). Note this AVS_SDK will not have WakeWord detection support, but it can be added at runtime. If your selection was NO, you can always manually fetch and build the AVS_SDK at runtime: all the package dependencies will already be there, so only fetching the AVS_SDK source code and building it is required.

Finish the avs-image configuration
At the end you will see a summary of the configuration you selected for your image build. The following is an example for a pre-installed AVS_SDK with Synaptics sound card support:
============================================================
AVS configuration is now ready at conf/local.conf
- Sound Card = Synaptics
- Alexa SDK 1.7 pre-installed
- Wifi supported
You are ready to bitbake your AVS demo image now:
  bitbake avs-image
If you want to use QT5DisplayCards, use then:
  bitbake avs-image-qt5
============================================================

Step 3: Build the AVS image
Go to your <build_sdk> directory and start the build of the avs-image. There are 2 options.
Regular build:
$ cd <yocto_dir>/<build_sdk>
$ bitbake avs-image
With QT5 support included:
$ cd <yocto_dir>/<build_sdk>
$ bitbake avs-image-qt5
The image with QT5 is useful if you want to add a GUI, for example to render DisplayCards.

Step 4: Deploy the built images to an SD/MMC card to boot the target board
After a build has successfully completed, the created image resides at <build_sdk>/tmp/deploy/images/imx8mqevk/. In this directory you will find the imx8mqevk-avs--.sdcard or imx8mqevk-avs-qt5--.sdcard image, depending on the build you chose in Step 3. To flash the .sdcard image onto an SD card, follow these steps.
Extract and copy the .sdcard file to your SD card:
$ cd <build_sdk>/tmp/deploy/images/imx8mqevk/
$ cp -v imx8mqevk-avs-synaptics-1.7.sdcard.bz2 <workdir>
$ cd <workdir>
$ sudo bzip2 -d imx8mqevk-avs-synaptics-1.7.sdcard.bz2
$ sudo dd if=imx8mqevk-avs-synaptics-1.7.sdcard of=/dev/sd<part> bs=1M && sync
$ sync
Note that the dd input is the decompressed .sdcard file, not the original .bz2.
Properly eject the SD card:
$ sudo eject /dev/sd<part>
Insert the flashed SD card into the 8M EVK and boot. Follow the instructions at startup to set up AVS and run the SampleApp.

NXP Documentation
For a more comprehensive understanding of Yocto, its features and setup, with more image build and deployment options and customization, please take a look at the i.MX_Yocto_Project_User's_Guide.pdf document from the Linux documents bundle mentioned at the beginning of this document. For a more detailed description of the Linux BSP and u-boot use and configuration, please take a look at the i.MX_Linux_User's_Guide.pdf document from the same bundle.
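The extract-and-flash steps above can be wrapped in a small helper. This is a hedged sketch: `raw_image_name` and `flash_sdcard` are illustrative names, and the device argument is a placeholder for your SD reader's whole-disk node.

```shell
#!/bin/sh
# Sketch of the flashing steps above. The dd input must be the decompressed
# .sdcard file, not the original .bz2; the device (e.g. /dev/sdX) is a
# placeholder you must replace with your actual SD card node.
raw_image_name() {
    # imx8mqevk-avs-synaptics-1.7.sdcard.bz2 -> imx8mqevk-avs-synaptics-1.7.sdcard
    echo "${1%.bz2}"
}

flash_sdcard() {
    bz2="$1"; device="$2"
    bzip2 -dk "$bz2"                          # -k keeps the compressed copy
    sudo dd if="$(raw_image_name "$bz2")" of="$device" bs=1M && sync
    sudo eject "$device"
}
```

For example: `flash_sdcard imx8mqevk-avs-synaptics-1.7.sdcard.bz2 /dev/sdX`.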
View full article
Script which patches the ltib folder on Ubuntu 12.04. Steps: $ cp patch-ltib-ubuntu12.04.sh <your ltib folder> $ cd <your ltib folder> $ chmod +x patch-ltib-ubuntu12.04.sh $ ./patch-ltib-ubuntu12.04.sh
View full article
Here are my experiences compiling Qt 5.3.0-beta1 on Yocto. Special thanks to Martin Jansa, the maintainer of the meta-qt5 layer, for his help with this. My original procedure was based on this tutorial: Building Qt5 using yocto on Wandboard - Wandboard Wiki.

Reason: Qt 5.3 contains a nice new plugin that allows using gstreamer output for textures without the CPU-intensive step of copying them (Gerrit Code Review). This makes it possible to play even full-HD videos and apply all the power of Qt5 (e.g. shaders) to them.

Steps:
1. Set up your repo: repo init -u https://github.com/Freescale/fsl-community-bsp-platform -b master-next; repo sync
2. Download the meta-qt5 branch: cd sources; git clone -b jansa/qt5-5.3.0-beta1 https://github.com/meta-qt5/meta-qt5.git
3. Check out a specific revision: cd meta-qt5; git checkout 92be18a3a14deed9d38b8fc6e89f09ba4d730597
4. Apply the following patch (it may no longer be needed later):

diff --git a/recipes-qt/qt5/qt5.inc b/recipes-qt/qt5/qt5.inc
index dfc1c76..a2f9a73 100644
--- a/recipes-qt/qt5/qt5.inc
+++ b/recipes-qt/qt5/qt5.inc
@@ -54,6 +54,7 @@ FILES_${PN}-tools-dbg = " \
 "
 FILES_${PN}-plugins-dbg = " \
     ${OE_QMAKE_PATH_PLUGINS}/*/.debug/* \
+    ${OE_QMAKE_PATH_PLUGINS}/*/*/.debug/* \
 "
 # extra packages
@@ -98,6 +99,7 @@ FILES_${PN}-tools = " \
 "
 FILES_${PN}-plugins = " \
     ${OE_QMAKE_PATH_PLUGINS}/*/*${SOLIBSDEV} \
+    ${OE_QMAKE_PATH_PLUGINS}/*/*/*${SOLIBSDEV} \
 "
 FILES_${PN}-mkspecs = "\
     ${OE_QMAKE_PATH_ARCHDATA}/mkspecs \

5. Define your machine: export MACHINE=xxx (replace with your board)
6. Set up the build environment: cd .. ; . setup-environment build
7. Edit your local layer conf ("conf/bblayers.conf") and add the following two lines:
  ${BSPDIR}/sources/meta-openembedded/meta-ruby \
  ${BSPDIR}/sources/meta-qt5 \
8. Edit your local.conf and add the following lines:

DISTRO_FEATURES_remove = "x11 wayland"
IMAGE_INSTALL_append = " \
    firmware-imx-vpu-imx6q \
    firmware-imx-vpu-imx6d \
"
IMAGE_INSTALL_append = " \
    cpufrequtils \
    nano \
    packagegroup-fsl-gstreamer \
    packagegroup-fsl-tools-testapps \
    packagegroup-fsl-tools-benchmark \
    gstreamer \
    gst-plugins-base-app \
    gst-plugins-base \
    gst-plugins-good \
    gst-plugins-good-rtsp \
    gst-plugins-good-udp \
    gst-plugins-good-rtpmanager \
    gst-plugins-good-rtp \
    gst-plugins-good-video4linux2 \
    qtbase-fonts \
    qtbase-plugins \
    qtbase-tools \
    qtbase-examples \
    qtdeclarative \
    qtdeclarative-plugins \
    qtdeclarative-tools \
    qtdeclarative-examples \
    qtdeclarative-qmlplugins \
    qtmultimedia \
    qtmultimedia-plugins \
    qtmultimedia-examples \
    qtmultimedia-qmlplugins \
    qtsvg \
    qtsvg-plugins \
    qtsensors \
    qtimageformats-plugins \
    qtsystems \
    qtsystems-tools \
    qtsystems-examples \
    qtsystems-qmlplugins \
    qtscript \
    qt3d \
    qt3d-examples \
    qt3d-qmlplugins \
    qt3d-tools \
    qtwebkit \
    qtwebkit-examples-examples \
    qtwebkit-qmlplugins \
    cinematicexperience \
"
PACKAGECONFIG_append_pn-qtmultimedia = " gstreamer010"
QT5_VERSION = "5.2.1+5.3.0-beta1+git%"
PREFERRED_VERSION_qtbase-native = "${QT5_VERSION}"
PREFERRED_VERSION_qtbase = "${QT5_VERSION}"
PREFERRED_VERSION_qtdeclarative = "${QT5_VERSION}"
PREFERRED_VERSION_qtjsbackend = "${QT5_VERSION}"
PREFERRED_VERSION_qtjsbackend-native = "${QT5_VERSION}"
PREFERRED_VERSION_qtgraphicaleffects = "${QT5_VERSION}"
PREFERRED_VERSION_qtimageformats = "${QT5_VERSION}"
PREFERRED_VERSION_qtmultimedia = "${QT5_VERSION}"
PREFERRED_VERSION_qtquick1 = "${QT5_VERSION}"
PREFERRED_VERSION_qtquickcontrols = "${QT5_VERSION}"
PREFERRED_VERSION_qtsensors = "${QT5_VERSION}"
PREFERRED_VERSION_qtserialport = "${QT5_VERSION}"
PREFERRED_VERSION_qtscript = "${QT5_VERSION}"
PREFERRED_VERSION_qtsvg = "${QT5_VERSION}"
PREFERRED_VERSION_qttools-native = "${QT5_VERSION}"
PREFERRED_VERSION_qtwebkit = "${QT5_VERSION}"
PREFERRED_VERSION_qtwebkit-examples = "${QT5_VERSION}"
PREFERRED_VERSION_qtxmlpatterns = "${QT5_VERSION}"

9. Build an image: bitbake core-image-minimal

This builds Qt 5.3 for the framebuffer. If you want to use it with X11, adapt the configuration according to this tutorial: Integrate Qt5 into yocto sato image on Wandboard - Wandboard Wiki. Please tell me if I missed something; I wrote this down from memory of the steps.
View full article
Question: How can the i.MX6 PMIC_ON_REQ signal be put under SW control? PMIC_ON_REQ is hooked up to the PFUZE100's PWRON pin, and Linux with the 3.0.35 BSP is used. SW control here means driving the PMIC_ON_REQ pin low. From the documentation it appears that this pin can be controlled either by another i.MX6 pin or through SW, but the reference manual is not clear on how to do this. An SR search (SR 1-877711457) suggests that PMIC_ON_REQ can be controlled by SW.

Answer: In the latest RM version, Figure 60-3 (Chip on/off state flow diagram) and Table 60-3 (Power mode transitions) in IMX6DQRM.pdf show two ways to make PMIC_ON_REQ go low, and the latest BSP includes the SW method. It turns out the SNVS module on the i.MX6S/DL differs from the one on the i.MX6Q/D, which in turn differs from the i.MX6SLX. The bottom line is that the requirements for the SNVS functionality came primarily from the Android market, so many of the Linux use cases are not supported; SW control of the PMIC_ON_REQ pin is an example of this. This means there are only 2 ways to get PMIC_ON_REQ to power up on the i.MX6Q/D:
1 - a low on the ON/OFF pin longer than the debounce time (750 ms)
2 - a wake-up/tamper event
For the i.MX6S/DL, there are 3 ways to get PMIC_ON_REQ to power up:
1 - a power-on reset on VSNVS (i.e. first applying VSNVS)
2 - a low on the ON/OFF pin longer than the debounce time (750 ms)
3 - a wake-up/tamper event
Note that in my case, where an external input actually wakes up the system, turns on the PMIC, and brings up the i.MX6, there is only 1 way to get PMIC_ON_REQ to go back high:
1 - a low on the ON/OFF pin longer than the debounce time (750 ms)
As it turns out, when the VSNVS_HP section is powered (i.e. VDDHIGH is applied), it gates off the wake-up timer.
View full article
Contents (of the attached document)
1 Setting up the i.MX8QXP Linux 4.14.98_ga BSP build environment
  1.1 Downloading the BSP
  1.2 Creating the Yocto build environment
2 Device Tree
  2.1 NXP's device tree structure
  2.2 Origin of the device tree (no updates)
  2.3 Device tree basics and syntax (no updates)
  2.4 Device tree code analysis (no updates)
3 NXP i.MX8X BSP package directory structure
4 Building the NXP i.MX8X BSP (no updates)
  4.1 Which files need to be built
  4.2 How to build these files
  4.3 How the object files are linked and in what order
  4.4 Kernel Kconfig
5 NXP BSP kernel initialization flow (no updates)
  5.1 Initialization assembly code
  5.2 Initialization C code
  5.3 init_machine
6 NXP BSP kernel customization
  6.1 DDR changes
  6.2 IO pin configuration and the pinctrl driver
  6.3 New board bring-up
  6.4 Changing the debug UART
  6.5 uSDHC device customization (eMMC flash, SD card, SDIO card)
  6.6 LVDS LCD driver customization
  6.7 GPIO key driver customization
  6.8 GPIO LED driver customization
  6.9 Fuse nvram driver
  6.10 SPI and SPI slave drivers
  6.11 Changing USB 3.0 Type-C to USB 3.0 Type-A (not verified)
  6.12 Automotive Ethernet driver customization
  6.13 i.MX8DX MEK support
  6.14 NAND flash support and programming
View full article
On i.MX8MQ and i.MX8M Mini, the codec used is the WM8524, which only supports audio playback. Although the 8M Mini does have a PDM microphone interface (MICFIL), there is no support for audio recording via I2S. This guide shows how to add an audio recording driver on i.MX8MQ/8MM step by step.

Hardware: i.MX8MQ/8MM EVK, digital microphone with I2S output
OS: Android/Linux
Kernel version: 4.14.78
For detailed steps, please see the attachment.
View full article
Contents (of the attached document)
1 Hardware resources, documents, and tool downloads
  1.1 Hardware resources
  1.2 Documents related to memory configuration and testing
  1.3 Memory stress test tool
  1.4 Memory configuration tool
2 Memory design requirements
3 LPDDR4 basics
4 Hardware connections
5 i.MX8QXP/DXP + LPDDR4 memory configuration and test steps
  5.1 Generating the LPDDR4 initialization script
  5.2 Testing memory with the memory test tool
  5.3 Building the SCFW image used by the memory test tool
  5.4 LPDDR4 configurations of other sizes
6 i.MX8DX + DDR3L memory configuration
7 Debugging test failures
8 Applying the memory parameters to the SCFW
View full article
Platform: i.MX8QM/QXP
OS: imx-yocto-L4.14.98_2.0.0_ga
Camera: max9286 deserializer <=> max96705 serializer + ar0144
or: max9286 deserializer <=> max96705 serializer + ov9284
Note that currently only one camera is supported, and the serializer should be connected to the IN0 port of the max9286.

Data format:
ar0144: mono raw 12-bit
ov9284: mono raw 10-bit
On i.MX8QM/QXP the data is received as raw 16-bit and the valid data bits run from bit[13] down towards the LSB:
for mono raw 12-bit the valid bits are 0bxxdd_dddd_dddd_ddxx
for mono raw 10-bit the valid bits are 0bxxdd_dddd_dddd_xxxx

max9286 and max96705 configuration:
The dbl, bws, PXL_CRC/edc, hven, hibw, lccen, and him settings should be the same on both sides; this can be achieved by pin or register configuration. The crossbar function of the max96705 can be used to fix reversed data bits. For example, for reversed 12-bit data with dbl set to 1:
0x20 0xb
0x21 0xa
0x22 0x9
...
0x2b 0x0
0x30 0xb
0x31 0xa
...
0x3b 0x0
0x20 to 0x2b and 0x30 to 0x3b are registers of the max96705.

Patch application:
1. Push the kernel patch to the kernel source and apply it.
2. Reconfigure the kernel so that only CONFIG_MAX9286_AR0144 or CONFIG_MAX9286_WISSEN (ov9284) is enabled, and all other max9286-related options are disabled. You can run menuconfig to achieve this.
3. For testing, copy vulkan-v4l2.tar to the board and run vulkan-v4l2. The source code is at https://github.com/sheeaza/vulkan-v4l2, branch ar0144 for ar0144 and branch ov9284 for ov9284.

Update: patch updated for the data format.
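The elided crossbar register pairs above follow a simple pattern (registers 0x20-0x2b and 0x30-0x3b each map bit 0xb down to 0x0), so they can be generated rather than typed. A hedged sketch; the actual i2c write command to the max96705 is board-specific and omitted here.

```shell
#!/bin/sh
# Generate the max96705 crossbar register/value pairs listed above for
# reversing a 12-bit bus: registers 0x20..0x2b and 0x30..0x3b each take
# the bit indices 0xb down to 0x0.
gen_crossbar_map() {
    for base in 0x20 0x30; do
        for i in $(seq 0 11); do
            reg=$(( base + i ))
            val=$(( 11 - i ))
            printf '0x%02x 0x%x\n' "$reg" "$val"
        done
    done
}
gen_crossbar_map
```

Each output line is a register/value pair to write over i2c to the serializer.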
View full article
Introduction
The Intel® Neural Compute Stick 2 (Intel® NCS 2) is Intel's newest deep learning inference development kit. Packed into an affordable USB-stick form factor, the Intel® NCS 2 is powered by Intel's latest VPU (vision processing unit), the Intel® Movidius™ Myriad X, which includes an on-chip neural network accelerator called the Neural Compute Engine. With 16 SHAVE cores and a dedicated hardware neural network accelerator, the NCS 2 offers up to 8x the performance of the previous generation. Ref: https://software.intel.com/en-us/articles/run-intel-openvino-models-on-intel-neural-compute-stick-2

The officially supported host platforms for the NCS 2 are x86 PCs and the Raspberry Pi. In this guide, we introduce how to use it on the i.MX8MQ. Please see the attached guide for more details.
View full article
Check the new updated version with Morty here.

Step 1: Get the i.MX Yocto AVS setup environment
Review the steps under Chapter 3 of the i.MX_Yocto_Project_User's_Guide.pdf in the L4.X LINUX_DOCS to prepare your host machine, including at least the following essential Yocto packages:
$ sudo apt-get install gawk wget git-core diffstat unzip texinfo \
  gcc-multilib build-essential chrpath socat libsdl1.2-dev u-boot-tools
Install the i.MX NXP AVS repo. Create or move to a directory where you want to install the AVS Yocto build environment; let's call it <yocto_dir>:
$ cd <yocto_dir>
$ repo init -u https://source.codeaurora.org/external/imxsupport/meta-avs-demos -b master -m imx7d-pico-avs-sdk_4.1.15-1.0.0.xml
Download the AVS BSP build environment:
$ repo sync

Step 2: Set up Yocto for the Alexa SDK image with the AVS-SETUP-DEMO script
Run the avs-setup-demo script as follows to set up your environment for the imx7d-pico board:
$ MACHINE=imx7d-pico DISTRO=fsl-imx-x11 source avs-setup-demo.sh -b <build_sdk>
where <build_sdk> is the name you will give to your build folder. After accepting the EULA, the script will prompt you to configure:

Sound card selection
The following sound cards are supported on the build:
- SGTL (on-board audio codec of the PicoPi)
- 2-Mic Conexant
The script will ask whether you are going to use the Conexant card; if not, SGTL is assumed as your selection:
Are you going to use Conexant Sound Card [Y/N]?

Install the Alexa SDK
The next option is to select whether you want to pre-install the AVS SDK software on the image:
Do you want to build/include the AVS_SDK package on this image(Y/N)?
If you select YES, your image will contain the AVS SDK ready to use (after authentication). Note this AVS_SDK will not have WakeWord detection support, but it can be added at runtime. If your selection was NO, you can always manually fetch and build the AVS_SDK at runtime: all the package dependencies will already be there, so only fetching the AVS_SDK source code and building it is required.

Finish the avs-image configuration
At the end you will see a summary of the configuration you selected for your image build. The following is an example for a pre-installed AVS_SDK with Conexant sound card support and WiFi/BT not enabled:
==========================================================
AVS configuration is now ready at conf/local.conf
- Sound Card = Conexant
- AVS_SDK pre-installed
You are ready to bitbake your AVS demo image now:
  bitbake avs-image
==========================================================

Step 3: Build the AVS image
Go to your <build_sdk> directory and start the build of the avs-image. There are 2 options.
Regular build:
$ cd <yocto_dir>/<build_sdk>
$ bitbake avs-image
With QT5 support included:
$ cd <yocto_dir>/<build_sdk>
$ bitbake avs-image-qt5
The image with QT5 is useful if you want to add a GUI, for example to render DisplayCards.

Step 4: Deploy the built images to an SD/MMC card to boot the target board
After a build has successfully completed, the created image resides at <build_sdk>/tmp/deploy/images/imx7d-pico/. In this directory you will find the imx7d-pico-avs.sdcard or imx7d-pico-avs-qt5.sdcard image, depending on the build you chose in Step 3. To flash the .sdcard image into the eMMC device of your PicoPi board, follow these steps.
Download the bootbomb flasher. Follow the instructions in Section 4 (Board Reflashing) of the Quick Start Guide for the AVS kit to put your board in flashing mode.
Copy the built SD card file:
$ sudo dd if=imx7d-pico-avs.sdcard of=/dev/sd<part> bs=1M && sync
$ sync
Properly eject the pico-imx7d board:
$ sudo eject /dev/sd<part>

NXP Documentation
Refer to the Quick Start Guide for the AVS SDK to fully set up your PicoPi board with the Synaptics 2-Mic and the PicoPi i.MX7D. For a more comprehensive understanding of Yocto, its features and setup, with more image build and deployment options and customization, please take a look at the i.MX_Yocto_Project_User's_Guide.pdf document from the Linux documents bundle mentioned at the beginning of this document. For a more detailed description of the Linux BSP and u-boot use and configuration, please take a look at the i.MX_Linux_User's_Guide.pdf document from the same bundle.
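Since the dd step above writes a raw image, targeting a partition node (e.g. /dev/sdb1) instead of the whole disk would produce an unbootable card. This is a hedged sketch of a pre-flight check; `is_whole_sd_disk` and `safe_flash` are illustrative names, and the pattern only covers /dev/sdX-style device names.

```shell
#!/bin/sh
# Refuse to dd the image onto anything but a whole-disk /dev/sdX node.
# Note: this deliberately does not cover /dev/mmcblkN readers; adjust the
# pattern if your SD reader shows up that way.
is_whole_sd_disk() {
    case "$1" in
        /dev/sd[a-z]) return 0 ;;
        *) return 1 ;;
    esac
}

safe_flash() {
    image="$1"; device="$2"
    is_whole_sd_disk "$device" || { echo "refusing to write to $device" >&2; return 1; }
    sudo dd if="$image" of="$device" bs=1M && sync
}
```

For example: `safe_flash imx7d-pico-avs.sdcard /dev/sdb`.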
View full article
Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. OpenBLAS is an optimized BLAS library which is used as a deep learning accelerator backend in Caffe/Caffe2. I enabled it in Yocto (Rocko) by adding a bb file, built it for i.MX6QP, i.MX7ULP, and i.MX8MQ, and also ran its test example successfully. You can find the test example (openblas_utest) under the folder image/opt/openblas/bin of the OpenBLAS work directory. Currently, version 0.3.0 is supported in the bb file.

Update: the bb file has been updated to v0.3.6, and multi-threading is enabled by setting USE_OPENMP=1 and USE_THREAD=4 when compiling the library.
View full article
In the default Linux BSP, NXP implemented LVDS-to-HDMI (it6263) and MIPI-DSI-to-HDMI (adv7535) bridge chip drivers. These drivers read the EDID from the display and then apply the timing parameters to the DRM driver. But in the bridge chip -> serializer -> deserializer -> LCD panel use case, there is no EDID. Attached are reference patches for such a use case: they bind the bridge chip to the panel directly, so no EDID is needed. The patches were tested on an i.MX8QXP MEK in bridge chip + panel mode; with them, the fb0 device appears under /sys/class/graphics/ and the card under /sys/class/drm/. Display works fine with the 720p panel mode selected in the DTS.
[2020-06-24]: Added patches for the L4.14.98 kernel:
Android_Auto_P9.0.0_GA2.1.0_Kernel_No_EDID_IT6263.patch
L4.14.98-iMX8QXP-MEK-ADV7535-MIPI-DSI-to-HDMI-bridge-chip-com.patch
View full article
Introduction
The NFC Reader Library is a feature-complete software support library for NXP's NFC frontend ICs. It is designed to give developers a faster and simpler way to deliver NFC-enabled products. This multi-layer library, written in C, makes it easy to create NFC-based applications. The purpose of this document is to provide instructions on how to install the NFC Reader Library on the imx7dsabresd board and communicate with the PN5180, an NFC frontend. It describes all the steps required to connect the board to an OM25180TWR: the wire connections, the changes in the device tree, and the library configuration.

Building the Linux image and the bal kernel module
This section describes how to build the Linux image using Yocto and how to compile the bal kernel module. Information specific to this library starts in the next section. Requirements: a Linux host PC (e.g. Ubuntu 14.04/16.04) and root permissions. To download the required host packages, use:
$ sudo apt-get install gawk wget git-core diffstat unzip texinfo gcc-multilib build-essential chrpath socat libsdl1.2-dev
Specific for Ubuntu:
$ sudo apt-get install libsdl1.2-dev xterm sed cvs subversion coreutils texi2html docbook-utils python-pysqlite2 help2man make gcc g++ desktop-file-utils \
libgl1-mesa-dev libglu1-mesa-dev mercurial autoconf automake groff curl lzop asciidoc
To set up the repo utility (a tool written on top of git), run the commands:
$ mkdir ~/bin (this step may not be needed if the bin folder already exists)
$ curl http://commondatastorage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
$ chmod a+x ~/bin/repo
Then add the following line to .bashrc to ensure ~/bin is in the PATH variable:
export PATH=~/bin:$PATH
To download the Freescale Yocto Project Community BSP:
$ mkdir fsl-release-bsp
$ cd fsl-release-bsp
$ repo init -u git://git.freescale.com/imx/fsl-arm-yocto-bsp.git -b imx-4.1-krogoth
$ repo sync
To build the image, run the following in fsl-release-bsp:
$ mkdir buildDevSpi
$ DISTRO=fsl-imx-xwayland MACHINE=imx7dsabresd source fsl-setup-release.sh -b buildDevSpi
$ bitbake fsl-image-machine-test
To build the toolchain, run:
$ bitbake meta-toolchain
$ cd buildDevSpi/tmp/deploy/sdk/
$ ./fsl-imx-xwayland-glibc-x86_64-meta-toolchain-cortexa7hf-neon-toolchain-4.1.15-2.1.0.sh
Accept the default parameters. To deploy the image to an SD card, use:
$ sudo dd if=fsl-image-machine-test-imx7dsabresd.sdcard of=<sd card> bs=1M && sync
The image is found in buildDevSpi/tmp/deploy/images/imx7dsabresd. For the kernel module compilation, set up the console environment:
$ . /opt/fsl-imx-xwayland/4.1.14-2.1.0/environment-setup-cortexa7hf-neon-poky-linux-gnueabi
Then, in the kernel module directory, replace the path in the Makefile with your Linux build directory and run:
$ make
bal.ko is the compiled module. To use the kernel menuconfig, run:
$ bitbake -c menuconfig linux-imx
Another useful command, for rebuilding the Linux kernel image, is:
$ bitbake -f -c compile linux-imx; bitbake -f -c deploy linux-imx; bitbake -f -c compile fsl-image-machine-test; bitbake -f fsl-image-machine-test

Host interface
The PN5180 host interface is based on SPI, extended by the BUSY signal line. Only half-duplex data transfer is supported and no chaining is allowed, meaning that the whole instruction has to be sent, or the whole receive buffer has to be read out. The module is connected to the i.MX7D board through the mikroBUS expansion port as follows: MK_BUS_CS, MK_BUS_SCK, MK_BUS_MOSI, and MK_BUS_MISO are used for the SPI bus lines; MK_BUS_INT, MK_BUS_RX, and MK_BUS_TX are used for the BUSY, RESET, and IRQ lines.
The pin configuration is the following: GPIO6_IO22 is CS, GPIO6_IO14 is BUSY, GPIO6_IO12 is RESET, and GPIO6_IO13 is IRQ. The DWL pin, which can be used for firmware update, is connected to GND. A common ground is also required.

Connections table:
Jumper | Jumper Pins | Description       | i.MX7D I/O                 | Tower Edge
J4     | 1 - 2       | SPI Clk Selection | ECSPI3_SCLK (SAI2_RX_DATA) | B7
J20    | 1 - 2       | PN5180 Reset      | GPIO6_IO12 (SAI1_RX_DATA)  | B8
J1     | 1 - 2       | SPI SS0           | GPIO6_IO22 (SAI2_TX_DATA)  | B9
J3     | 1 - 2       | SPI MOSI          | ECSPI3_MOSI (SAI2_TX_BCLK) | B10
J2     | 1 - 2       | SPI MISO          | ECSPI3_MISO (SAI2_TX_SYNC) | B11
J19    | 2 - 3       | PN5180 BUSY       | GPIO6_IO14 (SAI1_TX_SYNC)  | B58
J5     | 1 - 2       | PN5180 IRQ        | GPIO6_IO13 (SAI1_TX_BCLK)  | B62
X      | X           | PN5180 DWL        | GND                        | B52
X      | X           | GND               | GND                        | B2

Kernel Configuration
To allow the library to manage the RESET, IRQ, and BUSY pins, the options for debug GPIO and userspace I/O drivers must be enabled (in menuconfig: Device Drivers -> GPIO Support -> Debug GPIO, and Device Drivers -> Userspace I/O -> Userspace I/O platform driver with generic IRQ). For controlling the SPI, there are two options: spidev or the NXP bal module. For spidev, select Device Drivers -> SPI support -> User mode SPI and apply the imx7d-sdb_spidev.patch (it also does the pinmuxing; it is attached to this document). When using the NXP bal module, compile it, load it with insmod, and apply the imx7d-sdb_bal.patch (it also does the pinmuxing; the patch and the module are attached to this document).

Library Configuration
For the library configuration, <lib-folder>/Platform/DAL/Board_Imx6ulevkPn5180.h must be replaced with Board_Imx7dsabresdPn5180_bal.h or Board_Imx7dsabresdPn5180_spidev.h (based on the selected SPI interface). For compilation, the command is:
$ ./build.sh yocto /opt/fsl-imx-xwayland/4.1.15-2.1.0/sysroots/
The last parameter is the location of the toolchain generated by Yocto. A build folder is generated outside of the source code folder.
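When inspecting the RESET, IRQ, and BUSY pins above from userspace, it helps to convert pad names like GPIO6_IO22 into legacy sysfs GPIO numbers. This is a hedged helper, assuming the common i.MX scheme of 32 lines per bank with banks numbered from 1; verify the numbering on your kernel before relying on it.

```shell
#!/bin/sh
# Convert an i.MX pad name like GPIO6_IO22 into the legacy /sys/class/gpio
# number: (bank - 1) * 32 + line. Assumes banks start at 1 with 32 lines
# each (typical for i.MX7 BSPs) -- verify against your kernel.
imx_gpio_num() {
    bank="${1#GPIO}"; bank="${bank%%_*}"
    line="${1##*IO}"
    echo $(( (bank - 1) * 32 + line ))
}

# Example (requires root): export the BUSY pin for inspection.
# echo "$(imx_gpio_num GPIO6_IO14)" > /sys/class/gpio/export
```

With this scheme the CS pin GPIO6_IO22 comes out as GPIO number 182.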
The applications from ComplianceApp can be deployed on the board to test the provided functionality. Other useful resources: – i.MX Yocto Project User's Guide: https://www.nxp.com/webapp/sps/download/preDownload.jsp?render=true – NFC Reader Library for Linux Installation (AN11802): https://www.nxp.com/docs/en/application-note/AN11802.pdf – PN5180 data sheet: https://www.nxp.com/docs/en/data-sheet/PN5180A0XX-C1-C2.pdf
Low power demo on i.MX8MM.

9/28/2020: Attachments updated. 1. Fixed a bug in the 5.4.24 kernel where the system could only wake up once. 2. Removed 0x104 from the ATF patch. On 5.4.24, tested OK without PLL2.

9/8/2020: Attachments updated. Added patches for the 5.4.24 kernel.

We use this demo to test power consumption on the i.MX8MM EVK.

Usage:
1. Kernel: echo "mem" > /sys/power/state
2. M4: Select a power mode from the menu and wait for wakeup. The default wakeup method is GPT.

Additional patches add support for the following cases:
1. M core RUN and A core in suspend with DDR OFF.
2. M core wakes up the A core without DDR support.

Descriptions:
freertos_lowpower.zip: A simple FreeRTOS example for M4 RUN while the A core is in DSM. Generally, we use MU_TriggerInterrupts(MUB, kMU_GenInt0InterruptTrigger); to do the wakeup.
low_power_demo.zip: A simple bare-metal example for M4 RUN while the A core is in DSM. Wakeup is done the same way. Note that the FreeRTOS version has more options in its menu.
atf patch: Allows the A53 to enter fast-wakeup STOP when the M4 is running. It also avoids bypassing some PLLs, which is important to keep the M4 running while the A53 enters suspend.
0001-iMX8MM-GIR-wakeup.patch: GIR wakeup patch for the kernel. The kernel needs to use fsl-imx8mm-evk-m4.dtb.
0002-Don-t-keep-root-clks-when-M4-is-ON.patch: Don't keep root clocks on when the M4 is ON.
0001-plat-imx8mm-keep-the-necessary-clock-enabled-for-rdc.patch: There is a design issue on wakeup from DSM, described in the patch: "if NOC power down is enabled in DSM mode, when system resume back, RDC need to reload the memory regions config into the MRCs, so PCIE, DDR, GPU bus related clock must on to make sure RDC MRCs can be successfully reloaded." Note that this patch keeps the PCIE, DDR and GPU clocks on, which increases power. An optimization would be to decrease the PCIE, DDR and GPU clocks before entering DSM.
Power measurement:

Supply Domain           Voltage (V)           I (mA)              P (mW)
                        peak      avg         peak     avg        peak     avg
VDD_ARM (L6)            1.010029  1.009513    1.109    1.030      1.120    1.039
VDD_SOC (L5)            0.855199  0.854857    190.110  189.973    162.582  162.400
VDD_GPU_VPU_DRAM (L10)  0.977240  0.977050    19.865   19.800     19.413   19.346
NVCC_DRAM (L15)         1.094407  1.094168    2.059    1.984      2.253    2.171
Total                                                             185.367  184.956

Notes: These power measurements were taken with the Cortex-A in DSM and the Cortex-M running. In other tests, if the M core can be put into STOP mode, additional power can be saved (5 to 20 mA on VDD_SOC). The table shows that putting the DDR into retention saves a lot of power on VDD_SOC and NVCC_DRAM.
Dithering Implementation for Eink Display Panel
by Daiyu Ko, Freescale

1. Dithering

a. Dithering in digital image processing
Dithering is a technique used in computer graphics to create the illusion of color depth in images with a limited color palette (color quantization). In a dithered image, colors not available in the palette are approximated by a diffusion of colored pixels from within the available palette. The human eye perceives the diffusion as a mixture of the colors within it (see color vision). Dithered images, particularly those with relatively few colors, can often be distinguished by a characteristic graininess or speckled appearance.

Figure 1. Original photo; note the smoothness in the detail.
http://en.wikipedia.org/wiki/File:Dithering_example_undithered_web_palette.png
Figure 2. Original image using the web-safe color palette with no dithering applied. Note the large flat areas and loss of detail.
http://en.wikipedia.org/wiki/File:Dithering_example_dithered_web_palette.png
Figure 3. Original image using the web-safe color palette with Floyd–Steinberg dithering. Note that even though the same palette is used, the application of dithering gives a better representation of the original.

b. Applications
Display hardware, including early computer video adapters and many modern LCDs used in mobile phones and inexpensive digital cameras, shows a much smaller color range than more advanced displays. One common application of dithering is to more accurately display graphics containing a greater range of colors than the hardware is capable of showing. For example, dithering might be used to display a photographic image containing millions of colors on video hardware that is only capable of showing 256 colors at a time. The 256 available colors would be used to generate a dithered approximation of the original image.
Without dithering, the colors in the original image might simply be "rounded off" to the closest available color, resulting in a new image that is a poor representation of the original. Dithering takes advantage of the human eye's tendency to "mix" two colors in close proximity to one another. Since the Eink panel displays only grayscale images, we can use dithering to reduce the grayscale level, even down to black/white only, and still get better visual results.

c. Algorithm
There are several algorithms designed to perform dithering. One of the earliest, and still one of the most popular, is the Floyd–Steinberg dithering algorithm, developed in 1975. One of the strengths of this algorithm is that it minimizes visual artifacts through an error-diffusion process; error-diffusion algorithms typically produce images that more closely represent the original than simpler dithering algorithms. (Example images: original, threshold, Bayer ordered dithering.)

Error-diffusion dithering is a feedback process that diffuses the quantization error to neighboring pixels. Floyd–Steinberg dithering only diffuses the error to immediately neighboring pixels, which results in very fine-grained dithering. Jarvis, Judice, and Ninke dithering also diffuses the error to pixels one step further away; the dithering is coarser but has fewer visual artifacts. It is slower than Floyd–Steinberg dithering because it distributes errors among 12 nearby pixels instead of 4. Stucki dithering is based on the above, but is slightly faster; its output tends to be clean and sharp. (Example images: Floyd–Steinberg; Jarvis, Judice & Ninke; Stucki.) Sierra dithering is based on Jarvis dithering, but is faster while giving similar results.
Filter Lite is an algorithm by Sierra that is much simpler and faster than Floyd–Steinberg, while still yielding similar (according to Sierra, better) results. Atkinson dithering, developed by Apple programmer Bill Atkinson, resembles Jarvis dithering and Sierra dithering, but is faster. Another difference is that it doesn't diffuse the entire quantization error, but only three quarters of it. It tends to preserve detail well, but very light and dark areas may appear blown out. (Example images: Sierra, Sierra Lite, Atkinson.)

2. Eink display panel characteristics

a. Low resolution
Eink has only a couple of display modes:
     DU      (1-bit, black/white)
     GC4     (2-bit, grayscale)
     GC16    (4-bit, grayscale)
     A2      (1-bit, black/white, fast update mode)

b. Slow update time
For an 800x600 panel (per frame):
     DU      300 ms
     GC4     450 ms
     GC16    600 ms
     A2      125 ms

3. Effect of dithering on an Eink display panel

a. Low resolution with better visual quality
By dithering the original grayscale image, we get a better-looking result. Even if the image becomes pure black and white, dithering still gives the impression of a grayscale image.

b. Faster update with Eink's animation waveform
Since the DU/A2 modes update the Eink panel faster than the grayscale modes, with dithering we get not only a better-looking result, but we can also use the DU/A2 fast update modes to show animation or even normal video files.

4. Our current dithering implementation

a. Choose a simple and effective algorithm
Considering the Eink panel's characteristics, we compared a couple of dithering algorithms and decided to use the Atkinson dithering algorithm.
It is simple, and the result is better, especially for the Eink black/white display case.

b. Optimize so that dithering does not affect update time too much
Thanks to the simplicity of the Atkinson dithering algorithm, we were also able to put a lot of effort into optimization, reducing the dithering processing time and making it practical for actual use.

c. Current algorithm performance and result
Currently, with the Atkinson dithering algorithm, our processing time is about 70 ms.

5. Availability

a. We implemented both Y8->Y1 and Y8->Y4 dithering with the same dithering algorithm.
b. Implemented in our EPDC driver in the i.MX6SL Linux 3.0.35 release.
c. Also implemented in our Video for Eink demo.

6. References

a. Part of the dithering introduction is from www.wikipedia.org
There are 8 UART ports on the i.MX6UL and one uniform Linux driver for all of them. For UART1~UART6, no special operation or attention is needed. But for UART7/UART8, there is a special rule to enable them.

According to the i.MX6UL Reference Manual, the UART7/8 RTS pins are muxed with the ENET TX_CLK pins. When the SION bit of ENET_TX_CLK is set, we need to switch to another MUX mode as the input signal for UARTx_RTS; otherwise, UARTx_RTS will be disturbed by the looped-back ENET clock signal. So we should set the IOMUXC_UART7_RTS_B_SELECT_INPUT and IOMUXC_UART8_RTS_B_SELECT_INPUT registers to 0x2/0x3 to avoid the ENET clock conflict, no matter whether we enable the UART7/8 RTS/CTS function or not. Let's summarize the different scenarios for enabling UART7/8:

1. The ENET driver is disabled and UART7/8 is enabled. There is no special operation to do; just use UART7/8 like the other UARTs.

2. ENET and UART7/8 are both enabled. There are two use models, RTS/CTS enabled or disabled.

2a. If we enable the RTS/CTS feature and configure the RTS/CTS pins in the device tree, we should of course avoid the conflict between the UART CTS/RTS pins and the ENET TX_CLK pins. There is no special operation to do, because the RTS/CTS device tree entries automatically set the IOMUXC_UART7_RTS_B_SELECT_INPUT/IOMUXC_UART8_RTS_B_SELECT_INPUT register to the correct value.

2b. If we don't enable the RTS/CTS feature and have no RTS/CTS pin configuration in the device tree, we must manually add code to set the IOMUXC_UART7_RTS_B_SELECT_INPUT/IOMUXC_UART8_RTS_B_SELECT_INPUT register, because its default value is 0x0 (ENETx_TX_CLK_ALT1).

Here is an example showing how to use UART7 on the EVK board in scenario 2b.

1. Modify imx6ul-14x14-evk.dts to enable UART7:
a. Remove all LCD settings to disable the lcdif, because we configure the UART7 TX/RX pin pads on LCD data lines.
b. Add the UART7-related settings:

&uart7 {
        pinctrl-names = "default";
        pinctrl-0 = <&pinctrl_uart7>;
        status = "okay";
};

&iomuxc {
        pinctrl-names = "default";
        pinctrl-0 = <&pinctrl_hog_1>;
        ....
        pinctrl_uart7: uart7grp {
                fsl,pins = <
                        MX6UL_PAD_LCD_DATA16__UART7_DCE_TX 0x1b0b1
                        MX6UL_PAD_LCD_DATA17__UART7_DCE_RX 0x1b0b1
                >;
        };
};

2. Add code to set the IOMUXC_UART7_RTS_B_SELECT_INPUT register in arch/arm/mach-imx/mach-imx6ul.c:

static void __init imx6ul_init_machine(void)
{
        struct device *parent;
        void __iomem *iomux;
        struct device_node *np;
        ...........
        imx6ul_pm_init();
        np = of_find_compatible_node(NULL, NULL, "fsl,imx6ul-iomuxc");
        iomux = of_iomap(np, 0);
        /* IOMUXC_UART7_RTS_B_SELECT_INPUT is at offset 0x650 */
        writel_relaxed(0x2, iomux + 0x650);
}

3. Build zImage and imx6ul-14x14-evk.dtb.

4. Test in the Linux console:

root@imx6ulevk: ls /dev/ttymxc*        // ttymxc6 appears in the list
root@imx6ulevk: echo hello > /dev/ttymxc6
Most engineers should follow this fundamental methodology when designing and bringing up a new board:

1. Review the schematics and layout to ensure proper connectivity of all devices.
2. Once the board returns from the manufacturer, measure and document all of the voltage rails of each IC on the board (especially the SoC and DRAM).
3. Ensure JTAG debugger connectivity. Due to the complexity of today's systems, every new board design should have some "hooks" to allow JTAG connectivity, even if these are simply test points.
4. Bring up and ensure proper DRAM functionality. It is imperative that the first three steps are precisely accomplished first; oftentimes, DRAM instability or non-functionality is due to an improper connection (including not being connected to the voltage net) or poor layout.

Once these four steps are completed, the board can proceed to a broader checkout of the other peripherals using some type of compiled test code executed from DRAM. More often than not, the end user's board will differ from Freescale reference design boards, either in how the DRAMs are connected or simply by using a different DRAM vendor. As such, tools were created to aid in the development of DRAM initialization scripts. The resulting script, though targeted at the RealView development system (i.e. include files), can easily be ported to another debugger's command syntax or to assembly code for use in boot loaders. These tools are Excel spreadsheet based and include a "How To Use" tab, making the tool usage relatively self-explanatory. Each tool is unique to a specific i.MX processor and to the DRAM technology used with that processor. The attached files are tools available for the following i.MX SoCs:
The mfgtool is the tool that downloads images to the i.MX series of application processors. It is a convenient and easy way to download images to your board; its introduction, work flow and usage guide are detailed in the mfgtool documentation. Customers using our reference boards can directly use the default mfgtool packages we supply for every BSP version and board. But when customers design their own board around one of our i.MX processors, they make many changes relative to our reference board, so they need to rebuild the images for their board and for the mfgtool download tool.

For the old BSP versions, take L3.0.35_4.1.0_130816 as an example: after finishing the BSP port for your board, run the following command line to generate the manufacturing firmware:

./ltib --profile config/platform/imx/updater.profile --preconfig config/platform/imx/imx6q_updater.cf --continue --batch

For the Android 4.2.2 BSP, use the following commands:

make distclean
make mx6dl_sabresd_mfg_config
make

In the newest Linux BSP (Yocto), use the command:

$ bitbake fsl-image-mfgtool-initramfs

For the newest Android BSP, the command "make mx6dl_sabresd_mfg_config" can no longer be used. So how do you get \Profiles\Linux\OS Firmware\firmware\u-boot-imx6dlsabresd_sd.imx? The easiest way is to use the u-boot you build for your board: in the newest BSP, mfgtool can use the same u-boot as the normal u-boot for your board. You do not need to build a separate u-boot for mfgtool; the same one works for both. Hope this helps.