i.MX Processors Knowledge Base
Description: This document explains how to develop an audio card driver for the i.MX6 platform. It first explains the basics of the ASoC architecture and its structure, and then gives several samples of audio driver development:
1. NXP SGTL5000: supported by default in the NXP i.MX BSP for the SabreLite board.
2. Wolfson WM8524:
   A. 3.0.35 BSP: supported by the i.MX6 set-top-box BSP (the original link on the old Freescale community is out of date).
   B. 3.14.28 BSP: please check the attachment.
3. Wolfson WM8960: includes how to add the Android middle layer and the driver; please check the attachment.
4. TI TLV320AIC3120: includes how to add the Android middle layer and the driver; please check the attachment.
5. TI TLV320AIC3X

Products:
MPU, i.MX6 Family: https://www.nxp.com/products/processors-and-microcontrollers/arm-processors/i-mx-applications-processors/i-mx-6-processors:IMX6X_SERIES

Tools:
NXP development board, i.MX6 SabreSDP: https://www.nxp.com/design/development-boards:EVDEBRDSSYS#/collection=softwaretools&start=0&max=25&query=typeTax%3E%3Et633::archived%3E%3E0::Sub_Asset_Type%3E%3ETSP::deviceTax%3E%3Ec731_c380_c127_c126&sorting=Buy%2FSpecifications.desc&language=en&siblings=false
The board collection includes the document MX6X_ASOC_V5-20191115.pdf and related driver sample code.
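Once a codec of this kind is wired up, a quick sanity check from user space can look like the sketch below (my addition, standard ALSA utilities; the hw:0,0 card/device numbers are assumptions, take the real ones from the aplay -l output):

# The new ASoC machine driver should show up as a playback card
aplay -l
# Drive a 440 Hz sine tone through it (replace hw:0,0 with your card,device)
speaker-test -D hw:0,0 -c 2 -t sine -f 440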
Vivante 3D GC2000 work flow

Creating the device node /dev/galcore:
$home/myandroid/kernel_imx/drivers/mxc/gpu-viv/Kbuild
    MODULE_NAME ?= galcore   /* defines the node name */
$home/myandroid/kernel_imx/drivers/mxc/gpu-viv/hal/os/linux/kernel/gc_hal_kernel_linux.h
    #define DEVICE_NAME "galcore"
$home/myandroid/kernel_imx/drivers/mxc/gpu-viv/hal/os/linux/kernel/gc_hal_kernel_probe.c
    drv_init calls: ret = register_chrdev(major, DEVICE_NAME, &driver_fops);

OpenGL ES 2 functions:
myandroid/device/fsl-proprietary/gpu-viv/lib/egl/libGLESv2_VIVANTE.so exports glActiveTexture, glBindBuffer, and the other gl* entry points. Each of these gl* functions calls into sub_D40C:

int __fastcall sub_D40C(int a1, int a2, int a3)  // address 0x0000D40C
{
    int result; // r0@1
    int v4;
    int v5;
    v4 = a2;
    v5 = a3;
    gcoOS_GetTLS(&v4);  // ------------> goes into libGAL.so
    result = v4;
    if ( v4 )
        result = *(_DWORD *)(v4 + 36);
    return result;
}

$home/myandroid/device/fsl-proprietary/gpu-viv/lib/libGAL.so exports gcoOS_GetTLS, which opens the device node:

signed int __fastcall gcoOS_GetTLS(void **a1)
{
    ...
    v4 = open("/dev/galcore", 2);
    ...
}

The device node /dev/galcore then passes commands into the galcore kernel module:
$home/myandroid/kernel_imx/drivers/mxc/gpu-viv/hal/kernel/gc_hal_kernel.c, gckKERNEL_Dispatch

This document was generated from the following discussion: Share Vivante 3d gc2000 work flow
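To confirm this chain on a running board, a few standard commands help (my addition, not part of the original discussion):

# Character device registered by register_chrdev() in drv_init
ls -l /dev/galcore
# Present when galcore is built as a module
lsmod | grep galcore
# Probe messages from the galcore driver
dmesg | grep -i galcore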
When streaming, playing a streaming URL can be inconvenient if the browser does not recognize the URL as a media stream and downloads the content instead of launching Gallery to play it. To handle this kind of media streaming you would normally write an APK that uses VideoView; alternatively, you can start playback of a URL/media stream directly from the console. Here are the commands to play a media file or network stream from the console.

Gingerbread:
am start -n com.cooliris.media/com.cooliris.media.MovieView -d "<URL>"
The URL can be a file path or a network stream URL. For example, you can play a local file:
am start -n com.cooliris.media/com.cooliris.media.MovieView -d "/mnt/sdcard/test.mp4"
You can also play an HTTP stream:
am start -n com.cooliris.media/com.cooliris.media.MovieView -d "http://v.iask.com/v_play_ipad.php?vid=76710932"
Or play an RTSP stream:
am start -n com.cooliris.media/com.cooliris.media.MovieView -d "rtsp://10.0.2.1:554/stream"

ICS:
am start -n com.android.gallery3d/com.android.gallery3d.app.MovieActivity -d "<URL>"
The URL has the same meaning as on Gingerbread.
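The same commands can also be issued from a host PC over ADB, for example (my addition; the logcat tag pattern is an assumption for Gingerbread/ICS-era media playback):

adb shell am start -n com.cooliris.media/com.cooliris.media.MovieView -d "/mnt/sdcard/test.mp4"
# Watch for playback errors while the stream starts
adb logcat | grep -iE "mediaplayer|awesomeplayer"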
The following is a list of documents, questions and discussions that are relevant in the community based on number of views. If you are having a problem, have a doubt, or are getting started with i.MX processors, check the following links to see if your question is already answered there.

Yocto Project
Freescale Yocto Project main page
Yocto Training - HOME
i.MX Yocto Project: Frequently Asked Questions
Useful bitbake commands
Yocto Project Package Management - smart
How to add a new layer and a new recipe in Yocto
Setting up the Eclipse IDE for Yocto Application Development
Guide to the .sdcard format
Yocto NFS & TFTP boot
YOCTO project clean
Yocto with a package manager (ex: apt-get)
Yocto Setting the Default Ethernet address and disable DHCP on boot.

i.MX
Building QT for i.MX6
i.MX6/7 DDR Stress Test Tool V3.00
i.MX6DQSDL DDR3 Script Aid
Installing Ubuntu Rootfs on NXP i.MX6 boards
iMX6DQ MAX9286 MIPI CSI2 720P camera surround view solution for Linux BSP
i.MX Design&Tool Lists
Simple GPIO Example - quandry
i.MX6 GStreamer-imx Plugins - Tutorial & Example Pipelines
Streaming USB Webcam over Network
Step-by-step: How to setup TI Wilink (WL18xx) with iMX6 Linux 3.10.53

Linux / Kernel
Copying Files Between Windows and Linux using PuTTY
Building Linux Kernel
Patch to support uboot logo keep from uboot to kernel for NXP Linux and Android BSP (HDMI, LCD and LVDS)
load kernel from SD card in U-boot
Changing the Kernel configuration for i.MX6 SABRE

Android
The Android Booting process
What is inside the init.rc and what is it used for.

Others
How to use qtmultimedia(QML) with Gstreamer 1.0
BSP version: 11.03. Multimedia package version: 11.03.
1. Install the BSP and the multimedia package (11.03 release).
2. Avoid display timeout: append the following line to rootfs/etc/profile:
echo -e -n '\033[9]' > /dev/tty0
3. Set the VGA port as the primary display on the kernel command line:
video=mxcdi1fb:GBR24,VGA-XGA di1_primary vga
4. Connect a VGA monitor and the WVGA display to the MX53 Quick Start board.
5. Boot Linux on the MX53 Quick Start board (NFS is used in this example).
6. Unblank the WVGA display (fb1):
$ echo 0 > /sys/class/graphics/fb1/blank
7. On the target, enter the /dev/shm directory. If the files vss_lock and vss_shmem are present, delete them.
8. On your host, edit ltib/rootfs/usr/share/vssconfig as follows (master display = VGA, slave display = WVGA):
[VGA]
type = framebuffer
format = RGBP
fb_num = 2
main_fb_num = 0
vs_max = 4
[WVGA]
type = framebuffer
format = RGBP
fb_num = 1
vs_max = 4
9. Run the GStreamer pipeline below:
gst-launch filesrc location=file.mp4 ! qtdemux ! mfw_vpudecoder ! mfw_isink display=VGA display-1=WVGA
Video is played on both the VGA and WVGA panels. A 720p file can be played at the same time.
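If you are unsure which framebuffer maps to which display interface before editing vssconfig, the sysfs names can be read directly (my addition, standard sysfs attributes on the i.MX5-era BSP):

# Typical names are "DISP3 BG", "DISP3 FG" and "DISP3 BG - DI1"
cat /sys/class/graphics/fb0/name
cat /sys/class/graphics/fb1/name
cat /sys/class/graphics/fb2/name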
Video decoding:
gst-launch filesrc location=sample.mp4 ! qtdemux ! ffdec_h264 ! mfw_v4lsink
Notes: On LTIB BSP 3.0.35_4.0.0, prep the package, apply the attached patch on top, then build. On Yocto, the easiest way to add the gst-ffmpeg package is to add these two lines to the conf/local.conf file:
IMAGE_INSTALL_append = " gst-ffmpeg"
LICENSE_FLAGS_WHITELIST = 'commercial'
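Before running the pipeline, you can confirm that the plugin actually registered (my addition; GStreamer 0.10-era tooling, the binary may be named gst-inspect or gst-inspect-0.10 depending on the image):

# Prints the element details if gst-ffmpeg is installed
gst-inspect ffdec_h264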
[Chinese translation: see the attachment]
Original article: https://community.nxp.com/docs/DOC-342059
OpenCV is a computer vision library originally developed by Intel. It is free for commercial and research use under the open-source BSD license. The library is cross-platform and focuses mainly on real-time image processing; if it finds Intel's Integrated Performance Primitives on the system, it will use these commercial optimized routines to accelerate itself.

Application areas of OpenCV include:
* 2D and 3D feature toolkits
* Egomotion estimation
* Face recognition
* Gesture recognition
* Human-computer interaction (HCI)
* Mobile robotics
* Motion understanding
* Object identification
* Segmentation and recognition
* Stereopsis (stereo vision: depth perception from two cameras)
* Structure from motion (SfM)
* Motion tracking

To support some of the above areas, OpenCV includes a statistical machine learning library that contains:
* Boosting
* Decision trees
* Expectation maximization
* k-nearest neighbors
* Naive Bayes classifier
* Artificial neural networks
* Random forests
* Support vector machines

Installing OpenCV on an i.MX51 EVK board running Ubuntu Linux

Assuming that you already have Ubuntu Linux running on your board, you can use this wiki page as a guide to get your USB camera running on your system in order to use the real-time image processing features of this library. In a brand new installation of Ubuntu some libraries are not installed by default, so you need to install them yourself (use Synaptic to do that). Here is the list of these libraries:
libgtk2.0-dev libjpeg62-dev zlib1g-dev libpng12-dev libtiff4-dev libjasper-dev libgst-dev libgstreamer0.10-dev
If you already have some of those libraries installed, make sure they are the -dev versions. After installing those libraries you can download the stable OpenCV version.
Install it following the procedure below:
1 - Untar the OpenCV package:
tar -xvzf opencv-1.1pre1.tar.gz
2 - Change to the OpenCV folder:
cd opencv-1.1.0
3 - Configure the build, enabling GStreamer and leaving the demo apps to be compiled later:
./configure --with-gstreamer --disable-apps
The configure summary should report gtk+ 2.x, gthread, libjpeg, zlib, libpng, libtiff, libjasper, gstreamer, v4l and v4l2 as enabled, with an install path of /usr/local.
4 - Build OpenCV:
make
5 - Install OpenCV:
sudo make install
If all the steps above executed properly, you can now compile the sample applications:
1 - Change to the samples/c directory:
cd samples/c
2 - Make the build script executable:
chmod +x build_all.sh
3 - Run the script:
./build_all.sh
Now you can test. The results below were taken from the Laplacian filter sample, processing in real time images grabbed from a USB camera. You can also see its performance in a three-window application performing color conversion and Canny edge detection at the same time: http://www.youtube.com/watch?v=w9yQgdABT7c
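If you prefer to compile a single sample rather than the whole set, build_all.sh essentially runs the following for each source file (a sketch on my part; it assumes the opencv pkg-config file was installed under the default /usr/local prefix):

cd samples/c
# Compile one demo against the installed OpenCV headers and libraries
g++ -ggdb `pkg-config --cflags opencv` -o laplace laplace.c `pkg-config --libs opencv`
./laplace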
The document includes the following contents:
(1) How to port the OV5645 to Android JB 4.2.2
(2) OV5645 driver for Linux 3.0.35
(3) OV5645 schematic based on i.MX6Q/DL
(4) OV5645 support for the Android camera HAL

[Note:] P5V29A-0JG is a camera module based on the OV5645, and PAO532-0JG is based on the OV5640; both are manufactured by NINGBO SUNNY OPOTECH CO., LTD (China). Customers who want to use them on the i.MX6 platform can email me for the datasheets of the P5V29A and PAO532, or discuss the corresponding porting questions.
Email: weidong.sun@freescale.com
This guide assumes that the developer has knowledge of the V4L2 API and has worked with, or is familiar with, sensor drivers and their operation within the Linux kernel. This guide does not focus on the details of developing the sensor driver that you want to port; it is assumed that you already have a working driver for your sensor before making the port. The ISP sources used here correspond to the 6.6.36 Linux BSP. If a different version is used, it is the developer's responsibility to review the API documentation for the corresponding version, since there may be changes that affect what is indicated in this guide.

To port the camera sensor, the following steps must be taken, as described in the following sections:
1. Define sensor attributes and create instances.
2. ISS driver and ISP Media Server.
3. Sensor calibration files.
4. VVCAM driver creation.
5. Device tree modifications.

Define Sensor Attributes and Create Instances
The following three steps are already implemented in CamDevice and are included for reference only.
Step 1: Define the sensor attributes in the IsiSensor_s data structure.
Step 2: Define the IsiSensorInstanceConfig_t configuration structure that will be used to create a new sensor instance.
Step 3: Call the IsiCreateSensorIss() function to create a new sensor instance.

ISS Driver and ISP Media Server
Step 0 - Use a driver template as base code: drivers can be found in $ISP_SOURCES_TOP/units/isi/drv/. For example, the ISP sources come with the OV4656 and OS08a20 drivers. $ISP_SOURCES_TOP indicates the path of your working directory, where the respective sources are located.

Step 1 - Add your <SENSOR> ISS driver: create the driver entry for your sensor in the path $ISP_SOURCES_TOP/units/isi/drv/<SENSOR>/source/<SENSOR>.c. Change all occurrences of the reference sensor name within the code (for instance, OV4656 -> <SENSOR>), respecting capital letters where applicable.

Step 2 - Check the information in the IsiCamDrvConfig_s data structure: data members defined in this structure include the sensor ID (CameraDriverID) and the function pointers to the IsiSensor data structure. Using the address of the IsiCamDrvConfig_s structure, the driver can then access the sensor API attached to the function pointers. The following is an example of the structure:

/*****************************************************************************
 * Each sensor driver needs to declare this struct for ISI load
 *****************************************************************************/
IsiCamDrvConfig_t IsiCamDrvConfig = {
    .CameraDriverID = 0x0000,
    .pIsiHalQuerySensor = <SENSOR>_IsiHalQuerySensorIss,
    .pfIsiGetSensorIss = <SENSOR>_IsiGetSensorIss,
};

Important note: modify the CameraDriverID according to the chip ID of your sensor. Apply this change to any chip ID occurrence within the code.

Step 3 - Check the sensor macro definitions: if there is any macro definition in the ISS driver code that involves specific properties of the sensor, modify it according to your requirements. For example:

#define <SENSOR>_MIN_GAIN_STEP         (1.0f/16.0f)

Step 4 - Modify the ISP Media Server build tools. Changes required in this step include (continued in the list below):
- Add a CMakeLists.txt file in $ISP_SOURCES_TOP/units/isi/drv/<SENSOR>/ that builds your sensor module.
- Modify the CMakeLists.txt located at $ISP_SOURCES_TOP/units/isi/drv/CMakeLists.txt to include and reference your sensor directory.
- Modify the $ISP_SOURCES_TOP/appshell/ and $ISP_SOURCES_TOP/mediacontrol/ build tools, since by default they refer to the build of a particular sensor (for example, the OV4656); change the corresponding sensor name.
- Modify the $ISP_SOURCES_TOP/build-all-isp.sh script to reference the sensor modules and generate the corresponding binaries when building the ISP Media Server instance.

Step 5 - ISP Media Server run script: add the operating modes defined for your sensor to the script located at $ISP_SOURCES_TOP/imx/run.sh. Each operating mode is associated with an index (mode 0, mode 1, ..., mode N), a name used to execute the command in the terminal (e.g. <sensor>_custom_mode_1), a resolution, and a sensor-specific calibration file.

Step 6 - Sensor<X> config: at $ISP_SOURCES_TOP/units/isi/drv/ you can find the files that configure each sensor entry for the ISP, called Sensor0_Entry.cfg and Sensor1_Entry.cfg. There, the associated calibration files are indicated for each sensor operating mode, including the calibration files in XML format and the Dewarp unit configuration files in JSON format. In addition, the .drv file generated for your sensor is referenced, creating the association between the respective /dev/video<X> node and the sensor driver module produced by the ISP Media Server build. If you are using only one ISP channel, modify only Sensor0_Entry.cfg; if you require both instances of the ISP, modify both files.

Sensor Calibration Files
Using the ISP requires a calibration file in XML format, specific to the sensor you are using and matched to its resolution and working mode. There are three options to obtain the calibration files in XML format:
1. Use the NXP ISP tuning tool; for this you will need to ask for access or sign an NDA document.
2. Pay NXP professional services to do the tuning.
3. Pay a third-party vendor to do the tuning.

VVCAM Driver Creation
The changes indicated below assume that a functional sensor driver exists in its base form and that it is compatible with the V4L2 API. From here on we focus on applying the changes suggested in the NXP documentation, specifically to establish communication between the VVCAM driver (kernel side) and the ISI layer.

Step 0 - Create the sensor driver entry: add the driver code to the file located at $ISP_SOURCES_TOP/vvcam/v4l2/sensor/<sensor>/<sensor>_xxxx.c, along with a Makefile for the sensor driver module. As indicated in the ISS driver section, you can refer to one of the sample drivers included with the ISP sources to review the implementation details and the structure of the required Makefile.

Step 1 - Add the VVCAM mode info data structure array: this array stores the information for all the modes your sensor supports. The ISI layer can retrieve all the modes with the VVSENSORIOC_QUERY command. The following is an example of the structure; fill in the information using the attributes of your sensor and the modes it supports:

#include "vvsensor.h"
...

static struct vvcam_mode_info_s <sensor>_mode_info[] = {
        {
        .index = 0,
        .width = ... ,
        .height = ... ,
        .hdr_mode = ... ,
        .bit_width = ... ,
        .data_compress.enable = ... ,
        .bayer_pattern = ... ,
        .ae_info = {
                ...
        },
        .mipi_info = {
                .mipi_lane = ... ,
        },
        },
        {
        .index = 1,
        ...
        },
};

Step 2 - Define the sensor client-to-i2c macro: define the client_to_<sensor> macro (in case you don't have one already) and check the segments of the driver code that require this macro.

#define client_to_<sensor>(client)\
        container_of(i2c_get_clientdata(client), struct <sensor>, subdev)

Step 3 - Define the V4L2 subdev IOCTL function: define and implement <sensor>_priv_ioctl, which receives the commands and parameters passed down from user space through ioctl() and controls the sensor.

long <sensor>_priv_ioctl(struct v4l2_subdev *subdev, unsigned int cmd, void *arg)
{
        struct i2c_client *client = v4l2_get_subdevdata(subdev);
        struct <sensor> *sensor = client_to_<sensor>(client);
        struct vvcam_sccb_data_s reg;
        uint32_t value = 0;
        long ret = 0;

        if (!sensor) {
                return -EINVAL;
        }

        switch (cmd) {
        case VVSENSORIOC_G_CLK:
                ret = custom_implementation();
                break;
        case VIDIOC_QUERYCAP:
                ret = custom_implementation();
                break;
        case VVSENSORIOC_QUERY:
                ret = custom_implementation();
                break;
        case VVSENSORIOC_G_CHIP_ID:
                ret = custom_implementation();
                break;
        case VVSENSORIOC_G_RESERVE_ID:
                ret = custom_implementation();
                break;
        case VVSENSORIOC_G_SENSOR_MODE:
                ret = custom_implementation();
                break;
        case VVSENSORIOC_S_SENSOR_MODE:
                ret = custom_implementation();
                break;
        case VVSENSORIOC_S_STREAM:
                ret = custom_implementation();
                break;
        case VVSENSORIOC_WRITE_REG:
                ret = custom_implementation();
                break;
        case VVSENSORIOC_READ_REG:
                ret = custom_implementation();
                break;
        case VVSENSORIOC_S_EXP:
                ret = custom_implementation();
                break;
        case VVSENSORIOC_S_POWER:
        case VVSENSORIOC_S_CLK:
        case VVSENSORIOC_RESET:
        case VVSENSORIOC_S_FPS:
        case VVSENSORIOC_G_FPS:
        case VVSENSORIOC_S_LONG_GAIN:
        case VVSENSORIOC_S_GAIN:
        case VVSENSORIOC_S_VSGAIN:
        case VVSENSORIOC_S_LONG_EXP:
        case VVSENSORIOC_S_VSEXP:
        case VVSENSORIOC_S_WB:
        case VVSENSORIOC_S_BLC:
        case VVSENSORIOC_G_EXPAND_CURVE:
                break;
        default:
                break;
        }

        return ret;
}

As you can see in the example, some cases are implemented and others are not. Developers are free to implement the features they consider necessary, as long as a minimum operational base of the driver is guaranteed (query commands, register reads and writes, among others). It is the developer's responsibility to implement each custom function for each case or scenario that may arise when interacting with the sensor. In addition to what was shown previously, a link must be created to make the ioctl connection with the driver in question.
Link your priv_ioctl function in the v4l2_subdev_core_ops struct, as in the example below:

static const struct v4l2_subdev_core_ops <sensor>_core_ops = {
        .s_power = v4l2_s_power,
        .subscribe_event = v4l2_ctrl_subdev_subscribe_event,
        .unsubscribe_event = v4l2_event_subdev_unsubscribe,
        /* IOCTL link */
        .ioctl = <sensor>_priv_ioctl,
};

Step 4 - Verify your sensor's private data structure: after performing the suggested modifications, it is good practice to double-check your sensor's private data structure properties, in case one is missing, and also to check that the properties are initialized correctly in the driver's probe.

Step 5 - Modify the VVCAM V4L2 sensor Makefile: at $ISP_SOURCES_TOP/vvcam/v4l2/sensor/Makefile, include your sensor object as follows:
...
obj-m += <sensor>/
...

Important note: there is a very common issue that appears when working with camera sensor drivers on i.MX8MP platforms. The kernel log shows something similar to the following:
mxc-mipi-csi2.<X>: is_entity_link_setup, No remote pad found!
The link setup callback is required by the Media Controller when linking the media entities involved in the camera capture process. Normally, this callback is triggered by the imx8-media-dev driver included as part of the kernel sources. To make sure the problem is not related to your sensor driver, verify that the link setup callback is already created in the code; if it is not, you can add the following template:

/* Function needed by i.MX8MP */
static int <sensor>_camera_link_setup(struct media_entity *entity,
                                      const struct media_pad *local,
                                      const struct media_pad *remote, u32 flags)
{
        /* Always return zero */
        return 0;
}

/* Add the link setup callback to the media entity operations struct */
static const struct media_entity_operations <sensor>_camera_subdev_media_ops = {
        .link_setup = <sensor>_camera_link_setup,
};

/* Verify the initialization of the media entity ops in the sensor driver's probe function */
static int <sensor>_probe(struct i2c_client *client, ...)
{
        /* Initialize subdev */
        sd = &<sensor>->subdev;
        sd->dev = &client->dev;
        <sensor>->subdev.internal_ops = ...
        <sensor>->subdev.flags |= ...
        <sensor>->subdev.entity.function = ...
        /* Entity ops initialization */
        <sensor>->subdev.entity.ops = &<sensor>_camera_subdev_media_ops;
}

In most cases, adding the link setup function solves the Media Controller issue, or at least rules out problems on the driver side.

Device Tree Modifications
On the device tree side, it is necessary to enable the ISP channels that will be used. Likewise, it is necessary to disable the ISI channels, which normally connect to the MIPI_CSI2 ports to extract raw data from the sensor when the ISP is not used. A MIPI_CSI2 port can be mapped either to an ISI channel or to an ISP channel, but not both simultaneously. This guide focuses on using the ISP, so any other custom configuration you want to implement may vary from what is shown. In the code below, ISP channel 0 is enabled, and the connection is made to the port where the sensor is connected (mipi_csi_0).
&mipi_csi_0 {
        status = "okay";
        port@0 {
                // Example endpoint to <sensor>_ep
                mipi0_sensor_ep: endpoint@1 {
                        remote-endpoint = <&<sensor>_ep>;
                };
        };
};

&cameradev {
        status = "okay";
};

&isi_0 {
        status = "disabled";
};

&isi_1 {
        status = "disabled";
};

&isp_0 {
        status = "okay";
};

&isp_1 {
        status = "disabled";
};

&dewarp {
        status = "okay";
};

What is shown above is not a complete device tree file, only a general skeleton of the points you should pay attention to when working with ISP channels. For simplicity, we omitted the attributes that are normally defined when working with camera sensor drivers and their respective configurations on the hardware's i2c port.

Note: due to hardware restrictions when using ISP channels, it is recommended to use the isp_0 channel when working with only one sensor. If you need two sensors, you can enable both channels, taking into account the limitations on output resolutions and clock frequency when both channels work simultaneously. What is not recommended is using the isp_1 channel when working with a single sensor.

References
ISP Independent Sensor Interface (ISI) API reference, i.MX8M Plus Camera Sensor Porting User Guide: https://www.nxp.com/webapp/Download?colCode=IMX8MPCSPUG
Sensor calibration tool: https://www.nxp.com/webapp/Download?colCode=AN13565
i.MX8M Plus reference manual: https://www.nxp.com/webapp/Download?colCode=IMX8MPRM
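Once everything is deployed, a quick functional check of the ISP stack can look like the sketch below (my additions; the imx8-isp.service name and the /dev/video2 node are assumptions, verify them on your BSP and against your Sensor0_Entry.cfg):

# ISP Media Server status (service name may differ per BSP release)
systemctl status imx8-isp.service
# Video nodes exposed through the ISP/ISI pipeline
v4l2-ctl --list-devices
# Short streaming test from the node associated with your sensor entry
v4l2-ctl -d /dev/video2 --stream-mmap --stream-count=10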
P3T1755 Demo

In this space I want to show you the things that you can create using our products. In this demo I demonstrate a use case: creating a GUI for a temperature sensor. We can create modern GUIs and more with LVGL combined with our powerful processors.

CPU usage
As we can see, the CPU usage for this demo is around 2%.

This demo is based on the previously published articles below.

References:
https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/Adding-support-to-P3T1755-on-Linux/ta-p/1855874
https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/How-to-run-LGVL-on-iMX-using-framebuffer/ta-p/1853768
This guide is a continuation of our latest Debian 12 installation guide for i.MX8MM, i.MX8MP, i.MX8MN and i.MX93. Here we describe the process to install the multimedia and hardware acceleration packages (specifically GPU, VPU and GStreamer) on i.MX8M Mini, i.MX8M Plus and i.MX8M Nano. The guide is based on the one provided by our colleague, Build Ubuntu For i.MX8 Series Platform - NXP Community, which requires previously building an image with the Yocto Project using the following distro and image name:
Distro name: fsl-imx-wayland
Image name: imx-image-multimedia
For more information please check our BSP documentation, the i.MX Yocto Project User's Guide.

Hardware requirements:
- Linux host computer (Ubuntu 20.04 or later)
- USB card reader or micro SD to SD adapter
- SD card
- Evaluation Kit board for the i.MX8M Nano, i.MX8M Mini or i.MX8M Plus

Software requirements:
- Linux Ubuntu (20.04 tested) or Debian for the host computer
- BSP version 6.1.55 built with the Yocto Project

After building the image we can start the installation by following the steps below.

GPU Installation
The GPU installation consists of copying the files from the imx-gpu-g2d, imx-gpu-viv and libdrm packages into the Debian system. As in our latest installation guide, we will keep calling "mountpoint" the directory where the Debian system is mounted on our host machine. In the paths given in each step, the <build-path> and <machine> labels need to be changed to match your environment. These are the paths where the Yocto Project stores the packages; they could differ in your environment, and you can find the work directory of each package with the following command:
bitbake -e <package-name> | grep ^WORKDIR=
This command shows the absolute path of the package's work directory.

1. Install GPU packages:
$ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/imx-gpu-g2d/6.4.11.p2.2-r0/image/* mountpoint
$ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/imx-gpu-viv/1_6.4.11.p2.2-aarch64-r0/image/* mountpoint
$ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/libdrm/2.4.115.imx-r0/image/* mountpoint

2. Install Linux IMX headers and IMX parser:
$ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/linux-imx-headers/6.1-r0/image/* mountpoint
$ sudo cp -Pra <build-path>/tmp/work/armv8a-poky-linux/imx-parser/4.8.2-r0/image/* mountpoint

3. Use chroot:
$ sudo LANG=C.UTF-8 chroot mountpoint/ qemu-aarch64-static /bin/bash

4. Install dependencies:
$ apt install libudev-dev libinput-dev libxkbcommon-dev libpam0g-dev libx11-xcb-dev libxcb-xfixes0-dev libxcb-composite0-dev libxcursor-dev libxcb-shape0-dev libdbus-1-dev libdbus-glib-1-dev libsystemd-dev libpixman-1-dev libcairo2-dev libffi-dev libxml2-dev kbd libexpat1-dev autoconf automake libtool meson cmake ssh net-tools network-manager iputils-ping rsyslog bash-completion htop resolvconf dialog vim udhcpc udhcpd git v4l-utils alsa-utils gcc less autopoint bison flex gtk-doc-tools libglib2.0-dev libpango1.0-dev libatk1.0-dev kmod pciutils libjpeg-dev

5. Create a folder for the multimedia installation; here we will clone all the multimedia repositories:
$ mkdir multimedia_packages
$ cd multimedia_packages

6. Build Wayland:
$ git clone https://gitlab.freedesktop.org/wayland/wayland.git
$ cd wayland
$ git checkout 1.22.0
$ meson setup build --prefix=/usr -Ddocumentation=false -Ddtd_validation=true
$ cd build
$ ninja install

7. Build wayland-protocols-imx:
$ git clone https://github.com/nxp-imx/wayland-protocols-imx.git
$ cd wayland-protocols-imx
$ git checkout wayland-protocols-imx-1.32
$ meson setup build --prefix=/usr -Dtests=false
$ cd build
$ ninja install

8. Build Weston:
$ git clone https://github.com/nxp-imx/weston-imx.git
$ cd weston-imx
$ git checkout weston-imx-11.0.3
$ meson setup build --prefix=/usr -Dpipewire=false -Dsimple-clients=all -Ddemo-clients=true -Ddeprecated-color-management-colord=false -Drenderer-gl=true -Dbackend-headless=false -Dimage-jpeg=true -Drenderer-g2d=true -Dbackend-drm=true -Dlauncher-libseat=false -Dcolor-management-lcms=false -Dbackend-rdp=false -Dremoting=false -Dscreenshare=true -Dshell-desktop=true -Dshell-fullscreen=true -Dshell-ivi=true -Dshell-kiosk=true -Dsystemd=true -Dlauncher-logind=true -Dbackend-drm-screencast-vaapi=false -Dbackend-wayland=false -Dimage-webp=false -Dbackend-x11=false -Dxwayland=false
$ cd build
$ ninja install

VPU Installation
To install the VPU and GStreamer please follow the steps below:

1. Install firmware-imx:
$ sudo cp -Pra <build-path>/tmp/work/all-poky-linux/firmware-imx/1_8.22-r0/image/lib/* mountpoint/lib/

2. Install the VPU driver:
$ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/imx-vpu-hantro/1.31.0-r0/image/* mountpoint
$ sudo cp -Pra <build-path>/tmp/work/armv8a-<machine>-poky-linux/imx-vpuwrap/git-r0/image/* mountpoint

3. Use chroot:
$ sudo LANG=C.UTF-8 chroot mountpoint/ qemu-aarch64-static /bin/bash

4. Install dependencies for the GStreamer plugins:
$ apt install libgirepository1.0-dev gettext liborc-0.4-dev libasound2-dev libogg-dev libtheora-dev libvorbis-dev libbz2-dev libflac-dev libgdk-pixbuf-2.0-dev libmp3lame-dev libmpg123-dev libpulse-dev libspeex-dev libtag1-dev libbluetooth-dev libusb-1.0-0-dev libcurl4-openssl-dev libssl-dev librsvg2-dev libsbc-dev libsndfile1-dev

5. Change directory to the multimedia packages folder:
$ cd multimedia_packages

6. Build gstreamer:
$ git clone https://github.com/nxp-imx/gstreamer -b lf-6.1.55-2.2.0
$ cd gstreamer
$ meson setup build --prefix=/usr -Dintrospection=enabled -Ddoc=disabled -Dexamples=disabled -Ddbghelp=disabled -Dnls=enabled -Dbash-completion=disabled -Dcheck=enabled -Dcoretracers=disabled -Dgst_debug=true -Dlibdw=disabled -Dtests=enabled -Dtools=enabled -Dtracer_hooks=true -Dlibunwind=disabled -Dc_args=-I/usr/include/imx
$ cd build
$ ninja install

7. Build gst-plugins-base:
$ git clone https://github.com/nxp-imx/gst-plugins-base -b lf-6.1.55-2.2.0
$ cd gst-plugins-base
$ meson setup build --prefix=/usr -Dalsa=enabled -Dcdparanoia=disabled -Dgl-graphene=disabled -Dgl-jpeg=disabled -Dopus=disabled -Dogg=enabled -Dorc=enabled -Dpango=enabled -Dgl-png=enabled -Dqt5=disabled -Dtheora=enabled -Dtremor=disabled -Dvorbis=enabled -Dlibvisual=disabled -Dx11=disabled -Dxvideo=disabled -Dxshm=disabled -Dc_args=-I/usr/include/imx
$ cd build
$ ninja install

8. Build gst-plugins-good:
$ git clone https://github.com/nxp-imx/gst-plugins-good -b lf-6.1.55-2.2.0
$ cd gst-plugins-good
$ meson setup build --prefix=/usr -Dexamples=disabled -Dnls=enabled -Ddoc=disabled -Daalib=disabled -Ddirectsound=disabled -Ddv=disabled -Dlibcaca=disabled -Doss=enabled -Doss4=disabled -Dosxaudio=disabled -Dosxvideo=disabled -Dshout2=disabled -Dtwolame=disabled -Dwaveform=disabled -Dasm=disabled -Dbz2=enabled -Dcairo=enabled -Ddv1394=disabled -Dflac=enabled -Dgdk-pixbuf=enabled -Dgtk3=disabled -Dv4l2-gudev=enabled -Djack=disabled -Djpeg=enabled -Dlame=enabled -Dpng=enabled -Dv4l2-libv4l2=disabled -Dmpg123=enabled -Dorc=enabled -Dpulse=enabled -Dqt5=disabled -Drpicamsrc=disabled -Dsoup=enabled -Dspeex=enabled -Dtaglib=enabled -Dv4l2=enabled -Dv4l2-probe=true -Dvpx=disabled -Dwavpack=disabled -Dximagesrc=disabled -Dximagesrc-xshm=disabled -Dximagesrc-xfixes=disabled -Dximagesrc-xdamage=disabled -Dc_args=-I/usr/include/imx
$ cd build
$ ninja install

9. Build gst-plugins-bad:
$ git clone https://github.com/nxp-imx/gst-plugins-bad -b lf-6.1.55-2.2.0
$ cd gst-plugins-bad
$ meson setup build --prefix=/usr -Dintrospection=enabled -Dexamples=disabled -Dnls=enabled -Dgpl=disabled -Ddoc=disabled -Daes=enabled -Dcodecalpha=enabled -Ddecklink=enabled -Ddvb=enabled -Dfbdev=enabled -Dipcpipeline=enabled -Dshm=enabled -Dtranscode=enabled -Dandroidmedia=disabled -Dapplemedia=disabled -Dasio=disabled -Dbs2b=disabled -Dchromaprint=disabled -Dd3dvideosink=disabled -Dd3d11=disabled -Ddirectsound=disabled -Ddts=disabled -Dfdkaac=disabled -Dflite=disabled -Dgme=disabled -Dgs=disabled -Dgsm=disabled -Diqa=disabled -Dkate=disabled -Dladspa=disabled -Dldac=disabled -Dlv2=disabled -Dmagicleap=disabled -Dmediafoundation=disabled -Dmicrodns=disabled -Dmpeg2enc=disabled -Dmplex=disabled -Dmusepack=disabled -Dnvcodec=disabled -Dopenexr=disabled -Dopenni2=disabled -Dopenaptx=disabled -Dopensles=disabled -Donnx=disabled -Dqroverlay=disabled -Dsoundtouch=disabled -Dspandsp=disabled -Dsvthevcenc=disabled -Dteletext=disabled -Dwasapi=disabled -Dwasapi2=disabled -Dwildmidi=disabled -Dwinks=disabled -Dwinscreencap=disabled -Dwpe=disabled -Dzxing=disabled -Daom=disabled -Dassrender=disabled -Davtp=disabled -Dbluez=enabled -Dbz2=enabled -Dclosedcaption=enabled -Dcurl=enabled -Ddash=enabled -Ddc1394=disabled -Ddirectfb=disabled -Ddtls=disabled -Dfaac=disabled -Dfaad=disabled -Dfluidsynth=disabled -Dgl=enabled -Dhls=enabled -Dkms=enabled -Dcolormanagement=disabled -Dlibde265=disabled -Dcurl-ssh2=disabled -Dmodplug=disabled -Dmsdk=disabled -Dneon=disabled -Dopenal=disabled -Dopencv=disabled -Dopenh264=disabled -Dopenjpeg=disabled -Dopenmpt=disabled -Dhls-crypto=openssl -Dopus=disabled -Dorc=enabled -Dresindvd=disabled -Drsvg=enabled -Drtmp=disabled -Dsbc=enabled -Dsctp=disabled -Dsmoothstreaming=enabled -Dsndfile=enabled -Dsrt=disabled -Dsrtp=disabled -Dtinyalsa=disabled -Dtinycompress=enabled -Dttml=enabled -Duvch264=enabled -Dv4l2codecs=disabled -Dva=disabled -Dvoaacenc=disabled -Dvoamrwbenc=disabled -Dvulkan=disabled -Dwayland=enabled -Dwebp=enabled -Dwebrtc=disabled -Dwebrtcdsp=disabled -Dx11=disabled -Dx265=disabled -Dzbar=disabled -Dc_args=-I/usr/include/imx
$ cd build
$ ninja install

10. Build imx-gst1.0-plugin:
$ git clone https://github.com/nxp-imx/imx-gst1.0-plugin -b lf-6.1.55-2.2.0
$ cd imx-gst1.0-plugin
$ meson setup build --prefix=/usr -Dplatform=MX8 -Dc_args=-I/usr/include/imx
$ cd build
$ ninja install

11. Exit chroot:
$ exit

Verify Installation
For the verification process, boot your target from the SD card (review your specific target's documentation).

1. Verify Weston. For this verification you will need to be the root user:
# export XDG_RUNTIME_DIR=/run/user/0
# weston

2. Verify VPU and GStreamer. Use the following GStreamer pipeline for hardware-accelerated VPU encoding:
# gst-launch-1.0 videotestsrc ! video/x-raw, format=I420, width=640, height=480 ! vpuenc_h264 ! filesink location=test.mp4

Then you can play the file back with this command:
# gplay-1.0 test.mp4

Finally, you have installed and verified the GPU, VPU and multimedia packages. Now you can start testing audio and video applications.
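Two extra checks can save time before running pipelines (my addition, standard GStreamer tooling):

# Confirm the i.MX plugins registered inside the Debian rootfs
gst-inspect-1.0 | grep -i imx
# vpuenc_h264 should be present and loadable
gst-inspect-1.0 vpuenc_h264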
Hello, here Jorge. In this post I will explain how to enable MQS1 on the i.MX8ULP. As background on how to set up the environment to build the image using Yocto, please take a look at our i.MX Yocto Project User's Guide.

Requirements:
- i.MX 8ULP EVK.
- Serial console emulator (Tera Term, PuTTY, etc.).
- USB Type-C cable.
- Micro USB cable.
- Headphones/speakers.
- Linux PC.
- Build done on Linux 6.6.23_2.0.0.

i.MX8ULP audio subsystem
The i.MX 8ULP extends the audio capabilities of the i.MX 7ULP by adding dedicated DSP cores for voice trigger and audio processing, enabling lower latency and power efficiency to support a variety of audio applications. Some of the hardware blocks implemented on the 8ULP to support audio use cases are:
- Cadence Fusion F1 DSP processor.
- Cadence HiFi4 DSP processor.
- PowerQuad hardware accelerator with fixed and floating point + FFT.
- Digital microphone interface with support for up to 8 PDM channels.
- Up to 8 independent SAI instances.
- Up to 2 Medium Quality Sound (MQS) instances.
- Sony/Philips Digital Interface (SPDIF).

MQS0 and MQS1 are part of the real-time domain and the application domain respectively. This post focuses on how to enable MQS1 in the application domain.

Medium Quality Sound (MQS)
This module basically generates a PWM signal from PCM audio data. Most typical audio applications will require an external codec to deliver the expected audio quality but, where the application does not demand this quality, MQS can provide medium-quality audio via a GPIO pin that can directly drive the audio output to a speaker or headphone through an inexpensive external amplifier/buffer instead of a codec. The design of the MQS can be described as follows:
1. Input the PCM audio data (from the SAI) into a 16-bit register.
2. Up-sample the data to match the PWM switching frequency.
3. Perform a simple 2nd-order sigma-delta smoothing of the current data versus the previous data.
4. Convert the PCM register into a 6-bit PWM width register and output it through a GPIO pin.

How to enable it?
By default, our BSP does not enable the clock for MQS1. This clock is controlled by CGC1 (AD), specifically by MQS1CLK (a multiplexer that selects the audio clock connected to the MQS clock input). So imx8ulp-clock.h and clk-imx8ulp.c need to be modified; please take a look at the patch attached at the end of this post to see the driver modifications easily. These drivers need the definition/configuration for MQS1_SEL in CGC1 added as follows.

The MQS1_SEL definition needs to be added in imx8ulp-clock.h:
#define IMX8ULP_CLK_MQS1_SEL 56
#define IMX8ULP_CLK_CGC1_END 57

The MQS1_SEL configuration needs to be added in imx8ulp_clk_cgc1_init of clk-imx8ulp.c:
clks[IMX8ULP_CLK_MQS1_SEL] = imx_clk_hw_mux2("mqs1_sel", base + 0x90c, 0, 2, sai45_sels, ARRAY_SIZE(sai45_sels));

It is also necessary to configure MQS1 in the device tree of the i.MX8ULP.
Add this inside soc@0 of imx8ulp.dtsi:

mqs1: mqs@0x29290064 {
        reg = <0x29290064 0x4>;
        compatible = "fsl,imx8qm-mqs";
        assigned-clocks = <&cgc1 IMX8ULP_CLK_MQS1_SEL>;
        assigned-clock-parents = <&cgc1 IMX8ULP_CLK_SPLL3_PFD1_DIV1>;
        clocks = <&cgc1 IMX8ULP_CLK_MQS1_SEL>,
                 <&cgc1 IMX8ULP_CLK_MQS1_SEL>;
        clock-names = "core", "mclk";
        status = "disabled";
};

Then create a new device tree, in this case named imx8ulp-evk-mqs.dts, as follows:

#include "imx8ulp-evk.dts"

/ {
        sound-simple-mqs {
                compatible = "simple-audio-card";
                simple-audio-card,name = "imx-simple-mqs";
                simple-audio-card,frame-master = <&sndcpu>;
                simple-audio-card,bitclock-master = <&sndcpu>;
                simple-audio-card,dai-link@0 {
                        format = "left_j";
                        sndcpu: cpu {
                                sound-dai = <&sai4>;
                        };
                        codec {
                                sound-dai = <&mqs1>;
                        };
                };
        };
};

&cgc1 {
        assigned-clock-rates = <24576000>;
};

&iomuxc1 {
        pinctrl_mqs1: mqs1grp {
                fsl,pins = <
                        MX8ULP_PAD_PTF7__MQS1_LEFT 0x43
                >;
        };
};

&mqs1 {
        #sound-dai-cells = <0>;
        pinctrl-names = "default";
        pinctrl-0 = <&pinctrl_mqs1>;
        status = "okay";
};

&sai4 {
        #sound-dai-cells = <0>;
        assigned-clocks = <&cgc1 IMX8ULP_CLK_SAI4_SEL>;
        assigned-clock-parents = <&cgc1 IMX8ULP_CLK_SPLL3_PFD1_DIV1>;
        status = "okay";
};

Let's apply these changes to our BSP. In my case I am going to create a new Yocto layer to add these modifications with a patch that can be found at the end of this post. Here are the steps:

Install the essential Yocto Project host packages:
$ sudo apt install gawk wget git diffstat unzip texinfo gcc build-essential chrpath socat cpio python3 python3-pip python3-pexpect xz-utils debianutils iputils-ping python3-git python3-jinja2 python3-subunit zstd liblz4-tool file locales libacl1

Install the "repo" utility:
$ mkdir ~/bin
$ curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
$ chmod a+x ~/bin/repo
$ export PATH=~/bin:$PATH

Set up Git:
$ git config --global user.name "Your Name"
$ git config --global user.email "Your Email"
$ git config --list

Download the i.MX Yocto Project Community BSP recipe layers and create the build folder:
$ mkdir imx-yocto-bsp
$ cd imx-yocto-bsp
$ repo init -u https://github.com/nxp-imx/imx-manifest -b imx-linux-scarthgap -m imx-6.6.23-2.0.0.xml
$ repo sync
$ DISTRO=fsl-imx-wayland MACHINE=imx8ulp-lpddr4-evk source imx-setup-release.sh -b 8ulp_build

Create the new layer:
$ cd ~/imx-yocto-bsp/sources
$ bitbake-layers create-layer meta-mqs
$ cd meta-mqs

conf/layer.conf should be as follows:
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb \
${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "meta-mqs"
BBFILE_PATTERN_meta-mqs = "^${LAYERDIR}/"
BBFILE_PRIORITY_meta-mqs = "6"
LAYERSERIES_COMPAT_meta-mqs = "scarthgap"

Let's change the recipe:
$ sudo rm -r recipes-example
$ mkdir -p recipes-kernel/linux/files

0001-8ULP-MQS-Enable.patch should be copied to ~/imx-yocto-bsp/sources/meta-mqs/recipes-kernel/linux/files.

Add an append (in this case called linux-imx_%.bbappend) to change the recipe, with the following content:
FILESEXTRAPATHS:prepend := "${THISDIR}/files:"
SRC_URI += "file://0001-8ULP-MQS-Enable.patch"
addtask copy_dts after do_unpack before do_prepare_recipe_sysroot
do_copy_dts () {
    if [ -n "${DTS_FILE}" ]; then
        if [ -f ${DTS_FILE} ]; then
            echo "do_copy_dts: copying ${DTS_FILE} in ${S}/arch/arm64/boot/dts/freescale"
            cp ${DTS_FILE} ${S}/arch/arm64/boot/dts/freescale/
        fi
    fi
}

The next step is to add the layer and build the image:
$ cd ~/imx-yocto-bsp/8ulp_build
$ bitbake-layers add-layer ~/imx-yocto-bsp/sources/meta-mqs

Confirm that the layer has been
added:
$ bitbake-layers show-layers

Build the image:
$ bitbake imx-image-multimedia

i.MX8ULP EVK limitations
The i.MX8ULP exposes several MQS1 pins, but on the EVK board most of these pins are already used for other functions (push button, MIPI DSI, etc.), so taking the MQS1 output signals from the EVK board is difficult. In this article I configure only PTF7 (MQS1_LEFT) for practicality. If you are working with this board and need these pins for the MQS function, you will have to rework the traces to take the required signals. If you are designing a custom board, planning is essential to avoid this issue.

Flash the board
Once the build has finished, we will have the necessary files to flash the board and test it. If you are not familiar with this process, I suggest you take a look at this post. First, put the board in serial download mode by changing the boot configuration switches on the board. Next, connect the power cable, the micro-USB cable to the debug port, and the USB Type-C cable to the USB0 connector. Then turn on the board and run the following command in a terminal from the build directory:
uuu -b emmc_all imx-boot-imx8ulpevk-sd.bin-flash_singleboot_m33 imx-image-multimedia-imx8ulpevk.wic
Now power off the board, change the boot mode to single boot eMMC, and power it on to test.

Test MQS1 on the i.MX8ULP
To test MQS1 we need to select the device tree we created, which can be done with the following commands in U-Boot:
u-boot=> setenv fdtfile imx8ulp-evk-mqs.dtb
u-boot=> saveenv
u-boot=> boot
Now we can test MQS1 on the i.MX8ULP EVK. First, confirm that the clock is active in the MQS module:
$ cat /sys/kernel/debug/clk/clk_summary -n
mqs1_sel should be listed as active and running at 24576000 Hz. The card appears if we run:
$ aplay -l
To play audio through MQS we can use it like any other sound card:
$ speaker-test -D sysdefault:CARD=imxsimplemqs -c 2 -f 48000 -F S16_LE -t pink -P 3
The PWM signal can be observed at the pin output, and after a low-pass filter (for example the filter used on the i.MX93 EVK) it looks like a regular audio signal.

With this post we have checked the general operation of MQS, configured and compiled the image with the required changes to enable MQS1 on the EVK board, and measured the output on the board. There is a considerable limitation on the EVK board, since we cannot test the left and right outputs without reworking the base board, but this can be helpful as a reference for anyone who wants to use this audio output on the i.MX8ULP processor. Best regards.

References
Yocto Project customization guide - NXP Community
How to add a new layer and a new recipe in Yocto - NXP Community
Flashing Linux BSP using UUU - NXP
i.MX8ULP reference manual
Embedded Linux Projects Using Yocto Project Cookbook
Hello, in this post I will explain how to record separate audio channels using an 8MIC-RPI-MX8 board. As background on how to set up the board to record and play audio using i.MX boards, I suggest you take a look at this post: How to configure, record and play audio using an 8MIC-RPI-MX8 Board.

Requirements:
- i.MX 8M Mini EVK.
- Linux Binary Demo Files - i.MX 8M Mini EVK.
- 8MIC-RPI-MX8 board.
- Serial console emulator (Tera Term, PuTTY, etc.).
- Headphones/speakers.

Waveform Audio Format
WAV, also known as WAVE (Waveform Audio File Format), is a subset of Microsoft's Resource Interchange File Format (RIFF) specification for storing digital audio files. This format does not apply compression to the information and stores the audio at different sampling rates and bit rates. WAV files are larger than formats such as MP3, which uses compression to reduce the file size while maintaining good audio quality; there is always some quality loss, since audio information is too random to be compressed well with conventional methods. The main advantage of this format is that it provides a lossless audio file, which is also widely used in studios.

A WAV file starts with a file header followed by data chunks. It consists of two sub-chunks:
- fmt chunk: the data format.
- data chunk: the sample data.
So it is structured as metadata (the WAV file header) plus the actual audio information. The header of a WAV (RIFF) file is 44 bytes long.

How to separate the channels?
To separate each audio channel from the recording we use the following command, which records the raw data of each channel:
arecord -D plughw:<audio device> -c<number of channels> -f <format> -r <sample rate> -d <duration of the recording> --separate-channels <output file name>.wav
arecord -D plughw:2,0 -c8 -f s16_le -r 48000 -d 10 --separate-channels sample.wav
This command outputs the raw data of the recorded channels as one file per channel (sample.wav.0 ... sample.wav.7). This raw data cannot be used as a normal .wav file because the header information is missing; you can confirm this by importing the raw data into a DAW and playing the recorded samples. So, to use this information we need to create the header for each file, using the wave library in Python. Here is the script that I used:

import wave
import os

name = input("Enter the name of the audio file: ")
os.system("arecord -D plughw:2,0 -c8 -f s16_le -r 48000 -d 10 --separate-channels " + name + ".wav")

for i in range(0, 8):
    with open(name + ".wav." + str(i), "rb") as in_file:
        data = in_file.read()
    with wave.open(name + "_channel_" + str(i) + ".wav", "wb") as out_file:
        out_file.setnchannels(1)
        out_file.setsampwidth(2)
        out_file.setframerate(48000)
        out_file.writeframesraw(data)

os.system("mkdir output_files")
os.system("mv " + name + "_channel_" + "* " + "output_files")
os.system("rm " + name + ".wav.*")

If we run the script, it will generate a directory with the eight audio channels in .wav format. Now we are able to play each channel individually using an audio player.

References
IBM, Microsoft Corporation. (1991). Multimedia Programming Interface and Data Specifications 1.0.
Microsoft Corporation. (1994). New Multimedia Data Types and Data Techniques.
Stanford University. (2024, January 30). WAVE PCM sound file format: http://hummer.stanford.edu/sig/doc/classes/SoundHeader/WaveFormat/
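After the script runs, the generated files can be sanity-checked directly on the board (my addition; assumes "sample" was entered as the file name):

cd output_files
# Should report WAVE audio, mono, 48000 Hz, 16 bit
file sample_channel_0.wav
# Listen to a single channel
aplay sample_channel_0.wav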
One of the most popular use cases for embedded systems is projects destined to show information and interact with users. These views are called GUIs, or Graphical User Interfaces, and they are designed to be intuitive, attractive, consistent, and clear. There are many tools we can use to achieve great GUIs, mostly implemented for platforms such as Web, Android, and iOS.

Here we need to introduce the concept of a framework: basically, a set of tools and rules that provides a minimal structure to start your development. Frameworks usually come with configuration files, code snippets, and a file and folder organization, helping us save time and effort. It is also important to review the concept of an SDK, or Software Development Kit: a set of tools that allows you to build software for specific platforms, usually supplying debugging tools, documentation, libraries, APIs, emulators, and sample code.

Flutter is an open-source UI software development kit by Google that helps us create applications with great GUIs on different platforms from a single codebase. Depending on the reference, you can find Flutter defined as a framework or as an SDK, and both are correct; however, SDK may be the better definition, since Flutter supplies a wide and complete package for creating an application, of which the framework is one part.

This article is aimed at those who are in a prototyping stage and are looking for a different tool to develop projects. It is intended as a theoretical introduction explaining the most important concepts; it is still good practice to learn more by reviewing the official documentation from Flutter (Flutter documentation | Flutter).

Here is the structure used throughout this article:
- What is Flutter?
- Flutter details
- Platforms
- Programming language
- Official documentation
- Flutter for embedded systems

What is Flutter?
Flutter was officially released by Google in December 2018 with one main aim: to give developers a tool to create applications natively compiled for mobile (Android, iOS), web and desktop (Windows, Linux) from a single codebase. This means that Flutter will create a structure with minimal code, configuration files, build files for each operating system, manifests, etc., to which we add our custom code; finally we build this code for our preferred OS. For example, we can create an application to review fruit and vegetable information and compile it for Android and iOS with the same code. A basic Flutter development process, based on my experience, looks like the following diagram.

Flutter has the following key features:

Cross-platform development. Flutter allows the developer to create applications for different platforms using a single codebase, meaning you will not need to recreate the application for each platform you want to support.

Hot reload. This feature allows the developer to see changes in real time without restarting the whole application, which results in time savings for your project.

High performance. Flutter apps achieve high performance because the app code is compiled to native ARM code; no interpreters are involved.

UI widgets. Flutter supplies a set of widgets (UI components such as boxes, text inputs, buttons, etc.) predefined by the UI system guidelines: Material on Android and Cupertino on iOS.
Source: Material 3 Design Kit | Figma Community
Source: Design - Apple Developer

Great community support.
This feature could be subjective, but it is useful when developing a project to find solutions to known issues or report new ones. Because Flutter is open source and widely adopted in industry, it has a big community, with events, forums, and documentation.

Flutter Details

Supported platforms
With Flutter you can create applications for:
Android, iOS, Linux Debian, Linux Ubuntu, macOS, web (Chrome, Firefox, Safari, Edge), Windows
Supported deployment platforms | Flutter

Programming language
Flutter uses Dart, an open-source programming language supported by Google and optimized for building user interfaces.

Dart key features:

Statically typed. This feature helps catch errors and makes the code robust by ensuring that a variable's value always matches the declared variable's type.

Null safety. Variables in Dart are non-nullable, which means every variable must have a non-null value, avoiding errors at execution time. This feature also makes the code robust and secure.

Async/await. Dart is client-optimized, which means the language was specially created to ensure the best performance in client applications. Async/await is part of this optimization, making it easier to manage network requests and other asynchronous operations.

Object oriented. Dart is an object-oriented language with classes and mixins. This is especially useful in Flutter given its widget model.

Compiler support for Just-In-Time (JIT) and Ahead-Of-Time (AOT) compilation. JIT provides the support that enables the hot reload feature mentioned before; it is a complex mechanism, but in short Dart detects changes in your code and executes only those changes, avoiding recompiling all the code. The AOT compiler produces efficient ARM code, improving startup time and performance.

Official documentation
Flutter has a rich community and documentation that goes from UI guidelines to an architectural overview. You can find the official documentation at the following links:
Flutter official documentation: Flutter documentation | Flutter
Flutter community: Community (flutter.dev)
Dart official documentation: Dart documentation | Dart

Flutter for embedded systems
So far, we know all the excellent features and platforms that Flutter can support. But what about embedded systems? The official documentation states that Flutter may be used for embedded systems, but in fact there is no officially supported embedded platform. This SDK has been supported by the community; in particular, there is a GitHub repository supported by Sony that provides documentation and Yocto recipes to support Flutter on embedded Linux.

To understand why Flutter for Linux desktop (which has official support) differs from a dedicated Flutter port for embedded Linux, it is important to describe the basics of the Flutter architecture. Based on the Flutter documentation, the system is designed in layers, illustrated as follows:
Source: Flutter architectural overview | Flutter

The top layer, "Framework," is a high-level layer that includes the widgets, tools and libraries that developers interact with. Below "Framework," the "Engine" layer is responsible for drawing the widgets specified in the previous layer and provides the connection between high-level and low-level code. This layer is mostly written in C++, which is how Flutter achieves high performance when running applications.
Specifically for graphics rendering, Flutter uses Impeller on iOS and Skia on the other platforms. The bottom layer is the "Embedder", which is specific to each target and operating system; it allows the Flutter application to run as a native app by providing access to services managed by the operating system, such as input, rendering surfaces, and accessibility. On Linux desktop this layer uses GTK/GDK and X11 as the backend, which depends on libraries that are unnecessary and expensive for embedded systems with constrained computation and memory resources. The workaround adopted by Sony's Flutter for Embedded Linux repository is to replace this backend with Wayland, a backend widely used in embedded systems. The following images illustrate the difference between Flutter for Linux desktop and Flutter for Embedded Linux.

Source: What's the difference between Linux desktop and Embedded Linux · sony/flutter-embedded-linux Wiki · GitHub

Here is the link to the mentioned repository: GitHub - sony/flutter-elinux: Flutter tools for embedded Linux (eLinux)

Finally, I would like to encourage you to read the official Flutter documentation and to consider this tool as a strong option compared to tools widely used on embedded devices, such as Qt or Chromium. Also, please have a look at a great article written by Payam Zahedi that delves into the implementation of Flutter for Embedded Linux, measures its performance, and draws conclusions about the usage of Flutter in embedded systems (Flutter on Embedded Devices. Learn how to run Flutter on embedded… | by Payam Zahedi | Snapp Embedded | Medium).
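If you want to try the embedded Linux tooling mentioned above, the sony/flutter-elinux README describes a quick-start workflow roughly like the following. This is a sketch based on that README; verify the commands and the elinux-wayland device name against the current repository documentation:

git clone https://github.com/sony/flutter-elinux.git
export PATH=$PATH:$(pwd)/flutter-elinux/bin
flutter-elinux create sample_app       # scaffold a new app
cd sample_app
flutter-elinux run -d elinux-wayland   # run on a Wayland-backed eLinux target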
View full article
vpuwrapper can drive the VPU decoder/encoder directly. If your use case is simple, for example encoding a YUV stream to H.264 or decoding an H.264 stream to YUV, there is no need to use the more complex GStreamer or V4L2 frameworks; vpuwrapper is enough.

Platform: i.MX8MP + L5.4.70.2.3.0

Build procedure:
mkdir vpu
cd vpu
git clone https://github.com/nxp-imx/imx-vpuwrap
cd imx-vpuwrap/
git tag -l
git switch -c rel_imx_5.4.70_2.3.0
source ../../.././5.4.70.2.3.0/sdk/environment-setup-aarch64-poky-linux
make -f Makefile_8mp

Test on the i.MX8MP EVK board:
Please find the attached test logs for decode and encode.

If busChromaU in the YUV file is null, encoding will fail; please apply the attached patch "vpuwraper patch for L5.4.70.2.3.0.patch" to fix it.

If the YUV file is in interleaved format, you need to add the -interleave parameter:
./test_enc_arm_elinux -i test.yuv -o aaa.h264 -f 2 -w 176 -h 96 -interleave 1

Thanks,
Lambert
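For decoding, the wrapper's matching decoder test can be used the same way. The following invocation is a sketch only: the binary name mirrors the encoder test above, and the flags shown (-i for input, -o for output) are assumptions to be checked against the usage text the tool prints when run without arguments:

# hypothetical decoder invocation; confirm flags with the built binary
./test_dec_arm_elinux -i aaa.h264 -o out.yuv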
View full article
The i.MX8 series contains an internal HiFi4 DSP targeted at audio-related signal processing. SOF (Sound Open Firmware) is an open-source audio DSP firmware, driver, and SDK. This document introduces the basic theory of IIR/FIR digital filters, how to design IIR/FIR digital filters, and the equalizer filter implementation in SOF. After that, the document also describes how the HiFi4 DSP MAC engine accelerates the EQ filter calculations.
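As background for the filter discussion: an FIR filter output is a pure multiply-accumulate sum, and IIR equalizers are commonly built from cascaded second-order sections (biquads). In standard textbook notation (generic forms, not copied from the SOF sources):

\[
y_{\mathrm{FIR}}[n] = \sum_{k=0}^{N-1} h[k]\, x[n-k]
\]

\[
y_{\mathrm{IIR}}[n] = b_0 x[n] + b_1 x[n-1] + b_2 x[n-2] - a_1 y[n-1] - a_2 y[n-2],
\qquad
H(z) = \frac{b_0 + b_1 z^{-1} + b_2 z^{-2}}{1 + a_1 z^{-1} + a_2 z^{-2}}
\]

Each product term above is one multiply-accumulate, which is exactly the operation the HiFi4 MAC engine is built to execute efficiently.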
View full article
On behalf of Gopise Yuan. A collection of GStreamer debugging tips and know-how.

When you need to render onto a DRM layer/plane directly without going through the compositor, kmssink is a good choice:

// kmssink, with scaling, alpha property adjustment (opaque) and zpos (zpos requires kmssink >= 1.16):
gst-launch-1.0 filesrc location=/media/AVC-AAC-720P-3M_Alan.mov ! decodebin ! imxvideoconvert_g2d ! kmssink plane-id=37 render-rectangle="<100,100,720,480>" can-scale=false plane-properties=s,alpha=65535,zpos=2

When using playbin, you can still customize the pipeline beyond the sink plugin, e.g. add a converter plugin:

// playbin with additional customization of the converter before the sink:
gst-launch-1.0 playbin uri=file:///mnt/MP4_H264_AAC_1920x1080.mp4 video-sink="imxvideoconvert_g2d ! video/x-raw,format=BGRA,width=1920,height=1080 ! kmssink plane-id=44"

GStreamer can generate a pipeline graph for analyzing the pipeline in an intuitive manner:

// Generate a pipeline graph:
1. export GST_DEBUG_DUMP_DOT_DIR=<dump-folder> and GST_DEBUG=4.
2. Run the pipeline with gst-launch or others.
3. Copy all dump files (.dot) from <dump-folder>. Note: one dump file is created for each state transition. Normally what we need is PAUSED_READY or READY_PAUSED, after which the pipeline has been set up.
4. Convert the .dot file to PDF with Graphviz:
dot -Tpdf 0.00.03.685443250-gst-launch.PAUSED_READY.dot > pipeline_PAUSED_READY.pdf
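As a worked instance of the four steps above (a sketch; the dump directory, the pipeline, and the generated .dot file names will differ on your system):

mkdir -p /tmp/gst-dump
export GST_DEBUG_DUMP_DOT_DIR=/tmp/gst-dump
export GST_DEBUG=4
gst-launch-1.0 videotestsrc num-buffers=100 ! autovideosink   # run any pipeline; dumps are written per state transition
dot -Tpdf /tmp/gst-dump/*.PAUSED_READY.dot > pipeline_PAUSED_READY.pdf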
View full article
This is based on the L5.10.35 BSP, where you have to build Qt statically.

Qt 5.15 static build:

Assuming your sysroot is at "/sysroot-cross", your toolchain is at "/Toolchain", and your Qt source is at /Qt-5.15:

PATH=/sysroot-cross/bin:/sysroot-cross/sbin:/Toolchain/bin
mkdir /Qt-5.15/mkspecs/qws/linux-imx6-g++

Create in this directory the text file "qmake.conf" with this content:

####################### snip qmake.conf ##############################
include(../../common/linux.conf)
include(../../common/qws.conf)

# modifications to g++.conf
QMAKE_CC                = arm-linux-gnueabi-gcc
QMAKE_CFLAGS            = -pipe -isystem /sysroot-cross/include -isystem /sysroot-cross/usr/include
QMAKE_CXX               = arm-linux-gnueabi-g++
QMAKE_CXXFLAGS          = -pipe -isystem /sysroot-cross/include -isystem /sysroot-cross/usr/include
QMAKE_INCDIR            = /sysroot-cross/include /sysroot-cross/usr/include
QMAKE_LIBDIR            = /sysroot-cross/lib /sysroot-cross/usr/lib
QMAKE_LINK              = arm-linux-gnueabi-g++
QMAKE_LINK_SHLIB        = arm-linux-gnueabi-g++
QMAKE_LFLAGS            = -L/sysroot-cross/lib -L/sysroot-cross/usr/lib -Wl,-rpath-link -Wl,/sysroot-cross/lib
QMAKE_LFLAGS           += -Wl,-rpath-link -Wl,/sysroot-cross/usr/lib

# OpenGL
QMAKE_INCDIR_OPENGL = /Vivante/include
QMAKE_INCDIR_OPENGL += /Vivante/include/GL
QMAKE_INCDIR_OPENGL += /Vivante/include/EGL
QMAKE_INCDIR_OPENGL += /Vivante/include/GLES2
QMAKE_LIBDIR_OPENGL = /Vivante/lib
QMAKE_INCDIR_OPENGL_ES1 = $$QMAKE_INCDIR_OPENGL
QMAKE_LIBDIR_OPENGL_ES1 = $$QMAKE_LIBDIR_OPENGL
QMAKE_INCDIR_OPENGL_ES1CL = $$QMAKE_INCDIR_OPENGL
QMAKE_LIBDIR_OPENGL_ES1CL = $$QMAKE_LIBDIR_OPENGL
QMAKE_INCDIR_OPENGL_ES2 = /Vivante/include
QMAKE_INCDIR_OPENGL_ES2 += /Vivante/include/EGL
QMAKE_INCDIR_OPENGL_ES2 += /Vivante/include/GLES2
QMAKE_LIBDIR_OPENGL_ES2 = $$QMAKE_LIBDIR_OPENGL
QMAKE_INCDIR_EGL = $$QMAKE_INCDIR_OPENGL_ES2
QMAKE_LIBDIR_EGL = $$QMAKE_LIBDIR_OPENGL
QMAKE_LIBS_EGL = -lEGL -lGAL -lGLESv2 -lGLES_CM
QMAKE_LIBS_OPENGL_ES2 = -lEGL -lGAL -lGLESv2 -lGLES_CM
QMAKE_LIBS_OPENGL = -lEGL -lGAL -lGLESv2 -lGLES_CM
QMAKE_LIBS_OPENGL_QT = -lEGL -lGAL -lGLESv2 -lGLES_CM
QMAKE_LIBS_OPENGL_ES1 =
QMAKE_LIBS_OPENGL_ES1CL =

# modifications to linux.conf
QMAKE_AR                = arm-linux-gnueabi-ar cqs
QMAKE_OBJCOPY           = arm-linux-gnueabi-objcopy
QMAKE_STRIP             = arm-linux-gnueabi-strip
QMAKE_CFLAGS_RELEASE   = -pipe -isystem /sysroot-cross/include -isystem /sysroot-cross/usr/include

load(qt_config)
####################### snip qmake.conf ##############################

Create in the same directory the text file "qplatformdefs.h":

####################### snip qplatformdefs.h ##############################
#include "../../linux-g++/qplatformdefs.h"
####################### snip qplatformdefs.h ##############################

Now go to /Qt-5.15:

cd /Qt-5.15

Call configure with:

./configure -opensource -confirm-license -release -no-rpath -no-fast \
    -no-sql-ibase -no-sql-mysql -no-sql-odbc -no-sql-psql -no-sql-sqlite2 \
    -no-qt3support -no-mmx -no-3dnow -no-sse -no-sse2 -no-sse3 -no-ssse3 \
    -no-sse4.1 -no-sse4.2 -no-avx -no-optimized-qmake -no-nis -no-cups -pch \
    -reduce-relocations -force-pkg-config -prefix /usr -no-armfpa -make libs \
    -nomake docs -little-endian -embedded armv6 -qt-decoration-styled \
    -depths all -xplatform qws/linux-imx6-g++ -iconv -largefile -qt-gfx-linuxfb \
    -qt-gfx-multiscreen -qt-mouse-pc -qt-mouse-linuxinput -qt-libpng \
    -plugin-gfx-directfb -system-zlib -no-accessibility -no-gfx-transformed \
    -no-gfx-qvfb -no-gfx-vnc -no-kbd-tty -no-kbd-linuxinput -no-kbd-qvfb \
    -no-mouse-linuxtp -no-mouse-tslib -no-mouse-qvfb -no-libmng -no-libtiff \
    -no-gif -no-libjpeg -no-freetype -no-stl -no-glib -no-openssl -no-egl \
    -no-xmlpatterns -no-exceptions -no-multimedia -no-audio-backend -no-phonon \
    -no-phonon-backend -no-webkit -no-script -no-scripttools -no-svg -no-script \
    -no-declarative -no-sql-sqlite -no-qdbus -no-opengl -static -nomake tools \
    -nomake examples -nomake demos

When configure is finished, call make. After a long time, when everything goes right, we have a statically compiled Qt. DO NOT call "make install"; we will install manually:

copy from /Qt-5.15/bin the files moc, uic, rcc and qmake to somewhere in PATH, e.g. /sysroot-cross/bin
copy the contents of /Qt-5.15/mkspecs to /sysroot-cross/usr/mkspecs
copy the contents of /Qt-5.15/plugins to /sysroot-cross/usr/plugins
copy the contents of /Qt-5.15/include to /sysroot-cross/usr/include
copy the contents of /Qt-5.15/lib to /sysroot-cross/usr/lib

Test application camtest:

If you don't have/want the directfb plugin, remove from camtest.pro the lines:
LIBS += -L/sysroot-cross/usr/plugins/gfxdrivers
QTPLUGIN += QDirectFBScreen
and from main.cpp the lines:
#include <QtPlugin>
Q_IMPORT_PLUGIN(qdirectfbscreen)

Generate the makefile by typing:
/sysroot-cross/bin/qmake -spec /sysroot-cross/usr/mkspecs/qws/linux-imx6-g++ camtest.pro
then:
make

You should set up and activate your framebuffers with this script:

################# snip ################################
fbset -fb /dev/fb0 -g 1024 768 1024 2304 16
echo -n 0 > /sys/class/graphics/fb0/blank
fbset -fb /dev/fb1 -g 1024 768 1024 1536 32
echo -n 0 > /sys/class/graphics/fb1/blank
modprobe galcore
modprobe uvcvideo
modprobe mxc_v4l2_capture
################# snip ################################

If you use directfb, your /etc/directfbrc file should look like this:

######################## snip /etc/directfbrc #############
system=fbdev
fbdev=/dev/fb1
mode=1024x768
depth=32
pixelformat=ARGB
no-cursor
window-surface-policy=systemonly
######################## snip /etc/directfbrc #############

To start the application with directfb:
./camtest -qws -display directfb
Without directfb, using linuxfb:
./camtest -qws -display linuxfb:/dev/fb1

Notes about the application:
1. The application shows two webcams on the background framebuffer (BG-FB). The foreground framebuffer (FG-FB) shows the Qt GUI. The FG-FB is configured to be fully opaque and uses color keying. On the BG-FB, one camera is overlaid on the other using the IPU.

Optimization possibilities: the app copies the frames from the cameras with memcpy. This wouldn't be necessary if the kernel USB webcam interface (uvc) supported the V4L2_MEMORY_USERPTR method; that way you could pass the IPU's mmapped input buffers directly as V4L2 output buffers.

If you get errors like NOSPC (-28) from uvc, this is a limitation of USB. My board is an MX6QSabre, where the two webcams are connected to the same USB controller. With both webcams I had to limit the frame sizes to 320x250 and 160x120 at 25 Hz. You might try higher resolutions with other types of webcams (not USB).

Have fun
View full article
Video rendering:
gst-launch videotestsrc ! mfw_v4lsink

Audio rendering:
gst-launch audiotestsrc ! alsasink

WAV audio rendering:
gst-launch filesrc location=test.wav ! wavparse ! alsasink

Video rendering selecting caps (note: capsfilter takes the caps through its caps property):
gst-launch videotestsrc ! capsfilter caps='video/x-raw-yuv,format=(fourcc)I420' ! mfw_v4lsink
gst-launch videotestsrc ! 'video/x-raw-yuv,format=(fourcc)I420' ! mfw_v4lsink
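To route an autoplugged pipeline into the same FSL sinks, a playbin2-based launch line along these lines should work on the GStreamer 0.10 BSPs this table targets. This is a sketch: playbin2 and its video-sink/audio-sink properties are standard 0.10 API, while the file path is a placeholder:

gst-launch playbin2 uri=file:///mnt/sdcard/test.mp4 video-sink=mfw_v4lsink audio-sink=alsasink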
View full article