i.MX Processors Knowledge Base

Discussions

What is HTML5 Video?
HTML5 video is an element for playing videos or movies, defined in the HTML5 specification. It is intended to become the standard way to show video on the web without plugins; the video is rendered inside the web page, much like Flash.

HTML5 Video Web Page
<video> element example:
<video src="movie.mp4" poster="movie.jpg" controls> </video>

HTML5 video page source example:
<html>
  <head>
  </head>
  <body>
    <video src="http://10.192.225.226/movie.mp4" width="640" height="480" controls="true">
    </video>
  </body>
</html>

HTML5 Video Rendering Path
Performance data on i.MX6Q with Android ICS:
With an LVDS display, H264@1080p@20Mbps can reach 30 fps.
With an HDMI 1080p display, H264@1080p@10Mbps can reach 25 fps.

HTML5 Video Websites
Some websites that serve HTML5 video when accessed from an Android platform:
www.youtube.com
www.iqiyi.com

HTML5 Reference Documents
Specification: http://dev.w3.org/html5/spec/single-page.html?utm_source=dlvr.it&utm_medium=feed
Wikipedia page: http://en.wikipedia.org/wiki/HTML5
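To try the page above, the movie file must be reachable over HTTP from the device. As a minimal sketch, assuming Python 2 is installed on the host machine and movie.mp4 sits in the current directory (the folder path is only an illustrative placeholder):

$ cd /path/to/video/folder          # hypothetical folder holding movie.mp4
$ python -m SimpleHTTPServer 8000   # throwaway HTTP server on port 8000 (Python 2)

The Android browser on the target can then open http://<host-ip>:8000/movie.mp4, with the src attribute in the page adjusted accordingly.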
gst-launch is the tool used to execute GStreamer pipelines.

Looking at caps:
gst-launch -v <gst elements>

Enabling logs:
gst-launch --gst-debug=<element>:<level>
gst-launch --gst-debug=videotestsrc:5 videotestsrc ! filesink location=/dev/null
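As a minimal sketch of the -v option (assuming the stock videotestsrc and fakesink elements are installed, which is the case on a standard BSP image), the verbose flag prints the caps negotiated on every pad, which helps when debugging format mismatches between elements:

gst-launch -v videotestsrc num-buffers=10 ! fakesink

The output lists the video/x-raw-yuv caps agreed between the source and the sink before the buffers start flowing.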
Check the newer updated version for Morty here.

Step 1: Get the i.MX Yocto AVS setup environment
Review the steps under Chapter 3 of the i.MX_Yocto_Project_User's_Guide.pdf in the L4.X LINUX_DOCS to prepare your host machine, including at least the following essential Yocto packages:
$ sudo apt-get install gawk wget git-core diffstat unzip texinfo \
  gcc-multilib build-essential chrpath socat libsdl1.2-dev u-boot-tools

Install the i.MX NXP AVS repo. Create/move to a directory where you want to install the AVS Yocto build environment; let's call it <yocto_dir>.
$ cd <yocto_dir>
$ repo init -u https://source.codeaurora.org/external/imxsupport/meta-avs-demos -b master -m imx7d-pico-avs-sdk_4.1.15-1.0.0.xml

Download the AVS BSP build environment:
$ repo sync

Step 2: Set up Yocto for the Alexa_SDK image with the AVS-SETUP-DEMO script
Run the avs-setup-demo script as follows to set up your environment for the imx7d-pico board:
$ MACHINE=imx7d-pico DISTRO=fsl-imx-x11 source avs-setup-demo.sh -b <build_sdk>
where <build_sdk> is the name you will give to your build folder.

After accepting the EULA, the script will prompt you for the following options:

Sound card selection
The following sound cards are supported on the build:
SGTL (on-board audio codec for PicoPi)
2-Mic Conexant
The script asks whether you are going to use the Conexant card; if not, SGTL is assumed as your selection.
Are you going to use Conexant Sound Card [Y/N]?

Install Alexa SDK
The next option selects whether to pre-install the AVS SDK software on the image.
Do you want to build/include the AVS_SDK package on this image(Y/N)?
If you select YES, your image will contain the AVS SDK ready to use (after authentication). Note that this AVS_SDK will not have WakeWord detection support, but it can be added at runtime. If your selection was NO, you can always fetch and build the AVS_SDK manually at runtime; all the package dependencies will already be there, so only fetching the AVS_SDK source code and building it is required.

Finish the avs-image configuration
At the end you will see a summary of the configuration you selected for your image build. The following is an example for a pre-installed AVS_SDK with Conexant sound card support and WiFi/BT not enabled:
==========================================================
  AVS configuration is now ready at conf/local.conf
    - Sound Card = Conexant
    - AVS_SDK pre-installed
  You are ready to bitbake your AVS demo image now:
    bitbake avs-image
==========================================================

Step 3: Build the AVS image
Go to your <build_sdk> directory and start the build of the avs-image. There are two options.
Regular build:
$ cd <yocto_dir>/<build_sdk>
$ bitbake avs-image
With QT5 support included:
$ cd <yocto_dir>/<build_sdk>
$ bitbake avs-image-qt5
The image with QT5 is useful if you want to add a GUI, for example to render DisplayCards.

Step 4: Deploy the built image to an SD/MMC card to boot the target board
After a build has successfully completed, the created image resides at <build_sdk>/tmp/deploy/images/imx7d-pico/. In this directory you will find the imx7d-pico-avs.sdcard or imx7d-pico-avs-qt5.sdcard image, depending on the build you chose in Step 3. To flash the .sdcard image into the eMMC device of your PicoPi board, follow the next steps:

Download the bootbomb flasher. Follow the instructions in Section 4, Board Reflashing, of the Quick Start Guide for the AVS kit to set up your board in flashing mode.

Copy the built SDCARD file (here /dev/sd<X> is the device node under which the board's eMMC enumerates on your host):
$ sudo dd if=imx7d-pico-avs.sdcard of=/dev/sd<X> bs=1M && sync
$ sync

Properly eject the pico-imx7d board:
$ sudo eject /dev/sd<X>

NXP Documentation
Refer to the Quick Start Guide for the AVS SDK to fully set up your PicoPi board with the Synaptics 2-Mic and PicoPi i.MX7D.
For a more comprehensive understanding of Yocto, its features and setup, more image build and deployment options, and customization, please take a look at the i.MX_Yocto_Project_User's_Guide.pdf document from the Linux documents bundle mentioned at the beginning of this document.
For a more detailed description of the Linux BSP and u-boot use and configuration, please take a look at the i.MX_Linux_User's_Guide.pdf document from the Linux documents bundle mentioned at the beginning of this document.
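Before running dd it is worth double-checking which device node the board enumerated as, since writing to the wrong disk is destructive. A minimal sketch using standard Linux tools (the sd<X> name is a placeholder, not taken from the original guide):

$ lsblk          # list block devices; the PicoPi eMMC appears as a new /dev/sd<X> entry while in flashing mode
$ dmesg | tail   # the kernel log also shows the name assigned to the newly attached device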
Freescale does not have a specific GStreamer element to do JPEG encoding, so the standard 'jpegenc' should be used.

Image capture with a web camera:
gst-launch v4l2src num-buffers=1 ! jpegenc ! filesink location=sample.jpeg

With an embedded camera:
gst-launch mfw_v4lsrc num-buffers=1 ! jpegenc ! filesink location=sample.jpeg

More pipelines on GStreamer i.MX6 Pipelines
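To sanity-check the captured file, the standard jpegdec element can decode it back to raw video. A minimal sketch, assuming jpegdec from gst-plugins-good is present on the image (sample.raw is just an illustrative output name):

gst-launch filesrc location=sample.jpeg ! jpegdec ! filesink location=sample.raw

A non-empty sample.raw confirms the JPEG produced by the capture pipeline decodes correctly.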
OpenCV is a computer vision library originally developed by Intel. It is free for commercial and research use under the open-source BSD license. The library is cross-platform and focuses mainly on real-time image processing; if it finds Intel's Integrated Performance Primitives on the system, it will use these commercial optimized routines to accelerate itself.

Application
OpenCV's application areas include:
* 2D and 3D feature toolkits
* Egomotion estimation
* Face recognition
* Gesture recognition
* Human-computer interface (HCI)
* Mobile robotics
* Motion understanding
* Object identification
* Segmentation and recognition
* Stereopsis (stereo vision: depth perception from two cameras)
* Structure from motion (SFM)
* Motion tracking

To support some of the above areas, OpenCV includes a statistical machine learning library that contains:
* Boosting
* Decision trees
* Expectation maximization
* k-nearest neighbor algorithm
* Naive Bayes classifier
* Artificial neural networks
* Random forest
* Support vector machine

Installing OpenCV on an i.MX51 EVK board running Ubuntu Linux
Assuming that you already have Ubuntu Linux running on your board, you can use this wiki page to get your USB camera running on your system in order to use the real-time image processing features of this library. In a brand-new installation of Ubuntu some libraries are not installed by default, so you need to install them yourself (use synaptic to do that); here is the list of these libraries:
libgtk2.0-dev
libjpeg62-dev
zlib1g-dev
libpng12-dev
libtiff4-dev
libjasper-dev
libgst-dev
libgstreamer0.10-dev
If you already have some of those libraries installed, make sure they are the -dev versions. After installing those libraries you can download the stable OpenCV version here.

Install it following the procedure below:
1 - Untar the OpenCV package:
tar -xvzf opencv-1.1pre1.tar.gz
2 - Change to the OpenCV folder:
cd opencv-1.1.0
3 - Configure the installation, enabling GStreamer and leaving the demo apps to be compiled later:
./configure --with-gstreamer --disable-apps
You will get the following results:
General configuration ================================================
      Compiler:                  g++
      CXXFLAGS:
      DEF_CXXFLAGS:              -Wall -fno-rtti -pipe -O3 -fomit-frame-pointer
      PY_CXXFLAGS:               -Wall -pipe -O3 -fomit-frame-pointer
      OCT_CXXFLAGS:              -fno-strict-aliasing -Wall -Wno-uninitialized -pipe -O3 -fomit-frame-pointer
      Install path:              /usr/local
HighGUI configuration ================================================
      Windowing system --------------
      Use Carbon / Mac OS X:        no
      Use gtk+ 2.x:                 yes
      Use gthread:                  yes
      Image I/O ---------------------
      Use ImageIO / Mac OS X:       no
      Use libjpeg:                  yes
      Use zlib:                     yes
      Use libpng:                   yes
      Use libtiff:                  yes
      Use libjasper:                yes
      Use libIlmImf:                no
      Video I/O ---------------------
      Use QuickTime / Mac OS X:     no
      Use xine:                     no
      Use gstreamer:                yes
      Use ffmpeg:                   no
      Use dc1394 & raw1394:         no
      Use v4l:                      yes
      Use v4l2:                     yes
      Use unicap:                   no
Wrappers for other languages =========================================
      SWIG Python                   no
      Octave                        no
Additional build settings ============================================
      Build demo apps               no
4 - Build OpenCV:
make
5 - Install OpenCV:
sudo make install

If all the steps above were executed properly, you can now compile the sample applications:
1 - Change to the samples/c directory:
cd samples/c
2 - Make the build_all script executable:
chmod +x build_all.sh
3 - Run the script:
./build_all.sh

Now you can test. The results below were taken from the Laplacian filter sample processing, in real time, images grabbed from a USB camera:
Laplacian filter with USB camera capture device
Also, you can see its performance in a three-window application performing color conversion and Canny edge detection at the same time: http://www.youtube.com/watch?v=w9yQgdABT7c
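If you prefer to build a single sample instead of running build_all.sh, the same thing can be done by hand. A minimal sketch, assuming the OpenCV install above registered its opencv.pc file with pkg-config and that laplace.c is one of the samples shipped in samples/c (both assumptions, not verified against this exact release):

cd samples/c
gcc laplace.c -o laplace `pkg-config --cflags --libs opencv`
./laplace        # grabs frames from the first V4L camera and displays the Laplacian-filtered output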
(DEPRECATED. Please check this document for Real Time Streaming.)

A server can stream video while a client, in this case an i.MX6 target, receives and decodes it. For example, a server with GStreamer and a web camera connected can stream with the following command:

$ # Pipeline 1
$ gst-launch v4l2src ! 'video/x-raw-yuv, format=(fourcc)I420, width=(int)1280, height=(int)800' ! ffenc_mpeg4 ! tcpserversink host=$CLIENT_IP port=$PORT

and on the target, the client receives, decodes and displays it with:

$ # Pipeline 2
$ gst-launch tcpclientsrc host=$SERVER_IP port=$PORT ! 'video/mpeg, width=(int)1280, height=(int)800, framerate=(fraction)10/1, mpegversion=(int)4, systemstream=(boolean)false' ! vpudec ! mfw_isink

The filter caps between tcpclientsrc and the decoder (vpudec) depend on the sink caps coming from the server encoder (ffenc_mpeg4), so these may change depending on your needs. Running the above pipelines requires the environment variables SERVER_IP, CLIENT_IP and PORT to be set.

In case you want the i.MX6 to act as a server, just change the video source (either mfw_v4lsrc or v4l2src) and the encoder (vpuenc):

$ # Pipeline 3
$ gst-launch v4l2src ! 'video/x-raw-yuv, format=(fourcc)I420, width=(int)640, height=(int)480, interlaced=(boolean)false, framerate=(fraction)10/1' ! vpuenc ! tcpserversink host=$CLIENT_IP port=$PORT

For testing purposes, set SERVER_IP=127.0.0.1, CLIENT_IP=127.0.0.1 and PORT=500, then run pipelines 3 and 2 in two different consoles. Check the CPU usage with 'top' and see that the VPU is actually doing most of the work.
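As a minimal sketch of the loopback test described above (the values are simply the ones suggested in the text), the variables can be exported in each console before launching the pipelines:

$ export SERVER_IP=127.0.0.1
$ export CLIENT_IP=127.0.0.1
$ export PORT=500
$ # run Pipeline 3 in this console, then Pipeline 2 in a second console with the same exports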
BlueZ5 provides support for the core Bluetooth layers and protocols. It is flexible, efficient and uses a modular implementation. BlueZ5 implements the Bluetooth low-level host stack for Bluetooth core specification 4.0 and 3.0+HS, which includes GAP, L2CAP, RFCOMM and SDP. Besides the host stack, BlueZ5 also supports the following profiles, either itself or via third-party software.

Profiles provided by BlueZ:
A2DP 1.3, AVRCP 1.5, DI 1.3, HDP 1.0, HID 1.0, PAN 1.0, SPP 1.1
GATT (LE) profiles:
PXP 1.0, HTP 1.0, HoG 1.0, TIP 1.0, CSCP 1.0
OBEX-based profiles (by obexd):
FTP 1.1, OPP 1.1, PBAP 1.1, MAP 1.0
Provided by the oFono project:
HFP 1.6 (AG & HF)

BlueZ5 is supported in the latest Freescale Linux BSP release, so it is pretty easy to generate the binaries for the Bluetooth core stack and its profiles. In order to support A2DP sink on a SabreSD board, the following software should also be downloaded and installed onto the target rootfs:
sbc decoder version 1.3 (http://www.kernel.org/pub/linux/bluetooth/sbc-1.3.tar.gz)
PulseAudio 5.0 (http://www.freedesktop.org/software/pulseaudio/releases/pulseaudio-5.0.tar.xz)

The PulseAudio package has dependencies on the bluetooth and sbc packages; pulseaudio detects whether those two packages have been built and then decides which pulse plugin modules to generate. So the build order is: 1) bluez5_utils or bluez_utils, 2) sbc, 3) pulseaudio. After compiling and installing the above software onto the target rootfs, you should see the following executables under /usr/bin:
From BlueZ5: bluetoothctl, hciconfig, hciattach (needed for operating a UART Bluetooth module)
From PulseAudio: pulseaudio, pactl, paplay

If the build dependencies have been set up correctly, the following pulse plugin modules should be located under /usr/lib/pulse-5.0/modules:
module-bluetooth-discover.so  module-bluetooth-policy.so  module-bluez5-device.so  module-bluez5-discover.so

Edit the file /etc/dbus-1/system.d/pulseaudio-system.conf and make sure the policy block for the "pulse" user contains the following lines:
<policy user="pulse">
    <allow own="org.pulseaudio.Server"/>
    <allow send_destination="org.bluez"/>
    <allow send_interface="org.freedesktop.DBus.ObjectManager"/>
</policy>

Edit the file /etc/dbus-1/system.d/bluetooth.conf and add the following lines:
<policy user="pulse">
    <allow send_destination="org.bluez"/>
    <allow send_interface="org.freedesktop.DBus.ObjectManager"/>
</policy>

Add the following settings at the bottom of the PulseAudio system configuration file /etc/pulse/system.pa:
### Automatically load driver modules for Bluetooth hardware
.ifexists module-bluetooth-policy.so
load-module module-bluetooth-policy
.endif
.ifexists module-bluetooth-discover.so
load-module module-bluetooth-discover
.endif
load-module module-switch-on-connect
load-module module-alsa-sink device_id=0 tsched=true tsched_buffer_size=1048576 tsched_buffer_watermark=262144

On a system that can automatically detect the ALSA cards, the module-alsa-sink line above should be removed. Also make sure "auth-anonymous=1" is added to the following line, which resolves the error "Denied access to client with invalid authorization data":
load-module module-native-protocol-unix auth-anonymous=1

Select an audio resampling algorithm and configure the audio output by adding the following settings to the file /etc/pulse/daemon.conf:
resample-method = trivial
enable-remixing = no
enable-lfe-remixing = no
default-sample-format = s16le
default-sample-rate = 48000
alternate-sample-rate = 24000
default-sample-channels = 2

PulseAudio can be started as a daemon or as a system-wide instance. When run in system-wide mode, the program automatically drops privileges from "root" and changes to the "pulse" user and group, so before launching the program the "pulse" user and group need to be created on the target system. In the example below, /var/run/pulse is the home directory of the "pulse" user:
adduser -h /var/run/pulse pulse
addgroup pulse-access
adduser pulse pulse-access

Because PulseAudio needs to access the sound devices, add the user "pulse" to the "audio" group too:
adduser pulse audio

Start bluetoothd and pulseaudio:
/usr/libexec/bluetooth/bluetoothd -d &
pulseaudio --system --realtime &

To verify that pulseaudio has been set up correctly, play a local wave file using the following command. If you can hear the sound, the system has been configured correctly:
paplay -vvv audio8k16S.wav

After setting up pulseaudio, launch bluetoothctl to pair and connect to a mobile phone. After connecting, you should see the following information in the bluetoothctl console:
[bluetooth]# show
Controller 12:60:41:7F:03:00
        Name: BlueZ 5.21
        Alias: BlueZ 5.21
        Class: 0x1c0000
        Powered: yes
        Discoverable: no
        Pairable: yes
        UUID: PnP Information           (00001200-0000-1000-8000-00805f9b34fb)
        UUID: Generic Access Profile    (00001800-0000-1000-8000-00805f9b34fb)
        UUID: Generic Attribute Profile (00001801-0000-1000-8000-00805f9b34fb)
        UUID: A/V Remote Control        (0000110e-0000-1000-8000-00805f9b34fb)
        UUID: A/V Remote Control Target (0000110c-0000-1000-8000-00805f9b34fb)
        UUID: Message Notification Se.. (00001133-0000-1000-8000-00805f9b34fb)
        UUID: Message Access Server     (00001132-0000-1000-8000-00805f9b34fb)
        UUID: Phonebook Access Server   (0000112f-0000-1000-8000-00805f9b34fb)
        UUID: IrMC Sync                 (00001104-0000-1000-8000-00805f9b34fb)
        UUID: OBEX File Transfer        (00001106-0000-1000-8000-00805f9b34fb)
        UUID: OBEX Object Push          (00001105-0000-1000-8000-00805f9b34fb)
        UUID: Vendor specific           (00005005-0000-1000-8000-0002ee000001)
        UUID: Audio Source              (0000110a-0000-1000-8000-00805f9b34fb)
        UUID: Audio Sink                (0000110b-0000-1000-8000-00805f9b34fb)
        Modalias: usb:v1D6Bp0246d0515
        Discovering: no

If you can see the Audio Sink UUID, you are ready to enjoy Bluetooth music.
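The article does not spell out the pairing steps themselves. As a hedged sketch of a typical bluetoothctl session (the XX:XX:XX:XX:XX:XX address is a placeholder for the phone's address reported during scanning), the sequence usually looks like this:

[bluetooth]# power on
[bluetooth]# agent on
[bluetooth]# default-agent
[bluetooth]# discoverable on
[bluetooth]# scan on
[bluetooth]# pair XX:XX:XX:XX:XX:XX
[bluetooth]# trust XX:XX:XX:XX:XX:XX
[bluetooth]# connect XX:XX:XX:XX:XX:XX

Once the phone is connected as an A2DP source, audio played on it is routed through module-bluez5-device to the ALSA sink configured above.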
This document describes the features of the i.MX 8QXP MEK mini-SAS connectors for Linux and Android use cases, covering the supported daughter cards, the process to change Device Tree (DTS) files or boot images, and how to enable these different display options on the board.
The document includes the following contents:
(1) How to port the OV5645 camera to Android JB4.2.2
(2) OV5645 driver for Linux 3.0.35
(3) OV5645 schematic based on i.MX6Q/DL
(4) OV5645 support for the Android camera HAL

[Note:]
P5V29A-0JG is a camera module based on the OV5645, and PAO532-0JG is based on the OV5640; both are manufactured by NINGBO SUNNY OPOTECH CO., LTD (China). If customers want to use them on the i.MX6 platform, they can send me an email to ask for the datasheets of the P5V29A & PAO532, or discuss the corresponding porting questions.
Email: weidong.sun@freescale.com
This document is a user guide for the GStreamer version 1.0 based accelerated solution included in all the i.MX 8 family SoCs supported by the NXP BSP L5.4.24_1.1.0.

Some instructions assume a host machine running a Linux distribution, such as Ubuntu, connected to an i.MX 8 device. These commands were tested using Ubuntu 18.04 LTS; while Ubuntu is not required on the host machine, other distributions have not been tested.

These instructions are targeted for use with the following hardware:
• i.MX 8MQ EVK
• i.MX 8MN EVK
• i.MX 8QXP MEK B0
• i.MX 8QM MEK B0

Release History
v1.0 - Mar 2020 - Initial release.
v2.0 - Sep 2020 - Added the following content:
- Mux/Demux Examples
- Audio Examples
- Image Examples
- Transcode Examples
- Streaming Examples
- Multi-Display Examples
- Scaling and Rotation Examples
- Zero-copy Examples
- Debug Examples

Maintainers:
. Marco Franchi
. Pedro Jardim
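Before working through the guide, a quick way to confirm the GStreamer 1.0 stack is present on the target is to run a trivial pipeline. A minimal sketch, assuming a display is attached and the standard videotestsrc/autovideosink elements are included in the BSP image (this example is not taken from the guide itself):

$ gst-launch-1.0 --version                                  # confirms the GStreamer 1.x tools shipped with the BSP
$ gst-launch-1.0 videotestsrc num-buffers=300 ! autovideosink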