i.MX Processors Knowledge Base

gst-launch is the tool to execute GStreamer pipelines.

Task: Looking at caps
Pipeline: gst-launch -v <gst elements>

Task: Enable log
Pipeline: gst-launch --gst-debug=<element>:<level>
Example: gst-launch --gst-debug=videotestsrc:5 videotestsrc ! filesink location=/dev/null
View full article
Fast GPU Image Processing in the i.MX 6x by Guillermo Hernandez, Freescale

Introduction
Color tracking is useful as a base for complex image processing use cases; for example, determining which parts of an image belong to skin is very important for face detection or hand gesture applications. In this example we present a method that is robust enough to tolerate some noise, blur, and different lighting conditions thanks to the use of OpenGL ES 2.0 shaders running on the i.MX 6X multimedia processor.

Prerequisites
This how-to assumes that the reader is an experienced i.MX developer and is familiar with the tools and techniques around this technology. It also assumes the reader has intermediate graphics knowledge and experience, such as the RGBA structure of pictures and video frames and programming OpenGL-based applications, as we will not dig into the details of the basic setup.

Scope
Within this paper, we will see how to implement a very fast color tracking application that uses the GPU instead of the CPU, using OpenGL ES 2.0 shaders.

Step 1: Gather all the components
For this example we will use:
1. i.MX6q ARD platform
2. Linux ER5
3. Oneiric rootfs with ER5 release packages
4. OpenCV 2.0.0 source

Step 2: Build everything you need
Refer to the ER5 User's Guide and Release Notes on how to build and boot the board with the Ubuntu Oneiric rootfs. After you are done, you will need to build the OpenCV 2.0.0 source on the board, or you can add it to LTIB and have it built for you.
NOTE: We will be using OpenCV only for convenience; we will not use any of its advanced math or image processing features (because everything there happens on the CPU, which is exactly what we are trying to avoid), but rather as an easy way of grabbing and managing frames from the USB camera.

Step 3: Application setup
Make sure that at this point you have a basic OpenGL ES 2.0 application running; a simple plane with a texture mapped to it should be enough to start (please refer to the Freescale GPU examples).

Step 4: OpenCV auxiliary code
The basic idea of the workflow is as follows:
a) Get the live feed from the USB camera using the OpenCV function cvCreateCameraCapture() and store it into an IplImage structure.
b) Create an OpenGL texture that reads the IplImage buffer every frame and map it to a plane in OpenGL ES 2.0.
c) Use the fragment shader to perform fast image processing calculations; in this example we will examine the Sobel filter and binary images, which are the foundations for many complex image processing algorithms.
d) If necessary, perform multi-pass rendering to chain several image processing shaders and get an end result.

First we must include the relevant OpenCV headers:

#include "opencv/cv.h"
#include "opencv/cxcore.h"
#include "opencv/cvaux.h"
#include "opencv/highgui.h"

Then we should define a texture size; for this example we will use 320x240, but this can easily be changed to 640x480:

#define TEXTURE_W 320
#define TEXTURE_H 240

We need to create an OpenCV capture device to enable its V4L camera and get the live feed:

CvCapture *capture;
capture = cvCreateCameraCapture (0);
cvSetCaptureProperty (capture, CV_CAP_PROP_FRAME_WIDTH,  TEXTURE_W);
cvSetCaptureProperty (capture, CV_CAP_PROP_FRAME_HEIGHT, TEXTURE_H);

Note: when we are done, remember to close the camera stream:

cvReleaseCapture (&capture);

OpenCV has a very convenient structure used for storing pixel arrays (a.k.a.
images) called IplImage:

IplImage *bgr_img1;
IplImage *frame1;
bgr_img1 = cvCreateImage (cvSize (TEXTURE_W, TEXTURE_H), 8, 4);

OpenCV also has a very convenient function for capturing a frame from the camera and storing it into an IplImage:

frame1 = cvQueryFrame(capture);

Then we will want to separate the camera capture process from the post-processing filters and final rendering; hence, we should create a thread to exclusively handle the camera:

#include <pthread.h>

pthread_t camera_thread1;
pthread_create (&camera_thread1, NULL, UpdateTextureFromCamera1, (void *)&thread_id);

Your UpdateTextureFromCamera1() function should look something like this:

void *UpdateTextureFromCamera1 (void *ptr)
{
      while (1)
      {
            frame1 = cvQueryFrame(capture);
            //cvFlip (frame1, frame1, 1);   /* mirrored image */
            cvCvtColor(frame1, bgr_img1, CV_BGR2BGRA);
      }
      return NULL;
}

Finally, the rendering loop should look something like this:

double tt, value;
while (!window->Kbhit ())
{
      tt = (double)cvGetTickCount();
      Render ();
      tt = (double)cvGetTickCount() - tt;
      value = tt/(cvGetTickFrequency()*1000.);
      printf( "\ntime = %gms --- %.2lf FPS", value, 1000.0 / value);
      //key = cvWaitKey (30);
}

Step 5: Map the camera image to a GL texture
As you can see, you need a Render function call every frame. This white paper will not cover in detail the basic OpenGL or EGL setup of the application; we will rather focus on the ES 2.0 shaders.

GLuint _texture;
GLeglImageOES g_imgHandle;
IplImage *_texture_data;

The function to map the texture from our stored pixels in the IplImage is quite simple: we just need to get the image data, which is basically a pixel array:

void GLCVPlane::PlaneSetTex (IplImage *texture_data)
{
      cvCvtColor (texture_data, _texture_data, CV_BGR2RGB);
      glBindTexture(GL_TEXTURE_2D, _texture);
      glTexImage2D (GL_TEXTURE_2D, 0, GL_RGB, _texture_w, _texture_h, 0, GL_RGB, GL_UNSIGNED_BYTE, _texture_data->imageData);
}

This function should be called inside our render loop:

void Render (void)
{
  glClearColor (0.0f, 0.0f, 0.0f, 0.0f);
  glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
  PlaneSetTex(bgr_img1);
}

At this point the OpenGL texture is ready to be used as a sampler in our fragment shader, mapped to a 3D plane. Lastly, when you are ready to draw your plane with the texture on it:

// Set the shader program
glUseProgram (_shader_program);
...
// Bind this texture handle so we can load the data into it
/* Select our texture */
glActiveTexture(GL_TEXTURE0);
// Select eglImage
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, g_imgHandle);
glDrawArrays (GL_TRIANGLES, 0, 6);

Step 6: Use the GPU to do image processing
First we need to make sure we have the correct vertex shader and fragment shader. We will focus only on the fragment shader, since this is where we will process our image from the camera.
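The fragment shaders below consume the varyings g_vVSTexCoord and g_vVSColor, which come from a vertex shader that the original paper does not list. For reference only, a minimal pass-through vertex shader could look like the following sketch; the attribute names (g_vPositionOS, g_vTexCoordIn) and the constant color are assumptions, not taken from the Freescale example code:

const char *planevert_shader_src =
      "attribute vec4 g_vPositionOS;                \n"   /* assumed attribute: vertex position */
      "attribute vec2 g_vTexCoordIn;                \n"   /* assumed attribute: texture coordinate */
      "varying   vec2 g_vVSTexCoord;                \n"
      "varying   vec3 g_vVSColor;                   \n"
      "                                             \n"
      "void main()                                  \n"
      "{                                            \n"
      "    g_vVSTexCoord = g_vTexCoordIn;           \n"   /* pass the texture coordinate through */
      "    g_vVSColor    = vec3(1.0);               \n"   /* placeholder color, unused by the filters */
      "    gl_Position   = g_vPositionOS;           \n"
      "}                                            \n";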
Below you will find the simplest fragment shader; it only samples pixels from the texture:

const char *planefrag_shader_src =
      "#ifdef GL_FRAGMENT_PRECISION_HIGH            \n"
      "  precision highp float;                     \n"
      "#else                                        \n"
      "  precision mediump float;                   \n"
      "#endif                                       \n"
      "                                             \n"
      "uniform sampler2D s_texture;                 \n"
      "varying vec3 g_vVSColor;                     \n"
      "varying vec2 g_vVSTexCoord;                  \n"
      "                                             \n"
      "void main()                                  \n"
      "{                                            \n"
      "    gl_FragColor = texture2D(s_texture, g_vVSTexCoord);    \n"
      "}                                            \n";

Binary Image
The simplest image filter is the binary image. It converts a source image to a black/white output; to decide whether a color should be black or white we need a threshold. Everything below that threshold becomes black, and any color above it becomes white. The shader (stored in the application as the string g_strRGBtoBlackWhiteShader, shown here as plain GLSL for readability) is as follows:

#ifdef GL_FRAGMENT_PRECISION_HIGH
  precision highp float;
#else
  precision mediump float;
#endif

varying vec2 g_vVSTexCoord;
uniform sampler2D s_texture;
uniform float threshold;

void main()
{
    vec3 current_Color = texture2D(s_texture, g_vVSTexCoord).xyz;
    float luminance = dot(vec3(0.299, 0.587, 0.114), current_Color);
    if (luminance > threshold)
        gl_FragColor = vec4(1.0);
    else
        gl_FragColor = vec4(0.0);
}

You can see that the main operation is to compute a luminance value for the pixel: we take the dot product of a known weighting vector (obtained empirically) with the current pixel color, then simply compare that luminance value with the threshold. Anything below the threshold is black, and anything above it is considered a white pixel.
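The threshold uniform used by the binary-image shader has to be supplied by the application; the original paper does not show that call. As a small sketch, assuming the program object _shader_program from Step 5, it could be set like this (0.5 is just an example value to tune for your lighting):

GLint threshold_loc = glGetUniformLocation(_shader_program, "threshold");
glUseProgram(_shader_program);
glUniform1f(threshold_loc, 0.5f);   /* example threshold, tune empirically */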
SOBEL Operator
Sobel is a very common filter, since it is used as a foundation for many complex image processing operations, particularly in edge detection algorithms. The Sobel operator is based on convolutions; the convolution is made with a particular mask, often called a kernel (in common terms, usually a 3x3 matrix). The Sobel operator calculates the gradient of the image at each pixel, so it tells us how the image changes from the pixels surrounding the current pixel, meaning how it increases or decreases (from darker to brighter values).

The shader is a bit longer, since several operations must be performed; we discuss each of its parts below. First we need to get the texture coordinates from the vertex shader (the Sobel shader is stored in the application as the string plane_sobel_filter_shader_src and is shown here as plain GLSL):

#ifdef GL_FRAGMENT_PRECISION_HIGH
  precision highp float;
#else
  precision mediump float;
#endif

varying vec2 g_vVSTexCoord;
uniform sampler2D s_texture;

Then we should define our kernel. As stated before, a 3x3 matrix should be enough, and the following values have been tested with good results:

mat3 kernel1 = mat3 (-1.0, -2.0, -1.0,
                      0.0,  0.0,  0.0,
                      1.0,  2.0,  1.0);

(The original text does not list kernel2, which is used later for the gradient in the other direction; it is simply the transpose of kernel1, i.e. mat3 kernel2 = mat3(-1.0, 0.0, 1.0, -2.0, 0.0, 2.0, -1.0, 0.0, 1.0);.)

We also need a convenient way to convert to grayscale, since we only need grayscale information for the Sobel operator. Remember that to convert to grayscale you only need an average of the three color channels:

float toGrayscale(vec3 source)
{
    float average = (source.x + source.y + source.z) / 3.0;
    return average;
}

Now we get to the important part: actually performing the convolutions. Remember that per the OpenGL ES 2.0 spec, neither recursion nor dynamic indexing is supported, so we need to do our operations the hard way: by defining vectors and multiplying them. See the following code:

float doConvolution(mat3 kernel)
{
    float sum = 0.0;
    float current_pixelColor = toGrayscale(texture2D(s_texture, g_vVSTexCoord).xyz);
    float xOffset = 1.0/1024.0;
    float yOffset = 1.0/768.0;
    float new_pixel00 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x - xOffset, g_vVSTexCoord.y - yOffset)).xyz);
    float new_pixel01 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x,           g_vVSTexCoord.y - yOffset)).xyz);
    float new_pixel02 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x + xOffset, g_vVSTexCoord.y - yOffset)).xyz);
    vec3 pixelRow0 = vec3(new_pixel00, new_pixel01, new_pixel02);
    float new_pixel10 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x - xOffset, g_vVSTexCoord.y)).xyz);
    float new_pixel11 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x,           g_vVSTexCoord.y)).xyz);
    float new_pixel12 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x + xOffset, g_vVSTexCoord.y)).xyz);
    vec3 pixelRow1 = vec3(new_pixel10, new_pixel11, new_pixel12);
    float new_pixel20 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x - xOffset, g_vVSTexCoord.y + yOffset)).xyz);
    float new_pixel21 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x,           g_vVSTexCoord.y + yOffset)).xyz);
    float new_pixel22 = toGrayscale(texture2D(s_texture, vec2(g_vVSTexCoord.x + xOffset, g_vVSTexCoord.y + yOffset)).xyz);
    vec3 pixelRow2 = vec3(new_pixel20, new_pixel21, new_pixel22);
    vec3 mult1 = (kernel[0] * pixelRow0);
    vec3 mult2 = (kernel[1] * pixelRow1);
    vec3 mult3 = (kernel[2] * pixelRow2);
    sum = mult1.x + mult1.y + mult1.z +
          mult2.x + mult2.y + mult2.z +
          mult3.x + mult3.y + mult3.z;
    return sum;
}

If you look at the last part of the function, you can see that we add the products into a single sum; this sum tells us how much each pixel varies with respect to its neighbors.
The last part of the shader is where we use all our previous functions. It is worth noting that the convolution needs to be applied both horizontally and vertically for this technique to be complete:

void main()
{
    float horizontalSum = 0.0;
    float verticalSum = 0.0;
    float averageSum = 0.0;
    horizontalSum = doConvolution(kernel1);
    verticalSum = doConvolution(kernel2);
    if ((verticalSum > 0.2) || (horizontalSum > 0.2) ||
        (verticalSum < -0.2) || (horizontalSum < -0.2))
        averageSum = 0.0;
    else
        averageSum = 1.0;
    gl_FragColor = vec4(averageSum, averageSum, averageSum, 1.0);
}

Conclusions and future work
At this point, if you have your application up and running, you can see that image processing can be done quite fast, even with images larger than 640x480. This approach can be expanded to a variety of techniques like tracking, feature detection and face detection. However, those techniques are out of scope for now, because such algorithms need multiple rendering passes (face detection, for example), where we need to perform an operation, write the result to an offscreen buffer, and use that buffer as an input for the next shader, and so on. Freescale is planning to release an Application Note in Q4 2012 that will expand this white paper and cover these techniques in detail.
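The multi-pass chaining mentioned above relies on rendering into an offscreen buffer and then sampling it in the next pass. That code is not part of this white paper; purely as a rough sketch of how such an offscreen color target could be created in OpenGL ES 2.0 (all variable names here are placeholders):

GLuint fbo, fbo_texture;

/* Create a texture to hold the intermediate result of the first pass */
glGenTextures(1, &fbo_texture);
glBindTexture(GL_TEXTURE_2D, fbo_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, TEXTURE_W, TEXTURE_H, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
/* NPOT texture in ES 2.0: no mipmaps, clamp-to-edge wrapping */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

/* Attach the texture to a framebuffer object */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, fbo_texture, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    /* handle error */
}

/* First pass: render with the first filter while fbo is bound, then bind the
 * default framebuffer again and use fbo_texture as s_texture for the next pass */
glBindFramebuffer(GL_FRAMEBUFFER, 0);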
View full article
Dithering Implementation for Eink Display Panel by Daiyu Ko, Freescale

1. Dithering
a. Dithering in digital image processing
Dithering is a technique used in computer graphics to create the illusion of color depth in images with a limited color palette (color quantization). In a dithered image, colors not available in the palette are approximated by a diffusion of colored pixels from within the available palette. The human eye perceives the diffusion as a mixture of the colors within it (see color vision). Dithered images, particularly those with relatively few colors, can often be distinguished by a characteristic graininess or speckled appearance.

Figure 1. Original photo; note the smoothness in the detail.
Figure 2. Original image using the web-safe color palette with no dithering applied. Note the large flat areas and loss of detail. (http://en.wikipedia.org/wiki/File:Dithering_example_undithered_web_palette.png)
Figure 3. Original image using the web-safe color palette with Floyd–Steinberg dithering. Note that even though the same palette is used, the application of dithering gives a better representation of the original. (http://en.wikipedia.org/wiki/File:Dithering_example_dithered_web_palette.png)

b. Applications
Display hardware, including early computer video adapters and many modern LCDs used in mobile phones and inexpensive digital cameras, shows a much smaller color range than more advanced displays. One common application of dithering is to more accurately display graphics containing a greater range of colors than the hardware is capable of showing. For example, dithering might be used in order to display a photographic image containing millions of colors on video hardware that is only capable of showing 256 colors at a time. The 256 available colors would be used to generate a dithered approximation of the original image. Without dithering, the colors in the original image might simply be "rounded off" to the closest available color, resulting in a new image that is a poor representation of the original. Dithering takes advantage of the human eye's tendency to "mix" two colors in close proximity to one another. Since the Eink panel shows grayscale images only, we can use a dithering algorithm to reduce the grayscale level, even down to black/white only, and still get good visual results.

c. Algorithm
There are several algorithms designed to perform dithering. One of the earliest, and still one of the most popular, is the Floyd–Steinberg dithering algorithm, developed in 1975. One of the strengths of this algorithm is that it minimizes visual artifacts through an error-diffusion process; error-diffusion algorithms typically produce images that more closely represent the original than simpler dithering algorithms.
(The original article illustrates this with example images: original, threshold, Bayer (ordered), and error-diffusion.)

Error-diffusion dithering is a feedback process that diffuses the quantization error to neighboring pixels. Floyd–Steinberg dithering only diffuses the error to directly neighboring pixels, which results in very fine-grained dithering. Jarvis, Judice, and Ninke dithering diffuses the error also to pixels one step further away. The dithering is coarser, but has fewer visual artifacts. It is slower than Floyd–Steinberg dithering because it distributes errors among 12 nearby pixels instead of the 4 nearby pixels used by Floyd–Steinberg. Stucki dithering is based on the above, but is slightly faster. Its output tends to be clean and sharp.
(Example images for Floyd–Steinberg, Jarvis–Judice–Ninke, and Stucki dithering are shown in the original article.)

Error-diffusion dithering (continued): Sierra dithering is based on Jarvis dithering, but it is faster while giving similar results. Filter Lite is an algorithm by Sierra that is much simpler and faster than Floyd–Steinberg, while still yielding similar (according to Sierra, better) results. Atkinson dithering, developed by Apple programmer Bill Atkinson, resembles Jarvis dithering and Sierra dithering, but it is faster. Another difference is that it does not diffuse the entire quantization error, but only three quarters of it. It tends to preserve detail well, but very light and dark areas may appear blown out.

(Example images for Sierra, Sierra Lite, and Atkinson dithering are shown in the original article.)

2. Eink display panel characteristics
a. Low resolution
Eink has only a few display modes:
     DU      (1-bit, black/white)
     GC4     (2-bit, grayscale)
     GC16    (4-bit, grayscale)
     A2      (1-bit, black/white, fast update mode)
b. Slow update time
For an 800x600 panel (per frame):
     DU      300 ms
     GC4     450 ms
     GC16    600 ms
     A2      125 ms

3. Effect of dithering on the Eink display panel
a. Low resolution with better visual quality
By dithering the original grayscale image, we get a better-looking result. Even if the image becomes black and white, with the dithering algorithm you still get the impression of a grayscale image.
b. Faster update with Eink's animation waveform
Since the DU/A2 modes can update the Eink panel faster than the grayscale modes, with dithering we get not only a better-looking result, but we can also use the DU/A2 fast update modes to show animation or even normal video files.

4. Our current dithering implementation
a. Choose a simple and effective algorithm
Considering the Eink panel's characteristics, we compared a couple of dithering algorithms and decided to use the Atkinson dithering algorithm. It is simple and the result is better, especially for the Eink black/white display case. (A minimal sketch of the Atkinson error-diffusion pattern is given after the references below.)
b. Heavy optimization so that dithering does not affect update time too much
Because of the simplicity of the Atkinson dithering algorithm, we could also put a lot of effort into optimization in order to reduce the dithering processing time and make it practical for actual use.
c. Current algorithm performance and result
Currently, with the Atkinson dithering algorithm, our processing time is about 70 ms.

5. Availability
a. We implemented both Y8->Y1 and Y8->Y4 dithering with the same dithering algorithm.
b. Implemented in our EPDC driver with the i.MX6SL Linux 3.0.35 release.
c. Also implemented in our Video for Eink demo.

6. References
a. Part of the dithering introduction is from www.wikipedia.org
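For illustration only, here is a minimal sketch of Atkinson error diffusion converting an 8-bit grayscale buffer (Y8) to black/white (Y1 values stored as 0/255). This is not the EPDC driver code; the function and parameter names are made up. Atkinson diffuses only 6/8 of the quantization error, one eighth to each of six neighboring pixels:

#include <stdint.h>

/* In-place Atkinson dithering of a width x height Y8 buffer to 0/255 values. */
static void atkinson_y8_to_y1(uint8_t *buf, int width, int height)
{
    /* Atkinson neighbor offsets: right, right+1, and three below, plus two rows down */
    static const int dx[6] = { 1, 2, -1, 0, 1, 0 };
    static const int dy[6] = { 0, 0,  1, 1, 1, 2 };

    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            int idx = y * width + x;
            int old_val = buf[idx];
            int new_val = (old_val < 128) ? 0 : 255;   /* threshold to black/white */
            buf[idx] = (uint8_t)new_val;

            int err = (old_val - new_val) / 8;          /* only 6/8 of the error is spread */
            for (int i = 0; i < 6; i++) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx >= 0 && nx < width && ny < height) {
                    int v = buf[ny * width + nx] + err;
                    buf[ny * width + nx] = (uint8_t)(v < 0 ? 0 : (v > 255 ? 255 : v));
                }
            }
        }
    }
}

Dropping the remaining 2/8 of the error is what gives Atkinson its slightly blown-out highlights and shadows but good detail preservation, which matches the behavior described above.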
View full article
Note: All these GStreamer pipelines have been tested using an i.MX6Q board with kernel version 3.0.35-2026-geaaf30e.

Tools:
gst-launch
gst-inspect

FSL Pipeline Examples:
GStreamer i.MX6 Decoding
GStreamer i.MX6 Encoding
GStreamer Transcoding and Scaling
GStreamer i.MX6 Multi-Display
GStreamer i.MX6 Multi-Overlay
GStreamer i.MX6 Camera Streaming
GStreamer RTP Streaming

Other plugins:
GStreamer ffmpeg
GStreamer i.MX6 Image Capture
GStreamer i.MX6 Image Display

Misc:
Testing GStreamer
Tracing GStreamer Pipelines
GStreamer miscellaneous
View full article
Detailed Features List of i.MX35 PDK board
i.MX35 CPU Card
Additional Resources
i.MX35 PDK Board Flashing SD Card
i.MX35 PDK Board Flashing NAND
i.MX35 PDK Linux Booting SD
Loading Redboot Binary Directly to RAM
Fixing Redboot RAM Bug
Fixing Redboot RAM bug (CSD1 not activated)
View full article
If you are a Windows user and don't want to install Linux on your machine, VMware provides a virtual machine that can run Linux under Windows. It's a good way to start with Linux (if you're unfamiliar with it) and also to start your i.MX development.

Installing VMware
- VMware Workstation [VMWare Workstation (Click here to go to Download page)]
VMware Workstation is available in commercial and trial versions. With Workstation it is possible to create your own installation image, installing a new operating system just as you would install it on a new machine.
- VMware Player [VMWare Player (Click here to go to Download page)]
VMware Player is available as a free version. With Player it is only possible to run previously made images.
- VMware images at the ThoughtPolice site [ThoughtPolice site (Click here to go to Download page)]
This site has many ready-made VMware images of many Linux distributions. They just need to be downloaded and unzipped, and they are ready to be used with VMware Workstation or Player.
View full article
On Host:
libx11-dev, libpng-dev, libjpeg-dev, libxext-dev, x11proto-xext-dev, qt3-dev-tools-embedded, libxtst-dev

On Target (i.MX device):
alsa-utils, libpng, tslib
View full article
OpenGL
OpenGL is the premier environment for developing portable, interactive 2D and 3D graphics applications. Since its introduction in 1992, OpenGL has become the industry's most widely used and supported 2D and 3D graphics application programming interface (API), bringing thousands of applications to a wide variety of computer platforms. OpenGL fosters innovation and speeds application development by incorporating a broad set of rendering, texture mapping, special effects, and other powerful visualization functions. Developers can leverage the power of OpenGL across all popular desktop and workstation platforms, ensuring wide application deployment. Source: http://www.opengl.org/about/overview/

On i.MX processors, OpenGL takes advantage of the GPU (Graphics Processing Unit) block to improve 3D performance.

Installing and running demos
Get more information on how to install and run OpenGL demos on the i.MX31 in this application note: AN3723 - Using OpenGL Applications on the i.MX31 ADS Board - http://www.freescale.com/files/dsp/doc/app_note/AN3723.pdf?fsrch=1

Develop a simple OpenGL ES 2.0 application under Linux
This tutorial shows how to develop a simple OpenGL ES 2.0 application with LTIB and an i.MX51 EVK board. The tutorial can be adapted to the i.MX53, which shares the same 3D GPU core (Z430) and API (OpenGL ES 2.0).
View full article
NFS
Network File System (NFS) is a network file system protocol originally developed by Sun Microsystems in 1984, allowing a user on a client computer to access files over a network as easily as if the network devices were attached to its local disks. The use of NFS makes development of user-space applications easy and fast, since the entire target root file system is located on the host (PC), where the applications can be developed and cross-compiled for the target system. The target system uses this file system located on the host as if it were located on the target. The NFS service is used to export the root file system from the host to the target. NFS resources are listed below:
All Boards Deploy NFS
All Boards NFS on Fedora / NFS on Fedora
All Boards NFS on Slackware / NFS on Slackware
All Boards NFS on Ubuntu / NFS on Ubuntu
View full article
On power-up of a system, the bootloader performs initial hardware configuration and is responsible for loading the Linux kernel into memory. Several bootloaders are available that support i.MX SoCs:
Barebox (http://www.barebox.org/)
RedBoot (http://ecos.sourceware.org/redboot/)
U-Boot (http://www.denx.de/wiki/U-Boot/)
Qi (http://wiki.openmoko.org/wiki/Qi)
View full article
Introduction to Linux
In 1989, a Finnish student named Linus Torvalds started working on improvements to the kernel of Minix, a Unix-like operating system written by Andrew Tanenbaum, calling his own kernel Linux, a mix of "Linus" and "Minix". The main idea behind this new kernel is that it is free: anyone can improve the kernel, share the software, change it for a specific application and distribute it without any fee or restriction. All the code is open. In 1991, Linus released the first official Linux version, which later joined Richard Stallman's GNU project in 1992 with the objective of producing the complete operating system that we know today. Linux was quickly successful on the x86 architecture, was soon ported to various other processor architectures, and became very popular in embedded devices in many different applications such as automotive, industrial, telecommunications, consumer and internet appliances. The use of Linux in embedded devices has many advantages:
- Supports many devices, file systems and networking protocols
- High portability, since it has been ported to many CPU architectures
- Large number of ready applications
- Mature source code with constant updates
- Large community of programmers and users around the world testing and adding new features

Which Linux distribution should I install on my PC?
There are many Linux distributions available for free. Below is a list of some of them:

Ubuntu (IPA: [uːˈbuːntuː] in English, [ùbúntú] in Zulu) is an operating system for desktops, laptops, and servers. It has consistently been rated among the most popular of the many Linux distributions. Ubuntu's goals include providing an up-to-date yet stable Linux distribution for the average user and having a strong focus on usability and ease of installation. It is a derivative of Debian, another popular distribution. Ubuntu is sponsored by Canonical Ltd, owned by South African entrepreneur Mark Shuttleworth. The name of the distribution comes from the African concept of ubuntu, which may be rendered roughly as "humanity toward others", "we are people because of other people", or "I am who I am because of who we all are", though other meanings have been suggested. This Linux distribution is named as such to bring the spirit of the philosophy to the software world. Ubuntu is free software and can be shared by any number of users. Kubuntu and Xubuntu are official subprojects of the Ubuntu project, aiming to bring the KDE and Xfce desktop environments, respectively, to the Ubuntu core (by default Ubuntu uses GNOME for its desktop environment). Edubuntu is an official subproject designed for school environments, and should be equally suitable for children to use at home. Gobuntu is an official subproject that is aimed at adhering strictly to the Free Software Foundation's Four Freedoms. The newest official subproject is JeOS; Ubuntu JeOS (pronounced "Juice") is a concept for what an operating system should look like in the context of a virtual appliance. Ubuntu releases new versions every six months, and supports those releases for 18 months with daily security fixes and patches to critical bugs. LTS (Long Term Support) releases, which occur every two years, are supported for 3 years for desktops and 5 years for servers. The most recent LTS version, Ubuntu 10.04 LTS (Lucid Lynx), was released on 29 April 2010. The current non-LTS version is 10.10 (Maverick Meerkat), released on 10/10/10 (10 October 2010).
Mandriva Linux (formerly Mandrakelinux or Mandrake Linux) is a Linux distribution created by Mandriva (formerly Mandrakesoft). It uses the RPM Package Manager. The product lifetime of Mandriva Linux releases is 18 months for base updates and 12 months for desktop updates.

openSUSE (pronounced /ˌoʊpɛnˈsuːzə/) is a community project, sponsored by Novell and AMD, to develop and maintain a general-purpose Linux distribution. After acquiring SUSE Linux in January 2004, Novell decided to release the SUSE Linux Professional product as a 100% open source project, involving the community in the development process. The initial release was a beta version of SUSE Linux 10.0, and as of October 2007 the current stable release is openSUSE 10.3. Beyond the distribution, openSUSE provides a web portal for community involvement. The community assists in developing openSUSE collaboratively with representatives from Novell by contributing code through the open Build Service, writing documentation, designing artwork, fostering discussion on open mailing lists and in Internet Relay Chat channels, and improving the openSUSE site through its wiki interface. Novell markets openSUSE as the best, easiest distribution for all users. Like most distributions it includes both a default graphical user interface (GUI) and a command line interface option; it allows the user (during installation) to select which GUI they are comfortable with (either KDE, GNOME or XFCE), and supports thousands of software packages across the full range of open source development.

Fedora is an RPM-based, general-purpose Linux distribution, developed by the community-supported Fedora Project and sponsored by Red Hat. Fedora's mission statement is: "Fedora is about the rapid progress of Free and Open Source software." One of Fedora's main objectives is not only to contain free and open source software, but also to be on the leading edge of such technologies. Also, developers in Fedora prefer to make upstream changes instead of applying fixes specifically for Fedora; this ensures that updates are available to all Linux distributions.

Debian (pronounced [ˈdɛbiən]) is a computer operating system (OS) composed entirely of software which is both free and open source (FOSS). Its primary form, Debian GNU/Linux, is a popular and influential Linux distribution. It is a multipurpose OS; it can be used as a desktop or server operating system. Debian is known for strict adherence to the Unix and free software philosophies. Debian is also known for its abundance of options: the current release includes over twenty-six thousand software packages for eleven computer architectures. These architectures range from the Intel/AMD 32-bit/64-bit architectures commonly found in personal computers to the ARM architecture commonly found in embedded systems and the IBM eServer zSeries mainframes. Throughout Debian's lifetime, other distributions have taken it as a basis to develop their own, including Ubuntu, MEPIS, Dreamlinux, Damn Small Linux, Xandros, Knoppix, Linspire, sidux, Kanotix, and LinEx, among others. A university study concluded that Debian's 283 million source code lines would cost 10 billion US dollars to develop by proprietary means. Prominent features of Debian are its APT package management system, its strict policies regarding its packages and the quality of its releases.
These practices afford easy upgrades between releases and easy automated installation and removal of packages. Debian uses an open development and testing process. It is developed by volunteers from around the world and supported by donations through SPI, a non-profit umbrella organization for various free software projects. The default install provides popular programs such as OpenOffice, Iceweasel (a rebranding of Firefox), Evolution mail, CD/DVD writing programs, music and video players, image viewers and editors, and PDF viewers. Only the first CD/DVD is necessary for the default install; the remaining discs contain all 26,000+ extra programs and packages currently available. If a user does not wish to download the CDs/DVDs, these extras can be downloaded and installed separately using the package manager. Debian can also be configured to download and install updates automatically.

Slackware is a Linux distribution created by Patrick Volkerding of Slackware Linux, Inc. Slackware was one of the earliest distributions, and is the oldest currently being maintained. Slackware aims for design stability and simplicity, and to be the most Unix-like GNU/Linux distribution.
View full article
Some customers want to expose their I3C device under /dev in order to develop an I3C application or to operate the I3C device like an I2C one. Our default BSP code does not support this feature for I3C devices, so this article introduces how to expose an I3C device to user space.

Board: i.MX 93 EVK
BSP version: lf-6.1.55-2.2.0
I3C device: LSM6DSOXTR

Step 1: Rework the i.MX93 EVK board and install R1010.
Step 2: Apply the add_i3c_device_to_dev.patch file to the Linux kernel code.
             Command: git apply add_i3c_device_to_dev.patch
Step 3: Re-compile the kernel Image file.
             Command: make imx_v8_defconfig
                      make
Step 4: Boot your board with the "imx93-11x11-evk-i3c.dtb" file and check whether the I3C device appears in the /dev directory.

Result: The I3C device appears in the /dev directory. The i2c-8 node is an I2C device mounted on the I3C bus; I3C is backward compatible with I2C and generates I2C signaling when an I2C device is present on the bus.

PS: You can also use i2c-tools (for example i2cdetect) to probe the i2c-8 bus.

Note: If you need the patch file, please contact me any time, free of charge.
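As a complement to the article (this is not part of the original patch), a minimal user-space sketch in C that reads the sensor through the /dev/i2c-8 node created on the I3C bus might look like the following. The device address (0x6A), register (0x0F, WHO_AM_I) and expected value are assumptions based on the LSM6DSOX datasheet; check them for your board:

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/i2c-dev.h>

int main(void)
{
    /* open the I2C-compatible node exposed on the I3C bus */
    int fd = open("/dev/i2c-8", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    /* LSM6DSOX default 7-bit address (assumption, check SA0 strapping) */
    if (ioctl(fd, I2C_SLAVE, 0x6A) < 0) { perror("ioctl"); close(fd); return 1; }

    unsigned char reg = 0x0F;   /* WHO_AM_I register */
    unsigned char val = 0;
    if (write(fd, &reg, 1) != 1 || read(fd, &val, 1) != 1) {
        perror("transfer");
        close(fd);
        return 1;
    }

    printf("WHO_AM_I = 0x%02X\n", val);   /* 0x6C is expected for the LSM6DSOX */
    close(fd);
    return 0;
}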
View full article
Just sharing some experiences gathered during development and study. Although some hardware shows up here, the focus is on software, and the goal is to speed up development on your own hardware.

02/07/2024  Device Tree Standalone Compile under Windows  https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/Device-Tree-Standalone-Compile-under-Windows/ta-p/1855271
02/07/2024  i.MX8X security overview and AHAB deep dive
11/23/2023  "Standalone" Compile Device Tree  https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/Standalone-Compile-Device-Tree/ta-p/1762373
10/26/2023  Linux Dynamic Debug  https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/Linux-Dynamic-Debug/ta-p/1746611
08/10/2023  u-boot environment preset for sdcard mirror
06/06/2023  all(bootloader, device tree, Linux kernel, rootfs) in spi nor demo imx8qxpc0 mek
09/26/2022  parseIVT - a script to help i.MX6 Code Signing (provides a version that runs under Windows)
09/16/2022  create sdcard mirror under windows
08/03/2022  i.MX8MM SDCARD Secondary Boot Demo  https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/i-MX8MM-SDCARD-Secondary-Boot-Demo/ta-p/1500011
02/16/2022  mx8_ddr_stress_test without UI  https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/mx8-ddr-stress-test-without-UI/ta-p/1414090
12/23/2021  i.MX8 i.MX8X Board Reset  https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/i-MX8-i-MX8X-Board-Reset/ta-p/1391130
12/21/2021  regulator userspace-consumer  https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/regulator-userspace-consumer/ta-p/1389948
11/24/2021  crypto af_alg blackkey demo
09/28/2021  u-boot runtime modify Linux device tree(dtb)
08/17/2021  gpio-poweroff demo  https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/gpio-poweroff-demo/ta-p/1324306
08/04/2021  How to use gpio-hog demo  https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/How-to-use-gpio-hog-demo/ta-p/1317709
07/14/2021  SWUpdate OTA i.MX8MM EVK / i.MX8QXP MEK  https://community.nxp.com/t5/i-MX-Processors-Knowledge-Base/SWUpdate-OTA-i-MX8MM-EVK-i-MX8QXP-MEK/ta-p/1307416
04/07/2021  i.MX8QXP eMMC Secondary Boot  https://community.nxp.com/t5/i-MX-Community-Articles/i-MX8QXP-eMMC-Secondary-Boot/ba-p/1257704#M45
03/25/2021  sc_misc_board_ioctl to access the M4 partition from A core side
03/17/2021  How to Change i.MX8X MEK+Base Board Linux Debug UART  https://community.nxp.com/t5/i-MX-Community-Articles/How-to-Change-i-MX8X-MEK-Base-Board-Linux-Debug-UART/ba-p/1246779#M43
03/16/2021  How to Change i.MX8MM evk Linux Debug UART  https://community.nxp.com/t5/i-MX-Community-Articles/How-to-Change-i-MX8MM-evk-Linux-Debug-UART/ba-p/1243938#M40
05/06/2020  Linux fw_printenv fw_setenv to access U-Boot's environment variables
03/30/2020  i.MX6 DDR calibration/stress for Mass Production  https://community.nxp.com/docs/DOC-346065
03/25/2020  parseIVT - a script to help i.MX6 Code Signing  https://community.nxp.com/docs/DOC-345998
02/17/2020  Start your machine learning journey from tensorflow playground
01/15/2020  How to add iMX8QXP PAD(GPIO) Wakeup
01/09/2020  Understand iMX8QX Hardware Partitioning By Making M4 Hello world Running Correctly  https://community.nxp.com/docs/DOC-345359
09/29/2019  Docker On i.MX6UL With Ubuntu16.04  https://community.nxp.com/docs/DOC-344462
09/25/2019  Docker On i.MX8MM With Ubuntu  https://community.nxp.com/docs/DOC-344473
09/25/2019  Docker On i.MX8QXP With Ubuntu  https://community.nxp.com/docs/DOC-344474
08/28/2019  eMMC5.0 vs eMMC5.1  https://community.nxp.com/docs/DOC-344265
05/24/2019  How to upgrade Linux Kernel and dtb on eMMC without UUU
04/12/2019  eMMC RPMB Enhance and GP  https://community.nxp.com/docs/DOC-343116
04/04/2019  How to Dump a GPT SDCard Mirror (Android O SDCard Mirror)  https://community.nxp.com/docs/DOC-343079
04/04/2019  i.MX Create Android SDCard Mirror  https://community.nxp.com/docs/DOC-343078
04/02/2019  i.MX Linux Binary_Demo Files Tips  https://community.nxp.com/docs/DOC-343075
04/02/2019  Update "Set fast boot" in eMMC_RPMB_Enhance_and_GP.pdf
02/28/2019  imx_builder --- standalone build without Yocto  https://community.nxp.com/docs/DOC-342702
08/10/2018  i.MX6SX M4 MPU Settings For RPMSG update (updated slide: CMA Arrangement Consideration)  i.MX6SX_M4_MPU_Settings_For_RPMSG_08102018.pdf
07/26/2018  Understand ML With Simplest Code  https://community.nxp.com/docs/DOC-341099
04/23/2018  i.MX8M Standalone Build  i.MX8M Standalone Build.pdf
04/13/2018  i.MX6SX M4 MPU Settings For RPMSG update (added slide: CMA Arrangement Consideration)  i.MX6SX_M4_MPU_Settings_For_RPMSG_04132018.pdf
09/05/2017  Update eMMC RPMB, Enhance and GP  eMMC_RPMB_Enhance_and_GP.pdf
09/01/2017  eMMC RPMB, Enhance and GP  eMMC_RPMB_Enhance_and_GP.pdf
08/30/2017  Dual LVDS for High Resolution Display (for i.MX6DQ/DLS)  Dual LVDS for High Resolution Display.pdf
08/27/2017  L3.14.28 Ottbox Porting Notes  L3.14.28_Ottbox_Porting_Notes-20150805-2.pdf
            MFGTool Uboot Share With the Normal Run One  MFGTool_Uboot_share_with_NormalRun_sourceCode.pdf
            Mass Production with programmer  Mass_Production_with_NAND_programmer.pdf, Mass_Production_with_emmc_programmer.pdf
            AndroidSDCARDMirrorCreator  https://community.nxp.com/docs/DOC-329596
            L3.10.53 PianoPI Porting Note  L3.10.53_PianoPI_PortingNote_151102.pdf
            Audio Codec WM8960 Porting L3.10.53 PianoPI  AudioCodec_WM8960_Porting_L3.10.53_PianoPI_151012.pdf
            TouchScreen PianoPI Porting Note  TouchScreen_PianoPI_PortingNote_151103.pdf
            Accessing GPIO From UserSpace  Accessing_GPIO_From_UserSpace.pdf, https://community.nxp.com/docs/DOC-343344
            FreeRTOS for i.MX6SX  FreeRTOS for i.MX6SX.pdf
            i.MX6SX M4 fastup  i.MX6SX M4 fastup.pdf
            i.MX6 SDCARD Secondary Boot Demo  i.MX6_SDCARD_Secondary_Boot_Demo.pdf
            i.MX6SX M4 MPU Settings For RPMSG  i.MX6SX_M4_MPU_Settings_For_RPMSG_10082016.pdf
            Security  Security03172017.pdf  (NOT related to i.MX, only a short memo)
View full article