University Programs Knowledge Base

After completing the LED, motor control, and servo tutorials, students should be comfortable with most of the subjects needed to enable the line scan camera and read data from it. The line scan camera module consists of a CMOS linear sensor array of 128 pixels and an adjustable lens, giving a 1x128 resolution. The camera is mounted on a boom above the car to maximize the field of view. The angle of orientation about the pivot at the top of the boom sets the "look ahead" distance of the camera and enables more efficient steering algorithms.

Solution Overview

One method of implementation is to take the entire readout of the camera and store it in memory. A line detection algorithm can then be used to locate the position of the black line. Because of varying lighting conditions, some level of pixel thresholding may be necessary, since the intensity differences across the data will not always give a clear indication of the line location. A good approach is an algorithm that looks for changes in the magnitude of voltage from one portion of the array to another, since the camera's AO magnitude is directly related to the brightness sensed by the pixel array. If the microcontroller finds a significant decrease in magnitude followed by a large increase in magnitude, this is a good indication of the location of the line; a derivative (difference) function can be used for this. Once the position of the black line has been determined, immediately adjust the wheels to steer the car so that the black line remains in the center of the camera's view.

Sample camera output (for illustrative purposes only)

The camera outputs an analog signal from 0 to 5 V depending on the grey-scale value of the image. To simplify the sample, assume that limits have been set for the line and that the data has been converted to digital bits using a threshold value: 0s are high intensity (non-line locations), 1s are low intensity (black or line locations).

10000000000000000000000000000000001111101000000000000000000010000000000000000

Since the camera provides a 128x1-bit picture and points down at a track of fixed width, a control algorithm should be developed to line up the 1s in the center of the 128 bits. The center of the field of view will require calibration and testing, but it is assumed that the camera remains in a fixed location pointing down the center of the forward-looking axis of rotation.

Usage

For normal operation of the camera, the following signals must be produced and processed:

CK (clock) - latches SI and clocks pixels out (low to high); continuous signal
SI (serial input to sensor) - begins a scan / exposure; discrete pulses, and each pulse must go low before the rising edge of the next clock pulse
AO (analog output) - analog pixel output from the sensor (0 to Vdd), or tri-stated

The CK and SI signals are simple on/off signals that can be produced using GPIO pins, setting each pin high and low according to the desired exposure time of the camera. The only other requirement is to read the analog output of the camera, which requires initializing the analog module and setting it to the proper pinout.
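To make the flow above concrete, here is a minimal sketch in C of one way to clock a frame out of the sensor and then locate the line with a simple difference (derivative) check. The helper routines si_pin_write(), clk_pin_write() and adc_read_ao() are hypothetical placeholders for the GPIO and ADC code from the earlier tutorials, the edge timing is only illustrative (settling delays are omitted; follow the datasheet timing diagram), and the threshold value must be tuned on your own track.

    #include <stdint.h>

    #define NUM_PIXELS 128

    /* Hypothetical board-support helpers: replace these with the GPIO and
       ADC routines from your own LED and ADC tutorial code. */
    extern void     si_pin_write(int level);   /* drive the SI line           */
    extern void     clk_pin_write(int level);  /* drive the CK line           */
    extern uint16_t adc_read_ao(void);         /* sample the AO analog output */

    static uint16_t pixels[NUM_PIXELS];

    /* Clock one full frame out of the sensor: pulse SI, then issue 128 clock
       cycles, sampling AO once per pixel. Delays between edges are omitted. */
    void camera_read_frame(void)
    {
        int i;

        si_pin_write(1);              /* SI high before the first clock edge */
        clk_pin_write(1);             /* rising edge latches SI              */
        si_pin_write(0);              /* SI must fall before the next clock  */
        pixels[0] = adc_read_ao();    /* pixel 0 is valid after the SI clock */
        clk_pin_write(0);

        for (i = 1; i < NUM_PIXELS; i++) {
            clk_pin_write(1);
            pixels[i] = adc_read_ao();
            clk_pin_write(0);
        }

        clk_pin_write(1);             /* extra clock returns AO to tri-state */
        clk_pin_write(0);
    }

    /* Locate the line as the midpoint between the largest brightness drop and
       the largest brightness rise (a dark line on a light track). Returns the
       pixel index of the line center, or -1 if no convincing line is found. */
    int camera_find_line(uint16_t threshold)
    {
        int i, fall = -1, rise = -1;
        int min_d = 0, max_d = 0;

        for (i = 1; i < NUM_PIXELS; i++) {
            int d = (int)pixels[i] - (int)pixels[i - 1];
            if (d < min_d) { min_d = d; fall = i; }   /* bright -> dark edge */
            if (d > max_d) { max_d = d; rise = i; }   /* dark -> bright edge */
        }

        if (fall < 0 || rise < 0 || fall >= rise ||
            -min_d < (int)threshold || max_d < (int)threshold)
            return -1;

        return (fall + rise) / 2;
    }

In use, camera_read_frame() would be called once per SI period and camera_find_line() would feed the steering control loop; the ignore-the-end-pixels and thresholding refinements discussed later in this article apply directly to this sketch.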
Actual camera output is shown below (yellow = SI, green = camera signal, purple = clock). More camera waveforms and information (PowerPoint) are available here. A video of the camera connected to the oscilloscope is available at http://www.youtube.com/watch?v=YOAd3ERnXiQ. To obtain this signal, connect channels 1, 2 and 3 of an oscilloscope to the SI pulse (trigger off this signal), CLK and AO signals.

GPIO details are provided in the LED tutorial. The timing for creating and reading the signals is crucial and is detailed in the diagram below; this information can also be found in the line scan camera datasheet.

Analog Read

The analog output (AO) signal from the camera needs to be processed and read by the microcontroller's analog-to-digital converter (ADC). The ADC converts a continuous signal into a discrete number proportional to the signal voltage. An 8-bit ADC has 256 discrete levels (2^8): if an analog signal between 0 and 5 volts is sampled, a value of 0 corresponds to 0 volts, a value of 255 corresponds to 5 volts, and a value such as 145 corresponds to about 2.8 volts. The maximum sample rate is limited by the microcontroller. With the ADC peripheral and the chip's pin multiplexer configured properly, a pin will read in an analog signal when the conversion function is called. More details on analog-to-digital converters can be found on Wikipedia here.
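As a quick sanity check of the numbers above, the count-to-voltage conversion is a single multiply and divide. The sketch below is host-side demonstration code only; the macro names and the 5 V reference are assumptions for illustration, not part of any Freescale library.

    #include <stdint.h>
    #include <stdio.h>

    #define ADC_MAX_COUNT 255u    /* 8-bit converter: 2^8 - 1 levels        */
    #define VREF_MV       5000u   /* assumed 5.0 V reference, in millivolts */

    /* Convert a raw 8-bit ADC count to millivolts. */
    static uint32_t adc_counts_to_mv(uint8_t counts)
    {
        return ((uint32_t)counts * VREF_MV) / ADC_MAX_COUNT;
    }

    int main(void)
    {
        /* 145 * 5000 / 255 = 2843 mV, i.e. roughly the 2.8 V quoted above. */
        printf("145 counts = %lu mV\n", (unsigned long)adc_counts_to_mv(145));
        return 0;
    }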
Read/Write

In write mode, a GPIO pin can be set, cleared, or toggled via software-initiated register settings.

Microcontroller Reference Manual: Analog to Digital Converter

You will find high-level information about GPIO usage in several different areas of a reference manual. See the reference-manual article for more general information. Relevant chapters (see the GPIO chapters for clock and SI creation):

Introduction
System Modules: System Integration Module (SIM), which provides system control and chip configuration registers
Chip Configuration
Signal Multiplexing: port control and interrupts

Hardware

The device discussed in this tutorial is the Line Scan Camera featuring the TAOS 1401.

Focusing the camera

Once the sensor is working, the next step is to find the lens position that produces the clearest image. The best way to do this is with an oscilloscope:

Connect the SI and AO signals to the oscilloscope.
Set up the SI pulse so that it can be seen clearly, then trigger the AO signal off the SI signal using the trigger function.
Point the camera at a sheet of paper with a black line in the center.
The image of the black line will appear on the oscilloscope screen.
Turn the lens until you find the position where the line appears sharpest.

Camera Circuit

Five wires must be connected: ground, power, SI, CLK, and AO.

Camera Limitations

According to the datasheet: "The sensor consists of 128 photodiodes arranged in a linear array. Light energy impinging on a photodiode generates photocurrent, which is integrated by the active integration circuitry associated with that pixel. During the integration period, a sampling capacitor connects to the output of the integrator through an analog switch. The amount of charge accumulated at each pixel is directly proportional to the light intensity and the integration time."

Integration Time

T = (1/fmax)*(n - 18) pixels + 20 us, where n is the number of pixels.
Minimum integration time: 33.75 us
Maximum integration time: the sampling capacitors will saturate if the integration time exceeds 100 ms
Clock frequency range: 5 kHz to 8 MHz (8 MHz is fmax in the equation above)

The integration time is the period between the 19th CLK cycle and the next SI pulse. The CLK frequency itself has little to do with the integration time: on each rising edge the clock simply outputs one of the previously sampled intensity values. This means the integration time should be set by varying the time between SI pulses, not by changing the clock frequency. Make the CLK frequency high, and allow as much time as needed between two SI pulses to obtain the desired intensity values.
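A short worked example of the integration-time equation follows; the function and constant names are illustrative only, not taken from any library.

    #include <stdio.h>

    #define NUM_PIXELS 128

    /* T = (n - 18) / f_clk + 20 us, from the datasheet equation above. */
    static double integration_time_us(double clk_hz, int n_pixels)
    {
        return (n_pixels - 18) * 1.0e6 / clk_hz + 20.0;
    }

    int main(void)
    {
        /* At the maximum clock of 8 MHz: (128 - 18) / 8 MHz + 20 us = 33.75 us,
           which is the minimum integration time quoted above. In practice the
           exposure is then lengthened by spacing the SI pulses further apart
           while the clock keeps running at a fixed frequency. */
        printf("minimum integration time: %.2f us\n",
               integration_time_us(8.0e6, NUM_PIXELS));
        return 0;
    }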
Helpful Hints

Light can be transmitted through the PCB on the back of the camera. This unwanted extra light shining on the CMOS linear sensor can introduce significant errors into the received signal. A shroud or housing for the camera unit can easily eliminate this problem; one of the easiest solutions is to place a piece of electrical tape across the back of the camera in the highlighted area indicated in the picture below.

When testing the car on the track or transporting it, it is not uncommon for the camera focus to loosen or change. After adjusting your camera focus for maximum performance, make a mark (for example with a metallic sharpie) between the lens and its body so you can easily realign the lens to its proper position if it does shift.

When hooking up the line scan camera, regardless of position or focus there is a drop-off at each end of the image data, which is easily viewed with an oscilloscope. This effect is undesirable, particularly when you are finding the line position with a derivative approach: the fall-off causes erroneous derivative values and hence a poor line position solution. Two solutions we found useful were: (1) ignore the first 10-15 pixels and last 10-15 pixels of the image data array before determining the line position; (2) when deciding where the line is, use a threshold value for the difference in the derivative, together with a binary threshold on the camera data. Note that the fall-off depends on camera focus, position, and so on, so the threshold values and the number of pixels to ignore are specific to each setup, but the problem itself is common to the camera.

Saving previous line position values

Since the camera can read the line very quickly while the servo can only update every 20 ms, there are multiple camera reads before the servo can update. If you read the camera quickly and then overwrite the readings without saving them in some form, those camera reads are wasted and might as well not have occurred. What can help is to create a filter by bringing new values into an array along with previous values and performing some form of averaging. The following code takes the new line position value and places it in a 1xA array, where A is defined by CAMERA_AVG. No averaging occurs here; the camera values are simply saved in an array, and what is done with them is up to you (one possible averaging filter is sketched at the end of this article). The code shifts the entire array so that the oldest data point is discarded to make room for the new line position at the other end of the array. It only adds the new value if one is available; if not, the previous first-position value is carried forward.

CAMERA_AVG => an integer value for the length of the averaging window
gfpLineAverage => global floating-point array of camera center line values
fpLinePos => the center line position returned from the camera read
ReadCamera() => the camera read function; returns fpLinePos as a floating-point value (0 if no line was detected)

    // Shift the saved values up by one, discarding the oldest reading,
    // then add a new reading.
    for (i = CAMERA_AVG - 1; i > 0; i--)
    {
        gfpLineAverage[i] = gfpLineAverage[i - 1];
    }
    // If no line was detected, ReadCamera() returns 0 and the previous
    // value is carried forward in gfpLineAverage[0].
    if ((fpLinePos = ReadCamera()) != 0)
    {
        gfpLineAverage[0] = fpLinePos;
    }

For example, an array of center line position values ranging from 0-127 could evolve as follows:

Initial values: [51 50 52 54 58 55]
New position of 45 read: [45 51 50 52 54 58]
New position of 44 read: [44 45 51 50 52 54]
No value read: [44 44 45 51 50 52]
No value read: [44 44 44 45 51 50]
New position of 50 read: [50 44 44 44 45 51]

Tutorials

Line Scan Camera: Kinetis ARM Cortex-M4 Tutorial - specifics of how to configure the K40 ADC and create the delay code are covered in the K40: Line Scan Camera Tutorial.
Line Scan Camera: Qorivva Tutorial - specifics of how to configure and program the TRK-MPC5604B board for the line scan camera are covered in the qorivva:line-scan-camera Tutorial.

Additional Resources

Freescale app note on interfacing with a line scan camera
Freescale app note on interfacing with an RCA camera
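Returning to the line-position buffer described under Helpful Hints above: the article deliberately leaves the filtering choice open, so the following is just one possible option, a plain moving average over the saved values. The function name is made up for this sketch, and the buffer length is assumed to equal CAMERA_AVG.

    /* One possible filter: a plain moving average over the saved positions.
       A weighted or median filter may track tight curves better. */
    float GetFilteredLinePos(void)
    {
        float fSum = 0.0f;
        int i;

        for (i = 0; i < CAMERA_AVG; i++)
        {
            fSum += gfpLineAverage[i];
        }
        return fSum / (float)CAMERA_AVG;
    }

With the six-element example buffer above, the initial values [51 50 52 54 58 55] average to about 53.3, smoothing single-frame noise before the position is handed to the steering servo.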
Simple demo code for the TWR-K60D100.
Here are some special offers from our partners around the world: MathWorks support software for The Freescale Cup participants. Stay tuned... more to come.
Line scan camera data processing - Part I
In this training video we will examine some concepts in approaching a vehicle control system, including the stages in data flow and the update rates of the control software. The concept of differential steering will be introduced.
Examines the core used in the MKL25Z128VLK4 device that is mounted on the FRDM-KL25Z board. The audience will be guided through the process of acquiring documentation for both the device and the core. A brief overview of the ARM Cortex series will be presented, along with how it relates to the embedded systems landscape.
Hello Freescale Cup Teams,

MathWorks is pleased to support the 2015 Freescale Cup EMEA Competition! Take advantage of our:

Complimentary access to MATLAB & Simulink
Your team is eligible for an offer of complimentary software licenses. Your team leader or faculty advisor should review and complete the Student Competition Software Request Form http://www.mathworks.com/academia/student-competitions/software/Freescale_Cup_Offer%20of%20Complimentary%20Software%20License(s).pdf to take advantage of our software offer.

Deploy your Simulink models directly to the Freedom board and shield
MathWorks is offering hardware support for the Freescale Cup hardware (FRDM-KL25Z, FRDM-MC-SHLD). Find all relevant information at http://www.mathworks.com/hardware-support/frdm-kl25z.html and install the package without additional fees. For more information visit the hardware support page http://www.mathworks.de/hardware-support/ and the MakerZone http://makerzone.mathworks.com/ .

Interactive tutorials
There are a total of five tutorials, narrated by specialists from MathWorks, that include interactive exercises to reinforce learning on our dedicated webpage: http://www.mathworks.de/academia/student-competitions/freescale-cup/ .

Technical support
Send an email to freescalecup@mathworks.com .

We are looking forward to working with you and wish you all the best.

Best regards,
The MathWorks Student Competition Program
Introduction to basic DC motor control. The concept of an H-bridge will be shown, as well as some useful ways to control the motor. View Video Link : 1467
Continuing the demonstration of how to start a project from scratch. In this second part, we will see how to import new files into a CodeWarrior project and build it. View Video Link : 1458
In this two-part series we take a deeper look at the inner workings of a microcontroller. This video will examine a "generic" microcontroller; components that are common to most microcontrollers will be examined. View Video Link : 1453
Review the servo example code provided in the FRDM-TFC. TPM peripheral initialization and a simple driver interface will be shown. View Video Link : 1464
In this video, we will examine a commercial off-the-shelf (COTS) H-bridge IC. Example code for the FRDM-TFC will also be examined.
How to interrupt the core from a core peripheral: the SysTick timer.
Notes:

Download the zip file, which is located here: LED BLINK 96MHZ

How to set up a debug configuration and program the flash:
Click on the project in the CodeWarrior Projects menu.
Build, selecting the flash target.
Plug your K40 board into USB (the Tower is not needed for this step).
Click "Debug As"; CodeWarrior will ask which launch configuration you want: select the internal flash one.
CodeWarrior will ask "Do you want to add the Remote System to your workspace?" Click yes.
At the bottom right you will see "Launching" with a little green light, indicating that it is programming your board.
After clicking "Debug As" you will enter the Eclipse debug view; nothing will happen until you press "resume".

Test to make sure everything is working properly:
CodeWarrior typically defaults to a paused state when the debug session first starts. To test whether the code is working you will need to press "resume".

Known issue:
There is a known issue with the Kinetis chips, errata 2448. The code in our zip file already has the necessary changes made, but if you download Kinetis example code from the official Freescale site instead of using the wiki code, it may not work. Read more about the work-around here.
Project Summary

In this project, you will learn how to do basic electrical automation and control via the web. Think of the NEST, only more open and hackable! Using WebSockets, JavaScript and HTML5, you will have a simple way of viewing remote data and will be able to control some solid state relays. This framework will allow you to create more complex IoT applications. The example combines a FRDM-K64F and a FRDM-AUTO to read a temperature sensor and control a solid state relay.

Skills Developed:
Embedded systems
Networking
Electrical control systems
HTML5/JavaScript - WebSockets
SOIC8 and 1206 surface mount soldering
Internet of "Things"

Materials:
FRDM-K64F
FRDM-AUTO

Development Tools:
mbed.org
Google Chrome
Notepad++

Example Code:
mbed.org
GitHub

Step 0: Prerequisite Videos
The videos are organized into a nice YouTube playlist:
FRDM-AUTO Hardware Overview
MonkeyDo Software Overview
WebSockets & the MonkeyDo communication model
Solid state relay introduction & usage
Opto-coupler introduction & usage
MonkeyDo system demonstration

Step 1: Get a FRDM-AUTO & FRDM-K64
The build package is on the FRDM-AUTO site. Note that for this exercise you only need to build the "OPTION 1" version. Please let us know if you are interested in a pre-assembled version; if there is enough demand we will get a lot assembled for purchase, and I will get a Kickstarter going! Don't be afraid to build it yourself, soldering is fun! There is plenty of good material on the web on how to do SMT soldering. All of the parts on the board are fairly simple once you get the hang of it, and everything can be hand soldered. The key is having some decent tools.

Step 2: Put it Together
Assemble the FRDM-AUTO and K64F. When you get started, do NOT hook up anything to the solid state relays until you are sure things are working.
WARNING: Wiring to household power can be dangerous! You are 100% responsible for what you do. Be careful and never apply power until you fundamentally understand what you are wiring up!

Step 3: Download
If you have never used the mbed environment, make sure to carefully read this page. Get the "blinky" program working before you try anything else. Download the example firmware to the FRDM-K64F and make sure to press the reset button.

Step 4: Follow Along
Make sure to watch the demo video. Load the example JavaScript pages from the GitHub repo and recreate what you see in the demo video. Note: you should NOT use the WebSocket server used in the demo code. When you register for an mbed account, you automatically get your own WebSocket server channel. See "Websocket server by Mbed".

Step 5: Hack and Slash!
Make something cool! Be cool and publish your work!

Some Ideas to Extend the System:
Get the opto-couplers into the WebSocket system and see if you can report their state.
Make a basic thermostat using the temperature sensor and relay to control a heater.
Report status via the WebSockets interface.
In this video we will look at the example code provided for the FRDM-TFC for use with the mbed development environment. Alternatively, you can see the same example code as it is used with CodeWarrior here:
Using a "warp drive controller" as a fun example, this video will introduce the audience to basic hardware interfacing concepts, device register documentation and how one interacts with hardware. View Video Link : 1456
EGR280 sophomore design and ECE470/570 Microprocessor based system design at Oakland University (in South East Michigan). Using CW HC12(x) special edition and Wytec Dragon12 dev boards.
This guide provides all the participants of the Freescale Cup finals with the key information they need to get organised during the event. This is the final version.
One of the finalist vehicles from the Freescale Cup China 2012 event. In 2012, we switched the black lines to the outer edges of the track, a new twist for the students to adapt to.