For a couple of months I’ve been working with the K64F microcontroller for my master’s dissertation (I am an electro-mechanical engineer specializing in automation). The goal of the thesis is to create a system that samples underwater sound at 300 kHz (a period of 3.333 µs), checks the frequency content (with an RFFT) and, if certain criteria are met, writes the raw data away (as a .wav file) to an SD card. Energy consumption is an important factor as well. I am currently using the MCUXpresso IDE for the microcontroller software.
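For perspective on the timing budget: assuming the core runs at the K64F’s maximum of 120 MHz (the actual clock depends on the project’s clock configuration), everything done per sample has to fit in the cycles between two conversions. A quick back-of-the-envelope sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Cycle budget per sample at a given core clock and sample rate.
   120 MHz is the K64F's maximum core clock; substitute the clock
   your project actually configures. */
static uint32_t cycles_per_sample(uint32_t core_hz, uint32_t sample_hz)
{
    return core_hz / sample_hz;
}
```

At 300 kHz this leaves roughly 400 cycles per sample for everything: interrupt entry/exit (about 12 cycles each on a Cortex-M4), the ADC read, buffering, plus a fair share of the RFFT and the SD-card writes.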
The code should work as follows. There are several input buffers (arrays of signed 16-bit integers, since the ADC output is 16 bit) the size of the RFFT (1024, 2048 or 4096 samples) and two larger output buffers. When one input buffer is full, the next one starts filling. Meanwhile, the data in the first input buffer is processed: the RFFT is computed, the criteria are checked and, if they are met, the data is copied to an output buffer. When an output buffer, say ten times the size of an input buffer, is completely full, its contents are written to the SD card. The sample timing is currently realized with the PIT, the ADC conversion happens in the interrupt routine triggered by the PDB, the RFFT uses the CMSIS library, and the SD-card writes are based on the SDK (2.5) example.
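A minimal host-side sketch of that double-buffering scheme (all names and sizes here are illustrative, not from the actual project; the real ISR, RFFT call and SD-card write are reduced to comments):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define FFT_SIZE     1024          /* RFFT length: 1024, 2048 or 4096 */
#define NUM_IN_BUFS  2             /* ping-pong pair of input buffers */
#define OUT_FACTOR   10            /* output buffer = 10 input buffers */

static int16_t in_buf[NUM_IN_BUFS][FFT_SIZE];
static int16_t out_buf[OUT_FACTOR * FFT_SIZE];

static volatile int fill_idx  = 0;  /* buffer the ISR is filling      */
static volatile int fill_pos  = 0;  /* next free slot in that buffer  */
static volatile int ready_idx = -1; /* buffer ready for RFFT, -1=none */
static int out_pos = 0;             /* write position in out_buf      */

/* Called once per ADC conversion (in the real system: from the ISR).
   When a buffer fills, hand it to the main loop and switch buffers. */
static void push_sample(int16_t s)
{
    in_buf[fill_idx][fill_pos++] = s;
    if (fill_pos == FFT_SIZE) {
        ready_idx = fill_idx;                  /* hand over full buffer */
        fill_idx  = (fill_idx + 1) % NUM_IN_BUFS;
        fill_pos  = 0;
    }
}

/* Main-loop side: process a pending buffer, if any.  Returns 1 when
   the output buffer just became full (time to flush it to the SD card). */
static int process_pending(int criteria_met)
{
    if (ready_idx < 0)
        return 0;
    /* ... run the RFFT on in_buf[ready_idx] and evaluate criteria ... */
    if (criteria_met) {
        memcpy(&out_buf[out_pos], (const void *)in_buf[ready_idx],
               FFT_SIZE * sizeof(int16_t));
        out_pos += FFT_SIZE;
    }
    ready_idx = -1;
    if (out_pos == OUT_FACTOR * FFT_SIZE) {
        out_pos = 0;   /* in the real system: write out_buf to SD here */
        return 1;
    }
    return 0;
}
```

With only two input buffers, the RFFT plus the criteria check must finish within one buffer-fill period (FFT_SIZE × 3.333 µs), otherwise the ISR overwrites a buffer that has not been processed yet; more input buffers buy slack for the occasional slow SD-card write.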
Even so, I doubt the microcontroller can handle the speed. I have done some timing tests where I set a pin high at the start of a function and low at the end, so I can observe on an oscilloscope what the system is doing. When I time each function separately (say the interrupt routine with the ADC conversion, the RFFT calculation and the SD-card write are the three main functions), the system should be able to handle everything with time to spare. But as soon as I put the code together, it goes wrong: some things take far more time than intended. For example, clearing the PIT flag and waiting until the AD conversion is done takes about 1 µs, but reading the actual conversion value, converting it to a float32_t and writing it to one of the input buffers suddenly takes another 1.3 µs. The result is an interrupt routine that takes far longer than intended (2.3 µs instead of 1 µs), which can only mean the microcontroller cannot keep up, since one sample must be taken every 3.33 µs. This is just one example of where it goes wrong.
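One common way to shrink that 1.3 µs is to keep the ISR down to a bare raw-sample store (one 16-bit read and write) and do the int16-to-float conversion in bulk, once per full buffer, in the main loop. On the target, CMSIS-DSP’s arm_q15_to_float() does exactly this bulk job (and arm_rfft_q15 would avoid floats entirely). A portable sketch of the bulk conversion, assuming the standard q15 scaling:

```c
#include <assert.h>
#include <math.h>
#include <stdint.h>

/* Bulk q15 -> float conversion, done once per buffer outside the ISR
   instead of once per sample inside it.  Equivalent in scaling to
   CMSIS-DSP's arm_q15_to_float(): maps [-32768, 32767] to [-1, 1). */
static void q15_buf_to_float(const int16_t *src, float *dst, int n)
{
    for (int i = 0; i < n; i++)
        dst[i] = (float)src[i] / 32768.0f;
}
```

Moving the conversion out of the ISR also lets the compiler vectorize the loop, and it keeps the per-sample interrupt cost close to the minimum Cortex-M4 entry/exit overhead.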
I could ask many questions about specific problems I have had to deal with, but my one big question is: can my microcontroller handle this speed? If not, are there other microcontrollers (preferably also programmable with the MCUXpresso IDE) with better processor speed and good energy efficiency? And if it should be able to handle the speed, do I need to switch to another approach to get maximum efficiency?