We have been testing our floating-point algorithms (a lot of FFTs and matrix multiplies) on an i.MX6 Quad (Utilite Pro). We are considering moving to a Dual or
DualLite. The clock-speed reduction from 1.2 GHz to 1 GHz is straightforward to account for in terms of the expected drop in (double-precision) floating-point
performance. What we are not so sure about is the effect of the smaller L2 cache (1 MB down to 512 KB). Are there any published tests or data from
Freescale on floating-point performance versus L2 cache size? Our algorithms currently run between 600 ms and 750 ms, and we
need to ensure that we do not go over 1 s. Any help is appreciated.
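In the absence of published numbers, one rough first check is to compare each kernel's working set against the two L2 sizes: kernels whose data fits in 512 KB should see little change beyond the clock scaling, while those that fit in 1 MB but not 512 KB are the ones at risk. A minimal sketch of that arithmetic (the matrix sizes and in-place FFT model below are placeholders, not our actual workload):

```python
# Back-of-envelope working-set estimates vs. L2 size.
# All sizes here are hypothetical examples; substitute your real problem sizes.
DOUBLE = 8    # bytes per double
COMPLEX = 16  # bytes per double-precision complex value

def matmul_footprint(n):
    """Bytes touched by a naive n x n double matrix multiply (A, B, C)."""
    return 3 * n * n * DOUBLE

def fft_footprint(n):
    """Rough bytes for an in-place n-point complex-double FFT
    (data buffer plus n/2 twiddle factors)."""
    return n * COMPLEX + (n // 2) * COMPLEX

L2_1MB = 1024 * 1024   # i.MX6 Quad/Dual L2
L2_512KB = 512 * 1024  # i.MX6 DualLite L2

for n in (128, 192, 256):
    fp = matmul_footprint(n)
    print(f"{n}x{n} matmul: {fp // 1024} KB  "
          f"fits 1MB L2: {fp <= L2_1MB}  fits 512KB L2: {fp <= L2_512KB}")

for n in (4096, 16384, 65536):
    fp = fft_footprint(n)
    print(f"{n}-pt FFT:  {fp // 1024} KB  "
          f"fits 1MB L2: {fp <= L2_1MB}  fits 512KB L2: {fp <= L2_512KB}")
```

The interesting middle band (for example, a 192x192 double matmul at roughly 864 KB) fits in 1 MB but not in 512 KB; those cases would need actual benchmarking, since once the working set spills to DDR the slowdown depends on access pattern, not just cache size.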