Sorry for the "newbie" question, but... Why does ARM have GPIO and FGPIO mappings for the same IO ports? Ok, FGPIO is faster. Cool... but then, why the GPIO mapping? Why not ONLY FGPIO?
I don't think this is ARM specific but rather peripheral/bus implementation specific. The K series doesn't have FGPIO (the KL does) [but it does support bit-banding the GPIO space], and FGPIO is accessible with zero wait states by the core, so it makes sense to use it when software is accessing the GPIO. However, not all bus masters have access to the FGPIO space (e.g. DMA can only access the GPIO space, not FGPIO), so it is necessary to have both to allow flexibility; having only FGPIO would restrict certain operations (such as the DMA access that was mentioned).
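To make the "both aliases, same port" point concrete, here is a minimal sketch of how the two mappings share one register layout on a Kinetis part. The register layout is the standard Kinetis GPIO block; the two base addresses are illustrative placeholders in the style of the KL25 headers, not guaranteed for your device - check your reference manual.

```c
#include <stdint.h>
#include <stddef.h>

/* Kinetis GPIO register block -- identical layout for the normal
   GPIO alias and the fast FGPIO alias. */
typedef struct {
    volatile uint32_t PDOR;  /* Port Data Output Register    */
    volatile uint32_t PSOR;  /* Port Set Output Register     */
    volatile uint32_t PCOR;  /* Port Clear Output Register   */
    volatile uint32_t PTOR;  /* Port Toggle Output Register  */
    volatile uint32_t PDIR;  /* Port Data Input Register     */
    volatile uint32_t PDDR;  /* Port Data Direction Register */
} GPIO_Type;

/* Illustrative base addresses (placeholders -- verify against your
   device's reference manual). The core reaches the FGPIO alias with
   zero wait states; DMA can only use the normal GPIO alias. */
#define GPIOA_BASE   0x400FF000u   /* normal (bus) alias      */
#define FGPIOA_BASE  0xF80FF000u   /* fast (single-cycle) alias */

#define GPIOA   ((GPIO_Type *)GPIOA_BASE)
#define FGPIOA  ((GPIO_Type *)FGPIOA_BASE)
```

Because the layout is identical, code can be written against `GPIO_Type *` and pointed at either base, which is exactly what makes a portable library across K and KL practical.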
Thanks, I hadn't realized the K series didn't have FGPIO. I find this cumbersome! I use the KL series in some projects, the K series in others, and I have some libraries (display access and so on) that have to work on both. It's hell to use DMA/FGPIO on KL and bit-banding on K!
Another problem I see - and it seems there's no solution: bit-banding is intended to improve bit access for both I/O and RAM... but I haven't found any means of using it for RAM variables short of allocating them in a chunk of RAM "stolen" from the linker. It seems the tool chain simply does not support RAM bit-banding.
If you are controlling displays via GPIO, the speed is probably of little consequence - often the accesses have to be slowed down (e.g. for character LCDs) because otherwise the ports are too fast for the display, especially when using the faster devices.
Generally I would use FGPIO with the KL series (just set the GPIO block base address to suit; the rest is the same).
There doesn't seem to be any advantage in bit-banding GPIO writes, since the SET, CLEAR and TOGGLE registers do this anyway (with more flexibility). Bit-banded reads could save a couple of instruction cycles when the state of a single bit is to be tested, but generally the saving is unlikely to be critical.
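To illustrate why the dedicated registers already cover the write case: a sketch using the Kinetis-style SET/CLEAR/TOGGLE registers (register names follow the Kinetis convention; the port base address is left to the caller).

```c
#include <stdint.h>

/* Minimal register block for one port, Kinetis naming. */
typedef struct {
    volatile uint32_t PDOR, PSOR, PCOR, PTOR, PDIR, PDDR;
} GPIO_Type;

/* Set, clear or toggle any mask of pins in a single store --
   no read-modify-write and no bit-band alias needed, and unlike
   bit-banding this works on several pins at once. */
static inline void gpio_set(GPIO_Type *p, uint32_t mask)    { p->PSOR = mask; }
static inline void gpio_clear(GPIO_Type *p, uint32_t mask)  { p->PCOR = mask; }
static inline void gpio_toggle(GPIO_Type *p, uint32_t mask) { p->PTOR = mask; }
```

Each helper compiles to one store instruction, which is why a bit-band write to the data register buys nothing here.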
Bit-banding variables can have restrictions, since only half of the RAM (RAM_U) is in the bit-band region (RAM_L is not). This means that variables need to be located correctly to start with.
There is again a slight performance improvement when testing a bit in a variable, plus the possibility of performing a read-modify-write operation on a single bit in memory, but the saving is unlikely to be critical for anything more than very special cases.
In situations where it is advantageous to manipulate bits in a variable (or to efficiently test a bit in a variable or GPIO), it is probably best to calculate the bit-band alias address of the bit in the specific variable just once at run time - take the variable's address, compute the corresponding alias address, then add the offset for the bit - and keep the result as a pointer for further use, rather than getting the linker script involved. GPIO has a fixed alias, so a fixed address can be used there. Done this way there should be no tool-chain dependencies or restrictions.
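The alias calculation itself is architecturally defined on the Cortex-M3/M4 (so it applies to the K series; the KL's M0+ has no bit-banding). A sketch of computing the alias pointer once at run time, as suggested above:

```c
#include <stdint.h>

/* Cortex-M3/M4 bit-band alias mapping (architecturally defined):
     SRAM:       0x20000000..0x200FFFFF -> alias at 0x22000000
     Peripheral: 0x40000000..0x400FFFFF -> alias at 0x42000000
   Each bit in the bit-band region maps to one 32-bit word in the
   alias region; writing 0 or 1 to that word writes the bit. */
static volatile uint32_t *bitband_alias(volatile void *addr, unsigned bit)
{
    uint32_t a = (uint32_t)(uintptr_t)addr;
    uint32_t alias_base  = (a & 0xF0000000u) | 0x02000000u; /* 0x22.. / 0x42.. */
    uint32_t byte_offset = a & 0x000FFFFFu;                 /* within 1 MB     */
    return (volatile uint32_t *)(alias_base + byte_offset * 32u + bit * 4u);
}
```

Typical use: `volatile uint32_t *flag = bitband_alias(&status, 7);` computed once, then `*flag = 1;` or `if (*flag) ...` thereafter - no linker-script changes needed as long as the variable lands in the bit-band half of RAM.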
You're right, of course. The differences in performance aren't critical. The fact is, I'm something of a purist! I don't take well to a feature being implemented in the hardware and then not being able to use it because the software can't deal with it!
I agree wholeheartedly with this statement! It seems like 'quite a shortcoming' of ARM tool chains to NOT support the basic 'ARM hardware option' of bit-banding.
For my purposes, I made VERY good use of the 'K' bit-banding in building an in-RAM monochrome-display bitmap and 'blasting' that to my display-device over DMA-SPI. The bit-band access allowed me to create quite tight loops for line-draw and bitblt operations.
Unfortunately, I did have to 'manually allocate' the bit array at the top of RAM_U and shrink the RAM visible to the linker -- just plain cumbersome. I have other applications that use a 'large number' of 'boolean flag' arrays, and while it would be 'very nice' if the tools just used the 'most efficient' access means for these, there performance was a 'secondary' concern and I let it slide...
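For anyone wanting to try the same trick: a minimal, hypothetical sketch of addressing one pixel of a 1-bpp framebuffer through the SRAM bit-band alias on a 'K' (Cortex-M4) part. The width and buffer address here are invented for illustration; the real buffer must actually sit in the bit-band region (i.e. in RAM_U, 0x20000000..0x200FFFFF).

```c
#include <stdint.h>

#define FB_WIDTH 128u   /* hypothetical display width in pixels */

/* Returns the bit-band alias word for pixel (x, y) of a 1-bpp,
   row-major framebuffer located in bit-band SRAM. Writing 0 or 1
   to the returned word clears or sets that single pixel -- one
   store, no read-modify-write in software. */
static volatile uint32_t *pixel_alias(uint8_t *fb, unsigned x, unsigned y)
{
    uint32_t bit_index = y * FB_WIDTH + x;                   /* linear bit no. */
    uint32_t byte = (uint32_t)(uintptr_t)fb + bit_index / 8u;
    uint32_t bit  = bit_index % 8u;
    return (volatile uint32_t *)(0x22000000u
                                 + (byte - 0x20000000u) * 32u
                                 + bit * 4u);
}
/* e.g. in a line-draw loop:  *pixel_alias(fb, x, y) = 1; */
```

Since the alias write replaces a load/mask/store sequence, the inner loops of line-draw and bitblt stay tight, which matches the experience described above.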