I was testing the Wireless UART sample application on the FRDM-KW38 target. After a couple of test runs, the target sporadically stops scanning for or connecting to the phone. The test scenario is described below:
Target hardware: FRDM KW38
SDK version: 2.6.9
MCU Xpresso version: 11.1.0
Target application: frdmkw38_wireless_uart_bm
Mobile application: NXP IoT Toolbox (Android)
App pool details (same as in the sample app):
#define AppPoolsDetails_c \
_block_size_ 32 _number_of_blocks_ 4 _eol_ \
_block_size_ 80 _number_of_blocks_ 6 _eol_ \
_block_size_ 288 _number_of_blocks_ 16 _eol_ \
_block_size_ 312 _number_of_blocks_ 1 _eol_ \
_block_size_ 400 _number_of_blocks_ 2 _eol_
1. Start the Wireless UART application on the KW38.
2. Connect to Wireless UART using the NXP IoT Toolbox Android application.
3. From the phone UI, send continuous messages from a location where the signal strength is very low (some packets were dropped; sometimes the device disconnects and reconnects).
4. After several messages, the memory pool count increases to a very high value and never decreases afterwards, even after BT is disconnected.
5. Once the memory pool count reaches a high value (snapshot attached below), the BLE stack stops responding: no callbacks, and no scanning/connection is possible without restarting the FRDM KW38.
This looks like a memory leak in the BLE stack. Could you please look into this issue?
Hi, I hope you're doing well!
Could you please take a look at the following Application Note for the KW3xA/Z?
The Memory Pool Optimizer will help you calculate the appropriate size and number of blocks for the memory pools allocated at compile time for these devices, to avoid memory pool issues like the one you seem to be experiencing.
Please let me know if you need any more information.
Thank you for the response.
Yes, I had already gone through that document, and I also tried tuning the memory pools in my application, but it did not help. In any case, the problem I reported occurs with the NXP BLE sample app on the NXP KW38 Freedom board. So I request that you look into the reported problem (you may try to reproduce the issue as well) and provide a solution. When this issue occurs, the BLE stack becomes non-functional; in my experience, a system restart is the only recovery option. Kindly treat this as a critical issue.
Hi @Nidhin ,
Using the MEM_TRACKING define adds some software overhead, and in this case it is possible that the LL FW processing time for each ACL data packet takes longer than the over-the-air transmission.
Did you try running the same test without the MEM_TRACKING define?
Thank you for the response.
Yes, I ran the same test both with and without MEM_TRACKING enabled on my custom application, where I saw the same problem (the BLE stack not responding until a target restart). I then tried the same case on the NXP sample application with MEM_TRACKING enabled.
Thank you for the confirmation. We will address this issue in the next release (around mid-February).
Until then, from my point of view, the best option is to increase the number of buffers. I expect that at some point you will receive, at the application level, an internal error event with the status gBleOutOfMemory_c and the error source gL2capRxPacket_c.
When this error occurs, please look into the memTrack structure at the active buffers, specifically pCaller and requested_size, in order to increase the buffer set accordingly.
Below is the BlockTracking_t structure, with the parameters of interest highlighted.
typedef PACKED_STRUCT BlockTracking_tag
{
    void *blockAddr;                 /* Addr of the msg; note this pointer is 4 bytes past the addr in the pool, as the msg header is 4 bytes */
    uint16_t blockSize;              /* Size of the block in bytes */
    uint16_t fragmentWaste;          /* Size requested by the allocator */
    void *allocAddr;                 /* Return address of the last alloc made */
    void *freeAddr;                  /* Return address of the last free made */
    uint16_t allocCounter;           /* Number of times this msg has been allocated */
    uint16_t freeCounter;            /* Number of times this msg has been freed */
    memTrackingStatus_t allocStatus; /* 1 if currently allocated, 0 if currently free */
} BlockTracking_t;
I will keep you updated on the progress on our side.