
DAC_LDD issue with SetBufferSize() on K60 tower

Question asked by ScottW on Aug 1, 2011
Latest reply on Aug 17, 2011 by Processor Expert
I think there is an issue with the generated SetBufferSize() function for the DAC_LDD bean on the K60 tower: it appears to set the DACx_C2.DACBFUP bits to the wrong value (specifically, off by one).

I added the DAC_LDD bean to my project, set the Data Buffer property to "Enabled", and then changed the SetBuffer() and SetBufferSize() method settings to "Generate Code". The generated DA1_Init() function sets the DACBFUP bits to 0xF, since I left the buffer size at the default of 16. However, if I call DA1_SetBufferSize() with a buffer size of 16, it ends up setting these bits to 0x0. DA1_SetBufferSize() doesn't flag this as an error (it passes the if (Size > DA1_BUFFER_MAX_SIZE) test), but it then calls DAC_PDD_SetBufferSize(), where the parameter is masked with DAC_C2_DACBFUP_MASK (which is 0xF), zeroing out DACBFUP.

Is this an off-by-one error in the generated code? It seems like it should subtract one before passing the parameter to DAC_PDD_SetBufferSize(), since the DAC's buffer read pointers are zero-based.
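To illustrate what I mean, here is a minimal sketch of the suspected off-by-one. It is not the actual Processor Expert output (the real generated code and PDD macro signatures may differ); the helper functions below are hypothetical and only show how a buffer size of 16, masked with DAC_C2_DACBFUP_MASK (0x0F), collapses to 0, whereas subtracting one first gives the 0xF that DA1_Init() writes:

#include <stdint.h>

#define DA1_BUFFER_MAX_SIZE  16u
#define DAC_C2_DACBFUP_MASK  0x0Fu

/* Hypothetical: how the generated DA1_SetBufferSize() appears to behave today. */
uint8_t dacbfup_as_generated(uint8_t Size)
{
    if (Size > DA1_BUFFER_MAX_SIZE) {
        return 0xFFu;                              /* would be rejected as an error */
    }
    /* Size = 16 passes the check, but 16 & 0x0F == 0, so DACBFUP becomes 0. */
    return (uint8_t)(Size & DAC_C2_DACBFUP_MASK);
}

/* Hypothetical: what I would expect instead, since DACBFUP holds the
 * zero-based index of the last buffer word (15 for a 16-word buffer). */
uint8_t dacbfup_as_expected(uint8_t Size)
{
    if ((Size == 0u) || (Size > DA1_BUFFER_MAX_SIZE)) {
        return 0xFFu;
    }
    return (uint8_t)((Size - 1u) & DAC_C2_DACBFUP_MASK);
}

With Size = 16, the first version yields 0x0 and the second yields 0xF, which matches the value DA1_Init() writes for the default 16-word buffer.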
