RT10xx SAI basic and SDCard wave file play

1. Introduction

The NXP RT10xx audio modules are SAI, SPDIF, and MQS. The SAI module is a synchronous serial interface for audio data transmission. SPDIF is a stereo transceiver that can receive and send digital audio. MQS converts I2S audio data from SAI3 to PWM and can then drive external speakers, but in practical usage an external amplifier circuit is still needed.

Using the SAI module involves audio file playback and obtaining the audio data. Based on the MIMXRT1060-EVK board, this article gives RT10xx SAI module basic knowledge, the PCM waveform format, and audio file cutting and conversion tools, uses the MCUXpresso IDE config (peripheral) tool to create an SAI project and play the audio data, and also provides an SD card example with the FatFs file system to read a wave file and play it.

2. Basic Knowledge and the tools

Before entering the project details and testing, this section provides some SAI module knowledge, wave file format information, and audio conversion tools.

2.1 SAI module basic

The RT10xx SAI module supports I2S, AC97, TDM, and codec/DSP interfaces.

The SAI module contains a Transmitter and a Receiver; the related signals are:

    SAI_MCLK: master clock, used to generate the bit clock, master output, slave input.

    SAI_TX_BCLK: Transmit bit clock, master output, slave input

    SAI_TX_SYNC: Transmit Frame sync, master output, slave input, L/R channel select

    SAI_TX_DATA[4]: transmit data lines; lines 1-3 are shared with RX_DATA[1-3]

    SAI_RX_BCLK: receiver bit clock

    SAI_RX_SYNC: receiver frame sync

    SAI_RX_DATA[4]: receiver data line

SAI module clocks: audio master clock, bus clock, bit clock

SAI module synchronization has 3 modes:

     1) Asynchronous: the transmitter and receiver each use their own BCLK and SYNC.

     2) Transmit asynchronous, receive synchronous: the receiver uses the transmitter's BCLK and SYNC; the transmitter must be enabled first and disabled last.

     3) Transmit synchronous, receive asynchronous: the transmitter uses the receiver's BCLK and SYNC; the receiver must be enabled first and disabled last.

A valid frame sync is also ignored (slave mode) or not generated (master mode) for the first four bit-clock cycles after enabling the transmitter or receiver.

kerryzhou_0-1617962787823.png

Pic 1

SAI module clock structure:

kerryzhou_1-1617962787968.png

Pic 2

The SAI module has 3 clock sources: PLL3_PFD3, PLL5, and PLL4.

In the above picture, SAI1_CLK_ROOT can be used as the MCLK; the BCLK is derived from it:

BCLK = MCLK / ((TCR2[DIV] + 1) * 2)

Sample rate = BCLK / (bit width * channel number)
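
For example, a quick check against the configuration used later in this article (the DIV value here is an assumed illustration; the config tool calculates it from the requested sample rate): with MCLK = 6.144 MHz and TCR2[DIV] = 5, BCLK = 6.144 MHz / ((5 + 1) * 2) = 512 kHz, and the sample rate = 512 kHz / (16 bits * 2 channels) = 16 kHz.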

2.2 waveform audio file format

A WAVE file is used to save PCM-encoded data. WAVE uses the RIFF format; the smallest unit in a RIFF file is the CK (chunk) struct. ckID is the data type, and its value can be "RIFF", "LIST", "fmt ", "data", etc. A RIFF file is little-endian.

RIFF structure:

typedef unsigned long DWORD;   // 4B
typedef unsigned char BYTE;    // 1B
typedef DWORD         FOURCC;  // 4B
typedef struct {
     FOURCC ckID;                  // 4B chunk ID, e.g. "RIFF", "fmt ", "data"
     DWORD  ckSize;                // 4B chunk data size
     union {
          FOURCC fccType;          // RIFF form type, 4B
          BYTE   ckData[ckSize];   // ckSize*1B (notational RIFF pseudo-definition, not valid C)
     } ckData;
} RIFFCK;
kerryzhou_2-1617962787993.png

Pic 3

Take a 16 kHz, 2-channel wave file as the example:

kerryzhou_3-1617962788034.png

Pic 4

Yellow: CKID  Green: data length   Purple: data

The detailed analysis is as follows:

kerryzhou_4-1617962788084.png

Pic 5

We can see that, excluding the wave header, the real audio data size is 1279860 bytes.
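
As a quick sanity check (assuming 16-bit samples): 1279860 bytes / (16000 samples/s * 2 channels * 2 bytes) ≈ 20 s of audio.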

2.3 Audio file convert

In practical usage, the audio file may not have the required channel and sample rate configuration, the format may not be wave, or the duration may be too long; in that case we can use a tool to convert it to the desired format.

We can use the ffmpeg tool:

https://ffmpeg.org/

For the details, check the ffmpeg documentation; normally we use these commands:

Convert an mp3 file to a 16 kHz, 16-bit, 2-channel wave file:

ffmpeg -i test.mp3 -acodec pcm_s16le -ar 16000 -ac 2 test.wav

or: ffmpeg -i test.mp3 -aq 16 -ar 16000 -ac 2 test.wav

Cut 35 s starting from 00:00:00 of test.wav and save it as test1.wav:

ffmpeg -ss 00:00:00 -i test.wav -t 35.0 -c copy test1.wav

kerryzhou_5-1617962788127.png

Pic 6

kerryzhou_6-1617962788169.png

Pic 7

2.4 Obtain wave L/R channel audio data

Just like the SDK demo code, the L/R audio data is saved directly in an RT RAM array, so here we need to obtain the audio data from the wav file.

We can use Python to read out the wav header, get the audio data size, and save the audio data to an array in a .h file. The related Python code can be:

import sys
import wave

def wav2hex(strWav, strHex):
    # Read the wave file parameters and the raw frames
    with wave.open(strWav, "rb") as fWav:
        wavChannels = fWav.getnchannels()
        wavSampleWidth = fWav.getsampwidth()
        wavFrameRate = fWav.getframerate()
        wavFrameNum = fWav.getnframes()
        wavFrames = fWav.readframes(wavFrameNum)
        wavDuration = wavFrameNum / wavFrameRate
        wafFramebytes = wavFrameNum * wavChannels * wavSampleWidth
        print("Channels: {}".format(wavChannels))
        print("Sample width: {}bits".format(wavSampleWidth * 8))
        print("Sample rate: {}kHz".format(wavFrameRate / 1000))
        print("Frames number: {}".format(wavFrameNum))
        print("Duration: {}s".format(wavDuration))
        print("Frames bytes: {}".format(wafFramebytes))

    with open(strHex, "w") as fHex:
        # Print WAV parameters as a comment block
        fHex.write("/*\n")
        fHex.write("  Channels: {}\n".format(wavChannels))
        fHex.write("  Sample width: {}bits\n".format(wavSampleWidth * 8))
        fHex.write("  Sample rate: {}kHz\n".format(wavFrameRate / 1000))
        fHex.write("  Frames number: {}\n".format(wavFrameNum))
        fHex.write("  Duration: {}s\n".format(wavDuration))
        fHex.write("  Frames bytes: {}\n".format(wafFramebytes))
        fHex.write("*/\n\n")
        # Print WAV frames as a C array, 16 bytes per line
        fHex.write("uint8_t music[] = {\n")
        print("Transferring...")
        i = 0
        while wafFramebytes > 0:
            BytesToPrint = wafFramebytes if wafFramebytes < 16 else 16
            fHex.write("    ")
            for j in range(BytesToPrint):
                if j != 0:
                    fHex.write(' ')
                fHex.write("0x{:0>2x},".format(wavFrames[i]))
                i += 1
            fHex.write("\n")
            wafFramebytes -= BytesToPrint
        fHex.write("};\n")
        print("Done!")

wav2hex(sys.argv[1], sys.argv[2])
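
Assuming the script above is saved as wav2hex.py (the file name here is illustrative), it can be run from the command line as:

python wav2hex.py music1.wav music1.h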

Take music1.wav as an example:

kerryzhou_7-1617962788279.png

Pic 8

2.5 Audio data relationship with the audio wave

The 16-bit data range is -32768 to 32767; the related value range in GoldWave is (-1 to 1). Use the GoldWave tool to open the example music1.wav and check the data at the 1 s position: the left channel relative value is -0.08227, the right channel relative value is -0.2257.

kerryzhou_8-1617962788415.jpeg kerryzhou_9-1617962788545.jpeg

Pic 9 (left), Pic 10 (right)

Now calculate the real L/R data and find the position in music1.h.

kerryzhou_10-1617962788774.jpeg

Pic 11

From Pic 8 we know the real wave L/R data starts from line 11 of music1.h, and each line contains 16 bytes of data. So from the music1.wav relative values we can calculate the related data, compare it with the real data in the array, and find that it is exactly the same.
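
As a worked check (assuming the GoldWave relative value maps linearly onto a signed 16-bit sample): left channel -0.08227 * 32768 ≈ -2696 = 0xF578, right channel -0.2257 * 32768 ≈ -7396 = 0xE31C; stored little-endian in the array these appear as the byte pairs 0x78 0xF5 (left) and 0x1C 0xE3 (right).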

3. SAI MCUXpresso project creation

Based on SDK_2.9.2_EVK-MIMXRT1060, create an SAI DMA audio playback project. The audio data can use the music1.h generated above.

Create one bare-metal project:

Drivers check:

clock, common, dmamux, edma, gpio, i2c, iomuxc, lpuart, sai, sai_edma, xip_device

Utilities check:

      Debug_console, lpuart_adapter, serial_manager, serial_manager_uart

Board components check:

      Xip_board

Abstraction Layer check:

      Codec, codec_wm8960_adapter, lpi2c_adapter

Software Components check:

      Codec_i2c, lists, wm8960

After the project is created, open the Clocks tool and configure the clocks. The core and FlexSPI clocks can use the defaults; we mainly configure the SAI1-related clocks:

kerryzhou_11-1617962788862.png

Pic 12

Select the SAI1 clock source as PLL4, and configure PLL4_MAIN_CLK as 786.48 MHz.

Configure the SAI1 clock as 6.144375 MHz.
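
As a check, the two values differ by an overall division factor of 128: 786.48 MHz / 128 = 6.144375 MHz (the clock tool splits this factor between the SAI1 pre- and post-dividers).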

After the configuration, update the code.

Open the Pins tool and configure the SAI1-related pins. As the codec also needs I2C, the configuration contains the I2C pins as well.

kerryzhou_12-1617962789025.png

Pic 13

Update the code.

Open the Peripherals tool and configure the DMA, SAI, and NVIC.

kerryzhou_13-1617962789197.png

Pic 14

kerryzhou_14-1617962789294.png

Pic 15

The DMA configuration is as follows:

kerryzhou_15-1617962789389.png

Pic 16

After configuration, generate the code.

With the above configuration, we have finished the SAI DMA transfer configuration: SAI master mode, 16 bits, 16 kHz sample rate, 2 channels, DMA transfer, a 512 kHz bit clock, and a 6.144375 MHz master clock.
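
The playback code below references buffers, counters, and macros that are defined elsewhere in the project files; a minimal sketch of the assumed definitions (the names follow the code below, the sizes are illustrative) is:

#define BUFFER_SIZE 1024U                    /* bytes per transfer block (illustrative) */
#define BUFFER_NUM  4U                       /* number of ping-pong blocks */
#define MUSIC_LEN   sizeof(music)            /* total bytes of the music[] array from music1.h */

AT_NONCACHEABLE_SECTION_ALIGN(static uint8_t buffer[BUFFER_NUM * BUFFER_SIZE], 4);
static volatile uint32_t emptyBlock  = BUFFER_NUM; /* blocks free to be refilled */
static volatile uint32_t finishIndex = 0U;         /* blocks already sent out by EDMA */
static volatile bool     isFinished  = false;
static uint32_t tx_index = 0U, cpy_index = 0U;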

void callback(I2S_Type *base, sai_edma_handle_t *handle, status_t status, void *userData)
{
    if (kStatus_SAI_RxError == status)
    {
    }
    else
    {
        finishIndex++;
        emptyBlock++;
        /* Judge whether the music array is completely transferred. */
        if (MUSIC_LEN / BUFFER_SIZE == finishIndex)
        {
            isFinished = true;

            finishIndex = 0;
            emptyBlock  = BUFFER_NUM;
            tx_index = 0;
            cpy_index = 0;
        }
    }
}
int main(void)
{
    sai_transfer_t xfer;
    /* Init board hardware. */
    BOARD_ConfigMPU();
    BOARD_InitBootPins();
    BOARD_InitBootClocks();
    BOARD_InitBootPeripherals();
#ifndef BOARD_INIT_DEBUG_CONSOLE_PERIPHERAL
    /* Init FSL debug console. */
    BOARD_InitDebugConsole();
#endif

    PRINTF(" SAI wav module test!\n\r");
     /* Use default setting to init codec */
     if (CODEC_Init(&codecHandle, &boardCodecConfig) != kStatus_Success)
     {
         assert(false);
     }
     /* delay for codec output stable */
     DelayMS(DEMO_CODEC_INIT_DELAY_MS);
     CODEC_SetVolume(&codecHandle,2U,50); // set 50% volume



     EnableIRQ(DEMO_SAI_IRQ);
     SAI_TxEnableInterrupts(DEMO_SAI, kSAI_FIFOErrorInterruptEnable);

     PRINTF(" MUSIC PLAY Start!\n\r");
     while (1)
     {
     	PRINTF(" MUSIC PLAY Again\n\r");
     	isFinished = false;
         while (!isFinished)
         {
             if ((emptyBlock > 0U) && (cpy_index < MUSIC_LEN / BUFFER_SIZE))
             {
                 /* Fill in the buffers. */
                 memcpy((uint8_t *)&buffer[BUFFER_SIZE * (cpy_index % BUFFER_NUM)],
                        (uint8_t *)&music[cpy_index * BUFFER_SIZE], sizeof(uint8_t) * BUFFER_SIZE);
                 emptyBlock--;
                 cpy_index++;
             }
             if (emptyBlock < BUFFER_NUM)
             {
                 /*  xfer structure */
                 xfer.data     = (uint8_t *)&buffer[BUFFER_SIZE * (tx_index % BUFFER_NUM)];
                 xfer.dataSize = BUFFER_SIZE;
                 /* Wait for available queue. */
                 if (kStatus_Success == SAI_TransferSendEDMA(DEMO_SAI, &SAI1_SAI_Tx_eDMA_Handle, &xfer))
                 {
                     tx_index++;
                 }
             }
         }

     }
}

 

4. SAI test result

    To check the real L/R data sent out, we modify the first 16 bytes of the music array to:

0x55,0xaa,0x01,0x00,0x02,0x00,0x03,0x00,0x04,0x00,0x05,0x00,0x06,0x00,0x07,0x00

Then test the SAI_MCLK, SAI_TX_BCLK, SAI_TX_SYNC, and SAI_TXD pin waveforms and compare them with the defined data. Because the polarity is configured as active low, the data is output on the falling edge and sampled on the rising edge.

The test points on the MIMXRT1060-EVK board use the codec pin positions:

kerryzhou_16-1617962789516.png

Pic 17

4.1 Logic Analyzer tool wave

kerryzhou_17-1617962789628.jpeg

Pic 18

The MCLK frequency is 6.144375 MHz, BCLK is 512 kHz, and SYNC is 16 kHz.

kerryzhou_18-1617962790147.png

Pic 19

The first frame data is: 1010101001010101 0000000000000001, i.e. 0xAA55 0x0001.

It is the same as the L/R data defined in the array.

When SYNC is low the 16 bits belong to the left channel; when SYNC is high they belong to the right channel.
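
This is also consistent with the clock settings: each frame is 2 channels * 16 bits = 32 bit clocks, so the 512 kHz BCLK gives a 512 kHz / 32 = 16 kHz frame sync, matching the sample rate.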

4.2 Oscilloscope test wave

Just like the logic analyzer, the oscilloscope shows the same waveform:

kerryzhou_19-1617962790181.png

Pic 20

Add music.h to the project, let the main code play the music array data in a loop, and we will hear the music clearly when a headphone is inserted into the on-board J12 or a speaker is connected.

5. SAI SDcard wave music play

This part adds the SD card and FatFs file system to read out a 16-bit, 16 kHz, 2-channel wave file from the SD card and play it in a loop.

5.1 driver add

    The code is based on SDK_2.9.2_EVK-MIMXRT1060. On top of the previous project, add the sdcard and sd fatfs drivers; the bare-metal driver situation is now:

Drivers check:

cache, clock, common, dmamux, edma, gpio, i2c, iomuxc, lpuart, sai, sai_edma, sdhc, xip_device

Utilities check:

      Debug_console, lpuart_adapter, serial_manager, serial_manager_uart

Middleware check:

      File System->FAT File System->fatfs+sd, Memories

Board components check:

      Xip_board

Abstraction Layer check:

      Codec, codec_wm8960_adapter, lpi2c_adapter

Software Components check:

      Codec_i2c, lists, wm8960

5.2 WAVE header analyzer with code

   From the previous content we know the wav header structure. To play the wave file from the SD card, we need to analyze the wave header to get the audio format and the audio-data-related information. The header analysis code is:

uint8_t Fun_Wave_Header_Analyzer(void)
{
    char * datap;
    uint8_t ErrFlag = 0;


    datap = strstr((char*)Wav_HDBuffer,"RIFF");
     if(datap != NULL)
     {
    	 wav_header.chunk_size = ((uint32_t)*(Wav_HDBuffer+4)) + (((uint32_t)*(Wav_HDBuffer+5)) << 8) + (((uint32_t)*(Wav_HDBuffer+6)) << 16) + (((uint32_t)*(Wav_HDBuffer+7)) << 24);
    	 movecnt += 8;

     }
     else
     {
    	ErrFlag = 1;
        return ErrFlag;
     }

	 datap = strstr((char*)(Wav_HDBuffer+movecnt),"WAVEfmt");
	 if(datap != NULL)
	 {
		 movecnt += 8;
		 wav_header.fmtchunk_size = ((uint32_t)*(Wav_HDBuffer+movecnt+0)) + (((uint32_t)*(Wav_HDBuffer+movecnt+1)) << 8) + (((uint32_t)*(Wav_HDBuffer+movecnt+2)) << 16) + (((uint32_t)*(Wav_HDBuffer+movecnt+3)) << 24);
		 wav_header.audio_format = ((uint16_t)*(Wav_HDBuffer+movecnt+4)) + (((uint16_t)*(Wav_HDBuffer+movecnt+5)) << 8);
		 wav_header.num_channels = ((uint16_t)*(Wav_HDBuffer+movecnt+6)) + (((uint16_t)*(Wav_HDBuffer+movecnt+7)) << 8);
		 wav_header.sample_rate  = ((uint32_t)*(Wav_HDBuffer+movecnt+8)) + (((uint32_t)*(Wav_HDBuffer+movecnt+9)) << 8) + (((uint32_t)*(Wav_HDBuffer+movecnt+10)) << 16) + (((uint32_t)*(Wav_HDBuffer+movecnt+11)) << 24);
		 wav_header.byte_rate    = ((uint32_t)*(Wav_HDBuffer+movecnt+12)) + (((uint32_t)*(Wav_HDBuffer+movecnt+13)) << 8) + (((uint32_t)*(Wav_HDBuffer+movecnt+14)) << 16) + (((uint32_t)*(Wav_HDBuffer+movecnt+15)) << 24);
		 wav_header.block_align  = ((uint16_t)*(Wav_HDBuffer+movecnt+16)) + (((uint16_t)*(Wav_HDBuffer+movecnt+17)) << 8);
		 wav_header.bps          = ((uint16_t)*(Wav_HDBuffer+movecnt+18)) + (((uint16_t)*(Wav_HDBuffer+movecnt+19)) << 8);

		 movecnt +=(4+wav_header.fmtchunk_size);
	 }
     else
     {
    	 ErrFlag = 1;
        return ErrFlag;
     }

	 datap = strstr((char*)(Wav_HDBuffer+movecnt),"LIST");
	 if(datap != NULL)
	 {

		 movecnt += 4;

		 wav_header.list_size = ((uint32_t)*(Wav_HDBuffer+movecnt+0)) + (((uint32_t)*(Wav_HDBuffer+movecnt+1)) << 8) + (((uint32_t)*(Wav_HDBuffer+movecnt+2)) << 16) + (((uint32_t)*(Wav_HDBuffer+movecnt+3)) << 24);
		 movecnt +=(4+wav_header.list_size);

	 } //LIST not Must

	 datap = strstr((char*)(Wav_HDBuffer+movecnt),"data");
	 if(datap != NULL)
	 {

		 movecnt += 4;

		 wav_header.datachunk_size = ((uint32_t)*(Wav_HDBuffer+movecnt+0)) + (((uint32_t)*(Wav_HDBuffer+movecnt+1)) << 8) + (((uint32_t)*(Wav_HDBuffer+movecnt+2)) << 16) + (((uint32_t)*(Wav_HDBuffer+movecnt+3)) << 24);
		 movecnt += 4;

		 ErrFlag = 0;
	 }
     else
     {
    	 ErrFlag = 1;
        return ErrFlag;
     }

	 PRINTF("Wave audio format is %d\r\n",wav_header.audio_format);
	 PRINTF("Wave audio channel number is %d\r\n",wav_header.num_channels);
	 PRINTF("Wave audio sample rate  is %d\r\n",wav_header.sample_rate);
	 PRINTF("Wave audio byte rate is %d\r\n",wav_header.byte_rate);
	 PRINTF("Wave audio block align is %d\r\n",wav_header.block_align);
	 PRINTF("Wave audio bit per sample is %d\r\n",wav_header.bps);
	 PRINTF("Wave audio data size is %d\r\n",wav_header.datachunk_size);

	 return ErrFlag;

}

The code mainly divides the RIFF file into 4 parts: "RIFF", "fmt ", "LIST", and "data". The 4 bytes of data following "data" are the whole audio data size, which can be used by FatFs to read the audio data. The above code also records the data position (movecnt), so when reading the wave file with FatFs we can jump to the data area directly.
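
The code above also references a header buffer and a wav_header structure that are defined elsewhere in the project; a minimal sketch of the assumed definitions (field names follow the code, the exact types and sizes in the attached project may differ) is:

#define WAV_HEADER_BUF_SIZE 512U                    /* illustrative size, large enough for the header */
static uint8_t  Wav_HDBuffer[WAV_HEADER_BUF_SIZE];  /* first sector of the wave file read via FatFs */
static uint32_t movecnt = 0;                        /* running offset, ends up at the "data" area */

typedef struct
{
    uint32_t chunk_size;      /* RIFF chunk size */
    uint32_t fmtchunk_size;   /* "fmt " chunk size */
    uint16_t audio_format;    /* 1 = PCM */
    uint16_t num_channels;
    uint32_t sample_rate;
    uint32_t byte_rate;
    uint16_t block_align;
    uint16_t bps;             /* bits per sample */
    uint32_t list_size;       /* optional "LIST" chunk size */
    uint32_t datachunk_size;  /* audio data size in bytes */
} wav_header_t;
static wav_header_t wav_header;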

5.3 SD card wave data play

    Define an array audioBuff[4 * 512], use it to read out the SD card wave file data, and send these data to the SAI EDMA so they are transferred to the I2S interface until all the data have been transmitted.

    The callback records when each 512-byte block has been sent out and judges whether the transmitted data size has reached the whole wave audio data size.
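
A minimal sketch of this read-and-play loop is shown below (illustrative only; the attached project may differ in detail). It assumes FatFs is mounted and the wave file is already open in fil, the header has been analyzed as in section 5.2 (so wav_header.datachunk_size is valid and dataOffset is the file offset of the audio data area), emptyBlock is a volatile counter that the SAI EDMA callback increments for every finished block (for the SD card case the callback only needs to do that), and DEMO_SAI / SAI1_SAI_Tx_eDMA_Handle are the same handles used in the earlier code:

#define AUDIO_BLOCK_SIZE 512U
#define AUDIO_BLOCK_NUM  4U
static uint8_t audioBuff[AUDIO_BLOCK_NUM * AUDIO_BLOCK_SIZE];

static void Play_Wav_Data(FIL *fil, uint32_t dataOffset)
{
    sai_transfer_t xfer;
    UINT bytesRead      = 0;
    uint32_t remaining  = wav_header.datachunk_size;
    uint32_t blockIndex = 0;

    f_lseek(fil, dataOffset);                    /* jump directly to the audio data area */
    while (remaining > 0U)
    {
        while (emptyBlock == 0U)                 /* wait until the EDMA callback frees a block */
        {
        }
        uint8_t *block = &audioBuff[(blockIndex % AUDIO_BLOCK_NUM) * AUDIO_BLOCK_SIZE];
        if ((f_read(fil, block, AUDIO_BLOCK_SIZE, &bytesRead) != FR_OK) || (bytesRead == 0U))
        {
            break;                               /* read error or unexpected end of file */
        }
        emptyBlock--;
        xfer.data     = block;
        xfer.dataSize = bytesRead;
        SAI_TransferSendEDMA(DEMO_SAI, &SAI1_SAI_Tx_eDMA_Handle, &xfer);
        remaining  = (remaining > bytesRead) ? (remaining - bytesRead) : 0U;
        blockIndex++;
    }
}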

5.4 sd card wave play result

   Prepare one wave file (16-bit, 16 kHz sample rate, 2 channels) named music.wav, put it in an SD card already formatted as FAT32, insert the card into the MIMXRT1060-EVK J39, and run the code; we will get the following printf information:

Please insert a card into the board.

Card inserted.
Make file system......The time may be long if the card capacity is big.
 SAI wav module test!
 MUSIC PLAY Start!
Wave audio format is 1
Wave audio channel number is 2
Wave audio sample rate  is 16000
Wave audio byte rate is 64000
Wave audio block align is 4
Wave audio bit per sample is 16
Wave audio data size is 2728440
Playback is begin!
Playback is finished!

At the same time, after inserting a headphone or a speaker into J12, we can hear the music.

The attachment contains the mcuxpresso10.3.0 projects and the wave samples.

 
