Bidirectional audio communication in SLN-LOCAL2-IOT via SAI3

5,144 Views
jnj
Contributor III

Hi All,

I need to use SAI3, which is connected to the audio amplifier (TFA) on the SLN-LOCAL2-IOT, for bidirectional audio data communication with a host processor. For this I configured the NXP side as slave and the host processor as master, and set the transmitter and receiver to sync mode. On wake-word detection I tried recording data (audio: "Can I help you?") on the host processor. The recording succeeded, but the recorded audio was mono. Similarly, I tried sending stereo data from the host processor to the NXP board and having the NXP board play the same data back to the host processor. While recording that data, the host processor gets only the mono part; the right channel is missing. I also had an issue where, when the host processor sent audio data to the NXP board, the initial data was not correct (verified by printing the data (sSaiRxBuffer.data) in SLN_AMP_RxCallback()). So I replaced SAI_RxSetFrameSyncConfig() with the following.

 

void SAI_RxSetFrameSyncConfig(I2S_Type *base, sai_master_slave_t masterSlave, sai_frame_sync_t *config)
{
    assert(config != NULL);
    assert((config->frameSyncWidth - 1UL) <= (I2S_RCR4_SYWD_MASK >> I2S_RCR4_SYWD_SHIFT));

    uint32_t rcr4 = base->RCR4;

    rcr4 &= ~(I2S_RCR4_FSE_MASK | I2S_RCR4_FSP_MASK | I2S_RCR4_FSD_MASK | I2S_RCR4_SYWD_MASK);

#if defined(FSL_FEATURE_SAI_HAS_FRAME_SYNC_ON_DEMAND) && FSL_FEATURE_SAI_HAS_FRAME_SYNC_ON_DEMAND
    rcr4 &= ~I2S_RCR4_ONDEM_MASK;
    rcr4 |= I2S_RCR4_ONDEM(config->frameSyncGenerateOnDemand);
#endif

    rcr4 |= I2S_RCR4_FSE(config->frameSyncEarly) | I2S_RCR4_FSP(config->frameSyncPolarity) |
            I2S_RCR4_FSD(((masterSlave == kSAI_Master) || (masterSlave == kSAI_Bclk_Slave_FrameSync_Master)) ? 1UL : 0U) |
            I2S_RCR4_SYWD(config->frameSyncWidth - 1UL);

    base->RCR4 = rcr4;
}

Only after this change did I start getting correct data on the NXP side. Could this change cause any issues in receiving stereo data? What changes should we make in sln_local2_iot_local_demo to receive the stereo data sent by the host processor?

I already asked the same question in the community forum; you can refer to the following link for more details. Kindly help me resolve this issue, as I am stuck with it.

https://community.nxp.com/t5/i-MX-Processors/I2S-bidirectional-communication-in-SLN-LOCAL2-IOT/m-p/1...

30 Replies

3,920 Views
kerryzhou
NXP TechSupport

Hi @jnj 

Under which circumstances did you capture dec13_8_mic2.wav?

Did you obtain the DMIC 2 <test name>_mic2.wav with usb_aec_alignment_tool, or with your own modified code that saves the wave from SAI3?

Best Regards,

Kerry

3,916 Views
jnj
Contributor III

Hi @kerryzhou ,

dec13_8_mic2.wav was recorded with my code changes; file2.wav was recorded on the PC using the usb-aec-alignment tool.

3,906 Views
kerryzhou
NXP TechSupport

Hi @jnj 

You mentioned that after changing from stereo to mono you got proper audio files.

Please also share one proper audio file. Did you test this with 3 DMICs? And with the same operation, when you change the DMIC count from 3 to 2, you meet the issue, right?

Best Regards,

Kerry

3,891 Views
jnj
Contributor III

Hi @kerryzhou ,

Recording was proper with both the usb-aec-alignment tool and my code changes when selecting the number of microphones as 3. But when selecting the number of microphones as 2, I am getting the issue with both the usb-aec-alignment tool and the modified code.

Please see the audio files

Where can we find the definitions of the functions declared in audio/voice/sln_afe.h, or could you please explain how the DMIC data collection takes place?

3,880 Views
kerryzhou
NXP TechSupport

Hi @jnj 

Thanks for the information. Please be patient; I will test usb_aec_alignment_tool on my side to check the wav file, and will give you updated information later.

Best Regards,

Kerry

3,850 Views
jnj
Contributor III

Hi @kerryzhou ,

Did you get a chance to check the usb-aec-alignment tool? Also, could you please share the source code for "libs/libsln_dsp_toolbox.a"?

3,843 Views
kerryzhou
NXP TechSupport

Hi @jnj 

As far as I know, the lib source code is not shared.

If you need it, you can contact the solution team directly:

local-commands@nxp.com

You can also post your question about the usb-aec-alignment tool to that email, as it will reach the solution team.

When you send the email, please also include this information:

Customer information

Project Annual Volume:

End Application Name:

Project requirement:

Then the solution team will also help you check it.

Best Regards,

Kerry

 

 

 

4,178 Views
kerryzhou
NXP TechSupport

Hi @jnj 

Thanks so much for your patience, and sorry for the late reply!

Today I checked your question carefully. First, let me confirm the question:

1. You want to use the SAI3 interface on the SLN-LOCAL2-IOT to do SAI communication with a host processor.

1.jpg

Communication is 2 channels instead of 1 channel, right?

On the SLN-LOCAL2-IOT board, SAI3 is connected to the audio amplifier TFA9894 chip. So you didn't connect to the TFA; you just connected the SAI3 interface of the RT106S board on the SLN-LOCAL2-IOT to your host processor, right?

 

2. About the code: you mentioned you get only mono, not stereo (e.g. 2ch). In fact, this is mainly related to the code configuration.

In our RT106X SDK, this example:

SDK_2_10_1_EVK-MIMXRT1060\boards\evkmimxrt1060\driver_examples\sai\edma_record_playback

is a 2ch, 16 kHz, 16-bit example. Did you refer to that code?

I also attach one project, which I previously configured for 2ch, 48 kHz, 32-bit.

In that code the RT is in slave mode and uses SAI2; you can change it to SAI3. The following code configures the 2ch mode, sample rate, and bit width:

#define DEMO_SAI SAI2
#define DEMO_SAI_CHANNEL (0)
#define DEMO_SAI_IRQ SAI2_IRQn
#define DEMO_SAITxIRQHandler SAI2_IRQHandler
#define DEMO_SAI_TX_SYNC_MODE kSAI_ModeAsync
#define DEMO_SAI_RX_SYNC_MODE kSAI_ModeSync
#define DEMO_SAI_TX_BIT_CLOCK_POLARITY kSAI_PolarityActiveLow
#define DEMO_SAI_MCLK_OUTPUT true
#define DEMO_SAI_MASTER_SLAVE kSAI_Slave

#define DEMO_AUDIO_DATA_CHANNEL (2U)
#define DEMO_AUDIO_BIT_WIDTH kSAI_WordWidth32bits
#define DEMO_AUDIO_SAMPLE_RATE (kSAI_SampleRate48KHz)
#define DEMO_AUDIO_MASTER_CLOCK DEMO_SAI_CLK_FREQ
 

You can ignore the codec code; you can just call:

SAI_TransferRxSetConfigEDMA(DEMO_SAI, &rxHandle, &saiConfig);

Just refer to the SDK or my attached code; that is the 2ch code, and I also tested the SAI waveform previously. In my attached code, each frame sync carries 4 words, and each word is 32 bits.

If you want 2ch with 2 words per frame sync, you can refer to the SDK directly. All of this configuration can be defined; from your description, your code is configured for only 1 channel, not 2. If you want 2ch, just configure it.

If you still have issues, you can also use a logic analyzer to check the waveform.
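As a quick sanity check on the 2ch configuration, the target bit clock follows the standard I2S relation BCLK = sample rate × bit width × channel count, which is what SAI_TxSetBitClockRate() is asked to produce. A minimal sketch of that arithmetic (the helper name is ours, not an SDK API):

```c
#include <stdint.h>

/* Hypothetical helper (not part of the NXP SDK): the bit clock that a
 * classic I2S stereo stream needs, BCLK = Fs * bitWidth * channels. */
static uint32_t sai_bit_clock_hz(uint32_t sample_rate_hz,
                                 uint32_t bit_width,
                                 uint32_t channel_count)
{
    return sample_rate_hz * bit_width * channel_count;
}
```

So the 2ch 16 kHz/16-bit SDK example targets a 512 kHz bit clock, and the attached 2ch 48 kHz/32-bit project targets 3.072 MHz.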

For more details about the SAI, you can also check my post:

https://community.nxp.com/t5/i-MX-RT-Knowledge-Base/RT10xx-SAI-basic-and-SDCard-wave-file-play/ta-p/...

 

Wish it helps you!

Best Regards,

Kerry

 

 

 

4,158 Views
jnj
Contributor III

Hi @kerryzhou ,

Thanks for the response.

Communication is 2 channels instead of 1 channel, right?

Ans: Yes.

On the SLN-LOCAL2-IOT board, SAI3 is connected to the audio amplifier TFA9894 chip, so you didn't connect to the TFA; you just connected the SAI3 interface of the RT106S board to your host processor, right?

Ans: Yes, I connected the SAI3 interface of the RT106S board on the SLN-LOCAL2-IOT to the host processor. The connection is as per your diagram.

I will check your code. But could you please tell me which stereo configuration is missing in sln_local2_iot_local_demo, or what changes are required for it?

4,153 Views
kerryzhou
NXP TechSupport

Hi @jnj 

I think you don't even need to look at the local_demo SAI3 code at first, as it uses a completely different configuration. You can test the RT1060 SDK example directly by modifying the SAI interface to SAI3 and its pins, and changing the XIP flash from QSPI to HyperFlash. Just get stereo working with the RT1060 SDK first.

 

Best Regards,

Kerry

 

4,141 Views
jnj
Contributor III

Hi @kerryzhou ,

I compared your code with the sln-local2-iot SDK and made the changes. Now, if the host processor sends stereo data to the NXP board, the board sends the same stereo audio back, and the host processor is able to record the stereo data successfully. Thank you so much for your support. Could you please provide information for the following scenario?

Data from two mics is captured over SAI1 (stereo) and fed back to the host processor over SAI3.

 

Thanks in advance

4,120 Views
kerryzhou
NXP TechSupport

Hi @jnj ,

Sorry for my late reply; these days I have been helping your colleague test the low-power issues.

Now, to your issue:

Data from two mics is captured over SAI1 (stereo) and fed back to the host processor over SAI3.

You mean you want to collect the data from the two mics, one for the left channel and another for the right channel, and then send it out over SAI3?

If yes, I think you need to merge the data from SAI1 together, then send it to SAI3.

SDK_2_10_1_EVK-MIMXRT1060\boards\evkmimxrt1060\driver_examples\sai\edma_record_playback

This demo uses both the headphone mic and the board's main mic (P1) as input sources. The headphone mic provides the left-channel data, and the main mic (P1) provides the right-channel data.

So you can still refer to it, as that code also collects data from two mics and sends each to a different channel of the headphone.
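The "merge the data together" step can be sketched as a plain sample interleave: mic 1 into the left slot, mic 2 into the right slot of each stereo frame before the buffer is queued on SAI3. This is an illustration only; the function name and 16-bit buffer layout are assumptions, not SDK code:

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative helper (names are ours): merge two mono 16-bit mic
 * buffers into one interleaved stereo buffer, mic1 -> left channel,
 * mic2 -> right channel, ready to send out over SAI3. */
static void interleave_stereo(const int16_t *mic_left,
                              const int16_t *mic_right,
                              int16_t *stereo_out,
                              size_t frames)
{
    for (size_t i = 0; i < frames; i++)
    {
        stereo_out[2 * i]     = mic_left[i];  /* left channel slot  */
        stereo_out[2 * i + 1] = mic_right[i]; /* right channel slot */
    }
}
```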

 

Wish it helps you!

Best Regards,

Kerry

 

 

4,114 Views
jnj
Contributor III

Hi @kerryzhou ,

It's okay, no issues. Thanks for the response.

You mean you want to collect the data from the two mics, one for the left channel and another for the right channel, and then send it out over SAI3?

Ans: Yes, exactly.

I tried to integrate the following code snippet into the sln-local2-iot demo (in the wake-word detection part, sln_local_voice.c) to achieve this.

SAI_TransferTxCreateHandleEDMA(SAI1, &txHandle, tx_callback, NULL, &dmaTxHandle);
SAI_TransferRxCreateHandleEDMA(SAI1, &rxHandle, rx_callback, NULL, &dmaRxHandle);

/* I2S mode configuration for the mics over SAI1 */
SAI_GetClassicI2SConfig(&saiConfig_mic, DEMO_AUDIO_BIT_WIDTH, kSAI_Stereo, 1U << DEMO_SAI_CHANNEL);
saiConfig_mic.syncMode              = DEMO_SAI_TX_SYNC_MODE;
saiConfig_mic.bitClock.bclkPolarity = DEMO_SAI_TX_BIT_CLOCK_POLARITY;
saiConfig_mic.masterSlave           = DEMO_SAI_MASTER_SLAVE;
saiConfig_mic.serialData.dataWordNum = 4U;
//SAI_TransferTxSetConfigEDMA(DEMO_SAI, &txHandle, &saiConfig);
saiConfig_mic.syncMode = DEMO_SAI_RX_SYNC_MODE;
SAI_TransferRxSetConfigEDMA(DEMO_SAI, &rxHandle, &saiConfig_mic);

SAI_TransferTxSetConfigEDMA(SAI1, &txHandle, &saiConfig_mic);

/* set bit clock divider */
SAI_TxSetBitClockRate(DEMO_SAI, DEMO_AUDIO_MASTER_CLOCK, DEMO_AUDIO_SAMPLE_RATE, DEMO_AUDIO_BIT_WIDTH,
                      DEMO_AUDIO_DATA_CHANNEL);
SAI_RxSetBitClockRate(DEMO_SAI, DEMO_AUDIO_MASTER_CLOCK, DEMO_AUDIO_SAMPLE_RATE, DEMO_AUDIO_BIT_WIDTH,
                      DEMO_AUDIO_DATA_CHANNEL);

while (1)
{
    if (emptyBlock > 0)
    {
        xfer.data     = Buffer + rx_index * BUFFER_SIZE;
        xfer.dataSize = BUFFER_SIZE;
        if (kStatus_Success == SAI_TransferReceiveEDMA(SAI1, &rxHandle, &xfer))
        {
            rx_index++;
            configPRINTF(("in transferreceive if\r\n"));
        }
        if (rx_index == BUFFER_NUMBER)
        {
            rx_index = 0U;
        }
    }
    if (emptyBlock < BUFFER_NUMBER)
    {
        xfer.data     = Buffer + tx_index * BUFFER_SIZE;
        xfer.dataSize = BUFFER_SIZE;
        if (kStatus_Success == SAI_TransferSendEDMA(SAI3, &txHandle, &xfer))
        {
            tx_index++;
        }
        if (tx_index == BUFFER_NUMBER)
        {
            tx_index = 0U;
        }
        configPRINTF(("in transfersend\r\n"));
    }
}
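For reference, the loop above only makes progress if emptyBlock changes; in the SDK edma_record_playback demo that bookkeeping happens in the EDMA callbacks (RX filling a buffer decrements it, TX draining one increments it). A minimal sketch of that counter logic, with simplified names (the real callbacks also receive handle and status arguments):

```c
#include <stdint.h>

/* Sketch of the counter bookkeeping from the SDK edma_record_playback
 * demo. volatile because it is shared between the main loop and the
 * DMA completion callbacks. */
#define BUFFER_NUMBER 4U

static volatile uint32_t emptyBlock = BUFFER_NUMBER;

/* Called from the SAI RX EDMA callback: one buffer is now full. */
static void on_rx_complete(void) { emptyBlock--; }

/* Called from the SAI TX EDMA callback: one buffer has been sent. */
static void on_tx_complete(void) { emptyBlock++; }
```

If the port into the local_demo dropped these updates, the send branch of the loop would never run even when receive transfers succeed.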

I put some prints in rx_callback(), but when the application starts no prints appear. Does that mean no data is coming? Could you please help me resolve this issue?

 

Thanks in advance

 

4,110 Views
kerryzhou
NXP TechSupport

Hi @jnj ,

I have an idea: could you first try modifying the RT1060 SDK project I mentioned? Just change the SAI interface to the SLN-LOCAL2-IOT related SAI port, then collect the data from the two mics and send it to SAI3, and see whether that method works or not.

I think if you verify that method works first and then add it to the local_demo, it will be a little easier.

I don't recommend using a lot of printf calls during the DMA transfer; they may disturb the SAI transfer, especially in the callback.

I think you can debug the code and add a breakpoint in the callback to test it instead.
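Kerry's suggestion of keeping printf out of the callbacks can be sketched with volatile counters that the callbacks bump and the debugger (or a slow main-loop poll) inspects. The names here are illustrative, not from the SDK:

```c
#include <stdint.h>

/* Illustrative debug counters: increment in the DMA callbacks instead
 * of printing, then inspect from the debugger or print them outside
 * the interrupt context. */
static volatile uint32_t rx_cb_count = 0;
static volatile uint32_t tx_cb_count = 0;

static void rx_callback_marker(void) { rx_cb_count++; }
static void tx_callback_marker(void) { tx_cb_count++; }

/* Safe to call from the main loop, e.g. once per second. */
static uint32_t snapshot_rx_count(void) { return rx_cb_count; }
```

If rx_cb_count stays at zero, the RX transfer really never completes, which points at clocking or frame-sync configuration rather than the data path.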

 

Wish it helps you!

Best Regards,

Kerry

4,105 Views
jnj
Contributor III

Hi @kerryzhou ,

Thanks for the response. I will check and get back to you. Is API documentation available for the sln-local2-iot? If yes, could you please share it?

4,101 Views
kerryzhou
NXP TechSupport

Hi @jnj 

Do you mean the SAI API document? If yes, just check the related SDK documentation, e.g. SDK_2_8_0_SLN-LOCAL2-IOT_doc.

When you download the SDK code, there is also an option to download the documentation for the SDK; the RT106X SDK is the same.

Best Regards,

Kerry

4,082 Views
jnj
Contributor III

Hi @kerryzhou ,

I tried the i.MX RT1060 SDK, but in it the NXP side is set as slave. Is the configuration the same for the sln-local2-iot? In the schematics I have seen some differences between the sln-local2-iot and the evkmimxrt1060, so can we use the imxrt1060 code and configuration on the sln-local2-iot? Could you please tell me a way to capture the audio from the DMICs into a particular buffer and then pass it to SAI3 on the sln-local2-iot? I am stuck with this. Could you please help?

4,074 Views
kerryzhou
NXP TechSupport

Hi @jnj ,

Yes, the RT1060 is the slave. In my memory, you also need the RT1060 SAI3 as slave; do you now need it as master? If yes, you just need to change the mode from slave to master; the other code stays the same:

#define DEMO_SAI_MASTER_SLAVE kSAI_Master

About the RT106S mic: it connects to the codec:

kerryzhou_0-1638953473666.png

So the SDK example receives the I2S data from the codec directly.

But I understand your point: on the LOCAL2 board the mics are DMICs, so code is needed to get the voice data; it really is not the same as on the RT106X EVK.

kerryzhou_1-1638953762459.png

So you need to use the MIC-related files in the local_demo audio folder to run the DMIC operation first, then collect the PCM data and send it to your SAI3.

That is the idea.

 

Wish it helps you!

Best Regards,

Kerry

 

4,048 Views
jnj
Contributor III

Hi @kerryzhou ,

I tried adding the lines below in the function audio_processing_task() in audio/audio_processing/audio_processing_task.c (sln-local2-iot SDK):

uint8_t *mic_output_pcm = (uint8_t *)(pcmIn);
SLN_AMP_Write(mic_output_pcm, PCM_AMP_SAMPLE_COUNT * PCM_SAMPLE_SIZE_BYTES);

But I am not getting proper audio (it sounds modulated) on the host processor side while recording with the following command:

arecord -D hw:tegrasndt186ref,0 -f S16_LE -r 16000 -d 1500 -c 1 file.wav

Please see the attached wav file. Could you please tell me which configuration is missing?

We also tried flashing sln_local2_iot_usb_aec_alignment_tool and running the Python script to capture audio data from the mics. But when we set the number of microphones to 2, we get the attached file (file2.wav), which is also not proper. The PC here is Linux-based (Ubuntu 18.04).
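One hypothesis worth ruling out for the distorted recording above (an assumption on our part, not a diagnosis confirmed in this thread) is a sample-format mismatch: arecord is told S16_LE, but if the buffer handed to SLN_AMP_Write() actually holds 32-bit samples, the stream will sound garbled. A sketch of the 32-to-16-bit reduction that would then be needed before writing:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical conversion step: keep the most significant 16 bits of
 * each 32-bit PCM sample so a host recording S16_LE hears valid audio. */
static void pcm32_to_pcm16(const int32_t *in, int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++)
    {
        out[i] = (int16_t)(in[i] >> 16);
    }
}
```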

 

4,028 Views
kerryzhou
NXP TechSupport

Hi @jnj 

Thanks for your effort.

I will check internally to help you.

Let me confirm the question again:

You collect audio data from M1 and M2 (SAI1), merge it into PCM with M1 as the left channel and M2 as the right channel, and then send it over SAI3 to your other processor, right?

Could you tell me where you run this command:

arecord -D hw:tegrasndt186ref,0 -f S16_LE -r 16000 -d 1500 -c 1 file.wav

Is it with the sln_local2_iot_usb_aec_alignment_tool and its Python script?

Best Regards,

Kerry
