Microphone Data-Capture, Processing and Output via Line-out with NXP i.MX RT600: RT685 EVK @16 kHz

shri00
Contributor II

Hello,

I'm trying to implement a DMIC → MU → DSP (processing) → I2S/WM8904 (Line-Out) pipeline on the RT600.

Hardware / Tools

Board: EVK-MIMXRT685 (onboard WM8904)
Cores: CM33 (M33) + HiFi4 DSP
SDK / IDE: MCUXpresso SDK_2_11_0_EVK-MIMXRT685, MCUXpresso IDE v25.6, Xtensa Xplorer 9.0.18

Base projects I’m modifying:

CM33: `boards/evkmimxrt685/driver_examples/dmic/dmic_i2s_dma`

I kept the DMIC→I2S pipeline and added MU send/receive to talk to the DSP.
DSP: `boards/evkmimxrt685/dsp_examples/mu_polling/dsp`

I added the custom audio processing: read shared input, process, write shared output.

The input and output buffers live in shared memory, where both cores read and write them.

Context:

I’m building on the dmic_i2s_dma (CM33) example and the mu_polling (DSP) example. The SDK demo is set up for 48 kHz, but my processing library requires 16 kHz, so I’ve changed the clocks and codec settings accordingly. The rest of the flow follows the SDK structure.

Expected per-frame flow (16 kHz, 16-bit)

Capture (DMIC -> ring):
The DMIC DMA writes 1024 bytes per transfer into a stereo-interleaved slot (16-bit samples). That's 512 samples total = 256 stereo frames (L,R). I enable only the Left channel; Right is zero.
Each 1024-byte slot therefore contains 256 valid Left samples (512 bytes) and 256 zero Right samples. This 1024-byte slot is then copied to shared memory for the DSP.
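For clarity, here is a minimal sketch of the frame layout and the capture hand-off on the CM33 side (the shared buffer name and helper are illustrative, not the exact ones in my project):

```c
#include <stdint.h>
#include <string.h>

/* Frame layout described above: 16-bit stereo-interleaved, Left valid, Right zero. */
#define BYTES_PER_SLOT    1024U                    /* one DMIC DMA transfer        */
#define SAMPLES_PER_SLOT  (BYTES_PER_SLOT / 2U)    /* 512 int16 samples            */
#define FRAMES_PER_SLOT   (SAMPLES_PER_SLOT / 2U)  /* 256 stereo (L,R) frames      */

/* Illustrative shared-memory input buffer visible to both CM33 and HiFi4. */
extern int16_t g_sharedIn[SAMPLES_PER_SLOT];

/* Called on the CM33 when a 1024-byte slot has been filled by the DMIC DMA. */
static void HandOffCaptureSlot(const int16_t *slot)
{
    memcpy(g_sharedIn, slot, BYTES_PER_SLOT);  /* interleaved L,0,L,0,...           */
    /* ...then notify the DSP over the MU that the input slot is ready.             */
}
```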

Process (DSP):
The DSP reads the 1024-byte interleaved input, extracts the 256 valid Left samples (512 bytes) that the library expects, runs the AINR process, and produces 256 output samples (mono).
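Roughly, the extract-and-process step on the DSP looks like this; `AINR_ProcessBlock` is only a placeholder for the real library call, and the buffer names are illustrative:

```c
#include <stdint.h>

#define FRAMES_PER_SLOT 256U                      /* valid Left samples per slot */

extern int16_t g_sharedIn[2U * FRAMES_PER_SLOT];  /* interleaved L,R(=0) input   */

/* Placeholder for the real AINR entry point: 256 mono samples in, 256 out. */
extern void AINR_ProcessBlock(const int16_t *in, int16_t *out, uint32_t count);

static void ProcessSlot(int16_t *monoOut /* FRAMES_PER_SLOT samples */)
{
    int16_t mono[FRAMES_PER_SLOT];

    /* Extract the 256 valid Left samples from the interleaved input. */
    for (uint32_t i = 0U; i < FRAMES_PER_SLOT; i++)
    {
        mono[i] = g_sharedIn[2U * i];             /* even index = Left channel   */
    }

    AINR_ProcessBlock(mono, monoOut, FRAMES_PER_SLOT);
}
```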

Re-interleave (DSP -> shared):
The DSP expands the 256-sample mono output back to interleaved stereo by inserting zeros for Right, yielding 512 samples total (L,R) = 1024 bytes. This 1024-byte processed block is written to the shared output buffer.
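The re-interleave step is just the inverse (again, names are illustrative):

```c
#include <stdint.h>

#define FRAMES_PER_SLOT 256U

/* Expand the 256-sample mono result back to interleaved stereo (Right = 0). */
static void ReinterleaveToShared(const int16_t *monoOut,
                                 int16_t *sharedOut /* 2 * FRAMES_PER_SLOT */)
{
    for (uint32_t i = 0U; i < FRAMES_PER_SLOT; i++)
    {
        sharedOut[2U * i]      = monoOut[i];  /* Left  = processed sample */
        sharedOut[2U * i + 1U] = 0;           /* Right = silence          */
    }
}
```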

Playback (CM33 -> I2S/WM8904):
The CM33 copies the processed 1024-byte block from shared memory back into the current ring slot and queues it to I2S for Line-Out.
In parallel, a ping-pong (double) buffer is used so that while the DMIC is filling one 1024-byte slot, I2S is transmitting the previous slot—ensuring continuous audio.
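Simplified, the ping-pong scheduling on the CM33 looks like the sketch below; `SendSlotToDsp` and `QueueToI2S` are placeholders for the shared-memory/MU exchange and the SDK's I2S TX DMA queueing used in the dmic_i2s_dma example:

```c
#include <stdint.h>

#define SLOT_BYTES 1024U

static uint8_t s_ring[2][SLOT_BYTES];        /* ping-pong ring: capture vs. playback */
static volatile uint32_t s_captureIdx = 0U;  /* slot currently being filled by DMIC  */

/* Placeholders for the shared-memory/MU exchange and the I2S TX DMA queue call. */
extern void SendSlotToDsp(uint8_t *slot);
extern void QueueToI2S(uint8_t *slot, uint32_t bytes);

/* DMIC DMA callback: one 1024-byte slot is full. */
static void DmicSlotDone(void)
{
    uint32_t full = s_captureIdx;  /* slot that just finished filling          */
    s_captureIdx ^= 1U;            /* DMIC DMA continues into the other slot   */

    SendSlotToDsp(s_ring[full]);   /* blocking: copy in, MU handshake, copy the
                                      processed block back into s_ring[full]   */
    QueueToI2S(s_ring[full], SLOT_BYTES);  /* queue processed slot to Line-Out */
}
```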

What works vs. what breaks

If I bypass the DSP path on CM33 (i.e., comment out the MU + processing and send captured buffer straight to I2S), Line-Out audio is clean.
As soon as I enable the DSP path (MU exchange + process), I get static/garbled audio.

If I bypass inside the DSP (i.e., copy input -> output in dsp_main.c and don't call the process function), Line-Out audio is also clean. The library itself works correctly; inside the process function I only copy the input into the library's input parameter and copy the library's output into my output buffer, a simple for loop or a memcpy.

This makes me suspect clocking, the DMA interleave setup, doing the MU exchange inside the DMIC ISR, or synchronization between the cores.
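One thing I may try to rule out the ISR suspicion: only set a flag in the DMA callback and do the MU exchange at thread level in the main loop, roughly like this (names are illustrative):

```c
#include <stdbool.h>
#include <stdint.h>

static volatile bool     s_slotReady = false;
static volatile uint32_t s_readySlot = 0U;

extern void ExchangeWithDsp(uint32_t slotIndex);  /* MU send/receive + copies */

/* DMIC DMA callback: keep it short, no MU traffic here. */
static void DmicCallback(uint32_t slotIndex)
{
    s_readySlot = slotIndex;
    s_slotReady = true;
}

/* Main loop: do the heavy work outside the ISR. */
void MainLoop(void)
{
    for (;;)
    {
        if (s_slotReady)
        {
            s_slotReady = false;
            ExchangeWithDsp(s_readySlot);
        }
    }
}
```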

I am attaching the essential parts of my code. I would really appreciate any hints, debugging tips, or input.

Thanks!

1 Reply

Omar_Anguiano
NXP TechSupport

I suspect the issue may be related to synchronization between the DSP and M33 cores, as the DSP processes data much faster than the M33. This can lead to buffer underruns or overruns if proper handshaking isn’t implemented.
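As a minimal example of such handshaking, a blocking exchange with the polling MU API (as in the mu_polling example) keeps the two cores in lockstep per slot. The message values and register index below are only illustrative; MUA is the CM33-side MU instance on this device:

```c
#include "fsl_mu.h"

#define MSG_SLOT_READY 0x01U  /* M33 -> DSP: input slot written  (illustrative) */
#define MSG_SLOT_DONE  0x02U  /* DSP -> M33: output slot written (illustrative) */

/* M33 side, once per 1024-byte slot. */
void M33_ExchangeSlot(void)
{
    MU_SendMsg(MUA, 0U, MSG_SLOT_READY);  /* input is in shared memory            */
    (void)MU_ReceiveMsg(MUA, 0U);         /* blocks until the DSP replies
                                             (expected: MSG_SLOT_DONE)            */
    /* Processed block is now valid in shared memory; copy it to the ring slot.   */
}
```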

I suggest generating a sine wave and performing a frequency sweep through the pipeline. Observing the output will help identify whether the distortion is due to timing mismatches, sample rate errors, or buffer handling issues.
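A quick way to do that without external equipment is to overwrite each captured slot with a generated tone before it enters the pipeline, for example (16-bit, 16 kHz, Left channel only, matching your frame layout):

```c
#include <math.h>
#include <stdint.h>

#define SAMPLE_RATE_HZ  16000U
#define FRAMES_PER_SLOT 256U

/* Fill one 1024-byte interleaved slot with a sine on Left, silence on Right. */
static void FillSlotWithSine(int16_t *slot, float freqHz, uint32_t *sampleIdx)
{
    for (uint32_t i = 0U; i < FRAMES_PER_SLOT; i++)
    {
        float t = (float)((*sampleIdx)++) / (float)SAMPLE_RATE_HZ;
        slot[2U * i]      = (int16_t)(10000.0f * sinf(2.0f * 3.1415926f * freqHz * t));
        slot[2U * i + 1U] = 0;
    }
}
```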

BR,
Omar
