I would like to port our internal voice algorithm to the i.MX 8MM, but I need some audio source code for the SoC that captures the raw signal of each microphone together with the AEC reference signal, and transports the Tx signal via USB. The detailed requirements are below.
The sample rate should support 16 kHz and 24 kHz.
Support dumping the original microphone signals and the AEC reference signal together.
The delay between the microphone signals and the reference signal must be stable.
At the current stage, I will use the i.MX 8MM as a USB device.
Two audio input sources are needed.
Microphone input: audio from digital microphones (4 microphones).
Rx input: audio from the far-end device, for example received via USB.
Two audio outputs are needed.
RX out: processed audio (bypass for now) to play on the near-end device speaker.
TX out: processed audio (bypass for now) to send back to the far-end device, for example transported to the far-end device via USB.
Hello,
If you have extracted the toolchain from Yocto it should be correct. You can generate a .sh to install the toolchain using Yocto with the following command:
$ bitbake imx-image-multimedia -c populate_sdk
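As a sketch of the typical flow (the installer filename varies per release, and the exact name of the environment-setup script depends on the tune, so treat both as examples), you would run the generated installer and then source the environment script in every shell used for building:
$ # Run the SDK installer produced by populate_sdk (found under tmp/deploy/sdk)
$ ./tmp/deploy/sdk/<sdk-installer>.sh
$ # Source the cross-compile environment; this exports $CC, $CFLAGS, etc.
$ . /opt/<DISTRO>/<BSP>/environment-setup-armv8a-poky-linux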
The default path for installing the toolchain is /opt, and your toolchain should have the asoundlib header on a couple of paths. If it is not available, there may be a problem with the toolchain.
/opt/<DISTRO>/<BSP>/sysroots/armv8a-poky-linux/usr/include/sys/asoundlib.h
/opt/<DISTRO>/<BSP>/sysroots/armv8a-poky-linux/usr/include/asoundlib.h
/opt/<DISTRO>/<BSP>/sysroots/armv8a-poky-linux/usr/include/alsa/asoundlib.h
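A quick way to confirm the cross compiler can actually see the ALSA header (a sketch; $CC and $CFLAGS are exported by the environment-setup script mentioned above):
$ echo '#include <alsa/asoundlib.h>' | $CC $CFLAGS -E -x c - >/dev/null && echo OK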
As for the Real Time Edge image, it does not include the AFE so you would need to install it manually. You may copy the files from /unit_tests/nxp-afe and /usr/lib/nxp-afe/ to the same paths on the Real Time Edge image to run the AFE.
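For example, assuming the board running the Real Time Edge image is reachable over SSH at 192.168.1.100 (the address and root login are assumptions), the files could be copied over from a standard BSP image like this:
$ scp -r /unit_tests/nxp-afe root@192.168.1.100:/unit_tests/
$ scp -r /usr/lib/nxp-afe root@192.168.1.100:/usr/lib/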
The asound.conf file ready to be used with the AFE is available in /unit_tests/nxp-afe/asound.conf
For the i.MX8MM you may use the default asound.conf available in this directory; there are separate configuration files for other boards and setups.
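A minimal sketch of putting it in place so ALSA picks it up system-wide (backing up any existing configuration first):
$ [ -f /etc/asound.conf ] && cp /etc/asound.conf /etc/asound.conf.bak
$ cp /unit_tests/nxp-afe/asound.conf /etc/asound.conf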
I hope that this information helps.
Hello,
Thanks for your detailed information. I ran into some problems during my experiments. I have four questions, could you help me check them?
1. An error occurs when I build https://github.com/nxp-imx/nxp-afe: it can't find alsa/asoundlib.h. Could you help me check it?
2. You mentioned the AFE is already integrated into the i.MX Linux BSP. Do you mean the file "nxp-afe_git.bb" in the folder meta-imx/meta-bsp/recipes-multimedia/nxp-afe?
3. Because I couldn't build the source code successfully with make, I built it with bitbake nxp-afe in Yocto, generated a package, and installed that package on the i.MX8MM board. I then ran the test according to the file "/unit_tests/nxp-afe/TODO.md". It works and I can hear the sound from the headphone, but the recorded output file has only 1 channel and does not include the reference signal. How do I set up the AFE if I need 4 microphone channels and 1 reference channel?
4. I ran the script UAC_VCOM_composite.sh after connecting the board to a Windows PC. The PC recognizes the sink/mic card, but the board outputs no sound when I play music on the PC, and the PC can't capture any data when I start recording.
Hi,
1) The error found is most likely due to the toolchain used. I would recommend extracting the toolchain from the Yocto BSP and using it to build the AFE.
2) Correct, there is a recipe that compiles and installs the AFE on the images generated in Yocto. The demo images available also have the AFE installed in the following path: /unit_tests/nxp-afe
3) You may use the Signal Processor Dummy as a reference; in the code for this dummy you can set the number of channels, sampling rate, etc. It is part of the nxp-afe repository and is located at nxp-afe/src/SignalProcessor. A capture sketch is shown after item 4 below.
4) Once the UAC gadget is ready, it will be enumerated as a soundcard in ALSA. You would need to configure your audio pipeline to play to or record from this soundcard. You can see your playback devices with the command:
$ aplay -l
And your recording devices with:
$ arecord -l
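As an illustration for 3) and 4), the sketch below starts the AFE with the dummy signal processor and then wires simple bypass streams through the UAC gadget card. The afe invocation mirrors the steps in /unit_tests/nxp-afe/TODO.md, <afe-capture-pcm> stands for the capture PCM alias defined in your asound.conf, and UAC2Gadget is how the UAC2 function typically enumerates; all of these names, rates, and channel counts are assumptions to adapt to your setup:
$ # 3) Start the AFE with the dummy signal processor, rebuilt for 4 mics + 1 reference
$ cd /unit_tests/nxp-afe && ./afe libdummy &
$ # Capture all 5 channels through the PCM alias from asound.conf
$ arecord -D <afe-capture-pcm> -c 5 -r 16000 -f S16_LE -d 10 capture.wav
$ # 4) Far-end audio arriving over USB, bypassed to the local codec
$ arecord -D hw:UAC2Gadget,0 -c 2 -r 48000 -f S16_LE | aplay -D hw:0,0
$ # Near-end capture bypassed back to the host over USB
$ arecord -D hw:0,0 -c 2 -r 48000 -f S16_LE | aplay -D hw:UAC2Gadget,0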
I hope that this information helps!
Regards
Hello,
You may find NXP’s Audio Front End (AFE) useful; it allows loading signal processors from different vendors and configuring all the necessary streams to make the whole chain functional. You can find more information in section 8.3.3 of the i.MX Linux User’s Guide (link below). You can set up the AFE to your needs (TX and RX streams, number of microphones, etc.).
https://www.nxp.com/docs/en/user-guide/IMX_LINUX_USERS_GUIDE.pdf
The repository for the AFE can be found at the link below. The AFE is already integrated into the i.MX Linux BSP.
https://github.com/nxp-imx/nxp-afe
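As a sketch of building it out of tree (assuming the Yocto toolchain environment has been sourced as described elsewhere in this thread; check the repository's README for the exact targets):
$ git clone https://github.com/nxp-imx/nxp-afe
$ cd nxp-afe && make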
Please note that the stability of the delay between the microphone signal and the reference signal depends on the audio chain setup and the load of the Linux system, as interruptions from other services may affect this delay.
As for a USB solution to transport the audio streams, one alternative is using a USB composite gadget to allow input and output streams. You can find an example attached that creates a USB composite using the Serial Gadget (g_serial) and UAC Gadget (g_audio) drivers already in Linux. Please note that a UDC must be available to run the script, so please connect the board to a Windows or Linux host before running it.
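For reference, a USB composite like this is typically assembled through configfs; below is a minimal audio-only UAC2 sketch of the mechanism (the attached script presumably adds the serial function as well). The VID/PID, channel masks, and rates are example values, and the UDC name is board-specific, so check /sys/class/udc:
$ modprobe libcomposite
$ cd /sys/kernel/config/usb_gadget
$ mkdir g1 && cd g1
$ echo 0x1fc9 > idVendor    # NXP vendor ID, example value
$ echo 0x0100 > idProduct   # example product ID
$ mkdir -p strings/0x409 configs/c.1 functions/uac2.0
$ echo "i.MX8MM UAC2 demo" > strings/0x409/product
$ echo 3 > functions/uac2.0/p_chmask    # playback (host to device): 2 channels
$ echo 48000 > functions/uac2.0/p_srate
$ echo 3 > functions/uac2.0/c_chmask    # capture (device to host): 2 channels
$ echo 48000 > functions/uac2.0/c_srate
$ ln -s functions/uac2.0 configs/c.1/
$ echo "$(ls /sys/class/udc)" > UDC     # bind the gadget to the available UDC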
I hope this information helps!