I was going through the audio codec flow on the iMX7ULP, and I can see there are two drivers controlling the wm8960 codec on the iMX7ULP eval board:
In the M4 SDK, components/codec/wm8960/fsl_wm8960.c
In Linux (4.14), sound/soc/codecs/rpmsg_wm8960.c (which passes I2C commands to the M4 domain via a virtual I2S channel)
And both of them are actually loaded. So the audio codec (wm8960) is initialized in the M4 SDK, and there is also another wm8960 initialization path in Linux (sound/soc/codecs/rpmsg_wm8960.c - the rpmsg_wm8960_codec_probe() API).
I am wondering why two audio codec drivers are needed. If only virtual I2C commands are required in the M4 domain for the audio codec, the CODEC_I2C_Send and CODEC_I2C_Receive function callbacks should have been enough, without the full fsl_wm8960.c driver.
Jai Ganesh Sridharan
As you mentioned, one driver is Linux-based and the other is for the M4 in the SDK; if you are testing on the M4, just use the SDK to test.
Hello Joan Xie,
What I meant is that two drivers are used to control the audio codec attached to the M4 itself. This is how it works:
1. M4 boots up with the default sample application (power_mode_switch)
2. A7 Linux boots up, and when I try to play some audio using aplay, both the Linux and M4 drivers are used. However, the wm8960 is connected to the LPI2C0 controller, which is controlled only by the M4, so the Linux driver uses a virtual I2S channel to communicate the I2C commands to the M4.
So the question is: why are two drivers required for a single use case?
For imx7ulp, some of the hardware components can be controlled only by the M4, but they must also provide functionality on the Linux side. To do that, some drivers that make use of rpmsg are specified in the default dtb.
"The A7 core will not boot without the M4 core running the SRTM component that is available in the M4 SDK
(applications like power_mode_switch and rpmsg_lite_pingpong_rtos enable it)."
I can understand that an rpmsg driver is required for each component residing in the M4 domain, so that the A7 domain (running Linux) can interact with M4-controlled interfaces like SAI and the audio codec.
My question is:
In Linux (4.14), sound/soc/codecs/rpmsg_wm8960.c -> this has intelligence about the wm8960 codec (the available registers and their bit fields - for example, how to configure the PLL, and the initialization sequence), and
in the M4 SDK, components/codec/wm8960/fsl_wm8960.c -> this also has intelligence about the wm8960 codec.
Does this architecture sound good? Also, I can see that the Linux-side audio drivers are heavily hard-coded for the wm8960 rather than being generic (choosing "fsl,imx-audio-rpmsg" in the device tree selects the wm8960 codec by default).
Is there any plan to make this audio architecture generic?
For wm8960, I suggest that you refer to imx7ulp-evk-wm8960.dts, as below:
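As a rough illustration only (not the verbatim contents of imx7ulp-evk-wm8960.dts, and with hypothetical node and property names), the sound node uses the rpmsg-audio compatible string discussed earlier in this thread, which is what binds the rpmsg machine driver to the wm8960:

```dts
/* Illustrative sketch -- not copied from the actual dts file. */
sound-rpmsg {
	compatible = "fsl,imx-audio-rpmsg";
	model = "wm8960-audio";
};
```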
In Linux, if you choose wm8960 in the kernel:
• SoC Audio support for the WM8958, WM8960, and WM8962 CODECs. In menuconfig,
this option is available under:
-> Device Drivers
-> Sound card support
-> Advanced Linux Sound Architecture
-> ALSA for SoC audio support
-> SoC Audio for Freescale CPUs
-> SoC Audio support for i.MX boards with wm8962 (or wm8960)
The stereo codec SoC driver sources are in sound/soc/fsl and sound/soc/codecs:
sound/soc/fsl/imx-wm8960.c: machine layer for the stereo CODEC ALSA SoC (CODEC as I2S ...)
sound/soc/codecs/wm8960.c: CODEC layer for the stereo CODEC ALSA SoC
These are the Linux-side drivers for wm8960. For more detailed information about the dts file, you can also refer to the document: