Content originally posted in LPCWare by ccrome on Tue May 01 09:55:09 MST 2012
hi there,
Thanks for the information. Basically, we're building a gadget that needs to appear to a PC as a USB Audio device, so that's one full-speed USB port accounted for :-) On the other side we need to connect to many audio codecs (say, 16 channels, both directions). For that we'll use a serial port -- probably the SSP -- and put the codecs into TDM mode (the codecs can run 256 bits/frame, so we can fit 16 channels on one wire). Then, internal to our box, we need to do a ton of processing, for which we'll use ... something else. Probably a Cortex-A8, A9, A15, a DSP, whatever comes with the best price/performance fit for the particular application (BeagleBoard, BeagleBone, PandaBoard, whatever).
Why don't we:
a) connect the codecs directly to the signal processor?
b) use the host processor in device mode?
a: It turns out to be a major hassle -- each processor requires custom kernel drivers to drive the codec board (which depend on the OS, and even the OS version), so switching from one processor to another (or even one Linux rev to another) would require custom kernel development. Additionally, not all SoCs/SoC boards expose serial ports that can handle the audio TDM. So, at best it's hard to switch signal processing platforms; at worst it's impossible.
b: Not all SoCs do USB device very well. Some are broken as a high-speed device in one way or another, and Linux support for acting as an audio device doesn't work the way I need it to (it's half duplex, etc.).
So, it seems like the best compromise is to have an intermediary microcontroller -- an LPC18xx or LPC43xx -- act as a proxy between the codecs<->DSP and between the DSP<->host. Pretty much all SoCs/processors/boards that we might use as a DSP platform can do USB host just fine, and if I write a libusb host application, it's portable to Linux/Mac/Windows with virtually no software changes at all. The cost, of course, is that we'll need an extra microcontroller, but that's not too terrible for us in the grand scheme of things. It's really about software development cycle/cost as we move forward to new processing platforms. I think this scheme can keep latency down to an acceptable level (say, a few milliseconds), so I don't really see the downside.
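For the portable host side, a minimal libusb-1.0 read loop might look like the sketch below. The VID/PID, endpoint address, and packet size are placeholders for whatever the LPC firmware ends up exposing (0x1FC9 is NXP's vendor ID, but the product ID here is made up); this exact code compiles unchanged on Linux, macOS, and Windows.

```c
#include <stdio.h>
#include <libusb-1.0/libusb.h>

#define VID       0x1fc9   /* NXP vendor ID; PID below is hypothetical */
#define PID       0x0000
#define EP_IN     0x81     /* assumed HS interrupt IN endpoint */
#define PKT_SIZE  1024     /* max HS interrupt packet size */

int main(void) {
    libusb_context *ctx = NULL;
    if (libusb_init(&ctx) != 0) return 1;

    libusb_device_handle *h = libusb_open_device_with_vid_pid(ctx, VID, PID);
    if (!h) { libusb_exit(ctx); return 1; }
    libusb_claim_interface(h, 0);

    unsigned char buf[PKT_SIZE];
    int got = 0;
    /* One interrupt transfer per service interval; in a real app this
       runs in a loop feeding the audio processing. */
    int rc = libusb_interrupt_transfer(h, EP_IN, buf, sizeof buf,
                                       &got, 100 /* ms timeout */);
    if (rc == 0)
        printf("received %d bytes of audio data\n", got);

    libusb_release_interface(h, 0);
    libusb_close(h);
    libusb_exit(ctx);
    return 0;
}
```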
So, the connection to the PC will be USB Audio Class. But the connection to the other processor can be custom, and needs to be high speed. I plan to use a high-speed interrupt endpoint so I can reserve bandwidth and bound latency. We can't use isochronous because we must get every sample. I don't want to use bulk because I want latency as low as we can get. So, HS interrupt is the only thing left :-) FS interrupt can't carry very much data, but HS interrupt can (as per the USB 2.0 spec).
I don't think we need a multi-threaded environment. We're basically just shuttling data from SSP<->HS-USB and from HS-USB<->USB-Audio, in pretty much lock-step fashion. There will need to be some synchronization and buffering, but I don't think that'll be tragic. BTW, we can suffer the normal USB audio hiccups on the FS host USB Audio Class interface -- that's no problem. But we cannot suffer even a single sample drop on the high-speed interface; we need perfect synchrony between the codecs and the DSP processor.
Do you think I could use nxpUSBlib for the Audio Class device, and just write a bare-hardware interface to the other USB port for the HS interrupt connection? Or, what if I create two complete copies of the nxpUSBlib code and prefix one of them, so there are no function/variable name clashes? Would that work?
I'm a newbie both to LPC development, and to USB development, so any help would be most appreciated :-)
Thanks,
-Caleb