I am looking at the RTCS manual, specifically the select() function, and wondering why it takes rtcs_fd_set pointers instead of just taking arrays of file descriptors and their respective lengths. Is there some reasoning behind this?
For one thing, it is somewhat inefficient: every rtcs_fd_set in the application is sized for the largest set used anywhere, since they all share the same RTCSCFG_FD_SETSIZE. For another, it leaves room for a dangerous configuration change by making RTCSCFG_FD_SETSIZE configurable in user_config.h. If I change it to, say, 4 from the default 8, everything that uses rtcs_fd_set is at risk of not having a big enough FD set array to operate properly (not to mention possible overruns in unsafe code).
Am I missing something? I'm sure there's a good reason for this design decision, but I can't quite figure out what it is. If a Freescale employee who dealt with that part of the code would explain the reasoning behind it, that would be ideal, but I would also appreciate any speculation on why it's done this way or relevant historical background from other similar APIs.
As a safeguard, I do something similar to
#if RTCSCFG_FD_SETSIZE < 5
#error "RTCS FD SET SIZE TOO SMALL"
#endif
in code that requires a certain FD set size to function. Of course there's also the ability to calculate constants and loop counts based on RTCSCFG_FD_SETSIZE to make things more generic, so the fact that it's changeable really isn't a big deal if the code handles it properly. Mostly, the thing that puzzles me here is the willingness to potentially waste some RAM in an embedded system for the sake of having fewer arguments in API function calls...