I am posting this in case it might help someone else who runs into the same problem.
I am using MQX 4.2
When you call the RTCS socket shutdown() function, it executes code in sock_close.c that calls the MQX function _task_stop_preemption() around certain critical operations. This effectively makes the calling task the highest-priority task until a matching call to _task_start_preemption() is made. Most of the time this works as intended, and the task is the highest-priority task for only a very short interval. However, the function in sock_close.c is complicated and has many execution paths. Under certain error conditions, the function can occasionally exit without calling _task_start_preemption(). When this happens, the calling task remains the highest-priority task. You can see this by clicking on the task name in the task summary list or ready queue of TAD (Task Aware Debugging): the "Pre-emption Disabled" scheduling flag TASK_PREEMPTION_DISABLED will be set for that task. You may also notice that other higher-priority tasks are Ready but not running as a result.
When the affected task voluntarily yields, higher-priority tasks can run again, but the flag remains set. As a result, whenever the task next becomes eligible for execution it once again runs as the highest-priority task, even if tasks with higher priority become Ready while it is running.
The problem persists until shutdown() is called a second time and executes normally, with _task_start_preemption() called as intended.
I believe this is a bug in the RTCS library. Rather than rewriting the library, the workaround is to make an explicit call to _task_start_preemption() immediately after each call to shutdown(). If shutdown() has already re-enabled pre-emption (the normal case), the extra call does not seem to have any harmful side effects. If shutdown() did leave the "Pre-emption Disabled" scheduling flag set, the extra call clears it and normal scheduling resumes.