I already checked the clock frequency as described in that section. The PLLs in the image sensor are configured so that the sensor's PHY CLK is 544MHz. The MIPI_CSI_PHY_TST_CTRL1 register was set to 0x14, which corresponds to 849MHz. According to the first equation in section 3.4 ...
MIPI data rate = (MIPI clock * 2) * Number of lanes >= Pixel clock * Bits-per-pixel
... the MIPI clock needs to be above a minimum value, and in my case it was well above it. I also tried setting it to a value closely matching the sensor's clock (0x2E, 600MHz), with the same result. I would not rule out bad signal integrity, but the corruption pattern is very repeatable, to the point where I could reconstruct parts of the image by extracting byte sequences.
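For reference, this is the quick sanity check I did on that inequality. The pixel clock, bit depth and lane count below are my assumptions (not values from the reference manual), so substitute the real ones:

```python
# Sanity check of the CSI-2 bandwidth equation from section 3.4:
#   (MIPI clock * 2) * lanes >= pixel clock * bits-per-pixel
# Pixel clock, bit depth and lane count are assumptions, not manual values.

def min_mipi_clock_hz(pixel_clock_hz, bits_per_pixel, num_lanes):
    """Minimum MIPI clock so that (clock * 2) * lanes >= pixel_clock * bpp."""
    return pixel_clock_hz * bits_per_pixel / (2 * num_lanes)

pixel_clock_hz = 25_000_000   # assumed ~25 MHz pixel clock
bits_per_pixel = 10           # RAW10 Bayer
num_lanes = 2                 # assumed 2-lane link

required = min_mipi_clock_hz(pixel_clock_hz, bits_per_pixel, num_lanes)
print(f"Minimum MIPI clock: {required / 1e6:.1f} MHz")
print(f"Margin at the configured 849 MHz: {849e6 / required:.1f}x")
```

Even with conservative assumptions the configured clock is far above the minimum, which is why I don't think raw bandwidth is the problem.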
In an attempt to figure out the nature of the frame, I covered half of the camera's horizontal field of view to obtain a "half-frame". A good frame would have its left half dark and its right half bright (saturated), with the saturation pattern repeating every 640 pixels (i.e. with every new row). In the corrupt image the saturated/not-saturated pattern repeats every 160 pixels, which is a quarter of the width. In addition, the pixel stream is interrupted by a burst of 144 black pixels after every 288 pixels of actual data.
The "corruption" pattern seems to change after about 240 lines. After that the corruption appears twice as frequent. the saturation pattern now is 80 pixels and the black pixel burst is now 72 pixels long.
The only place where I have seen image data compressed after half of the frame is in the YUV422 and YUV420 formats. Could that be related? The sensor outputs Bayer (raw) data.
One possibility is that the receiving end has trouble "understanding" the data. Table 1 in the mentioned document talks about the data types used to define packets, but the actual values the processor expects are not listed. Is this something that OmniVision sensors may need to have configured? Does anybody know more about this?
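For anyone comparing against Table 1, these are the standard data type (DT) codes from the CSI-2 specification that I have been checking against (the values below come from the spec itself, not from the SoC manual). My understanding is that whatever the receiver is programmed to accept has to match the DT field the sensor puts in its packet headers, and a mismatch could produce exactly this kind of "misunderstood" data:

```python
# Standard MIPI CSI-2 data type (DT) codes for the formats relevant here.
# Values are from the CSI-2 specification, not from the SoC reference manual.

CSI2_DATA_TYPES = {
    0x18: "YUV420 8-bit",
    0x1E: "YUV422 8-bit",
    0x22: "RGB565",
    0x24: "RGB888",
    0x2A: "RAW8",
    0x2B: "RAW10",   # what a Bayer sensor like this would typically send
    0x2C: "RAW12",
}

def describe_dt(dt):
    return CSI2_DATA_TYPES.get(dt, f"unknown/unchecked DT 0x{dt:02X}")

print(describe_dt(0x2B))
```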