kswapd and oom killer triggered when system has >200MBytes free


OS: Yocto 1.8, Kernel 3.10.53
HW: Congatec qmx6dl Q7 module, 1GByte RAM, 4GByte eMMC
RootFS: on eMMC
Software running: Some python3 applications.
Hi,

We see some odd behavior once free memory drops to about 250 MByte: the caches/buffers start emptying and kswapd starts consuming more and more CPU. If we keep allocating, the system eventually grinds to a halt and the OOM killer kicks in to free up memory.
We removed the memory-hogging applications we had running and, with a small Python program that just allocates memory, we were able to reproduce the same behavior. This leads us to believe it is not related to any of the admittedly memory-hungry software we run.
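For reference, the test program was essentially a loop of this form (a minimal sketch, not the exact script we used; it allocates memory in 10 MB chunks and touches every page so the memory is really committed):

    import time

    CHUNK = 10 * 1024 * 1024   # allocate 10 MB per step
    PAGE = 4096
    chunks = []

    while True:
        buf = bytearray(CHUNK)
        for i in range(0, CHUNK, PAGE):
            buf[i] = 1          # touch every page so it is actually backed by RAM
        chunks.append(buf)
        print("allocated %d MB" % (len(chunks) * 10))
        time.sleep(1)           # slow enough to follow in top / /proc/meminfo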

We have configured the OOM behavior to panic and then reboot, and we see that just prior to the OOM-triggered reboot around 230 MByte is reported free (watching top over a serial console).
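(For completeness, the panic-and-reboot behavior is set with the usual sysctls, roughly like this in /etc/sysctl.conf; the 10 second reboot delay is just an example value:)

    # Panic on OOM instead of killing individual tasks, then reboot after 10 s.
    vm.panic_on_oom = 1
    kernel.panic = 10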

We suspect this may have something to do with the CMA memory, which is configured to 320 MByte (the default setting).
We have tried running with CMA disabled, and with the same test the system ended up in the same state, with kswapd and the OOM killer kicking in at around 40 MB free. That seems reasonable once the low watermark etc. is taken into consideration.
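(In case anyone wants to reproduce this: the CMA pool size can usually be overridden from the bootloader via the kernel command line, assuming the kernel honours the generic cma= parameter rather than only the build-time CONFIG_CMA_SIZE_MBYTES setting:)

    cma=320M    # explicitly sized pool (the default on this BSP)
    cma=0       # disables the CMA pool entirely, for testing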

The product we are working on will eventually have a JVM running alongside a QtWebEngine-based web browser, outputting either 1080p over HDMI or 1280x1024 over LVDS, so we suspect we do need the CMA pool, even though quick tests have indicated that everything works fine with CMA disabled. Simply lowering the CMA pool size causes allocation failures to be reported by the galcore driver.

After reading a bit about CMA, it appears that the reserved memory should still be available to the rest of the system, at least for buffers/cache (non-anonymous pages), which makes these findings a bit odd.
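To make this easier to see, a small helper along these lines can log the relevant /proc/meminfo fields next to the allocation test (the CmaTotal/CmaFree fields may not be exported by a 3.10 kernel, in which case they are simply skipped):

    import time

    FIELDS = ("MemFree", "Buffers", "Cached", "CmaTotal", "CmaFree")

    def meminfo():
        values = {}
        with open("/proc/meminfo") as f:
            for line in f:
                key, rest = line.split(":", 1)
                if key in FIELDS:
                    values[key] = int(rest.split()[0])   # values are reported in kB
        return values

    while True:
        print("  ".join("%s=%d kB" % (k, v) for k, v in sorted(meminfo().items())))
        time.sleep(1)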

Has anyone seen a similar issue, and does anyone know of a solution or workaround?
We are of course considering workarounds such as simply adding a swap file or getting a module with more RAM, but it would be interesting to understand the behavior we see, and why the system reports 230 MByte free yet still triggers the OOM killer.

Regards,
Bjorn

Edit:

We had changed vm.min_free_kbytes to 32 MByte, which may have been the cause of this problem: after commenting out that line in /etc/sysctl.conf, we see the same behavior as when CMA was disabled altogether.
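For reference, the line we had added (and have now commented out) was of this form, 32 MByte expressed in kB:

    # /etc/sysctl.conf: raise the minimum free memory reserve to 32 MB
    vm.min_free_kbytes = 32768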

