You need to apply backpressure before you hit memory limits, not after.
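
Roughly what I mean, sketched in C (a minimal sketch, not production code; MAX_IN_FLIGHT is an arbitrary assumption, to be sized to your actual memory budget):

    /* Backpressure sketch: cap the number of in-flight units of work so
     * memory use is bounded before the allocator ever gets a chance to
     * fail. */
    #include <semaphore.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define MAX_IN_FLIGHT 256   /* assumption: size this to your memory budget */

    static sem_t slots;

    static bool try_accept_work(void)
    {
        /* Non-blocking: if every slot is taken, refuse new work now
         * instead of allocating and hoping for the best. */
        return sem_trywait(&slots) == 0;
    }

    static void work_done(void)
    {
        sem_post(&slots);       /* release a slot for the next unit */
    }

    int main(void)
    {
        sem_init(&slots, 0, MAX_IN_FLIGHT);
        if (!try_accept_work()) {
            fprintf(stderr, "busy, try again later\n");  /* the backpressure signal */
            return 1;
        }
        /* ... allocate and process one unit of work ... */
        work_done();
        return 0;
    }

The point is that rejection happens at admission time, on a cheap counter, rather than deep inside the request when an allocation fails.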

If you’re OOM, your application is in a pretty unrecoverable state. Recovery is theoretically possible, but not practically.

If you allocate a relatively big chunk of memory for each unit of work, and at some point your allocation fails, you can just drop that unit of work. What is not practical?
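
Something like this, as a sketch (CHUNK_SIZE and handle_request are made-up names, assuming a fixed working buffer per request):

    #include <stdio.h>
    #include <stdlib.h>

    #define CHUNK_SIZE (64UL * 1024 * 1024)   /* assumption: 64 MiB per request */

    /* Returns 0 on success, -1 if this request was shed for lack of memory. */
    static int handle_request(void)
    {
        char *buf = malloc(CHUNK_SIZE);
        if (buf == NULL) {
            /* Allocation failed: drop just this unit of work, keep serving. */
            fprintf(stderr, "shedding request: out of memory\n");
            return -1;
        }
        /* ... do the actual work in buf ... */
        free(buf);
        return 0;
    }

    int main(void)
    {
        return handle_request() == 0 ? 0 : 1;
    }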

I think in that case overcommit will happily report that the allocation worked, unless you also zero the entire chunk of memory up front, in which case you get OOM-killed on the write instead of a clean malloc failure.
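
Easy to see on a default Linux config: under heuristic overcommit (mode 0), only a malloc beyond roughly RAM + swap is refused up front; anything below that is granted whether or not the memory actually exists, and the failure surfaces when the pages are first touched, as an uncatchable SIGKILL from the OOM killer. A sketch (the 8 GiB figure is arbitrary; pick something below your RAM + swap total but above what is actually free):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        size_t huge = 8ULL * 1024 * 1024 * 1024;   /* arbitrary: 8 GiB */
        char *p = malloc(huge);
        if (p == NULL) {
            /* Under mode 0, only "seriously wild" sizes fail here. */
            fprintf(stderr, "malloc failed cleanly\n");
            return 1;
        }
        puts("malloc \"succeeded\"");
        /* Zeroing forces every page to be backed by real memory. Under
         * overcommit the process may be OOM-killed right here (SIGKILL),
         * with no error path the application can handle. */
        memset(p, 0, huge);
        puts("survived the write");
        free(p);
        return 0;
    }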

I suppose you could try to reliably target the "seriously wild allocation fails" case without leaving too much memory on the table.

   0: Heuristic overcommit handling. Obvious overcommits of
      address space are refused. Used for a typical system. It
      ensures a seriously wild allocation fails while allowing
      overcommit to reduce swap usage. root is allowed to
      allocate slightly more memory in this mode. This is the
      default.
https://www.kernel.org/doc/Documentation/vm/overcommit-accou...

Running in an environment without overcommit would allow you to handle it gracefully, though that brings its own zoo of nasty footguns.
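
If graceful handling matters, it may be worth checking at startup which mode you are actually running under (a sketch; /proc/sys/vm/overcommit_memory is the standard Linux sysctl interface):

    #include <stdio.h>

    /* Returns the vm.overcommit_memory mode (0 heuristic, 1 always,
     * 2 never), or -1 if it cannot be read. Mode 2 is the only one in
     * which allocation failures are reliably reported to the caller. */
    static int overcommit_mode(void)
    {
        FILE *f = fopen("/proc/sys/vm/overcommit_memory", "r");
        if (f == NULL)
            return -1;
        int mode = -1;
        if (fscanf(f, "%d", &mode) != 1)
            mode = -1;
        fclose(f);
        return mode;
    }

    int main(void)
    {
        printf("vm.overcommit_memory = %d\n", overcommit_mode());
        return 0;
    }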

See this recent discussion on what can happen when turning off overcommit:

https://news.ycombinator.com/item?id=46300411

> See this recent discussion on what can happen when turning off overcommit:

What are you referring to specifically? Overcommit is (presumably) only useful if you are using Linux as a desktop OS.
