Timeline for answer to Heap memory allocation in Linux by Marcus Müller

Current License: CC BY-SA 4.0

Post Revisions

11 events
Sep 19, 2023 at 10:15 comment added Marcus Müller That's not a crashing process. And again, that would be trivial to find out: you'd have kernel log (dmesg) OOM entries. It's also a very unlikely scenario with packets as small as yours. This isn't complicated fragmentation at all; also, libc will basically never return unused memory to the kernel after malloc/free, so your process wouldn't even be doing any brk/mmap after a while. Note how the scenario you link to is "I need a very large piece of memory". That's not at all similar to your use case. If your process crashes, that's because your process is buggy, 99.95% of the time :)
Sep 19, 2023 at 9:52 comment added Vishal Sharma How about this scenario: stackoverflow.com/a/62855363/5701173 "RAM would occasionally become sufficiently fragmented that the kernel couldn't find a set of physically-contiguous RAM pages large enough to meet its immediate needs. When this happened, the kernel would invoke the OOM-killer to free up some RAM to avoid a kernel panic, which is all well and good for the kernel but not so great when it kills a process that the user was relying on to get his work done."
Sep 19, 2023 at 9:50 comment added Marcus Müller no, as I already said: No process would ever crash due to memory fragmentation. The process cannot ever see that.
Sep 19, 2023 at 9:09 comment added Vishal Sharma My process is a networking process which receives UDP/TCP packets continuously at a high rate (20–30K per second). Can such networking processes crash due to memory fragmentation?
Sep 19, 2023 at 8:57 vote accept Vishal Sharma
Sep 19, 2023 at 8:30 comment added Marcus Müller Long story short: that's very very unlikely. Fragmentation doesn't lead to crashes; Linux and MMUs work pretty reliably, and a process never needs to care about the underlying physical memory; it's positively none of its concerns. Performance-wise: yes, you get page tables that might need to grow a lot, and then you get TLB misses, and that's bad for performance. But: counting TLB misses is pretty easy, you just ask the Linux kernel about that.
Sep 19, 2023 at 8:19 comment added Vishal Sharma The reason I'm interested in all this is that I'm currently investigating whether fragmentation of my Linux server's memory might somehow be behind random, difficult-to-reproduce crashes of my long-running process. Also, from a performance point of view, does it matter for a user-space program whether the underlying memory is contiguous or not?
Sep 19, 2023 at 8:16 comment added Marcus Müller @VishalSharma I'm wondering why you're interested in userspace memory being backed by contiguous physical memory. That's because it raises all kinds of "this is a design mistake" red flags for me in the context of driver design.
Sep 19, 2023 at 8:14 comment added Marcus Müller No. There are no guarantees. The Linux buddy algorithm has to take more goals into account than just delivering memory that is as little fragmented as possible. There are things like preferring not to have to coordinate page-table-entry modifications across CPU cores.
Sep 19, 2023 at 7:58 comment added Vishal Sharma Trying to understand better: suppose a process requests 16 KB of free memory and group 2 has a sufficient number of free blocks available; is it then guaranteed that the allocated memory will be physically contiguous? And only when no free blocks are available in group 2 and onwards will the allocated memory be non-contiguous?
Sep 18, 2023 at 17:43 history answered Marcus Müller CC BY-SA 4.0