this post was submitted on 15 Feb 2025
This is from a fresh boot of the system; apart from sshd I have not started anything else. RAM consumption used to be just 126-200 MB, but it has now jumped so significantly that I am concerned I might have unnecessarily bloated my system:

I intend to use the system as a local server, with an optional fully featured WM (Hyprland, which is installed, but this screenshot was taken before it was loaded) for occasional use.

RAM conservation is a top priority, and I would like to know whether such a big jump in usage is normal or whether something is wrong with my system config.

[–] [email protected] 0 points 13 hours ago (2 children)

Yea, this kind of blows me away. Maybe I'm out of the loop, but 3-4 processes each eating away that much memory? Just to deal with the network stuff? Holy fuck.

[–] [email protected] 5 points 10 hours ago* (last edited 10 hours ago) (1 children)

MEM% for each NetworkManager process is 0.4% of 3.28 G ≈ 13.1 M. Additionally, almost certainly most of this is shared between these processes, as well as with other processes, so you cannot just add the numbers together.

The virtual size (315M) is virtual memory, i.e. reserved address space. Quite clearly only 13.1 M of it is actually in use. The rest will only start getting backed by real physical memory if it is written to.

The way this works is that when the process writes to a memory location with no physical page behind it, the memory management unit (MMU) interrupts it (this is known as a page fault). Execution jumps to the kernel, which allocates physical memory and updates the page tables, and then the write operation proceeds.

Many programs and library functions like to request far larger virtual memory buffers than they will actually use in practice, because that way the kernel does all of this in the background when more memory is needed, and the program doesn't have to do anything. I.e. it simplifies the code.

[–] [email protected] 1 points 2 hours ago

Thank you for explaining it. I haven't been in the *nix world for years, keep thinking I'll get back into it.

[–] [email protected] 1 points 11 hours ago

Well, I wasn't thinking about memory (and maybe that's the reason some people downvoted that comment...), but in my experience NetworkManager takes time to start at boot, and over months/years it was taking more and more time. I reset it once and it kept doing the same thing.

Since you said you're planning on a home-server kind of thing, I'd think setting up a static IP is a good idea, and NetworkManager is just overkill for that; you could very well get along with Gentoo's netifrc.
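For reference, a minimal static-IP setup with netifrc is just a few lines in `/etc/conf.d/net` (the interface name `eth0` and the addresses below are placeholder assumptions; adjust to your network):

```shell
# /etc/conf.d/net -- netifrc static configuration
# (eth0 and the addresses are example values)
config_eth0="192.168.1.10/24"
routes_eth0="default via 192.168.1.1"
dns_servers_eth0="192.168.1.1"
```

On an OpenRC system you then enable the interface at boot with `ln -s net.lo /etc/init.d/net.eth0` followed by `rc-update add net.eth0 default`.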