This is from a fresh boot of the system; apart from sshd I have not started anything else. RAM consumption used to be just 126-200 MB, but now it has jumped so significantly that I am concerned I might have unnecessarily bloated my system:

I intend to use the system as a local server with an optional fully featured WM (Hyprland, which is installed, but this screenshot was taken before it was loaded) for occasional use.

RAM conservation is a top priority, and I would like to know whether such a big jump in usage is normal or whether there is something wrong with my system config.

[–] [email protected] 3 points 4 days ago (4 children)

If you can live without NetworkManager, I'd disable it and move your network setup to a static IP. NetworkManager can hog resources.

[–] [email protected] 1 points 4 days ago (3 children)

Yea, this kind of blows me away. Maybe I'm out of the loop, but 3-4 processes each eating away that much memory? Just to deal with the network stuff? Holy fuck.

[–] [email protected] 5 points 3 days ago* (last edited 3 days ago) (1 children)

MEM% for each NetworkManager process is 0.4%, and 0.4% of 3.28 G ≈ 13.1 M. Additionally, most of that is almost certainly shared between these processes (and with other processes as well), so you cannot just add the per-process numbers together.
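
If you want to check the sharing claim yourself, the kernel exposes a per-process PSS value (proportional set size, where each shared page is split among the processes sharing it) in /proc/<pid>/smaps_rollup. Here is a rough Python sketch, not from the original post, assuming a reasonably recent Linux kernel and enough privileges to read those files (usually root):

```python
#!/usr/bin/env python3
"""Rough sketch: compare the naive sum of RSS with the sum of PSS
(proportional set size, shared pages divided among their sharers)
for every NetworkManager process, using /proc/<pid>/smaps_rollup.
Assumes Linux >= 4.14 and permission to read those files."""
import os

def rollup_kb(pid, field):
    """Return one field (in kB) from /proc/<pid>/smaps_rollup."""
    with open(f"/proc/{pid}/smaps_rollup") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    return 0

total_rss = total_pss = 0
for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            if "NetworkManager" not in f.read():
                continue
        total_rss += rollup_kb(pid, "Rss")
        total_pss += rollup_kb(pid, "Pss")
    except (FileNotFoundError, PermissionError):
        continue  # process exited or is not readable by us

print(f"naive RSS sum: {total_rss} kB")
print(f"PSS sum (shared pages counted once overall): {total_pss} kB")
```

If the pages really are shared, the PSS total should come out noticeably lower than the naive RSS sum, which is exactly the double counting described above.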

The 315 M figure is the virtual size, i.e. the virtual memory the process has mapped. Quite clearly only about 13.1 M of this is actually in use. The rest will only start getting backed by real physical memory if it is written to.
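
The same gap is visible directly in /proc/<pid>/status, which reports both numbers (see proc(5)). A minimal sketch, with a placeholder PID:

```python
#!/usr/bin/env python3
"""Minimal sketch: print virtual size (VmSize) vs resident size (VmRSS)
for a single process. PID is a placeholder; substitute a real one."""
PID = 1234  # hypothetical PID, e.g. of a NetworkManager process

with open(f"/proc/{PID}/status") as f:
    for line in f:
        if line.startswith(("VmSize:", "VmRSS:")):
            print(line.rstrip())  # both values are reported in kB
```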

The way this works is that the memory management unit (MMU) interrupts the process when it writes to a location that is not yet backed by physical memory (this is known as a page fault). Execution jumps to the kernel, which allocates physical memory and updates the page tables, and then the write operation proceeds.

Many programs and library functions like to request way larger virtual memory buffers than they will actually use in practice, because that way the kernel does all of this in the background whenever more memory is needed, and the program doesn't have to do anything itself. I.e. it simplifies the code.
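
You can watch this happen with a toy demand-paging demo. This is just an illustrative sketch (assuming Linux and Python 3, with an arbitrary 256 MiB mapping and the usual 4 KiB page size), not anything NetworkManager actually does:

```python
#!/usr/bin/env python3
"""Toy demand-paging demo: map a large anonymous region, then watch VmRSS.
The mapping inflates the virtual size immediately, but physical pages are
only allocated when we actually write to them (each first write to a page
causes a page fault that the kernel services as described above)."""
import mmap

def vm_rss_kb():
    """Resident set size of this process, in kB, from /proc/self/status."""
    with open("/proc/self/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])

SIZE = 256 * 1024 * 1024   # 256 MiB of virtual address space
PAGE = 4096                # assume the usual 4 KiB page size
buf = mmap.mmap(-1, SIZE,  # fileno -1: anonymous mapping, no file behind it
                flags=mmap.MAP_PRIVATE | mmap.MAP_ANONYMOUS)

print("resident after mmap, before writing:", vm_rss_kb(), "kB")

# Touch one byte in every page; each first touch faults that page in.
for off in range(0, SIZE, PAGE):
    buf[off] = 1

print("resident after touching every page:", vm_rss_kb(), "kB")
```

The first print should show an almost unchanged resident size despite the 256 MiB virtual mapping; only after the loop has touched every page does VmRSS grow to roughly match it.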

[–] [email protected] 2 points 3 days ago

Thank you for explaining it. I haven't been in the *nix world for years; I keep thinking I'll get back into it.
