Where's the rest of my RAM going?

droidus

Member
Apr 5, 2020
I am running PVE 7.4-4. I have a mix of containers and VMs. If I counted right, they are using 43 GB in total. I have 64 GB installed, but the summary for the device shows 60 GB in use. I understand the Proxmox OS itself needs some, but how much does it really use on its own? Where is the remaining 21 GB being used?
 
What is the output of free -g (in code tags please) on the Proxmox host?
If you use ZFS, what is the output of arc_summary | grep size?
 
Code:
               total        used        free      shared  buff/cache   available
Mem:              62          59           2           0           1           2
Swap:              0           0           0
It's been a while since I stood it up, so I can't recall whether I'm running ZFS or not. I did the following, though I'm not sure it's the best/correct way to verify:
Code:
root@home:/var/log# fdisk -l | grep -i zfs
Partition 1 does not start on physical sector boundary.
/dev/sdb3  1050624 5860533134 5859482511  2.7T Solaris /usr & Apple ZFS
/dev/sda3  1050624 5860533134 5859482511  2.7T Solaris /usr & Apple ZFS
Partition 1 does not start on physical sector boundary.
Partition 2 does not start on physical sector boundary.
/dev/zd112p2  1064 104825895 104824832   50G FreeBSD ZFS
I have two 3 TB drives installed.
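A more direct way to confirm whether ZFS is in use (assuming the standard ZFS userland tools that ship with Proxmox are installed) would be something like:
Code:
# list any imported ZFS pools; no output (or "no pools available") means ZFS isn't in use
zpool list
# show pool layout and which disks belong to it
zpool status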
Here's the output from the command you requested:
Code:
ARC size (current):                                    98.3 %   30.8 GiB
        Target size (adaptive):                        98.2 %   30.8 GiB
        Min size (hard limit):                          6.2 %    2.0 GiB
        Max size (high water):                           16:1   31.4 GiB
        Most Frequently Used (MFU) cache size:         74.5 %   20.9 GiB
        Most Recently Used (MRU) cache size:           25.5 %    7.1 GiB
        Metadata cache size (hard limit):              75.0 %   23.5 GiB
        Metadata cache size (current):                 29.6 %    7.0 GiB
        Dnode cache size (hard limit):                 10.0 %    2.4 GiB
        Dnode cache size (current):                    40.2 %  967.4 MiB
        spl_kmem_cache_magazine_size                                   0
        spl_kmem_cache_max_size                                       32
        l2arc_rebuild_blocks_min_l2size                       1073741824
        spa_asize_inflation                                           24
        zfs_abd_scatter_min_size                                    1536
        zfs_arc_average_blocksize                                   8192
        zfs_dbgmsg_maxsize                                       4194304
        zfs_initialize_chunk_size                                1048576
        zfs_max_nvlist_src_size                                        0
        zfs_max_recordsize                                       1048576
        zfs_metaslab_max_size_cache_sec                             3600
        zfs_object_mutex_size                                         64
        zfs_override_estimate_recordsize                               0
        zfs_recv_write_batch_size                                1048576
        zfs_vdev_cache_size                                            0
        zfs_vnops_read_chunk_size                                1048576
        zil_maxblocksize                                          131072
 
Up to 32 GiB is used by the ZFS ARC (currently 30.8 GiB), which would explain where your memory is going. You can limit this by following the Proxmox manual, or one of the many threads about it on this forum. Personally, I think memory used as cache is better than unused memory.
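As a rough sketch of how that limit is usually set (the 8 GiB value here is only an example, not a recommendation):
Code:
# /etc/modprobe.d/zfs.conf -- cap the ARC at 8 GiB (8 * 1024^3 bytes)
options zfs zfs_arc_max=8589934592
Then rebuild the initramfs with update-initramfs -u -k all and reboot, or apply it at runtime with echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max.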
 
I believe I am using RAID 1 (a mirror). My disk size shows 2.2T available. If I have 2 x 3 TB drives, would that be considered 3 TB or 6 TB? I would think 3 TB, but just want to confirm.
 
3 TB, of which only ~2.18 to 2.45 TiB would actually be usable, as you shouldn't fill a ZFS pool too much (it gets slower once it's more than ~80% full).
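In case the numbers look odd: they come from converting the drive-marketing TB to TiB and then applying a rough 80-90% fill rule of thumb. A quick check:
Code:
# 3 TB (decimal) expressed in TiB: 3 * 10^12 bytes / 2^40 bytes per TiB
echo "3 * 10^12 / 2^40" | bc -l    # ~2.73 TiB usable mirror capacity
echo "0.8 * 2.73" | bc -l          # ~2.18 TiB at the 80% fill mark
echo "0.9 * 2.73" | bc -l          # ~2.45 TiB at the 90% fill mark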
 
The RAM you provisioned to your VMs (not containers) is not the only RAM that gets used. Each VM also has a virtual graphics card that uses RAM, plus disk drivers, network drivers, and so on. All of that needs RAM too, so the actual per-VM RAM usage is higher than what you provisioned.
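If you want to see that overhead yourself, one way is to compare a VM's configured memory with the resident size of its QEMU process (the VMID 100 here is just a placeholder, and the PID file path is the one Proxmox normally writes under /var/run/qemu-server):
Code:
# provisioned memory for the VM (placeholder VMID 100)
qm config 100 | grep -i memory
# actual resident memory of the corresponding QEMU/KVM process
grep VmRSS /proc/$(cat /var/run/qemu-server/100.pid)/status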
 