Hello guys,
I have 4 servers in the same cluster, and all of them run CTs and VMs.
Every node has a minimum of 64 GB RAM and a maximum of 256 GB RAM.
RAM usage per node (server) is around 20-30% of the total memory. I just noticed that on all the servers the swap is only 8 GB :| and on 2 nodes it is completely full, so that is not OK at all.
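For reference, this is how I'm checking the swap usage on each node (plain Linux commands, nothing Proxmox-specific):

Code:
# show memory and swap totals in human-readable form
free -h
# list the active swap devices/files and how full each one is
swapon --show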
I have to increase the swap, but I don't want to reinstall the nodes, and it has to be done safely because of the CTs/VMs running on them.
Does anyone know how to do this in a safe way, without losing any data?
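From what I've read so far, the safest route without reinstalling seems to be adding a swap file on the root filesystem, since it is ext4 on LVM here (see the df output below); I understand a swap file on ZFS would be a different story. This is just a sketch of what I'm thinking of trying, and the path and size are only examples, so please correct me if it's wrong:

Code:
# create an 8 GB file for extra swap (size is only an example)
dd if=/dev/zero of=/swapfile2 bs=1M count=8192
# swap files must not be readable by anyone but root
chmod 600 /swapfile2
# format it as swap and activate it immediately
mkswap /swapfile2
swapon /swapfile2
# make it permanent across reboots
echo '/swapfile2 none swap sw 0 0' >> /etc/fstab
# verify the new swap is active
swapon --show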
All the nodes run Proxmox 5.4-13.
I just don't understand why it doesn't make the swap the same size as the RAM. I have 4 servers with Proxmox 3.4 where I just ran the installer and did nothing else, and if the server had 64 GB RAM it made the swap 64 GB too, so I assumed it would be the same for Proxmox 5.4. I didn't even realize it wasn't doing that; I migrated the entire servers, everything was OK, and I only spotted it now :|
Thank you
Here is the df output from one of the nodes:

Code:
root@d4:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   32G     0   32G   0% /dev
tmpfs                 6.3G  555M  5.8G   9% /run
/dev/mapper/pve-root   94G   44G   46G  50% /
tmpfs                  32G   66M   32G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                  32G     0   32G   0% /sys/fs/cgroup
/dev/mapper/pve-data  1.7T  959G  682G  59% /var/lib/vz
/dev/fuse              30M   80K   30M   1% /etc/pve
root@d4:~# 
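The other option I'm considering, since everything is on LVM anyway, is growing the existing swap LV, assuming the volume group still has free extents and the nodes use the default pve/swap volume. This is untested as well, and swap has to be switched off briefly, so I'd migrate the CTs/VMs away first:

Code:
# check for free space in the volume group and the current LV sizes
vgs
lvs
# disable swap, grow the LV by 24G (example size), re-initialize and re-enable it
swapoff -a
lvresize -L +24G /dev/pve/swap
mkswap /dev/pve/swap
swapon -a
# note: mkswap assigns a new UUID, so check that /etc/fstab references
# /dev/pve/swap by path (the Proxmox default) and not by UUID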