I was on one of my Win 10 VMs today when, all of a sudden, I got kicked out of my RDP session. I logged into the CLI and was told that my disk was out of space. Without doing any analysis (my bad), I immediately assumed that my entire 240GB disk array was full, so I deleted a bunch of logs from /var/log. I still wasn't able to RDP into my VM, so I rebooted my server and was then able to get in again.
I'm running the latest version of Proxmox. I'd tell you exactly which version, but I get an out-of-space message when I try to run pveversion from the CLI. Proxmox itself lives on two 240GB Kingston Enterprise SSDs in RAID 0, and my VM storage is on a 2TB ZFS HDD.
I thought the problem was solved, but I'm still getting "no space left on device" errors whenever I try to do anything on the CLI. That shouldn't be true: running df -k -h, I see that I should have plenty of space left. The one thing that does worry me is that /dev/mapper/pve-root is at 100%. I installed Proxmox with all of the default configuration values and haven't done anything too exotic.
root@pve:/var/log# df -k -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 0 32G 0% /dev
tmpfs 6.3G 42M 6.3G 1% /run
/dev/mapper/pve-root 55G 55G 0 100% /
tmpfs 32G 37M 32G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
vm-storage 1.5T 0 1.5T 0% /vm-storage
vm-storage/subvol-101-disk-1 8.0G 1000M 7.1G 13% /vm-storage/subvol-101-disk-1
/dev/fuse 30M 20K 30M 1% /etc/pve
tmpfs 6.3G 0 6.3G 0% /run/user/0
root@pve:/var/log#
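For what it's worth, here's the command I was planning to use to figure out what's actually eating pve-root (I believe -x keeps du from crossing into /vm-storage and the tmpfs mounts, but correct me if I'm wrong):

```shell
# Summarize space used by each top-level directory on the root
# filesystem only (-x stays on one filesystem), largest last:
du -xh --max-depth=1 / 2>/dev/null | sort -h | tail -n 15
```
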
This is a lab setup I created about three weeks ago. I'm running one Docker container, a Windows Server 2016 VM, a Windows 10 VM, and a couple of Linux VMs. I've also got 64GB of RAM, and none of my machines is using more than 4GB.
I had no second thoughts about removing log files, but I don't feel comfortable deleting random files from a Linux system.
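One thing I read while googling is that a deleted file keeps consuming space as long as some process still has it open, which might explain why removing logs didn't help until I rebooted. Here's the check I came up with for that (a sketch using /proc so it doesn't need lsof installed; `lsof +L1` should show the same thing):

```shell
# List open file descriptors whose target file has been deleted;
# these still occupy disk space until the owning process exits.
for fd in /proc/[0-9]*/fd/*; do
  target=$(readlink "$fd" 2>/dev/null)
  case "$target" in
    *'(deleted)') echo "$fd -> $target" ;;
  esac
done
```

Am I interpreting that right, or is something else going on here?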
Any ideas as to what the problem is? I've googled this issue and really haven't found a solution I fully understand.