Am I in danger of running out of space?

slifin

New Member
Dec 31, 2013
Here is the output of df -h:

Code:
df -h
Code:
Filesystem               Size  Used Avail Use% Mounted on
udev                      10M     0   10M   0% /dev
tmpfs                    6.3G   11M  6.3G   1% /run
/dev/md1                 274G  251G  9.1G  97% /
tmpfs                    5.0M     0  5.0M   0% /run/lock
tmpfs                     14G   22M   14G   1% /run/shm
/dev/fuse                 30M   20K   30M   1% /etc/pve
/var/lib/vz/private/101   52G   29G  9.1G  76% /var/lib/vz/root/101
none                      25G  4.0K   25G   1% /var/lib/vz/root/101/dev
none                     4.0K     0  4.0K   0% /var/lib/vz/root/101/sys/fs/cgroup
none                     4.9G  1.1M  4.9G   1% /var/lib/vz/root/101/run
none                     5.0M     0  5.0M   0% /var/lib/vz/root/101/run/lock
none                      25G     0   25G   0% /var/lib/vz/root/101/run/shm
none                     100M     0  100M   0% /var/lib/vz/root/101/run/user
/var/lib/vz/private/100   70G   47G  9.1G  84% /var/lib/vz/root/100
none                     6.9G  4.0K  6.9G   1% /var/lib/vz/root/100/dev
none                     4.0K     0  4.0K   0% /var/lib/vz/root/100/sys/fs/cgroup
none                     1.4G  1.1M  1.4G   1% /var/lib/vz/root/100/run
none                     5.0M     0  5.0M   0% /var/lib/vz/root/100/run/lock
none                     6.9G     0  6.9G   0% /var/lib/vz/root/100/run/shm
none                     100M     0  100M   0% /var/lib/vz/root/100/run/user
/var/lib/vz/private/102   10G  6.2G  3.9G  62% /var/lib/vz/root/102
none                     756M  4.0K  756M   1% /var/lib/vz/root/102/dev
none                     756M   20K  756M   1% /var/lib/vz/root/102/dev/shm

And here is the output of:

Code:
vzlist --all --output ctid,hostname,diskspace,diskspace.s,diskspace.h --sort diskspace | awk '{if (NR>1) {printf("%-4s %-30s %-10s %-10s %-10s %d\n", $1, $2, $3, $4, $5, $3/$4*100)} else printf("%-4s %-30s %-10s %-10s %-10s %s\n", $1, $2, $3, $4, $5, "PERC_USED")}'


Code:
CTID HOSTNAME                       DSPACE     DSPACE.S   DSPACE.H   PERC_USED
102  *********.com        6481668    10485760   11534336   61
101  **********.eu      29756028   99614720   109576192  29
100  **********.com      48994076   204472320  224919552  23


My concern is that the root partition is using 97% of the disk space. What I don't understand is whether that space has just been allocated to my containers ahead of time, or whether the machine is actually using 97% of its space. If it is using that space, where is it being used, and where can I find it?
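A way I could check whether that space is really consumed, rather than just reserved, would be to measure each container's private area directly (a sketch; the paths come from the df output above):

Code:
# actual on-disk usage of each container's private area
du -sh /var/lib/vz/private/*
# compare against overall usage of the root filesystem
df -h /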
 
I've installed ncdu and that shows:

Code:
  166.6GiB [##########] /mnt
   82.4GiB [####      ] /var
  588.2MiB [          ] /usr
  267.4MiB [          ] /lib
   50.1MiB [          ] /boot
   22.0MiB [          ] /sbin
    6.5MiB [          ] /bin
    4.6MiB [          ] /etc
  284.0KiB [          ] /root
   20.0KiB [          ] /tmp
e  16.0KiB [          ] /lost+found
    4.0KiB [          ] /lib64
e   4.0KiB [          ] /srv
e   4.0KiB [          ] /selinux
e   4.0KiB [          ] /opt
e   4.0KiB [          ] /media
e   4.0KiB [          ] /home
    4.0KiB [          ]  .bash_history
@   0.0  B [          ]  vz
>   0.0  B [          ] /sys
>   0.0  B [          ] /run
>   0.0  B [          ] /proc
>   0.0  B [          ] /dev

I thought I had my backups on a secondary set of larger hard drives, inside /mnt/storage/dump.

Why is df -h counting space from secondary drives in its analysis? Does it look like those files have somehow moved to a different drive?
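A sketch of how I could check whether /mnt/storage is really a separate filesystem or just a directory sitting on / (findmnt, df and du are standard tools):

Code:
findmnt /mnt/storage        # prints nothing if /mnt/storage is not a mount point
df -h /mnt/storage          # shows which filesystem the path actually lives on
du -sh /mnt/storage/dump    # how much data the directory itself holds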
 
OK, so I think I don't have a mount for my secondary drives active any more, because I can't find my RAID array /dev/md2 or my directory /mnt/storage in the output of:
Code:
mount

So I tried running mount -a. My fstab:
Code:
/dev/md1        /       ext4    errors=remount-ro,discard       0       1
/dev/sda2       swap    swap    defaults        0       0
/dev/sdb2       swap    swap    defaults        0       0
proc            /proc   proc    defaults        0       0
sysfs           /sys    sysfs   defaults        0       0
/dev/md2        /mnt/storage    ext4    defaults        0       0

which says:
/dev/md2 already mounted or /mnt/storage busy

OK, so /dev/md2 isn't mounted, which means /mnt/storage must be busy. I've checked who/what is using it with:

Code:
fuser -cu /mnt/storage

which returns a lot of users and processes, and now I'm stuck. (I suspect fuser -c reports every process using the filesystem that contains /mnt/storage, and since nothing is mounted there, that filesystem is / itself, so the long list may not tell me much.)
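What I'm planning to try next (a sketch, assuming the array really is /dev/md2 as listed in fstab):

Code:
cat /proc/mdstat              # is the md2 array assembled at all?
mount /dev/md2 /mnt/storage   # try the single mount by hand for a clearer error
dmesg | tail                  # kernel messages from the mount attempt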
 
Can I create a new mount point elsewhere in my system with fstab -> mount -a, then go into the Proxmox GUI, make a new storage medium, and point it to my new mount point?

I think I lost this mount point when the server physically restarted a few weeks ago. Is there something I can do to prevent this happening again?
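Roughly the steps I have in mind (a sketch; /mnt/storage2 is just a placeholder name):

Code:
mkdir -p /mnt/storage2          # hypothetical new mount point
# then add a line like this to /etc/fstab:
#   /dev/md2   /mnt/storage2   ext4   defaults   0   2
mount -a
findmnt /mnt/storage2           # confirm the mount took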
 
Can someone confirm or deny the steps above? I've deleted some of the backups to buy myself some time.

Basically, I think I need to get my secondary drive mounted again, but I'm not sure how.
 
Yes, you can add a disk and make a new mount point for it (http://www.debiantutorials.com/how-...-or-partition-using-uuid-and-ext4-filesystem/ may help; this method preserves the mount point even after a reset/reboot).
Then you can add it in Proxmox through Datacenter -> Storage -> Add -> Directory. Enter the mount point there, e.g. /mnt/bigdisk; that should work AFAIK.
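A minimal sketch of the UUID method from that link (the UUID shown is a placeholder; take the real value from blkid):

Code:
blkid /dev/md2
# -> /dev/md2: UUID="xxxxxxxx-xxxx-xxxx" TYPE="ext4"   (placeholder output)
# then in /etc/fstab, reference the UUID instead of the device name:
UUID=xxxxxxxx-xxxx-xxxx   /mnt/bigdisk   ext4   defaults   0   2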

Whether the whole storage is reserved on creation depends on the format: raw, for example, reserves everything, while qcow2 doesn't but grows as needed.
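You can see the difference on a directory storage like this (a sketch; the image file names are placeholders):

Code:
qemu-img info vm-100-disk-1.qcow2   # virtual size vs. the actual "disk size" on the host
ls -lh vm-100-disk-1.raw            # apparent file size
du -h vm-100-disk-1.raw             # blocks actually allocated on disk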
 
I had a problem with my fstab config, so my disk wasn't mounting. I've created a new mount point and pointed Proxmox to that, and now everything seems good.
Thank you for your reply!