Proxmox disk full, but why?

eiger3970

Well-Known Member
Hello,
my Proxmox machine ran a backup of the VMs and filled up the disk, causing VM issues.
I'm trying to bring disk usage back down to a minimum so backups can complete.
I then transfer the backups to a NAS storage PC, so the Proxmox disk stays mostly empty, with just Proxmox running and ready for the next backups.

This is what I actually have when I physically look at the machine:
4 x 120 GB SSDs to setup RAID.
1 x 500 GB HDD for daily backups.
1 x 250 GB HDD for daily backups.

This # df -h output shows which filesystems are mounted on Proxmox. Strangely, /var/lib/vz shows only 55G, but the disk should be 120GB.
HTML:
# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   10M     0   10M   0% /dev
tmpfs                 1.6G  368K  1.6G   1% /run
/dev/mapper/pve-root   28G  1.1G   25G   5% /
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 3.2G   28M  3.1G   1% /run/shm
/dev/mapper/pve-data   55G   51G  4.7G  92% /var/lib/vz
/dev/sda2             494M   36M  434M   8% /boot
/dev/fuse              30M   16K   30M   1% /etc/pve


# fdisk -l shows all the physical disks connected to the Proxmox machine.
HTML:
# fdisk -l
WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.


Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1               1   234441647   117220823+  ee  GPT

Disk /dev/sdc: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdc doesn't contain a valid partition table

Disk /dev/sdb: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/sdd: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders, total 234441648 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00047af5

   Device Boot      Start         End      Blocks   Id  System
/dev/sdd1            2048   228225023   114111488   83  Linux
/dev/sdd2       228227070   234440703     3106817    5  Extended
/dev/sdd5       228227072   234440703     3106816   82  Linux swap / Solaris

Disk /dev/sde: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00022ee2

   Device Boot      Start         End      Blocks   Id  System
/dev/sde1   *        2048   970555391   485276672   83  Linux
/dev/sde2       970557438   976771071     3106817    5  Extended
/dev/sde5       970557440   976771071     3106816   82  Linux swap / Solaris

Disk /dev/mapper/pve-root: 29.8 GB, 29796335616 bytes
255 heads, 63 sectors/track, 3622 cylinders, total 58195968 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/pve-root doesn't contain a valid partition table

Disk /dev/mapper/pve-swap: 14.9 GB, 14898167808 bytes
255 heads, 63 sectors/track, 1811 cylinders, total 29097984 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/pve-swap doesn't contain a valid partition table

Disk /dev/mapper/pve-data: 59.9 GB, 59907244032 bytes
255 heads, 63 sectors/track, 7283 cylinders, total 117006336 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/mapper/pve-data doesn't contain a valid partition table

# parted -l shows
HTML:
# parted -l
Model: ATA INTEL SSDSC2BW12 (scsi)
Disk /dev/sda: 120GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  2097kB  1049kB               primary  bios_grub
 2      2097kB  537MB   535MB   ext3         primary  boot
 3      537MB   120GB   119GB                primary  lvm


Error: /dev/sdb: unrecognised disk label                                  

Error: /dev/sdc: unrecognised disk label                                  

Model: ATA INTEL SSDSC2BW12 (scsi)
Disk /dev/sdd: 120GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size    Type      File system     Flags
 1      1049kB  117GB  117GB   primary   ext4
 2      117GB   120GB  3181MB  extended
 5      117GB   120GB  3181MB  logical   linux-swap(v1)


Model: ATA Hitachi HDS72105 (scsi)
Disk /dev/sde: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size    Type      File system     Flags
 1      1049kB  497GB  497GB   primary   ext4            boot
 2      497GB   500GB  3181MB  extended
 5      497GB   500GB  3181MB  logical   linux-swap(v1)


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/pve-data: 59.9GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  59.9GB  59.9GB  ext3


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/pve-swap: 14.9GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system     Flags
 1      0.00B  14.9GB  14.9GB  linux-swap(v1)


Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/pve-root: 29.8GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End     Size    File system  Flags
 1      0.00B  29.8GB  29.8GB  ext3


Any help on how to sort out my disks please?
 
Hi,
use the following commands to see where the space has gone:
Code:
pvs
vgs
lvs
Udo
 
HTML:
root@proxmox:~# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sda3  pve  lvm2 a--  111.29g 13.87g

root@proxmox:~# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  pve    1   3   0 wz--n- 111.29g 13.87g

root@proxmox:~# lvs
  LV   VG   Attr      LSize  Pool Origin Data%  Move Log Copy%  Convert
  data pve  -wi-ao--- 55.79g
  root pve  -wi-ao--- 27.75g
  swap pve  -wi-ao--- 13.88g
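For reference, the ~13.9G still free in the pve volume group could be handed to pve-data (the filesystem that keeps filling up). This is a sketch only, assuming you want to spend all the free space that way and that growing the mounted ext3 filesystem online is acceptable here:
Code:
lvextend -L +13G /dev/pve/data
resize2fs /dev/mapper/pve-data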
 
So, I have too many options and need help deciding which one is best.

Set up NAS4Free so it transfers the backups from the Proxmox server automatically (not sure how).

Proxmox runs automatic backups and transfers them automatically (I can't set this up yet, as I'd need to set up NFS on both the Proxmox and NAS4Free computers).

Rsync over SSH (I need to work out how to run it automatically, since SSH will prompt for a password - see the sketch below).
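For the rsync option, a minimal sketch of password-less rsync over SSH; the NAS hostname, target path and schedule below are only placeholders:
Code:
# on the Proxmox host: create a passphrase-less key and copy it to the NAS
ssh-keygen -t rsa -b 4096 -f /root/.ssh/id_rsa -N ""
ssh-copy-id root@nas.example.lan

# push the dumps to the NAS every night at 03:30, unattended
cat > /etc/cron.d/backup-sync <<'EOF'
30 3 * * * root rsync -av --remove-source-files /var/lib/vz/dump/ root@nas.example.lan:/mnt/backups/proxmox/
EOF
Key-based authentication removes the password prompt, so the cron job can run without interaction.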

The immediate problem: Proxmox's disk layout has ended up in a state where the monthly backups fill the disk and the VMs hit an I/O error.

I'm now trying to fix up the Proxmox disks and then decide on the easiest way to transfer the backup files automatically to a 2nd computer.
 
Ok, so this is my plan:
Copy the existing VMs and transfer them to the 2nd computer. (need to check whether the location is /var/lib/vz/images or /etc/pve/qemu-server)

Fix up the Proxmox disks. (maybe use # fdisk -l to see what's wrong)
Set up the array. (I guess use the # mdadm command on the 4 x 120 GB SSDs - see the sketch after this list)
Reinstall Proxmox and transfer the VMs back. (hopefully the VMs will 'just' work once they are back in /var/lib/vz/images and /etc/pve/qemu-server)

Set up auto backup. (not sure whether to use Proxmox ZFS, NAS4Free or rsync/SSH)
Set up auto transfer. (not sure whether to use Proxmox ZFS, NAS4Free or rsync/SSH)
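For the array step, a rough sketch of what an mdadm RAID 10 across the four SSDs could look like. The device names are assumptions and would change after a reinstall, and note that the Proxmox installer itself doesn't manage mdadm - the ZFS RAID Udo mentions further down is the supported route:
Code:
# sketch only - /dev/sdb../sde are assumed names for the four SSDs
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf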
 

Hi,
just take a look at your first post... there are some strange things:
Your 4 SSDs aren't in any RAID - only sda is used for Proxmox (boot + LVM pve). With PVE 3.4 you can build a ZFS RAID on the four SSDs.
Your 500GB HDD isn't mounted, so it's not usable for backup. Its partition layout looks like an old Linux installation (Ubuntu?).
Your 250GB HDD isn't visible in your posting - sdf is missing.
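A quick way to confirm that layout (only sda in use, no md arrays), assuming util-linux's lsblk is installed as on a stock PVE 3.x:
Code:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
cat /proc/mdstat
pvs -o pv_name,vg_name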

Udo
 
Ok, I physically rechecked the computer and found:
4 x 120 GB SSDs.
1 x 500 GB HDD.

So, now I'm trying to back up my VMs to a safe place so I can reinstall Proxmox, then set up RAID on the 4 x 120 GB SSDs, then set up automatic backup and transfer to a 2nd location.

This work might not be needed, but I'm not aware of a more efficient way to fix the 120 GB SSD that shows 92% usage of only 55GB (which is stopping backups from completing: the 55GB fills up and a VM gets an I/O error).
 
So, I'm trying to move my VMs to a 2nd storage location whilst I reinstall Proxmox and fix up my disk array.

Do I transfer /var/lib/vz/images/VMID/vm-VMID-disk-1.raw together with /etc/pve/qemu-server/VMID.conf,
or
do I have to run a manual backup and transfer the /var/lib/vz/dump/vzdump-qemu-VMID-YYYY_MM_DD-HH_MM_SS.vma.lzo?

I had to delete all the backup vma.lzo files in /var/lib/vz/dump because the disk went weird and is showing 92% usage of 55GB on a 120GB disk.
For some reason, there's no more room to create new backup vma.lzo files.
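For the second option, a manual backup written straight to another target would look roughly like this (the VMID 100 and the /mnt/backup path are only placeholders):
Code:
# dump a single VM to an alternative directory, LZO-compressed
vzdump 100 --dumpdir /mnt/backup --compress lzo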
 
Okay, so I've been manually backing up each VM, then scp'ing to the NAS storage, then deleting the VM, then backing up the next VM.
VM 163 gives an error:
INFO: status: 63% (20609499136/32212254720), sparse 13% (4428292096), duration 142, 124/117 MB/s
lzop: No space left on device: <stdout>
INFO: status: 64% (20791951360/32212254720), sparse 13% (4435193856), duration 145, 60/58 MB/s
ERROR: vma_queue_write: write error - Broken pipe
INFO: aborting backup job

So, I checked the CLI for further information, but the output below doesn't help me.
Maybe someone else can see why the VM won't back up.
The disk shows 92% capacity, so shouldn't there still be room?

Sorry for the lack of code tags below; the editor didn't give me an option to select them.


root@proxmox:/var/lib/vz/dump# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=10240k,nr_inodes=2045228,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=1638148k,mode=755)
/dev/mapper/pve-root on / type ext3 (rw,relatime,errors=remount-ro,user_xattr,acl,barrier=0,data=ordered)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3276280k)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/mapper/pve-data on /var/lib/vz type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)
/dev/sda2 on /boot type ext3 (rw,relatime,errors=continue,user_xattr,acl,barrier=0,data=ordered)
rpc_pipefs on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
beancounter on /proc/vz/beancounter type cgroup (rw,relatime,blkio,name=beancounter)
container on /proc/vz/container type cgroup (rw,relatime,freezer,devices,name=container)
fairsched on /proc/vz/fairsched type cgroup (rw,relatime,cpuacct,cpu,cpuset,name=fairsched)
root@proxmox:/var/lib/vz/dump# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  pve    1   3   0 wz--n- 111.29g 13.87g
root@proxmox:/var/lib/vz/dump# lvs
  LV   VG   Attr      LSize  Pool Origin Data%  Move Log Copy%  Convert
  data pve  -wi-ao--- 55.79g
  root pve  -wi-ao--- 27.75g
  swap pve  -wi-ao--- 13.88g
root@proxmox:/var/lib/vz/dump# df -k
Filesystem           1K-blocks     Used Available Use% Mounted on
udev                     10240        0     10240   0% /dev
tmpfs                  1638148      408   1637740   1% /run
/dev/mapper/pve-root  28641420  1159400  26027124   5% /
tmpfs                     5120        0      5120   0% /run/lock
tmpfs                  3276280    27972   3248308   1% /run/shm
/dev/mapper/pve-data  57583876 52665888   4917988  92% /var/lib/vz
/dev/sda2               505764    36751    442901   8% /boot
/dev/fuse                30720       16     30704   1% /etc/pve
root@proxmox:/var/lib/vz/dump# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content images,iso,vztmpl,backup,rootdir
        maxfiles 0
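For what it's worth: 92% used on a 55G filesystem leaves only about 4.7G free, while the VM disk being dumped is ~30G, so even LZO-compressed the dump will usually not fit. A quick, generic way to see what is actually using the 51G under /var/lib/vz (GNU du/sort):
Code:
du -xh --max-depth=2 /var/lib/vz | sort -h | tail -n 15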
 
Hi,
mount your 500GB partition:
Code:
mkdir /mnt/backup
mount /dev/sde1 /mnt/backup
Then define the storage in the GUI -> Storage: use /mnt/backup as the path (type dir) and 'backup' as the content.

Add an entry in /etc/fstab to automount the partition on reboot. Use the UUID for this, because the device name can change if the disk order changes.
Code:
blkid /dev/sde1
Then run the backup again with the new partition as the target.

Udo
 
Ok, df -h now shows the new 500 GB mount.
The Add button is greyed out when I try to add the storage in Proxmox, though.
Hi,
to add storage:
log in to the GUI as root@pam
Datacenter -> Storage -> Add -> Directory
So I didn't get as far as the /etc/fstab step.
It's not quite clear to me what to do with the fstab after reading https://pve.proxmox.com/wiki/Storage_Model
Something like this (with your own UUID, and perhaps a different filesystem (ext3/4)):
Code:
UUID=d7d4e9c1-c3cb-4fc5-8ec1-21f8aad86a60   /mnt/backup   ext4   defaults   0   1
Test the fstab entry before rebooting with:
Code:
umount /mnt/backup
mount -a
Udo
 
Well, I don't think the mount worked.
A Proxmox reboot failed, so I had to connect a keyboard and monitor to view the Proxmox boot error below:

Waiting for /dev/ to be fully populated...done.
Setting parameters of disk: (none).
Activating swap...done.
Checking root file system...fsck from util-linux 2.20.1
/dev/mapper/pve-root: clean, 44487/1818624 files, 403940/7274496 blocks
done.
Loading kernel module fuse.
Cleaning up temporary files... /tmp.
Assembling MD arrays...done (no arrays found in config file or automatically).
Setting up LVM Volume Groups...done.
Activating lvm and md swap...done.
Checking file systems...fsck from util-linux 2.20.1
/dev/mapper/pve-data: clean, 34/3661824 files, 13396343/14625792 blocks
/dev/sda2: clean, 242/130560 files, 53227/522240 blocks
fsck.ext4: No such file or directory while trying to open xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Possibly non-existent device?
fsck died with exit status 8
failed (code 8).
File system check failed. A log is being saved in /var/log/fsck/checkfs if that
location is writable. Please repair the file system manually. ... failed!
A maintenance shell will now be started. CONTROL-D will terminate this shell and
resume system boot. ... (warning).
Give root password for maintenance
(or type Control-D to continue):


Proxmox now shows the following output:
root@proxmox:/# umount /mnt/backup
umount: /mnt/backup: not mounted
root@proxmox:/# mount -a
mount: special device xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx does not exist
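For reference, a way out of the maintenance shell would be roughly the following (assuming the 500 GB partition really is /dev/sde1 and is ext4, as parted reported earlier):
Code:
# the root filesystem may be read-only in the maintenance shell
mount -o remount,rw /
# print the real UUID of the backup partition
blkid /dev/sde1
# put that UUID into /etc/fstab in place of the placeholder; adding 'nofail'
# keeps a missing disk from dropping the next boot into this shell again, e.g.:
# UUID=<real-uuid>  /mnt/backup  ext4  defaults,nofail  0  2
nano /etc/fstab   # or vi
mount -a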
 
Hi,
this is exactly why I wrote "TEST BEFORE REBOOT WITH mount -a"!!
If you use the wrong UUID (and xxxx... is the wrong one - it's just a placeholder) it can't work.

Read the previous postings carefully and you will reach the goal. Perhaps you should buy a book about Linux server administration for beginners.

Udo
 
I did run the test, but I guess I didn't understand the output.

Anyway, I fixed the issue... I was missing the UUID= prefix in the /etc/fstab entry.
Proxmox now reboots fine and shows the newly mounted 500GB disk:

root@proxmox:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                   10M     0   10M   0% /dev
tmpfs                 1.6G  424K  1.6G   1% /run
/dev/mapper/pve-root   28G  1.2G   25G   5% /
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 3.2G   16M  3.2G   1% /run/shm
/dev/mapper/pve-data   55G   51G  4.7G  92% /var/lib/vz
/dev/sda2             494M   36M  433M   8% /boot
/dev/sde1             456G  5.2G  428G   2% /mnt/backup
/dev/fuse              30M   16K   30M   1% /etc/pve

However, adding the storage in Proxmox won't complete because the Add button stays greyed out. I went via Proxmox > Server View > Datacenter > Storage > Add, with ID: 500 GB, Directory: /mnt/backup, Content: VZDump backup file, Nodes: All (No restrictions, greyed out), Enable: ticked, Shared: ticked, Max Backups: 1, and Add stays greyed out.

Okay, I changed the ID; '500' didn't work either. The error is misleading, as the prompt says the allowed characters are 'a-z', '0-9', '-', '_', '.' (presumably the ID also has to start with a letter).

In any case, the backup now works on the new disk.

So, now I should be able to begin setting up the RAID on the 4 x 120 GB disks, plus automated backup and transfer to a 2nd location.

I think the RAID is worthwhile: if 1 disk fails, Proxmox keeps running on the other 3 disks until I buy a replacement.
With 1 disk, or even 2 mirrored disks, a broken disk would mean reinstalling Proxmox and its VMs.
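For reference, if the ZFS route Udo mentioned for PVE 3.4 is taken, the installer can build a RAID10-style layout over the four SSDs directly; done by hand it would look roughly like this (the pool name 'tank' and the device names are placeholders):
Code:
# striped mirrors (RAID10-like): each mirror pair tolerates one failed disk
zpool create -f tank mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
zpool status tank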
 
Hi,
'shared' means that the same storage is also available on other cluster nodes.
In your case, /mnt/backup is local storage on a single node.

So you should unselect the shared field.
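The same directory storage can also be added from the CLI, which sidesteps the GUI form (a sketch; the storage ID 'backup500' is only an example, and maxfiles is the retention option on this PVE generation):
Code:
pvesm add dir backup500 --path /mnt/backup --content backup --maxfiles 1
pvesm status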

Udo
 
