Hello Everyone,
I'm quite a new Proxmox user, having converted from ESXi. I have a homelab running on an Intel NUC with the following disk hardware, partitions and usage:
- 1 x M.2 1TB (/dev/nvme0n1)
- Full disk is xfs-formatted, used to hold the virtual machines
- 1 x SSD 2TB (/dev/sda)
- 80GB LVM used for the Proxmox installation (along with the relevant BIOS boot and EFI partitions)
- /dev/sda4 -> ~500GB, xfs-formatted. This was added as a Directory in the GUI, then "attached" (may be the wrong word) to the guest that was to use it, i.e. guest1.
- /dev/sda5 -> ~1.3TB, xfs-formatted. As above, this was added as a Directory in the GUI, then "attached" to a different guest, i.e. guest2.
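For reference, I believe the two Directory storages ended up looking roughly like this in /etc/pve/storage.cfg (the storage names are taken from my VM configs below; the paths are my guess at what the GUI created, so treat them as assumptions):

```
dir: 500Gb_Dir
        path /mnt/pve/500Gb_Dir
        content images

dir: 1.3Tb_Dir
        path /mnt/pve/1.3Tb_Dir
        content images
```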
I created the two guests (guest1 / vmid 107, and guest2 / vmid 104).
I installed the guest OSes and started configuring them (they're both running independent Ubuntu/Nextcloud stacks, FWIW). I came to a point where I needed to start the two installations over, so I created guest3 and guest4 and added the following virtio lines to the two new guests' /etc/pve/qemu-server/xxx.conf files:
- For guest3 (which becomes the new guest1) -> virtio1: 500Gb_Dir:107/vm-107-disk-0.qcow2 (note, the new vmid for this guest is 110)
- For guest4 (which becomes the new guest2) -> virtio1: 1.3Tb_Dir:104/vm-104-disk-0.qcow2 (note, the new vmid for this guest is 112)
However, my concern now is that when I attempt a backup of either guest3 or guest4, I get the error:
volume '500Gb_Dir:107/vm-107-disk-0.qcow2' does not exist
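As I understand it, a Proxmox volume ID is just `<storage>:<path relative to the storage's images directory>`, so the failing volume from the error splits like this (a sketch using plain shell parameter expansion):

```shell
# Split a Proxmox volume ID into its storage name and relative path.
# The volume ID is the one from the backup error above.
vol='500Gb_Dir:107/vm-107-disk-0.qcow2'

storage=${vol%%:*}   # everything before the first ':'
relpath=${vol#*:}    # everything after the first ':'

echo "storage: $storage"
echo "path:    $relpath"
```

So, if I've read that right, the backup job is looking for 107/vm-107-disk-0.qcow2 under the 500Gb_Dir storage, and presumably no file exists there under that old vmid any more.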
Both of the guests still have a disk attached; I just don't know where that disk actually lives, if that makes sense. My concerns are:
- First and foremost, if I understand enough of what's going on, it's likely that if either of the two guests (or indeed the host) is rebooted, the data partitions won't be visible to the guest(s).
- Also bad news: I can't currently back up these guests. That said, I do have the data backed up to a guest on another PVE node via borg.
As a result, I don't know what to do to remedy the situation, so any help would be greatly appreciated. Information dumps below:
Host
Code:
blkid
<snip>
/dev/sda4: UUID="e38ab4c4-41f4-4dda-8db7-xxxxxx" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="d59e19cb-239f-a644-8869-xxxxx"
/dev/sda5: UUID="82515bae-b3f1-41ba-95e7-xxxxxx" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID="69977dd7-bbb4-8947-87a9-xxxxxx"
</snip>
cat /etc/pve/qemu-server/110.conf | grep virtio
virtio0: NUC_M2_1Tb:110/vm-110-disk-0.qcow2,size=11468M
virtio1: 500Gb_Dir:107/vm-107-disk-0.qcow2
cat /etc/pve/qemu-server/112.conf | grep virtio
virtio0: NUC_M2_1Tb:112/vm-112-disk-0.qcow2,size=11468M
virtio1: 1.3Tb_Dir:104/vm-104-disk-0.qcow2
# fdisk -l /dev/sda
Device Start End Sectors Size Type
/dev/sda1 34 2047 2014 1007K BIOS boot
/dev/sda2 2048 1050623 1048576 512M EFI System
/dev/sda3 1050624 157286400 156235777 74.5G Linux LVM
/dev/sda4 157288448 1205864447 1048576000 500G Linux filesystem
/dev/sda5 1205864448 3907029134 2701164687 1.3T Linux filesystem
guest3 (vmid 110)
Code:
blkid
<snip>
/dev/mapper/500Gb: LABEL="500Gb" UUID="66a1950b-7029-4ba3-8c24-dc82" BLOCK_SIZE="512" TYPE="xfs"
</snip>
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/500G 500G 19G 482G 4% /mnt/ncdata_intra
guest4 (vmid 112)
Code:
blkid
<snip>
/dev/mapper/1.3Tb: LABEL="1.3Tb" UUID="1e9ab81b-ebea-408c-b9d4-01fdfc" BLOCK_SIZE="512" TYPE="xfs"
</snip>
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/1.3Tb 1.3T 18G 1.3T 2% /mnt/ncdata_ext