Hi,
Yesterday the SSD on which Proxmox VE was installed crashed (hardware failure). Luckily, my VMs and LXC containers are on other disks, and although I had no scheduled daily backups, I did have quite a few. Still, I wanted all VMs and containers to be as recent as possible, so I did some recovering:
For each LXC container, I restored from backup but did not start the container. I then deleted the newly restored container's disk and put the disk that was active before the crash in its place. After a start, everything ran smoothly again.
For the VMs, I created an empty VM in the GUI with roughly the same specifications as the VM it should replace. Then, from the CLI, I swapped in the pre-crash disk:
- qm importdisk <VMID> /path/to/oldvm/113/vm-113-disk-0.qcow2 <storage> (in this example, VM 113)
- GUI -> Hardware -> change the disk to VirtIO
- GUI -> change the boot order
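The GUI steps above can also be done from the CLI. A rough sketch, assuming VM 113 and a storage named "local-lvm" (both are placeholders for your own IDs; check the volume name qm importdisk prints before attaching it):

```shell
# Import the surviving disk image into the new, empty VM.
# It shows up as an "unused" disk on the target storage.
qm importdisk 113 /path/to/oldvm/113/vm-113-disk-0.qcow2 local-lvm

# Attach the imported disk as virtio0 (equivalent to GUI -> Hardware)
qm set 113 --virtio0 local-lvm:vm-113-disk-0

# Make it the first boot device (equivalent to changing the boot order)
qm set 113 --boot order=virtio0
```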
This worked well, except for one VM. As that VM is around 1.6 TB in size, I could not use the trick above: it needs roughly twice the size of the VM in free space, and I only had it on a 2 TB NVMe.
For this VM I followed a different procedure ( https://forum.proxmox.com/threads/load-vms-from-an-old-drive-to-my-new-proxmox.82564/ ):
Again I created an empty VM with roughly the same specs as the original. I had to do this from memory, as the original 117.conf was on the broken disk and could not be recovered. In short, after creating the empty new VM 117, I edited 117.conf as shown below.
Thing is, for the disk size I just picked 1600G, as that is roughly what it was.
I started the VM and all went well; everything works, no problems so far.
A question, though: inside the VM this size is indeed reported as 1.6T (df -h). How sure can I be that this size will not become a problem? For example, that it is being reported incorrectly without me knowing, and that sectors (or whatever they are called) will be written incorrectly in the future?
From time to time I'd like to increase the disk size (GUI -> Disk Action -> Resize, then growpart /dev/sda 2, then resize2fs /dev/sda2).
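To double-check whether the guessed size=1600G actually matches the image, I assume something like the following would work (the qcow2 path is a guess based on a directory-type storage; adjust it to wherever the image really lives):

```shell
# Show the image's real virtual size, to compare against the
# size=1600G entry I typed into 117.conf by hand
qemu-img info /mnt/pve/sam990pro2tb/images/117/vm-117-disk-0.qcow2

# Let Proxmox re-read the actual volume size and update the size=
# attribute in 117.conf if it differs
qm rescan --vmid 117
```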
I am happy it all works, but I worry that something will pop up at a later stage because this disk size was chosen more or less at random.

My current 117.conf:
boot: order=scsi0;ide2;net0
cores: 6
cpu: host
ide2: none,media=cdrom
machine: q35
memory: 8192
meta: creation-qemu=9.2.0,ctime=1752577528
name: comfyui
net0: virtio=BC:24:11:62:13:0F,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: sam990pro2tb:117/vm-117-disk-0.qcow2,iothread=1,size=1600G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=b31d127f-64eb-4e98-b54e-afc36bccda27
sockets: 1
vmgenid: 6a9ec835-3369-429a-9c38-2eb35a605217