So my Proxmox node runs a bunch of VMs plus OMV. All the VMs live on an NVMe SSD except OMV (my NAS solution), which uses a 2TB HDD. I accidentally started backing up about 1.5TB into OMV and filled it up. I then stopped the backup and restarted the whole node. The guest still reports 1.5TB/2TB used, but the qcow2 image for OMV shows 2TB/2TB, which causes I/O errors in the VM and locks it out.
I read about trimming the disk, so I enabled the Discard option on the OMV disk, rebooted, and ran fstrim -av inside the guest, but the image still shows 2TB/2TB.
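For context on why the image can show 2TB/2TB while the guest only uses 1.5TB: the qcow2 file has grown to its full virtual size, and a successful trim is supposed to hand the unused space back to the host. A minimal sketch of the apparent-size vs allocated-size distinction, using an ordinary sparse file (filename here is made up for illustration, not my actual image):

```shell
# Create a 10 MiB sparse file: its apparent size is 10M,
# but almost no blocks are actually allocated on disk yet.
truncate -s 10M sparse.img

# Apparent size in bytes (what ls -l reports): 10485760
stat -c %s sparse.img

# Allocated 512-byte blocks: near zero for a fresh sparse file.
# A fully-grown qcow2 is the opposite case: allocation has caught up
# with the apparent size, and discard/fstrim is what punches the
# holes back out of the file on the host.
stat -c %b sparse.img

rm sparse.img
```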
I read about a Proxmox command like the one below in the wiki:
Code:
qemu-img convert -O qcow2 image.qcow2_backup image.qcow2
but it seems like this needs free space for the new image to take up, which I don't have since the whole storage is full.
Can anyone tell me what to do in this situation, aside from buying another 2TB drive (which I can't afford) to get space to run qemu-img as above? I'll provide any logs if required.