Changing a 2TB drive from SCSI to SATA caused a 3TB increment!?

We shut down a Windows VM to change the interface of the RBD disk from SCSI to SATA.

Everything worked well. The next day, after our upgrade maintenance, during the process of reverting from SATA to SCSI,
we noticed the drive was showing 5TB!

We first thought this was a bug, but no, it is now really showing up in backups and on the disk as a 5TB drive.

The node was running 7.4.3 and I don't see any task log related to that except updating the disk interface.

This is a serious one.
 
Please post your VM config via qm config VMID - do you have ssd=1 and the QEMU guest agent enabled and installed?
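For example (1234 is a placeholder VMID):

# On the PVE host, print the full VM configuration:
qm config 1234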
 
Yes, the QEMU agent is enabled and installed in the guest Windows VM.


I think on VM creation the SSD flag was on, but when I disconnected and reconnected the disk as SATA I didn't check SSD again.



acpi: 1
agent: 1
bios: ovmf
boot: order=scsi0;ide0
cores: 8
cpu: kvm64,flags=+aes
cpulimit: 32
cpuunits: 1280
ide0: none,media=cdrom
kvm: 1
machine: q35
memory: 16384
meta: creation-qemu=7.2.0,ctime=1699192271
migrate_speed: 100
name: VM_CUST1234
net0: virtio=00:00:00:hidden,bridge=VX551,queues=16,rate=125
numa: 1
ostype: win10
scsi0: RBD13:vm-1234-disk-0,cache=writeback,discard=on,iothread=1,mbps_rd=250,mbps_wr=250,size=5T
scsihw: virtio-scsi-single
smbios1: uuid=09ae68ff-e1bc-43da-a61d-40e4f42e1c09
sockets: 2
vcpus: 16
vga: virtio,memory=128
vmgenid: 777fc060-b6d9-4436-81b5-b3f776a0b1ea
 
The VM is currently online, showing 3TB free.

And I'm scared that we will need to duplicate the disk with a partitioning tool by temporarily adding a second drive..?

How could this have happened?
 
@aaron can your team review this please?

Can you storage-migrate the VM to a different storage? Also enable the QEMU guest agent and the option "Run guest-trim after a disk move or VM migration".
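A minimal sketch of the storage migration from the PVE host (VMID 1234 and the target storage ID "OTHER_STORE" are placeholders; older PVE versions use qm move_disk instead):

# Move the VM disk to a different storage:
qm disk move 1234 scsi0 OTHER_STORE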


You can also trigger the trimming via the QEMU agent on the Proxmox VE hypervisor: qm guest cmd VMID fstrim (use the VMID of your VM).

Another option is Optimize-Volume -DriveLetter C -ReTrim -Verbose in Windows (the drive letter must match the drive you want to trim).
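Put together, a minimal sketch from the hypervisor side (VMID 1234 is a placeholder):

# Ask the guest agent to run fstrim inside the guest:
qm guest cmd 1234 fstrim
# Inside Windows, the equivalent is run from an elevated PowerShell:
#   Optimize-Volume -DriveLetter C -ReTrim -Verbose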

I reread https://forum.proxmox.com/threads/t...-in-proxmox-ceph-after-migration.83924/page-2 (I posted this question myself back in the day). It seems like SATA does not work with TRIM! (But you're back on virtio-scsi already.) :-)
 
I will try this tonight.

Do you have any idea how this could happen? Is there a way to prevent a guest-agent-enabled VM from growing the disk on the fly?! The Windows OS is still 2TB, with a 3TB block now available at the end.

I would like to know how this happened.
 
Can you storage-migrate the VM to a different storage? Also enable the QEMU guest agent and the option "Run guest-trim after a disk move or VM migration".

You can also trigger the trimming via the QEMU agent on the Proxmox VE hypervisor: qm guest cmd VMID fstrim (use the VMID of your VM).

Another option is Optimize-Volume -DriveLetter C -ReTrim -Verbose in Windows (the drive letter must match the drive you want to trim).

I reread https://forum.proxmox.com/threads/t...-in-proxmox-ceph-after-migration.83924/page-2 (I posted this question myself back in the day). It seems like SATA does not work with TRIM! (But you're back on virtio-scsi already.) :)

Is this supposed to free the space (the 3TB free) that is next to the Windows system partition?
 
We shut down a Windows VM to change the interface of the RBD disk from SCSI to SATA.

Everything worked well. The next day, after our upgrade maintenance, during the process of reverting from SATA to SCSI,
we noticed the drive was showing 5TB!
If it were my system, the first thing I would do is review the task log and system log. Examine "journalctl" around the time you made the change, plus 30-60 minutes on each end.
If the log from the last time the system was started is available, I would examine from that point on as well.
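For example, a minimal sketch (the timestamps are placeholders; adjust them to your maintenance window):

# Show host logs in a window around the change:
journalctl --since "2023-11-11 08:30" --until "2023-11-11 12:00"
# Or review everything since the current boot:
journalctl -b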


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Is this supposed to free the space (the 3TB free) that is next to the Windows system partition?
No, this should free up space on the storage, if the storage shows more GB used than Windows does in the OS.
 
So there is no way to revert from 5TB to 2TB easily.

I have found this in journalctl:

Nov 11 09:09:11 bl1 pvedaemon[1855060]: <root@pam> update VM 1234: resize --disk sata0 --size +1024G
Nov 11 11:11:31 bl1 pvedaemon[3764808]: <root@pam> update VM 1234: resize --disk sata0 --size +2048

So far the QEMU agent is enabled, and I know a backup job is running (the customer has a backup solution inside the VM) that is scheduled every 2 hours, I think.

(The VM is currently reverted to scsi0.) We needed sata0 to do a Windows Server upgrade from one edition to another.

Can the QEMU agent trigger a size expansion, or is pvedaemon related to API calls?
We use the third-party Modulegarden module; how can we know if it was an API call? Can anyone share a detailed API log?

thx
 
No, this should free up space on the storage, if the storage shows more GB used than Windows does in the OS.
I think there is a disconnect. The OP is claiming that prior to changing the controller he had a 2TB disk. This disk was properly reflected as 2TB in his VM and fully allocated to the filesystem. The 2TB matched, according to the OP, the size shown in the VM config panel in PVE. We assume that it was also properly sized on the backend in Ceph.
The OP is claiming that upon making the controller change (seemingly back and forth between SCSI and SATA) the disk, through no intervention of his own, changed its size to 5TB. The 5TB is clearly seen in the VM config now. The VM also sees the expanded space, i.e. the physical disk in the VM is 5TB, with 2TB allocated.

I think it's a mistake to point the OP toward fs-trimming, migration, etc. What needs to happen is to confirm via Ceph disk listing commands that the virtual disk is indeed 5TB now, then go back through the logs and trace everything that happened; a sketch of that check follows below.
It's very unlikely that this is an artifact of the adapter configuration change. It's more likely operator error. However, without examining the logs it's impossible to say.
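A hedged sketch of the Ceph-side check (it assumes the PVE storage "RBD13" maps to a pool of the same name and uses the disk name from the VM config; adjust both to your setup):

# Show the provisioned size of the RBD image (should now read 5 TiB):
rbd info RBD13/vm-1234-disk-0
# Compare provisioned vs. actually used space:
rbd du RBD13/vm-1234-disk-0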

Please correct me if I misunderstood.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Nov 11 09:09:11 bl1 pvedaemon[1855060]: <root@pam> update VM 1234: resize --disk sata0 --size +1024G
Nov 11 11:11:31 bl1 pvedaemon[3764808]: <root@pam> update VM 1234: resize --disk sata0 --size +2048
This clearly shows that the disk was extended after it had already been changed to SATA. It happened in two steps, two hours apart.
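For reference, these journal entries match what a resize request produces; issued from the CLI, the equivalent would look like this (a sketch, using the VMID from the log):

# Grow the sata0 disk by 1024G; this mirrors the first logged operation:
qm resize 1234 sata0 +1024G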
You can now check /var/log/pveproxy and try to find correlated API calls, if any.
Can the QEMU agent trigger a size expansion, or is pvedaemon related to API calls?
Extremely unlikely.
We use the third-party Modulegarden module; how can we know if it was an API call? Can anyone share a detailed API log?
It's more than likely that this was triggered by an operator. Check /var/log/pveproxy. I am not familiar with MG and whether they provide additional logging.
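For example (the log path is the PVE default; rotated file names may differ on your node):

# Search the pveproxy API access log for resize calls:
grep resize /var/log/pveproxy/access.log
# Include rotated logs if the events are older:
zgrep resize /var/log/pveproxy/access.log.*.gz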

I think the size change and controller change are not related directly. Just coincidental timing.

Good luck


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
