Hard Disk Size with dd

ejc317

Member
Oct 18, 2012
So we dd'd a 20GB image onto a VM with a 10GB iSCSI disk. It works, but now the boot disk size shows up as 0GB. Inside the VM it shows the right size (the size of the image).

Will this cause any issues on our SAN? And conversely, if we do a backup of this system and restore it, will it work, given that the disk image is 20GB but the config file says 10GB?
 
The following should fix the sizes:

# qm rescan --vmid <VMID>
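
For example (an illustrative sketch; VMID 100, the VM name, and the output values are placeholders), the boot disk column in qm list should be corrected by the rescan:

Code:
# qm list
      VMID NAME       STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 customer1  stopped    1024       0.00         0
# qm rescan --vmid 100
# qm list
      VMID NAME       STATUS     MEM(MB)    BOOTDISK(GB) PID
       100 customer1  stopped    1024       20.00        0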

No, I know that, but what I mean is: is this a way for people to get around the limit we set for them, i.e. restore a larger image? (We give clients the ability to restore backups.)
 
I re-read your initial post, and I guess I do not really understand what you mean by "we dd'd a 20GB image onto a VM with a 10GB iSCSI disk"? dd'ing 20GB into 10GB is impossible?
 
We created a VM with a 10GB iSCSI disk. Afterwards we dd'd a .raw image onto the target /dev/san/vm-number-disk-1.

The dd image is 20GB. The total disk size goes to 0; I rescan and now it's 20GB.

Is this a way for clients to get around disk limitations?
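
For context, the steps described would look roughly like this (a sketch; the VMID, device path, and image name are placeholders):

Code:
# dd if=restored-20G-backup.raw of=/dev/san/vm-100-disk-1 bs=1M

qm list then reportedly shows the boot disk as 0GB until a rescan is run.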
 
Sorry, I don't really understand what you do here. You can't dd 20GB into a 10GB disk - dd will fail. Also, a simple dd does not change the VM config, nor the disk size. So why does the disk size go to 0? And what disk limitations are you talking about?
 
Okay, well I'm telling you:

1) dd does not fail
2) After dd'ing a 20GB disk, the disk size under qm list goes to 0
3) After a rescan, it shows up as 20GB

I am not making this up - you guys are so defensive whenever someone raises a bug - I have better things to do than come on here on a Sunday and make up stories about bugs. Jesus Christ.

My concern is that this will corrupt the SAN storage system, as it allocated 20GB of space when there should have been only 10GB.
 
Hi,
there is something wrong with your iSCSI configuration, not with the PVE system!
If you use the iSCSI disk as LVM storage, the VM disk is a logical volume, and a logical volume isn't growable with dd - unless you've found a real Linux miracle.
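
(If the goal were to legitimately grow such a disk, the logical volume itself would have to be extended first and Proxmox told to rescan - a minimal sketch, assuming LVM storage and placeholder names:)

Code:
# lvextend -L 20G /dev/san/vm-100-disk-1
# qm rescan --vmid 100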

Of course you should get a "device full" message after 10GB!

See here - 8GB don't fit into 4GB (which isn't really a secret):
Code:
root@pve1:/var/lib/vz/images/198# ls -lsa
total 16793632
      4 drwxrwxrwx 2 root root       4096 Oct 10 17:05 .
      4 drwxr-xr-x 5 root root       4096 Oct 14 07:59 ..
8396812 -rw-r--r-- 1 root root 8589934592 Nov  7 21:50 vm-198-disk-1.raw
root@pve1:/var/lib/vz/images/198# lvs
  LV            VG     Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  sata-copy     backup -wi-a---  1.17t
  data          pve    -wi-ao-- 50.00g
  root          pve    -wi-ao-- 10.00g
  swap          pve    -wi-ao--  2.00g
  local         sata   -wi-ao--  3.00t
  vm-101-disk-1 sata   -wi-a---  4.00g
  vm-250-disk-1 sata   -wi-a---  7.20g
root@pve1:/var/lib/vz/images/198# dd if=vm-198-disk-1.raw of=/dev/sata/vm-101-disk-1 bs=1024k
dd: writing `/dev/sata/vm-101-disk-1': No space left on device
4097+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 87.454 s, 49.1 MB/s
root@pve1:/var/lib/vz/images/198# lvs
  LV            VG     Attr     LSize  Pool Origin Data%  Move Log Copy%  Convert
  sata-copy     backup -wi-a---  1.17t
  data          pve    -wi-ao-- 50.00g
  root          pve    -wi-ao-- 10.00g
  swap          pve    -wi-ao--  2.00g
  local         sata   -wi-ao--  3.00t
  vm-101-disk-1 sata   -wi-a---  4.00g
  vm-250-disk-1 sata   -wi-a---  7.20g
Your problem must be on your storage-system side.
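
(One way to check: compare the actual size of the block device the VM uses with the configured size - a sketch, with a placeholder device path:)

Code:
# blockdev --getsize64 /dev/san/vm-100-disk-1

If this reports roughly 20GB instead of 10GB, the volume really was created or grown larger than configured on the storage side.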

Udo
 
I am not making this up - you guys are so defensive whenever someone raises a bug - I have better things to do than come on here on a Sunday and make up stories about bugs. Jesus Christ.

You are really incredible - we give you free support on a Sunday, and you complain? I will stop answering your questions now.
 
We created a VM with a 10GB iSCSI disk. Afterwards we dd'd a .raw image onto the target /dev/san/vm-number-disk-1.

The dd image is 20GB. The total disk size goes to 0; I rescan and now it's 20GB.

Is this a way for clients to get around disk limitations?

Just to add a different perspective to this: I think ejc is expecting that Proxmox would automagically figure out that the target disk has a new size. Of course, ejc needs to understand that a change to the size of the drive is a non-trivial change, so a rescan of some sort is needed for Proxmox to detect the new size of the target drive.

Personally, when I do this sort of thing and I want to make sure Proxmox picks everything up correctly, I use the GUI and remove the drive (so it becomes an "unused disk"), then re-add it. This probably does the same thing as the command-line rescan, but either way you have to prompt Proxmox to look and see that the drive size has changed and show that change in the GUI (changing the size= line from 10G to 20G).
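
Either way, the end result is the same: the size= entry in the VM's config file gets refreshed to match the actual volume. An illustrative example (VMID, bus, and storage name are placeholders):

Code:
# grep disk /etc/pve/qemu-server/100.conf
virtio0: san:vm-100-disk-1,size=10G
# qm rescan --vmid 100
# grep disk /etc/pve/qemu-server/100.conf
virtio0: san:vm-100-disk-1,size=20G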

In the end, I don't think this is a bug; it's an issue of aesthetics and convenience. So perhaps ejc is suggesting that Proxmox do a quick auto-scan whenever a VM is started (for example) to make sure the drive sizes still match what the GUI is reporting. It's a small thing, but it would be a "nice-to-have". ;)

Of course I could have this completely wrong... and in that case, ejc, please read dietmar's post a few times, realize comments like yours do not have a positive impact, and then read his post again. :p
 
