How to manipulate the extra raw storage of a VM

ieronymous

Hi

I am struggling to find a way (at least the most proper/valid one) to move the raw disk of a VM (after detaching it) to another VM, on the same or a different server.

My installation is based on ZFS. I installed two Win10 VMs and gave the first one (VMID 102) a second disk from the local-zfs storage via its Hardware tab, just by adding that disk (discard was enabled, Backup was unchecked, size was 15G). This disk was visible inside the VM as a second hard drive and was partitioned with the well-known NTFS. It already has some data on it. I am trying to figure out:
1. After detaching the extra disk, how do I re-attach it to the same VM? (The GUI only offers the option to detach it, not to re-attach it.)
2. How do I attach it to another VM (a Windows VM of course, since it is formatted with NTFS)?

What I did was:
# zfs list   -> to check the name of the disk
NAME USED AVAIL REFER MOUNTPOINT
rpool 77.6G 147G 104K /rpool
rpool/ROOT 30.9G 147G 96K /rpool/ROOT
rpool/ROOT/pve-1 30.9G 147G 30.9G /
rpool/data 46.7G 147G 104K /rpool/data
rpool/data/subvol-101-disk-0 2.17G 3.83G 2.17G /rpool/data/subvol-101-disk-0
rpool/data/vm-100-disk-0 987M 147G 987M -
rpool/data/vm-100-disk-1 80K 147G 80K -
rpool/data/vm-102-disk-0 17.1G 147G 17.1G -
rpool/data/vm-102-disk-1 12.3G 147G 12.3G -
rpool/data/vm-103-disk-0 14.2G 147G 14.2G -
The disk I am interested in is rpool/data/vm-102-disk-1.

# ls -la /dev/zvol/rpool/data/vm-102-disk-1*   -> in order to see the partitions of the disk
lrwxrwxrwx 1 root root 13 Mar 14 00:56 /dev/zvol/rpool/data/vm-102-disk-1 -> ../../../zd48
lrwxrwxrwx 1 root root 15 Mar 14 00:56 /dev/zvol/rpool/data/vm-102-disk-1-part1 -> ../../../zd48p1
As expected, there is a vm-102-disk-1-part1 partition, and that is the one formatted with NTFS.
Now I have two more questions:
a) Are vm-102-disk-1 and vm-102-disk-1-part1 images? Do they have a file type, or do they not because they are raw images?
b) Why does each partition end up with a corresponding device name like zd48 and zd48p1? Is it used anywhere? Is it usable somehow?

I was able to mount that part1 partition with the command below, but that doesn't help much with my use case:
# mount -o ro /dev/zvol/rpool/data/vm-102-disk-1-part1 /mnt/extradisk/   (extradisk is a folder I created to mount that extra disk)

Indeed, the contents of that disk were visible:


/mnt/extradisk# ls
'$RECYCLE.BIN' kali-linux-2021.4a-installer-amd64.iso
511.23-notebook-win10-win11-64bit-international-dch-whql.exe 'linuxmint-20.3-mate-64bit (1).iso'
AnyDesk.msi proxmox-backup-server_2.1-1.iso
debian-amd64-netinst-3cx.iso proxmox-ve_7.1-2.iso
deepin-desktop-community-23-nightly-amd64.iso 'System Volume Information'
GeForce_Experience_v3.24.0.126.exe ubuntu-20.04.3-live-server-amd64.iso

Of course that doesn't get me anywhere. Also, mounting with an explicit filesystem type, like
mount -o ro -t zfs /dev/zvol/rpool/data/vm-102-disk-1-part1 /mnt/extradisk/
won't help much either, right?

What worked was: go to VM 102 and, from the Hardware tab, detach the disk. At the same time I had a terminal open in
/etc/pve/qemu-server, where I copied the line referring to that extra disk: local-zfs:vm-102-disk-1,cache=writeback,discard=on,size=15G
After detaching it, a new line was created in VM 102's config: unused0: local-zfs:vm-102-disk-1
So, with both VMs shut down, I opened 103.conf and added the line local-zfs:vm-102-disk-1,cache=writeback,discard=on,size=15G,
saved, refreshed the GUI, and VM 103 had the extra disk of VM 102. Maybe that answers my first two questions, but if I wanted to move that disk to another node,

I would have to treat it as a file in order to mv or cp it to the other side. Well, how am I supposed to achieve this?

Thank you
 
I would have to treat that disk as a file in order to mv or cp it to the other side. Well, how am I supposed to achieve this?
...this is where a virtual @Dunuin would intervene to clear things up by saying (copy-pasting his answer from another post):
<<<I guess you should read a bit about how ZFS works, especially the difference between a "dataset" and a "zvol".
Your VMs' virtual disks are zvols, so they are not part of a filesystem and you can't see them when browsing a filesystem
like the one that is mounted at "/hddpool". Think of them like plain hard disks (just virtual).
You can't see a hard disk in a folder, because it is a device and not a file on top of a filesystem.
You can partition a hard disk, format that partition and then mount it somewhere
(just like with a zvol), but you still can't see the disk, just the mounted content of the filesystem
that is stored on that partition on top of that hard disk.
You could manually mount the contents of that zvol (just like mounting a hard disk),
but never do this while that VM is running, and never forget to unmount it before starting that VM, or you will corrupt your data.>>>>

Well, even though I've read the documentation many times and kept many, many notes on the different subjects I care about, nothing would ever be enough, since I haven't found specific examples anywhere on how to achieve what I need, and what I need is (I believe) reasonable... to find a way
-without creating a cluster (so that I can move that extra raw disk to another node, which is what I want)
-without creating a dataset and a storage on top of that dataset in order to move the disk I want as qcow2, then move it to an external disk, so that I can mount the external disk on a new node and transfer that specific disk image from it so PVE can make it raw again (so complicated I don't know whether it is even possible the way I described it, even as a not-so-elegant way of doing it)

In conclusion: how do I transfer that block storage to another node, given that you can't copy/paste sectors, only files, and the disks are zvols?
Using your words again, @Dunuin: <<<<<... disks aren't files on top of a filesystem but are block devices (zvols or LVs) if you are using ZFS or LVM/LVM-thin as the storage. So there are no files you can see or copy. Block devices you can only work with using CLI commands (for example lvs or zfs list) or via their "/dev/..." paths (like working with real physical disks).>>>>>
No problem doing it from the CLI or whatever environment this might be, but how?

Maybe I could make a backup of the extra disk and then treat that as a file, but I can't back up only the extra disk attached to a VM, just the VM itself; at least I didn't find a way.

Maybe ZFS over iSCSI? Would that give me something transferable?


PS, new edit: I tried backing up that disk separately from the VM and it seemed to work, since I ended up with a vma.zst file, which I was able to transfer via WinSCP to a specified path on the other node. The problem is that I can only restore it as a VM, which it isn't (I did it anyway) and got stuck there.
I am thinking of maybe then detaching it and putting the specific line (scsi1 ......... ......) inside the .conf file of the transferred VM as well.
 
1.After detaching the extra storage how to re-attach it to the same VM (GUI only has the option to detach it not re attach it)
If I remember right, the "Detach" button will become an "Attach" button as soon as you click on a detached virtual disk.
2.How to attach it to another VM (Win of course since it is formatted with NTFS)

a)vm-102-disk-1 and vm-102-disk-1-part1 are images? Do they have a file type or because they are raw images they havent?
They are zvols and blockdevices, so there is no file.
b)Why at the end each partition has a corresponding device letter like zd48 and zd48p1. Is it used anywhere? Is it usable somehow?
These zdX names might change. Each zvol is just counted up in steps of 8. So zd0, zd8, zd16, zd24, zd32 and so on. It's better to use "/dev/zvol/rpool/data/vm-102-disk-1" instead if you want to mount a zvol.
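For example, to check which zdX node such a zvol currently resolves to (paths taken from the output earlier in the thread), something like this should work:
Code:
# resolve the stable /dev/zvol path to the current zdX device node
readlink -f /dev/zvol/rpool/data/vm-102-disk-1
# -> /dev/zd48 here, but the zdX number is not guaranteed to stay the same across reboots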
I have to treat that storage as a file in order to mv or cp it the other side. Well how am I supposed to achieve this?

Use the "zfs" and "zpool" commands to work with ZFS pools. If you want to move a virtual disk from one VM to another, you need to:
1.) Stop the VM so the zvol isn't mounted.
2.) Detach the zvol.
3.) Check what the zvols of the target VM are called. Let's say you want to move the zvol "rpool/data/vm-102-disk-1" to a VM with the VMID 104 that already has a zvol "rpool/data/vm-104-disk-1". For that you need to rename the zvol according to the PVE naming scheme, so run zfs rename rpool/data/vm-102-disk-1 rpool/data/vm-104-disk-2 to rename the zvol from "rpool/data/vm-102-disk-1" to "rpool/data/vm-104-disk-2".
4.) Run a qm rescan so that PVE looks for new disks. Now the zvol should be missing from VM 102 and be listed on VM 104's hardware tab, where you can click the attach button to attach it to the VM.
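Put together, a minimal sketch of steps 3 and 4 with the example names above:
Code:
# rename the zvol so it follows the naming scheme of the target VM (VMID 104)
zfs rename rpool/data/vm-102-disk-1 rpool/data/vm-104-disk-2
# let PVE pick up the renamed disk; it then shows up as an unused disk of VM 104
qm rescan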

You can move zvols between different pools or even different machines by using the "zfs send" and "zfs recv" commands, as described here: https://docs.oracle.com/cd/E18752_01/html/819-5461/gbchx.html

So if your target VM is on another machine, do something like this between steps 2 and 3:
Create a snapshot of rpool/data/vm-102-disk-1: zfs snapshot rpool/data/vm-102-disk-1@migration
Send that snapshot over an SSH connection to the remote PVE host: zfs send rpool/data/vm-102-disk-1@migration | ssh Your.Remote.PVE.Host zfs recv YourRemoteZFSStorage/TargetDatasetWhereTheZvolShouldBeStoredIn
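A rough sketch of the whole remote move, assuming the target node also keeps VM disks under rpool/data and the target VMID there is 104 (host name, dataset names and IDs are only placeholders):
Code:
# on the source node: freeze a point in time and stream it to the other host
zfs snapshot rpool/data/vm-102-disk-1@migration
zfs send rpool/data/vm-102-disk-1@migration | ssh root@Your.Remote.PVE.Host zfs recv rpool/data/vm-104-disk-1
# on the target node: the received zvol must use a free vm-<vmid>-disk-N name, then let PVE find it
qm rescan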
 
If I remember right the "Detach" button will become a "Attach" button as soon as you click on a detached virtual disk.
Nope... it only gives the option to detach it. Afterwards the CLI is probably the only solution, like the way I did it (described above).

They are zvols and blockdevices, so there is no file.
...so there isn't a way?

These zdX names might change. Each zvol ist just counted up in steps of 8. So zd0, zd8, zs16, zd24, zd32 and so on. Its better to use "/dev/zvol/rpool/data/vm-102-disk-1" instead if you want to mount a zvol.
...I didn't know that about the steps of 8. Nice.

Extremely thankful for the above info. Since I'm at work right now, I'll check both ways and come back with results (and probably some more questions :) )
 
4.) run a qm rescan so that PVE looks for new disks. Now the zvol should be missing from VM 102 and be listed on VM 104 hardware tab, where you can click the attach button to attach it to the VM.
I don't know where this info comes from, but after someone detaches the disk there is nothing left to re-attach there (even without running qm rescan). There is only one line inside the VM's .conf file corresponding to that extra disk, which is
unused0: local-zfs:vm-102-disk-2.
If I just delete this line and run qm rescan I get
VM 102 add unreferenced volume 'local-zfs:vm-102-disk-2' as 'unused0' to config
and after this the .conf file has the unused0: local-zfs:vm-102-disk-2 line auto-inserted again.
Renaming first, before running qm rescan, solves the problem, and I can delete the line for the extra disk inside the .conf file without getting messages from qm rescan.
Your mini guide stops there, though, since there is nothing to do from the GUI. Maybe a CLI command like zfs attach (if such a thing exists) will do the trick?

So if your target VM is on another machine do something like this between step 2 and 3:
Create a snapshot of rpool/data/vm-102-disk-1: zfs snapshot rpool/data/vm-102-disk-1@migration
Move that snapshot over a SSH connection to the remote PVE host: zfs send rpool/data/vm-102-disk-1@migration | ssh Your.Remote.PVE.Host zfs recv YourRemoteZFSStorage/TargetDatasetWhereVzolShouldBeStoredIn
-Well, since a snapshot is created as a reference to the actual storage, what is the point of moving it somewhere else where the original storage is missing?
-Even though I created the snapshot (zfs snapshot rpool/data/vm-104-disk-1@migration), I searched dozens of possible paths to find it afterwards, but I can't seem to find the right one. Searching online for a simple question like this gives multiple answers, none of which gives the damn path.
-Even if I find the snapshot and move it, then what? How am I supposed to turn it back into what it was before the snapshot, a block-level device (or whatever it was)?
 
I am probably a bit late to the game, but if an unused disk is edited (either via the edit button or by double-clicking it) you can configure it again, and it will be attached. That is something we could potentially make more obvious. I need to think about it a bit.

If you are running a recent Proxmox VE version (7.1 or newer), you can move guest disks between guests. For now, this needs to be done on the CLI, but the GUI for it is something we are currently working on.

Code:
qm move-disk <source vmid> <disk> --target-vmid <target vmid> --target-disk <target disk>

For example:
Code:
qm move-disk 100 unused0 --target-vmid 200
If you don't provide a --target-disk, it should keep the one from the source, "unused0" in this case.
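If you also want to pick the slot on the target VM explicitly, for example attach it there as scsi1, a hypothetical call would be:
Code:
qm move-disk 100 unused0 --target-vmid 200 --target-disk scsi1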
 
I am probably a bit late to the game, but if an unused disk is edited (either via the edit button or by double-clicking it) you can configure it again, and it will be attached. Something that we potentially could make a more obvious. I need to think about it a bit.
You are not late at all. On the contrary, you kept me from slicing that damn machine with a katana!!! :)
So once more: if you detach the disk, there is no disk afterwards to edit or double-click. It vanishes and that's it (from the GUI's perspective). You can check it yourself; the extra line in the Hardware window disappears in the blink of an eye after detaching. The only way I found to re-attach it is via the VM's .conf file, adding that line again: scsi1: local-zfs:vm-104-disk-1,cache=writeback,discard=on,size=20G,ssd=1.
If I do this (from the CLI of course), then after refreshing the GUI the disk appears attached again!!

If you are running a recent Proxmox VE version (7.1 or newer), you can move guest disks between guests. For now, this needs to be done on the CLI,
I don't get the phrase <<guest disks between guests>>. Meaning? That extra storage was the second disk of VM1 and I want to make it the second disk of VM2. So who are the guests in this scenario?

I don't have a problem at all doing it from the CLI, as long as someone tells me how it is done. By the way, your procedure seems usable only for moving a disk between two VMs on the same node. What about two VMs on different nodes (my case)?

PS: By the way, where does the zfs snapshot command keep its result; in other words, where are snapshots stored?
Does it make sense to snapshot a disk in order to move it? If yes, how do I turn it back into its original form afterwards on the other side (node)?
 
So once more, if you detach the disk there is not disk afterwards to edit or double click .It vanished and that s it (from gui perspective).
Hmm, that should turn into an "unusedX" disk. What happens if you run qm rescan? Does it show up as "unusedX" after that?

Don t get the phrase <<guest disks between guests>>. Meaning? That extra storage was the second disk of VM1 and I want to make it the second disk of VM2. So who are the guests in this scenario.
Yes, you have a disk on VM1 and want to move that over to VM2. It is possible to do it manually, but to do it right, you also have to rename the disk images to match the new VMID. Renaming depends heavily on the underlying storage and the commands needed for that.

That is why we now have a command for that, which handles all of this.


By the way, your procedure seems usable only for moving a disk between two VMs on the same node. What about two VMs on different nodes (my case)?
Not possible... if the nodes are part of a cluster, maybe temporarily migrating one VM to the other node could be an option.

If there is no cluster and you have independent Proxmox VE nodes, you could, if you want to stay within the Proxmox VE tools, create a backup of the source VM and restore that on the other node. Then use the move disk to the other guest there.
To only have the needed disk inside the backup, you can uncheck the "Backup" checkbox on the other disks of the source VM. Don't forget to re-enable it once you have created the "migration backup" ;)

PS By the way where zfs snapshot command keeps the outcome, in other words were snapshots are stored?
Does it have a meaning snapshoting a storage in order to move it ? If yes how to make it back afterwards to it's original form to the other side (node)?
If you want to see which snapshots you have in ZFS, run zfs list -t all. Snapshots are an inherent feature of ZFS.

ZFS is using datasets to create its hierarchy. There are two types of datasets that you will use. Filesystem ones and block based ones (volumes or zvol).
For VMs, zvols are used to provide a block device to the VM.
ZFS can send and receive these datasets. This can be used to create backups of datasets or of whole pools, and that stream can be piped, for example over an SSH connection, where one server is sending and the other is receiving and storing the sent dataset on its own ZFS pool.

To specify a point in time, you need a snapshot, so zfs send sends everything up to that snapshot. You can also use two snapshots to send only the incremental changes, if the older one is already present on the target machine.
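As a sketch (the snapshot names are made up and the paths reuse the examples from earlier in the thread):
Code:
# initial full send up to the first snapshot
zfs send rpool/data/vm-102-disk-1@migration | ssh root@Your.Remote.PVE.Host zfs recv rpool/data/vm-104-disk-1
# later: send only what changed between @migration and a newer snapshot @migration2
zfs send -i @migration rpool/data/vm-102-disk-1@migration2 | ssh root@Your.Remote.PVE.Host zfs recv rpool/data/vm-104-disk-1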

I hope that explains it in rough terms. ZFS is, like any other feature-rich storage solution, quite a topic of its own that you can dive into.
 
Hmm, that should turn into an "unusedX" disk. What happens if you run qm rescan? Does it show up as "unusedX" after that?
Depends. Are you asking what happens in the GUI or inside the .conf file? Because in the GUI nothing happens and nothing ever will :)
In the .conf it automatically adds the line unused0: local-zfs:vm-102-disk-2.

Basically, what I already answered to @Dunuin:
<<<Only one line inside the VM s .conf file corresponding to that extra storage which is
unused0: local-zfs:vm-102-disk-2.
If I just delete this line and run qm rescan I get error message
Code:
VM 102 add unreferenced volume 'local-zfs:vm-102-disk-2' as 'unused0' to config
After this .conf file has auto inserted the unused0: local-zfs:vm-102-disk-2 line in there.
Renaming first before running qm rescan solves the problem and I can delete the line for the extra disk inside the .conf file without having error messaged from qm rescan.>>>>

That is why we have command for that now, that handles all that.
You mean qm move-disk???

If there is no cluster and you have independent Proxmox VE nodes, you could, if you want to stay within the Proxmox VE tools, create a backup of the source VM and restore that on the other node. Then use the move disk to the other guest there.
To only have the needed disk inside the backup, you can remove the "Backup" checkbox from the other disks of the source VM. Don't forget to add them once you have created the "migration backup"
... I have a lot to say here. This is what I tried last night. To be more specific:
I went to the Hardware tab, edited the VM's main disk and unchecked Backup. This way, when I made the backup, it only contained that extra disk.
After that I unchecked the extra disk's Backup option instead, and therefore the next backup was of the VM itself.
This way I had two separate files in /var/lib/vz/dump!!! (it is important to point this out, since this is what creates the problem afterwards)
vzdump-qemu-102-2022_03_15-01_04_21.vma.zst (VM disk)
vzdump-qemu-102-2022_03_15-00_59_53.vma.zst (VM's extra attached disk)
I moved those two files via SCP to the corresponding path (so /var/lib/vz/dump) on the other node, and
navigating via the GUI to local -> Backups I could see the two files on the new node; now, by clicking each one of them,
I have the ability to restore the VM, choosing whichever storage on the new node I want. Sweet up until here!!
The problem, though, is that it treats those files as VMs and not just disks. Bottom line: the VM restored successfully, but the extra disk, after restoration, shows up under the node as if it were a VM. Is there a solution to that? Something I could do differently maybe?

Not possible...
...my way above kind of gets around that.

If you want to see which snapshots you have in ZFS, run zfs list -t all. Snapshots are an inherent feature of ZFS.
I don't want to see it, I want to navigate to it. It shows me rpool/data/vm-103-disk-0@wintest2path, with no mount point of course,
and it is also not visible as /dev/zvol/rpool/data/vm-103-disk-0@wintest2path. Does that mean that, in order to manipulate it (copy, cut, etc.), I need to mount it,
like:
mount -o ro /dev/zvol/rpool/data/vm-103-disk-0@wintest2path /mnt/extradisk/ (extradisk being the folder I created to mount that extra storage), or is this action pointless? It's not as if mounting will reveal files inside the mount point, or will it?

ZFS is using datasets to create its hierarchy. There are two types of datasets that you will use. Filesystem ones and block based ones (volumes or zvol).
For VMs, zvols are used to provide a block device to the VM.
ZFS can send and receive these datasets. This can be used to either create backups of datasets of whole pools and that stream can be piped, for example via an ssh connection where one server is sending, and the other receiving and storing that sent dataset on its own ZFS pool.
Thank you for the extra info, even though I knew that already.

To specify a point in time, you need a snapshot, so the zfs send, sends everything up to that snapshot. You can also use 2 snapshots to send only incremental changes if the older one is already present on the target machine.
To transfer a VM to the other side (= node) and have it fully operational there, you need a backup or a snapshot. Everything else is just theory that doesn't answer, only confuses. I am trying to ask binary questions whose answers are this or that. Afterwards it is fine to learn why it was this or that. First I need (and by "I" I mean everyone else asking to learn as well) to know that it can be done, and then how it can be done.

I hope that explains it in rough terms. ZFS is, like any other feature rich storage solution, quite a topic of its own that you can dive in.
The problem is that even though I dive in, most of the subjects (at least the ones I need) are just generalities. Sorry if you are part of the wiki, but that is just the way it is. No specific examples (with names instead of variables) after the theory, just variables and variables and variables and options. That just tires people rather than encouraging them to learn more. I have read many of the sections again and again. Once more, theory. In practice it is a whole other story: you need to see and know the correct outcome beforehand in order to be able to troubleshoot afterwards. This is exactly what I have done for so many years with Proxmox (and with other aspects of my work too), and still you can see how basic some of my questions are. It is not that I can't install or configure Proxmox. I have done it multiple times on different production machines.

Once more, thank you for being here and trying to help me out with this, and sorry if at some point I seemed offensive. It is my tiredness showing; there is no such intention at all.
 
The correct way to copy datasets/zvols between pools (even if these are on different hosts) is to pipe the output of a 'zfs send' to a 'zfs recv'. As aaron already explained, you need a snapshot for that. This will result in an identical copy of the dataset or zvol on the target pool. If you just want to move it rather than copy it, you can remove the zvol from the source pool afterwards with the 'zfs destroy' command.
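A minimal sketch of that, with a hypothetical second pool called otherpool (whose data dataset already exists) as the target:
Code:
zfs snapshot rpool/data/vm-102-disk-1@copy
zfs send rpool/data/vm-102-disk-1@copy | zfs recv otherpool/data/vm-102-disk-1
# only after verifying the copy: remove the source zvol together with its snapshots
zfs destroy -r rpool/data/vm-102-disk-1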
 
Like aaron already explained you need a snapshot for that.
...probably, but it seems weird, since the snapshot itself is part of the storage and depends on it. So when you move only the snapshot to another place, that other place doesn't contain the origin of the snapshot, just the snapshot. It's like moving only the links to another place while the origin stays behind (if you want to help more on that, please don't post links about the differences between backups and snapshots; I've seen them all, and the videos as well, and still have questions when it comes to understanding it). To me it's like moving the shortcuts of a program and expecting them to work in the new location. They won't. They will ask for the original path.
 
ZFS is a copy-on-write filesystem. Think of it like a logbook where you never overwrite something that is already written on a page. All you can do is write to the first empty page at the end. If you want to edit something that was already written on a previous page, you can't erase a line and write something else in its place; instead you add a new line at the end of the last written page, like "Replace page 100 line 10 with X". And if you then want to know what the value of X currently is, you read the entire book from the first to the last page and follow all the changes.

Lets say it looks like this:
Page 50: set "tmp1" to "Hello"
Page 50: set "tmp2" to "World"
Page 70: create snapshot nr 1
Page 100: set value "tmp2" to "ieronymous"
Page 110: create snapshot nr 2
Page 120: set value "tmp1" to "Bye"
Page 130: now

There is no fixed place where the final value of "tmp1" or "tmp2" is stored. If you want to know what "tmp1 + tmp2" was at the time snapshot nr 1 was created, you read pages 1 to 70 and look for lines that mention "tmp1" or "tmp2". So for snapshot nr 1, "tmp1 + tmp2" would be "Hello World". If you want to know what "tmp1 + tmp2" was when snapshot nr 2 was created, you read pages 1 to 110 and you will see that it first was "Hello World" but then got edited to "Hello ieronymous". When reading the complete logbook from the first to the last page, "tmp1 + tmp2" would result in "Bye ieronymous".

And when using "zfs send | zfs recv" you need to use a snapshot to tell it which point in time you are referring to. To continue with the logbook metaphor: when using it with snapshot nr 1, it will copy pages 1 to 70 from one logbook to another logbook. When using snapshot nr 2, it will copy pages 1 to 110 to the other book. In case you already copied snapshot nr 1 (pages 1 to 70), you could tell it to do an incremental send/recv from snapshot nr 1 to snapshot nr 2. Then it would only copy pages 71 to 110, as pages 1 to 70 have already been copied over.

What you get when using "zfs send | zfs recv" is not an entry in the second logbook referencing the first logbook. You get an identical copy of all pages, from the first page to the last page before the snapshot you told it to use.

In reality it is of course way more complex than the metaphor. But I find it useful for understanding how ZFS works and why things are different compared to more traditional filesystems like ext4, NTFS and so on.

With those, you would have an index on the first page telling you on which page you will find what you want. And if you want to edit a value, you first look at the index to see which page holds that value, go to that page, use an eraser to remove the old value and write the new value on the same line. This is convenient and is way less work if you just want to quickly read or write something, as you only need to read two pages (the index and the page the index points you to) and not the entire book. But because you are always overwriting existing lines, you only get the final version of the book and there is no way to see how the book changed over time.

With ZFS you could easily track all changes over time by reading the complete book chronologically, because nothing ever gets overwritten.
I hope that helps to understand why ZFS has so much overhead and why it is so different.
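One way to watch this in practice (just a sketch; the snapshot name is made up): snapshot the zvol, let the VM overwrite some data, and then check how much space the snapshot still holds on to that the live zvol no longer references:
Code:
zfs snapshot rpool/data/vm-102-disk-1@before
# ...let the VM run and overwrite some data...
zfs list -o name,used,referenced rpool/data/vm-102-disk-1@before
# USED of @before grows as the live zvol diverges from the snapshotted state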
 
Code:
qm move-disk <source vmid> <disk> --target-vmid <target vmid> --target-disk <target disk>

For example:
Code:
qm move-disk 100 unused0 --target-vmid 200
If you don't provide a --target-disk, it should keep the one from the source, "unused0" in this case.
Hi, how do I set the target storage?
 
Bash:
# qm move_disk 103 scsi2 NFS --target-vmid 101  --target-disk scsi4 --format=qcow2
400 Parameter verification failed.
target-vmid: either set 'storage' or 'target-vmid', but not both
storage: either set 'storage' or 'target-vmid', but not both
qm move-disk <vmid> <disk> [<storage>] [OPTIONS]
 
either set 'storage' or 'target-vmid', but not both
As it says in the error message, use either one, but not both in combination ;). If you want to assign the disk to a different VM and also relocate it to a different storage, you will have to do it in two steps.
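Sticking with the IDs from the command above, a sketch of the two steps could look like this:
Code:
# step 1: hand the disk over to VM 101
qm move-disk 103 scsi2 --target-vmid 101 --target-disk scsi4
# step 2: now that it belongs to VM 101, move it to the NFS storage (converting to qcow2)
qm move-disk 101 scsi4 NFS --format qcow2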
 
Problem is though that it treats those files like VMs and not just storages. Bottom line is the VM restored successfully, but the extra storage after restoration shows under the node like it was a VM. Is there a solution to that? Something I could do differently maybe?
So you have restored the backup to a new and temporary VM?
Then you can use the
Code:
qm move-disk <source vmid> <disk> --target-vmid <target vmid> --target-disk <target disk>
command to move that disk over to the actual VM. This should leave the temporary VM without a disk, and at this point you can remove it.
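In other words, something along these lines (placeholder IDs):
Code:
# move the restored disk from the temporary VM to the actual VM
qm move-disk <temporary vmid> <disk> --target-vmid <actual vmid>
# once the temporary VM has no disks left, it can be removed
qm destroy <temporary vmid>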

As far as I understand, those two Proxmox VE nodes of yours are not in a cluster, right? That is why you will have to take some detours to achieve that. If they were part of the same cluster, you could (live) migrate the VMs between the nodes, then once on the same node, do the qm move-disk again.


Depends. Are you talking what happens in GUI or inside the .conf file? Because in GUI nothing happens and nothing ever will :)
In .conf it automatically adds the line unused0: local-zfs:vm-102-disk-2.

Okay, that is not how it should be. Once you detach a disk, it should show up in the hardware panel as an UnusedX disk.
Especially if you see it in the config file. You should also see it if you run qm config <vmid>.

What happens if you navigate to a different part of the GUI and then back to the hardware panel of that VM?
 
Did it!!! My way, but I did it!!! Here follows an example written the way I would have liked to read it in someone else's answer.

Node 1:
Make a backup of the VM with the extra disk and locate it at /var/lib/vz/dump/vzdump-qemu-103-2022_03_16-01_06_43.vma.zst
Copy/move this file, any way you want, from Node 1 to Node 2, into the same path /var/lib/vz/dump/
Work on Node 1 done.

Node 2:
From the GUI choose the Backups view, highlight vzdump-qemu-103-2022_03_16-01_06_43.vma.zst and restore it (here it becomes VM 202).
Navigate to the restored VM, choose the extra disk and detach it.
Then from the CLI check what this extra disk is called, along with its virtual path:
Code:
zfs list
rpool/data/vm-202-disk-1
Rename vm-202-disk-1 to vm-201-disk-1 (201 because that is the VMID of the already existing VM you want to add the disk to; the disk number must be one that VM doesn't already use, so for instance you can't use -0 here because that one is already taken):
Code:
zfs rename rpool/data/vm-202-disk-1 rpool/data/vm-201-disk-1
Afterwards run qm rescan so that PVE notices the renaming:
Code:
qm rescan
VM 201 add unreferenced volume 'local-zfs:vm-201-disk-1' as 'unused0' to config
(it noticed the rename and now treats the disk as the extra storage of the VM you want to add it to)
Edit the .conf file of the VM you want to add the disk to (here 201):
Code:
nano /etc/pve/qemu-server/201.conf
Delete the stale unused line at the end, keep the line unused0: local-zfs:vm-201-disk-1, and then
add the line scsi1: local-zfs:vm-201-disk-1,discard=on,size=5G to /etc/pve/qemu-server/201.conf.
Save and exit. Run qm rescan again and suddenly you'll notice the extra disk show up in the GUI as well!!
Code:
qm rescan
VM 201 remove entry 'unused0', its volume 'local-zfs:vm-201-disk-1' is in use
Start the VM and the disk is there, probably with the same drive letter it had on the previous VM (not so sure about that, though).
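Probably, on PVE 7.1 or newer, the renaming and the .conf editing above could also be replaced by the qm move-disk command aaron mentioned, run right after detaching the disk from the restored VM (I haven't tried this variant, so treat it as a sketch):
Code:
# the detached disk shows up as unused0 on the restored VM 202
qm move-disk 202 unused0 --target-vmid 201 --target-disk scsi1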

In conclusion, I was trying to find this method in order to be able to give a WinServer acting as a DC an extra disk where the shared
folders and other data for the domain users could be stored. That data would take far more space than the VM itself, and I can't afford to have
it backed up every time (yes, with snapshots it wouldn't be so big, but again I don't need that data backed up, and a simple uncheck of that Backup box would also do the trick). The main goal here is to have a way, if the VM gets messed up somehow, to detach that disk from it, move it to another node, attach it there to another Win VM, and just recreate the shares.
All this could be much simpler just by having two DCs and HA, etc. etc., but the company's budget is limited and we also have power problems from the national provider, to the point that we are in danger of frying everything any day (last time the neutral was cut outside the company, 420 V came through, and all the UPSes were popping like popcorn).
Anyway, that is what I wanted to do and how I achieved it. Of course it wouldn't have been possible without the help of @Dunuin and @aaron.

Thank you guys!!

PS: This way you convert abstract raw space into a vma.zst file and then back into raw space again.

@aaron So you have restored the backup to a new and temporary VM?
Why temporary? Proxmox treated it like a normal VM, even though the disk inside contains data and not an operating system that can boot.

@aaron
Then you can use the
Code:
qm move-disk <source vmid> <disk> --target-vmid <target vmid> --target-disk <target disk>
command to move that disk over to the actual VM. This should leave the temporary VM without a disk, and at this point you can remove it.
Probably yes, but I did it the way I described above and the result is the same, I believe; don't you agree?

As far as I understand, those two Proxmox VE nodes of yours are not in a cluster, right? That is why you will have to take some detours to achieve that. If they were part of the same cluster, you could (live) migrate the VMs between the nodes, then once on the same node, do the qm move-disk again.
If Proxmox ever makes the procedure of de-clustering as easy as clustering, by putting behind a <<remove from cluster>> button the appropriate scripts to rename the removed node and the VMs on it accordingly, then I'll try again some day. For someone who doesn't want to add a VM for quorum, or to have flood messages in syslog every time he shuts down one of the two servers, a cluster is out of the question. I did it once, liked the functionality and hated the procedure of removing a node afterwards. You have to walk around with paper sheets in your pocket in order to follow so many instructions, which may leave the cluster unusable at the end. Imagine that at production level.

Okay that is not how it should be. Once you detach a disk, it should show up in the hardware panel as UnusedX disk.
Especially if you see it in the config file. You should also see it if you run qm config <vmid>.

What happens if you navigate to a different part of the GUI and then back to the hardware panel of that VM?
It should, but it doesn't.
Hardware panel showing an UnusedX disk -> no such thing (by the way, I'm on the latest version of Proxmox).
I didn't try qm config <vmid>, only qm rescan.
Nothing happens, not only when I navigate elsewhere to refresh, but even after a reboot.
 
