[SOLVED] Error writing [path] No space left on device

Exio

Member
Mar 30, 2022
Hi, I'm getting this error when I try to move a VM disk from one path to another. It doesn't make sense to me, because I think I have enough free space in my LVM: approx. 380 GB in the local volume and 1.5 TB in local-lvm.

I'm trying to execute this command:

Code:
mv /var/lib/vz/images/100/W10-BASE-02B2-disk1.qcow2 /dev/pve/vm-100-disk-2 && mv /var/lib/vz/images/100/W10-BASE-02B2-disk2.qcow2 /dev/pve/vm-100-disk-3

Edit: these two images together don't even add up to 120 GB.

I have read some posts with the same error, but none of them solved my problem.

You will probably ask for more info; ask me anything, I'm just not sure what info is needed here.

What is happening?

Thanks.
 
You can start with explaining what you are trying to do.

Your destination - /dev - is a special operating system directory where very specific information is stored, i.e. device files:
https://tldp.org/LDP/Linux-Filesystem-Hierarchy/html/dev.html

A regular qcow file should not be in /dev. Furthermore, /dev is not meant to hold data files at all: on a modern system it is a small, memory-backed devtmpfs mount rather than part of your LVM storage. Assuming you did a regular install, the root filesystem itself is also only ~8-20GB.
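
You can see this quickly on the host; a minimal check, nothing Proxmox-specific:

Code:
df -h /      # free space on the root filesystem
df -h /dev   # /dev is typically a small, memory-backed devtmpfs, not your LVM storage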


Blockbridge: Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
I am trying to move an already created disk to a path where PVE recognizes it as such. The problem started when I tried to create a VM from disks that already contain a Windows installation with a number of required apps.
The thing is, when I try to add these two disks to the VM's hardware, no option recognizes them, so I cannot add them or run the VM.

Code:
root@proxmox:~# ls -l /dev/pve
total 32921780
lrwxrwxrwx 1 root root           7 Mar 30 15:14 backups -> ../dm-6
lrwxrwxrwx 1 root root           7 Mar 30 15:14 root -> ../dm-1
lrwxrwxrwx 1 root root           7 Mar 30 15:14 swap -> ../dm-0
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-100-disk-0 -> ../dm-21
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-100-disk-1 -> ../dm-22
-rw------- 1 root root 33711902720 Mar 30 18:16 vm-100-disk-2
lrwxrwxrwx 1 root root           7 Mar 30 15:14 vm-101-disk-0 -> ../dm-8
lrwxrwxrwx 1 root root           7 Mar 30 15:14 vm-101-disk-1 -> ../dm-9
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-102-disk-0 -> ../dm-10
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-102-disk-1 -> ../dm-11
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-102-disk-2 -> ../dm-12
lrwxrwxrwx 1 root root           7 Mar 30 15:14 vm-104-disk-0 -> ../dm-7
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-105-disk-0 -> ../dm-13
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-105-disk-1 -> ../dm-14
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-106-disk-0 -> ../dm-20
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-106-disk-1 -> ../dm-23
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-200-disk-0 -> ../dm-15
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-200-disk-1 -> ../dm-16
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-200-disk-2 -> ../dm-17
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-200-disk-3 -> ../dm-18
lrwxrwxrwx 1 root root           8 Mar 30 15:14 vm-200-disk-4 -> ../dm-19

I thought that since the other VMs' disks are in that path, the disks I'm trying to import would also go there.

I didn't do a standard PVE install; I resized the root partition to approx. 580 GB.

Thanks for your reply.
 
As you can see in your output, the "other" disks are symbolic links to special devices "dm-xx", which are Device Mapper devices - probably LVM, perhaps multipath.
The only "regular" file is there from your attempt to copy the image, and it should not be there.
Even if you manage to place a qcow file into /dev/pve, Proxmox will not "recognize" it; it doesn't work like that.
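
If you want to see what those dm-* devices actually map to, the standard block/LVM tools will show it - a quick check, assuming the default "pve" volume group that your /dev/pve listing suggests:

Code:
lsblk        # shows the device-mapper hierarchy under your physical disks
lvs pve      # lists the logical volumes in the "pve" volume group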

There are many variables in your situation:
- clearly you are mostly using some sort of "block" storage
- probably you want to import your qcow disk to that block storage
- if true ^, you can use "qm importdisk" - see "man qm" for more information (a short sketch follows this list)
- the disk is already in the "right" place, unless you changed configuration
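
A minimal sketch of that import path, using the file names from your first post (the target storage "local-lvm" and raw format are assumptions based on your setup):

Code:
qm importdisk 100 /var/lib/vz/images/100/W10-BASE-02B2-disk1.qcow2 local-lvm --format raw
qm importdisk 100 /var/lib/vz/images/100/W10-BASE-02B2-disk2.qcow2 local-lvm --format raw

The example below, by contrast, shows the directory-storage ("local") workflow: allocating, listing and attaching a qcow2 by its volid.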

Code:
pvesm alloc local 100 vm-100-disk-10.qcow2 1
Formatting '/var/lib/vz/images/100/vm-100-disk-10.qcow2', fmt=qcow2 cluster_size=65536 extended_l2=off preallocation=metadata compression_type=zlib size=1024 lazy_refcounts=off refcount_bits=16
successfully created 'local:100/vm-100-disk-10.qcow2'

pvesm list local
Volid                                                     Format  Type           Size VMID
local:100/vm-100-disk-10.qcow2                            qcow2   images         1024 100
local:vztmpl/ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz tgz     vztmpl    210160363

qm set 100 --ide0 local:100/vm-100-disk-10.qcow2
update VM 100: -ide0 local:100/vm-100-disk-10.qcow2

/var/lib/vz/images/100# cp vm-100-disk-10.qcow2 somerandomdisk.qcow2
pvesm list local
Volid                                                     Format  Type           Size VMID
local:100/somerandomdisk.qcow2                            qcow2   images         1024 100
local:100/vm-100-disk-10.qcow2                            qcow2   images         1024 100
local:vztmpl/ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz tgz     vztmpl    210160363

qm set 100 --ide1 local:100/somerandomdisk.qcow2
update VM 100: -ide1 local:100/somerandomdisk.qcow2



Blockbridge: Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Well... I have a bigger problem now...
https://forum.proxmox.com/threads/cannot-access-web-gui.107376/
I will continue with this one after solving that new one... omg...
 
So do I need to execute all of this? I mean, I want to add both .qcow2 disk files to my VM 100 hardware. Is that code right for that?
 
No, you don't need to execute all of it. It was an example of creating, finding, and using a random qcow with a non-conforming name.
You only need the last step, based on your question.
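
In your case "the last step" would look roughly like this (assuming the qcow2 files from your first post sit under /var/lib/vz/images/100/ and that the "local" storage accepts the "images" content type; the bus/slot choice is just an example):

Code:
qm set 100 --ide1 local:100/W10-BASE-02B2-disk1.qcow2
qm set 100 --ide2 local:100/W10-BASE-02B2-disk2.qcow2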


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
Code:
root@proxmox:~# qm set 100 --ide1 local:100//var/lib/vz/images/100/W10-BASE-02B2-disk2.qcow2

unable to parse volume filename '/var/lib/vz/images'

Something has to be wrong...
 
From my example:

This is how you can list the available disks to find their PVE-appropriate name and path:
#pvesm list local
Volid                                                     Format  Type           Size VMID
local:100/somerandomdisk.qcow2                            qcow2   images         1024 100
local:100/vm-100-disk-10.qcow2                            qcow2   images         1024 100
local:vztmpl/ubuntu-16.04-standard_16.04.5-1_amd64.tar.gz tgz     vztmpl    210160363

And this is how you use the output of the list to add a disk:

#qm set 100 --ide1 local:100/somerandomdisk.qcow2
update VM 100: -ide1 local:100/somerandomdisk.qcow2


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Understood, but...

Code:
root@proxmox:~# pvesm list local
Volid                                         Format  Type             Size VMID
local:iso/MacOS_-_Monterey_-_full.img         iso     iso       15032385536
local:iso/OpenCore-v16.iso                    iso     iso         157286400
local:iso/ubuntu-20.04.3-desktop-amd64.iso    iso     iso        3071934464
local:iso/wifislax64-3.0-final.iso            iso     iso        2472542208
local:iso/Windows_10_x86_y_x64_marzo_2021.iso iso     iso        8660713472
local:iso/Windows_11_x64.iso                  iso     iso        5587453952
local:iso/Windows_7_SP1_Ultimate_X64.iso      iso     iso        3252027392
local:iso/Windows_Server_2019_X64.iso         iso     iso        5310353408
local:iso/Windows_Server_2022_x64.iso         iso     iso        5055336448

I can't see my .qcow2 files. They are located in /var/lib/vz/template/iso, as you can see here:

Code:
root@proxmox:~# ls -l /var/lib/vz/template/iso
total 203068356
-rw-r--r-- 1 root root 54232481792 Mar 30 09:47 W10-BASE-02B2-disk1.qcow2
-rwxr-xr-x 1 root root 54476440576 Mar 30 01:13 W10-BASE-02B2-disk1.vmdk
-rw-r--r-- 1 root root 25311903744 Mar 30 09:46 W10-BASE-02B2-disk2.qcow2
-rwxr-xr-x 1 root root 25319964672 Mar 30 01:23 W10-BASE-02B2-disk2.vmdk
root@proxmox:~#

I have both the VMDK and qcow2 formats.
 
You need to get your paths straight (pun intended).

In comments #1 and #8 you had:
/var/lib/vz/images/100/

In #10 your files appeared in
"/var/lib/vz/template/iso"

Neither qcow nor vmdk is an "iso", obviously.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
I moved these disks there with mv to see what happens when I run pvesm list local; that's why the path changed. Anyway, where are the qcow2 and vmdk disks supposed to be so that I can add them to VM 100?
 
/var/lib/vz/images/100/ is the right path if all you want to do is attach an existing qcow to vmid 100.
It should work, unless the attributes of your "local" storage do not include "images":

https://pve.proxmox.com/wiki/Storage#_storage_configuration
Code:
Default storage configuration (/etc/pve/storage.cfg)
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
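
A quick way to check which storages currently accept VM disk images (assuming stock pvesm):

Code:
pvesm status --content images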


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
It doesn't work. I moved both disks back to /var/lib/vz/images/100/ and pvesm list local still doesn't show them. I'm a bit stuck, what should I do to fix this?
 
OK, I was writing the command the wrong way. I did this:
Code:
root@proxmox:~# qm set 100 --ide2 local:100/W10-BASE-02B2-disk1.qcow2
update VM 100: -ide2 local:100/W10-BASE-02B2-disk1.qcow2
root@proxmox:~# qm set 100 --ide3 local:100/W10-BASE-02B2-disk2.qcow2
update VM 100: -ide3 local:100/W10-BASE-02B2-disk2.qcow2
root@proxmox:~#
And now the disks are in the VM's hardware, but when I start it I get this error on the VM's console:
"TASK ERROR: storage 'local' does not support content-type 'images'"

I think I should move these disks to local-lvm, but what is the path?
 
No, local-lvm is not for qcow storage. To convert your qcow to an LVM volume you need to use "qm importdisk" (see "man qm"),
e.g.: qm importdisk 100 ./myrandom.qcow local-lvm --format raw
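
importdisk only copies the data in; the new volume then shows up as an "unusedX" entry in the VM config and still has to be attached. A sketch (the volume name/number is whatever importdisk reports, shown here as a placeholder; scsi is just one sensible bus choice):

Code:
qm config 100                                # look for the new "unusedX: local-lvm:vm-100-disk-X" line
qm set 100 --scsi1 local-lvm:vm-100-disk-X   # attach it; replace X with the actual number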

Did you check that your "local" storage is marked to support "images", as I mentioned earlier? What are the contents of your "/etc/pve/storage.cfg"?


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
storage.cfg contains this:

Code:
root@proxmox:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
 
If you want to use the /var/lib/vz location, which is presented as a "directory" type storage under the label "local", to store and serve qcow images, you need to mark it as capable of doing so by adding the "images" content option, which it is currently lacking.

Code:
pvesm set local -content iso,images,backup,vztmpl
would do it
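
You can confirm the change afterwards:

Code:
pvesm list local                              # the qcow2 files under /var/lib/vz/images/100 should now show up
grep -A2 'dir: local' /etc/pve/storage.cfg    # the content line should now include "images"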


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
OK, done.

Now when I add the disks to the VM and start it, the log shows:

"swtpm_setup: Not overwriting existing state file.
kvm: -device ide-hd,bus=ide.0,unit=1,drive=drive-ide1,id=ide1: Can't create IDE unit 1, bus supports only 1 units
stopping swtpm instance (pid 138795) due to QEMU startup error
TASK ERROR: start failed: QEMU exited with code 1"

I set the disks as IDE0 and IDE1.

To work around the bus error, I also tried:
qm importdisk 100 /var/lib/vz/images/100/W10-BASE-02B2-disk1.qcow2 local --format raw
and
qm set 100 --sata0 local:100/W10-BASE-02B2-disk1.qcow2
qm set 100 --sata1 local:100/W10-BASE-02B2-disk2.qcow2
...and I'm still getting the same error.

Could it be an image error?
The first disk was a .VDI disk, the second one a .VHD disk. I converted both to VMDK and then to QCOW2.

Is there a way to convert a VHD or a VDI to VMDK or QCOW2 in Proxmox? I used VirtualBox's clonehd tool to perform the process mentioned above.
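
(For reference: qemu-img, which ships with PVE, can convert VDI/VHD images straight to qcow2, so the VirtualBox detour is not strictly needed. A minimal sketch, with the file names assumed and the source files copied to the host first:)

Code:
qemu-img convert -O qcow2 W10-BASE-02B2-disk1.vdi W10-BASE-02B2-disk1.qcow2
qemu-img convert -O qcow2 W10-BASE-02B2-disk2.vhd W10-BASE-02B2-disk2.qcow2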
 

the "importdisk" command you ran is essentially an equivalent of :
Code:
 cp W10-BASE-02B2-disk1.qcow2 vm-100-disk-0.qcow
So it is not very useful for the end result.

The IDE bus in KVM supports only one device per bus, afaik. You should really examine your VM configuration and understand what is being used and what is available. There are other options to use for the disk, e.g. SCSI with VirtIO.

If the above commands are the only thing you ran, I suspect that you now have two (or more) entries in your configuration that all reference the same disk - including the offending "ide" one. You will need to use "qm set --delete" to clean the configuration up.
You can examine the VM configuration via "qm config 100".
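
A sketch of that cleanup, purely as an illustration (the exact entries to delete depend on what "qm config 100" actually shows; the scsi/virtio settings are just one reasonable option, not something prescribed above):

Code:
qm config 100                              # see which ideN/sataN entries reference the same image
qm set 100 --delete ide2,ide3,sata0,sata1  # drop the duplicates (adjust to your actual config)
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi0 local:100/W10-BASE-02B2-disk1.qcow2 --scsi1 local:100/W10-BASE-02B2-disk2.qcow2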


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
