Using a .raw file as the hard disk for a VM.

lolloandr

New Member
Nov 10, 2023
Hello, I have this configuration:

- SSD with Proxmox installed

- RAID of 4 disks acting as the storage for the VMs



My SSD failed; I replaced it with a new one and reinstalled Proxmox, and now I need to get my machines running again. Proxmox automatically picked up the RAID (see here), but I don't see any obvious way to use these images as hard drives for my VMs. What am I missing?
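A quick way to check what Proxmox actually detected is its storage CLI. A minimal sketch (the storage name `bigData` is taken from the config posted later in this thread; substitute your own):

```shell
# List all configured storages and whether they are active
pvesm status

# List the volumes Proxmox sees on a specific storage
pvesm list bigData
```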

Thank you
 
Hello

Could you share your /etc/pve/storage.cfg with us, and also the config of the VM you want to attach it to (/etc/pve/qemu-server/<VMID>.conf)?
 
Sure, here you go:

/etc/pve/storage.cfg
Code:
dir: local
        path /var/lib/vz
        content backup,vztmpl,iso

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

lvm: bigData
        vgname bigData
        content rootdir,images
        shared 0


/etc/pve/qemu-server/100.conf
Code:
boot: order=scsi0;ide2;net0
cores: 1
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=8.0.2,ctime=1699611923
name: local
net0: virtio=6A:81:9E:1D:45:3D,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: bigData:vm-100-disk-1,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=82fec99b-cf5e-4e34-b797-2e6372f3ea3e
sockets: 1
vmgenid: fa7c619f-09c0-46b4-b8cc-be27b39e6c5f
 
Those are your logical volumes. Please post the output of pvdisplay, vgdisplay and lvdisplay.
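For reference, these are standard LVM reporting commands; the compact `pvs`/`vgs`/`lvs` variants give the same information one line per object, which is often easier to paste into a forum post:

```shell
# Verbose reports, as requested above
pvdisplay
vgdisplay
lvdisplay

# Compact one-line-per-object alternatives
pvs
vgs
lvs -a   # -a also shows hidden volumes such as thin-pool metadata
```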
 
/etc/pve/qemu-server/100.conf
Code:
boot: order=scsi0;ide2;net0
cores: 1
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=8.0.2,ctime=1699611923
name: local
net0: virtio=6A:81:9E:1D:45:3D,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: bigData:vm-100-disk-1,iothread=1,size=32G -----------------------------------------
scsihw: virtio-scsi-single
smbios1: uuid=82fec99b-cf5e-4e34-b797-2e6372f3ea3e
sockets: 1
vmgenid: fa7c619f-09c0-46b4-b8cc-be27b39e6c5f
You do not have a disk declared on the bigData storage. Did you rename it to vm-100-disk-1?
 
Those are your logical volumes. Please post the output of pvdisplay, vgdisplay and lvdisplay.
Here you go:
Code:
root@pve:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sdb3
  VG Name               pve
  PV Size               464.76 GiB / not usable <3.01 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              118978
  Free PE               4097
  Allocated PE          114881
  PV UUID               af7vUw-t8iH-f7io-wX1s-dZJ1-tzaw-xWJurO
 
  --- Physical volume ---
  PV Name               /dev/sda
  VG Name               bigData
  PV Size               <8.00 TiB / not usable <3.88 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              2097151
  Free PE               1245183
  Allocated PE          851968
  PV UUID               9m39f1-tHaA-OGig-spgS-tVP1-z11E-B8cP1F

Code:
root@pve:~# vgdisplay
  --- Volume group ---
  VG Name               pve
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  7
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <464.76 GiB
  PE Size               4.00 MiB
  Total PE              118978
  Alloc PE / Size       114881 / 448.75 GiB
  Free  PE / Size       4097 / 16.00 GiB
  VG UUID               bCTgWn-4srh-5cTd-2OJL-vkhY-xhKr-hTe2vf
 
  --- Volume group ---
  VG Name               bigData
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  31
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                6
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <8.00 TiB
  PE Size               4.00 MiB
  Total PE              2097151
  Alloc PE / Size       851968 / 3.25 TiB
  Free  PE / Size       1245183 / <4.75 TiB
  VG UUID               pqjjdS-Vn4U-xWLc-CCz5-s6i4-WGbP-cn0N01


Code:
root@pve:~# lvdisplay
  --- Logical volume ---
  LV Name                data
  VG Name                pve
  LV UUID                FgW31E-N4dn-ItL7-flcQ-0esd-AM4c-c6Lq2i
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-11-08 10:47:57 +0100
  LV Pool metadata       data_tmeta
  LV Pool data           data_tdata
  LV Status              available
  # open                 0
  LV Size                337.86 GiB
  Allocated pool data    0.00%
  Allocated metadata     0.50%
  Current LE             86493
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:9
 
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                3Uwylu-BccP-96tl-RvDs-qWPp-FnpX-NO0ai6
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-11-08 10:47:42 +0100
  LV Status              available
  # open                 2
  LV Size                8.00 GiB
  Current LE             2048
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
 
  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                PL1DzK-D3s2-jR35-JUVS-b7te-1HGA-tWEyRw
  LV Write Access        read/write
  LV Creation host, time proxmox, 2023-11-08 10:47:42 +0100
  LV Status              available
  # open                 1
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
 
  --- Logical volume ---
  LV Path                /dev/bigData/vm-101-disk-0
  LV Name                vm-101-disk-0
  VG Name                bigData
  LV UUID                YG1ABF-neuW-nHWI-VVR9-TqRp-g3yM-20bdcd
  LV Write Access        read/write
  LV Creation host, time pve, 2023-04-29 19:52:14 +0200
  LV Status              available
  # open                 0
  LV Size                128.00 GiB
  Current LE             32768
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:5
 
  --- Logical volume ---
  LV Path                /dev/bigData/vm-102-disk-0
  LV Name                vm-102-disk-0
  VG Name                bigData
  LV UUID                XfogHO-NJ0k-jpk1-CODd-ca09-Qvau-xw58cy
  LV Write Access        read/write
  LV Creation host, time pve, 2023-05-02 15:45:14 +0200
  LV Status              available
  # open                 0
  LV Size                3.00 TiB
  Current LE             786432
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6
 
  --- Logical volume ---
  LV Path                /dev/bigData/vm-104-disk-0
  LV Name                vm-104-disk-0
  VG Name                bigData
  LV UUID                AZLtLq-NSl3-1dOA-vegB-2YTb-AHem-7AmdTN
  LV Write Access        read/write
  LV Creation host, time pve, 2023-09-09 22:02:26 +0200
  LV Status              available
  # open                 0
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:7
 
  --- Logical volume ---
  LV Path                /dev/bigData/vm-112-disk-0
  LV Name                vm-112-disk-0
  VG Name                bigData
  LV UUID                Ie4GlG-E6Dv-ZOFo-nwMv-EMi9-lxy2-cqiHwX
  LV Write Access        read/write
  LV Creation host, time pve, 2023-09-10 15:49:52 +0200
  LV Status              NOT available
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
 
  --- Logical volume ---
  LV Path                /dev/bigData/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                bigData
  LV UUID                E1f3jC-cpZO-BHdX-7aUM-kg0x-Mi2I-h7lLPA
  LV Write Access        read/write
  LV Creation host, time pve, 2023-11-10 11:19:22 +0100
  LV Status              available
  # open                 0
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:4
 
  --- Logical volume ---
  LV Path                /dev/bigData/vm-100-disk-1
  LV Name                vm-100-disk-1
  VG Name                bigData
  LV UUID                9PULko-Wik8-gzEl-axXw-gcp7-abST-AuNJnI
  LV Write Access        read/write
  LV Creation host, time pve, 2023-11-10 11:25:24 +0100
  LV Status              available
  # open                 0
  LV Size                32.00 GiB
  Current LE             8192
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8

Sorry for the late reply and thank you for your help!
 
You do not have a disk declared on the bigData storage. Did you rename it to vm-100-disk-1?
I tried; it won't boot. The VM BIOS couldn't find a bootable image.

 
Have you tried to simply add
Code:
scsi1: bigData:vm-100-disk-0,iothread=1,size=32G
to the vm config file?
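Instead of hand-editing the config file, the same change can be made through Proxmox's `qm` CLI. A sketch, assuming VMID 100 and the volume names from this thread:

```shell
# Attach the existing LV as a second SCSI disk on VM 100
qm set 100 --scsi1 bigData:vm-100-disk-0,iothread=1

# Verify the resulting configuration
qm config 100
```

Going through `qm set` has the advantage that Proxmox validates the volume reference immediately instead of failing later at VM start.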
 
Have you tried to simply add
Code:
scsi1: bigData:vm-100-disk-0,iothread=1,size=32G
to the vm config file?
Hi, I tried. This is what I get:
(screenshot of the error attached)

This is the config file:
Code:
boot: order=scsi0;ide2;net0
cores: 1
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=8.0.2,ctime=1699611923
name: local
net0: virtio=6A:81:9E:1D:45:3D,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: vm-112-disk-0,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=82fec99b-cf5e-4e34-b797-2e6372f3ea3e
sockets: 1
vmgenid: fa7c619f-09c0-46b4-b8cc-be27b39e6c5f

And this is a screenshot of the drives i have:
(screenshot of the available drives attached)
 
There seem to be two disks associated with vm-100,
but in your config file I can only see scsi0: bigData:vm-100-disk-1,iothread=1,size=32G

Have you tried adding BOTH disks to /etc/pve/qemu-server/100.conf? Something like
Code:
boot: order=scsi0;ide2;net0
cores: 1
cpu: x86-64-v2-AES
ide2: none,media=cdrom
memory: 2048
meta: creation-qemu=8.0.2,ctime=1699611923
name: local
net0: virtio=6A:81:9E:1D:45:3D,bridge=vmbr0,firewall=1
numa: 0
ostype: l26
scsi0: bigData:vm-100-disk-0,iothread=1,size=32G
scsi1: bigData:vm-100-disk-1,iothread=1,size=32G
scsihw: virtio-scsi-single
smbios1: uuid=82fec99b-cf5e-4e34-b797-2e6372f3ea3e
sockets: 1
vmgenid: fa7c619f-09c0-46b4-b8cc-be27b39e6c5f
?
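One more detail from the lvdisplay output above: vm-112-disk-0 is marked "LV Status NOT available", so no VM can use it until it is activated. A sketch of the relevant commands (standard LVM and Proxmox tools; on older PVE releases the rescan subcommand is `qm rescan` rather than `qm disk rescan`, so verify against your version):

```shell
# Activate an LV that lvdisplay reports as "NOT available"
lvchange -ay bigData/vm-112-disk-0

# Let Proxmox rescan its storages and register any volumes it
# finds for a VMID as "unused" disks in that VM's config
qm disk rescan --vmid 112
```

Once a volume shows up as an unused disk in the VM's Hardware tab, it can be re-attached from the GUI as well.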
 
