No Disk visible when installing Windows in VM

ssldn
Hi, I have a problem: I start the VM and mount the Windows 10 image in it for installation, but the installer says it cannot find any disks, as in the image:

[screenshot: Windows installer showing no available disks]

And when I look at which hard disk is attached to the VM, it shows the local-lvm disk with SSD emulation.

The storage view for it shows:

[screenshot: v1vZqYv.png]

The Content tab of another storage, lvm2, says: mkdir /dev/vg0/heavyload1: File exists at /usr/share/perl5/PVE/Storage/DirPlugin.pm line 108. (500)
 
I am now trying to install the VirtIO SCSI drivers first. I had expected they would be installed automatically during the Windows installation.
 
Add a second CD drive and attach the latest Virtio ISO to it. Then during installation you can click on the Load Driver (Treiber Laden) button.

The driver should be in the `vioscsi` directory. After the driver is loaded you should see the disk.
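For reference, a minimal CLI sketch of the same step, assuming VM ID 100 and an ISO named virtio-win.iso already uploaded to the local storage (both names are assumptions, adjust to your setup):

Bash:
# Attach the VirtIO driver ISO as a second CD drive (ide3 is just a free slot here)
qm set 100 --ide3 local:iso/virtio-win.iso,media=cdrom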
 
Yes, this is the right procedure.
 
Try changing the hard disk type to IDE.
Hi, did you mean this as a possible solution for the LVM-thin storage, which isn't working at all?
I am not sure whether I created the LVM-thin pool correctly; that is why I am asking.
My storages are:
local
local-lvm
local2 (inactive)
storageprox
I am pretty sure that so far I could not activate the local2 directory storage, and I have not found any hint on how to do it.
My shell prints the following:
Bash:
root@ghost0 ~ # pvesm status
mkdir /dev/vg0/heavyload1: File exists at /usr/share/perl5/PVE/Storage/DirPlugin.pm line 107.
Name               Type     Status           Total            Used       Available        %
local               dir     active        26769660        15225300        10295496   56.88%
local-lvm       lvmthin     active       104857600         9688842        95168757    9.24%
local2              dir   inactive               0               0               0    0.00%
storageprox         dir     active        26769660        15225300        10295496   56.88%

Please look at this line: mkdir /dev/vg0/heavyload1: File exists at /usr/share/perl5/PVE/Storage/DirPlugin.pm line 107. I haven't found anything useful while researching on the net.
Look:

[screenshot: KtcOlAB.png]
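A few diagnostic commands that might help narrow this down; a minimal sketch assuming the standard Proxmox VE storage config location (the error suggests the directory storage's configured path collides with the LVM device node /dev/vg0/heavyload1):

Bash:
# Show how local2 is defined (type, path, content types)
cat /etc/pve/storage.cfg

# Check whether /dev/vg0/heavyload1 is a device node rather than a directory
ls -l /dev/vg0/

# Check where the heavyload1 filesystem is actually mounted
findmnt /dev/mapper/vg0-heavyload1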


I am very grateful for any help.
Thank you in advance.
Andre
 
Add a second CD drive and attach the latest Virtio ISO to it. Then during installation you can click on the Load Driver (Treiber Laden) button.

The driver should be in the `vioscsi` directory. After the driver is loaded you should see the disk.
Yes, I did see the disk then. I had already reached the desktop and then changed some settings under "Options" in the host settings on the node. I think afterwards the VM was gone, simply gone; so far I couldn't find anything in the logs or tasks. EDIT: I found out why I could not see the VM anymore: the admin account I was logged in with uses Linux PAM as its authentication method.
I also see that this particular admin has the VMAdmin permission, it is simply not his realm.
Clever stuff, I think. END EDIT.

But the rest of the VirtIO drivers are needed too, right?
Like here:
Repeat the process for other VirtIO drivers

EDIT 2: While researching I also found that one can get there in a single step: One-Step Installation of VirtIO Drivers
 
Bash:
root@ghost0 ~ # mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=32794292k,nr_inodes=8198573,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=6563828k,mode=755)
/dev/mapper/vg0-root on / type ext3 (rw,relatime)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
none on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=25,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=19883)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
configfs on /sys/kernel/config type configfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/nvme0n1p1 on /boot type ext3 (rw,relatime)
/dev/mapper/vg0-home on /home type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)
/dev/mapper/vg0-heavyload1 on /dev/vg0/heavyload1-s type ext4 (rw,relatime)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
/dev/fuse on /etc/pve type fuse (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions,allow_other)
tmpfs on /run/user/1001 type tmpfs (rw,nosuid,nodev,relatime,size=6563824k,mode=700,uid=1001,gid=1001)
is all I can see about what is going on and why the one volume does not become active.
 
And I did it as described in the PVE docs:

To make it always accessible add the following line in /etc/fstab.


Bash:
# echo '/dev/pve/vz /var/lib/vz ext4 defaults 0 2' >> /etc/fstab
But the storage is still not accessible.
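A hedged sketch of how that docs snippet might be adapted to this setup; the mount point /mnt/heavyload1 is an assumption (the thread never states where local2 should point), and the LV name is taken from the error messages above:

Bash:
# Assumption: local2 is a directory storage that should be backed by the
# heavyload1 LV, mounted on a regular filesystem path (not under /dev)
mkdir -p /mnt/heavyload1
echo '/dev/vg0/heavyload1 /mnt/heavyload1 ext4 defaults 0 2' >> /etc/fstab
mount -a

# Re-check whether the storage becomes active
pvesm status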
 
More input:
Bash:
root@ghost0 ~ # pvdisplay
  --- Physical volume ---
  PV Name               /dev/nvme0n1p2
  VG Name               vg0
  PV Size               <476.44 GiB / not usable <4.34 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              121967
  Free PE               0
  Allocated PE          121967
  PV UUID               MtTK78-mINc-7Y2S-ffth-ahnB-Jafg-TH9G9H

  --- Physical volume ---
  PV Name               /dev/nvme1n1
  VG Name               vg0
  PV Size               <476.94 GiB / not usable <2.34 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              122096
  Free PE               4030
  Allocated PE          118066
  PV UUID               56d07n-5HHb-D34r-91O3-Yf25-vbFn-7OqYhp

root@ghost0 ~ # lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg0/root
  LV Name                root
  VG Name                vg0
  LV UUID                3bnBxH-pDNh-wTUr-vpZ2-AvFc-dw32-Ott8qV
  LV Write Access        read/write
  LV Creation host, time rescue, 2020-07-27 14:33:47 +0200
  LV Status              available
  # open                 1
  LV Size                26.00 GiB
  Current LE             6656
  Segments               2
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/vg0/swap
  LV Name                swap
  VG Name                vg0
  LV UUID                2vXZKS-cc0d-Ck8v-ZqFR-XB62-Jjsz-3JoubA
  LV Write Access        read/write
  LV Creation host, time rescue, 2020-07-27 14:33:48 +0200
  LV Status              available
  # open                 2
  LV Size                6.00 GiB
  Current LE             1536
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/vg0/home
  LV Name                home
  VG Name                vg0
  LV UUID                C0Yhle-k7hr-qeUY-V79d-2eQri-ZK6P-yon7cD
  LV Write Access        read/write
  LV Creation host, time rescue, 2020-07-27 14:33:48 +0200
  LV Status              available
  # open                 1
  LV Size                455.43 GiB
  Current LE             116591
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

  --- Logical volume ---
  LV Path                /dev/vg0/heavyload1
  LV Name                heavyload1
  VG Name                vg0
  LV UUID                p4e0vc-eu02-ozB-PXwn-Lbzr-gyio-iSkHtE
  LV Write Access        read/write
  LV Creation host, time ghost0, 2020-08-03 23:55:43 +0200
  LV Status              available
  # open                 1
  LV Size                350.00 GiB
  Current LE             89600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:3

  --- Logical volume ---
  LV Name                vmdata
  VG Name                vg0
  LV UUID                hZqCpZ-sRft-b2qT-3ITN-YnXY-XzB2-HhamLH
  LV Write Access        read/write
  LV Creation host, time ghost0, 2020-08-05 14:30:11 +0200
  LV Pool metadata       vmdata_tmeta
  LV Pool data           vmdata_tdata
  LV Status              available
  # open                 2
  LV Size                100.00 GiB
  Allocated pool data    9.23%
  Allocated metadata     15.03%
  Current LE             25600
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:6

  --- Logical volume ---
  LV Path                /dev/vg0/vm-100-disk-0
  LV Name                vm-100-disk-0
  VG Name                vg0
  LV UUID                BsOByr-hRcU-mrz3-tERj-oOep-w9Vk-d4qBCs
  LV Write Access        read/write
  LV Creation host, time ghost0, 2020-08-15 20:43:30 +0200
  LV Pool name           vmdata
  LV Status              available
  # open                 1
  LV Size                45.00 GiB
  Mapped size            20.51%
  Current LE             11520
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:8
 
Try changing the hard disk type to IDE.
This solution worked for me. I had to fight to get the VM shut down, but after removing the disk and adding a new one as IDE, it showed up as an available target disk during installation.
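For reference, a rough CLI equivalent of that GUI procedure, assuming VM ID 100 and the disk vm-100-disk-0 seen in the lvdisplay output above (adjust both to your VM):

Bash:
# Detach the existing SCSI disk; it becomes an 'unused' disk, the data is kept
qm set 100 --delete scsi0

# Re-attach the same disk on the IDE bus instead
qm set 100 --ide0 local-lvm:vm-100-disk-0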
 
Same here: installing under Proxmox, IDE works, but the standard option, SCSI I think?, did not.
Same for the network: I declared a Realtek NIC and it is connected.
Hi @bemo47 , the procedure below has always worked for us:
Add a second CD drive and attach the latest Virtio ISO to it. Then during installation you can click on the Load Driver (Treiber Laden) button.

The driver should be in the `vioscsi` directory. After the driver is loaded you should see the disk.


Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
 
Hi @bemo47, the procedure below has always worked for us:
And with a recent Proxmox VE version, the GUI should now offer a second CD drive directly in the VM creation wizard if you choose Windows as the guest OS.
 
Thanks, it is working for the HDD, though using IDE was working as well.
And I have the same issue with the network: no network is connected, but if I change from VirtIO to Realtek, for example, it works.
Maybe I should also install a VirtIO network driver?
 
If you use a somewhat recent VirtIO ISO, there should be an installer that will install all the VirtIO drivers and services (guest agent, ballooning). Otherwise you will have to install the drivers manually as they are needed.

And yes, if you want to use the VirtIO network device, the drivers need to be installed too, as Windows doesn't ship them out of the box :)
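For reference, a minimal sketch of switching the NIC model from the CLI, assuming VM ID 100 and the default bridge vmbr0 (both assumptions; note that qm generates a new MAC address unless one is given):

Bash:
# Paravirtualized VirtIO NIC; requires the VirtIO network driver in Windows
qm set 100 --net0 virtio,bridge=vmbr0

# Fallback: emulated Realtek NIC that Windows supports out of the box
qm set 100 --net0 rtl8139,bridge=vmbr0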
 
