FreeNAS/FreeBSD hot-plugging of disks not working

BloodyIron

Hi Folks,

DISCLAIMER: THIS IS FOR LAB TESTING, NOT FOR PRODUCTION, PLEASE LEAVE YOUR PITCHFORKS AT HOME.

Okay, so I'm trying to run a FreeNAS VM here to test all kinds of configuration scenarios, including disaster scenarios of course. This is FreeNAS 9.10, the latest version as of this writing (FreeNAS-9.10.1-U4 (ec9a7d3)). It uses FreeBSD 10.3-RELEASE as the underlying OS.

I'm using Proxmox VE 4.3, updated as of yesterday or so.

What I am trying to do is attach a virtual disk (as in a .qcow2 disk, not a pass-through disk) to the VM while it's running, so I can do hot-swap testing. I have tried all the controller options, and tried adding the disk as VirtIO/SCSI/SATA in as many permutations as seem possible.

When I add the disk, Proxmox usually reports success, but FreeNAS never sees that a new disk has been connected to the VM. Then when I try to remove the disk, I get:

"Parameter verification failed. (400)

virtio1: hotplug problem - error on hot-unplugging device 'virtio1'"

This happens no matter what controller/disk configuration combination I use (thus far).

I have scoured the internet high and wide and can't figure out what I'm doing wrong. Can I please get some input from those in the know? Also, if there's a shortfall in Proxmox's support for this, is there any way I can help correct it?
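For reference, the CLI equivalent of what I'm clicking through in the GUI is roughly this (a sketch; the VMID 100 and the storage name "local" are hypothetical):

qm set 100 --hotplug disk       # make sure disk hotplug is enabled (it should be by default)
qm set 100 --virtio1 local:8    # hot-add a new 8 GB disk as virtio1
qm set 100 --delete virtio1     # hot-unplug it again -- this is where I get the 400 error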

Thanks! :)
 
Okay, I have had _LIMITED_ success with a setup I found.

FreeNAS, still in a VM.
  1. Controller: "VirtIO SCSI"
  2. Add disks: "SCSI"
I can add disks and FreeNAS actually enumerates them; I can see them in the webGUI. Adding them seems to be reliable. Removing them is inconsistent, though; sometimes it gives me errors when I try to detach them:

"
Parameter verification failed. (400)

scsi0: hotplug problem - error on hot-unplugging device 'scsihw0'
"​

So I'm unsure if there's a bug here or what. FreeNAS on the console says the disk is detached and "destroyed", and stops seeing it, but Proxmox puts it in a red strike-through state. That disk image file is unusable until I shut down the VM (a reboot doesn't fix it).
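When it gets stuck in that state, one thing I can do is peek at the QEMU monitor to see whether QEMU still has the drive attached (a sketch, again with the hypothetical VMID 100):

qm monitor 100
qm> info block     # list the drives QEMU still has attached
qm> quit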
 
Although now, after shutting down the FreeNAS VM, turning it back on, and adding the disks back, FreeNAS doesn't quite see them.

Argh, this is quite inconsistent :(
 
Do you run the latest version?

AFAIR we had such issues in some older releases.
 
The latest version of FreeNAS, and no updates are presented for Proxmox, so I assume the latest there too (definitely on 4.3).

Post the output of:

> pveversion -v
 
proxmox-ve: 4.3-66 (running kernel: 4.4.19-1-pve)
pve-manager: 4.3-1 (running version: 4.3-1/e7cdc165)
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-46
qemu-server: 4.0-88
pve-firmware: 1.1-9
libpve-common-perl: 4.0-73
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-61
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-qemu-kvm: 2.6.1-6
pve-container: 1.0-75
pve-firewall: 2.0-29
pve-ha-manager: 1.0-35
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.4-1
lxcfs: 2.0.3-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
zfsutils: 0.6.5.7-pve10~bpo80
 
Yeah, my apt repos are set right. I don't have a sub, so I have the enterprise repo commented out.

There are no updates presented, though, even though the versions in my "pveversion -v" output are lower than those outlined on that page. So I'm unsure why I'm not seeing newer packages. These particular nodes in the cluster are recent additions, so there shouldn't be anything funky preventing them from seeing newer updates ("shouldn't").

So, I'm unsure what I'm missing here, but it sure seems I'm behind on some updates. What should I do?

apt-get upgrade and apt-get dist-upgrade present no available packages.
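For anyone following along, and assuming the repo in question is the standard no-subscription one: on PVE 4.x (Debian jessie) it's a line like this in /etc/apt/sources.list, followed by the usual update:

deb http://download.proxmox.com/debian jessie pve-no-subscription

apt-get update && apt-get dist-upgrade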


 
Okay, adding the previously mentioned repo to sources.list brought the packages to the same level as, or newer than, the wiki page you linked to.

HOWEVER, the pve-kernel is listed twice, and I'm not sure why:

proxmox-ve: 4.3-72 (running kernel: 4.4.24-1-pve)
pve-manager: 4.3-13 (running version: 4.3-13/7da29e06)
pve-kernel-4.4.24-1-pve: 4.4.24-72
pve-kernel-4.4.19-1-pve: 4.4.19-66
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-100
pve-firmware: 1.1-10
libpve-common-perl: 4.0-83
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-70
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.3-19
pve-qemu-kvm: 2.7.0-9
pve-container: 1.0-87
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.6-2
lxcfs: 2.0.5-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
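(My guess is the old kernel package simply stays installed alongside the new one after a dist-upgrade, so you can boot back into it; something like this should confirm that, and the old one can be removed once the new kernel proves good:)

dpkg -l 'pve-kernel-*'                    # both kernel packages show as installed
apt-get remove pve-kernel-4.4.19-1-pve    # optional: drop the old kernel after rebooting into 4.4.24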


I assume you'll want me to try the disk attaching now? Or any other thoughts? I'll mess around with disk attach/detach some more now that I'm updated and see how that goes, since I'm sure it might be a little bit before I get a response here ;)

Thx for the help so far :^)
 
Okay, so I tried attaching disks while the VM was running on that node; the VM did not detect that a new disk was present (tried with two VM disk images).

I tried moving the VM to another node, same test, same results.

So I think this is not something I'm doing wrong here; I'm just not sure where the issue lies exactly.
 
So far as I can tell, no. FreeNAS/FreeBSD doesn't even register that a drive is attached.

I updated the entire cluster today to the latest (4.4) package versions, tried again, and still saw no improvement.
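One thing I still want to try from the FreeNAS shell is forcing FreeBSD's CAM layer to rescan the bus, in case it just isn't noticing the new LUN on its own (pure guesswork on my part, not a confirmed fix):

camcontrol devlist      # list the disks CAM currently knows about
camcontrol rescan all   # force a rescan of all SCSI buses
dmesg | tail            # watch for a new daX device turning up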
 
I tried this with FreeBSD 11, adding/removing a hard disk of type SCSI, "scsi" here being virtio-scsi (not the single variant).

After adding a disk, I get a message on the system console informing me of the new device name. Same when a disk is removed:

da3: < > detached
(da3:vtscsi0:0:0:3): Periph destroyed
 
"scsi being here virtio-scsi (not the single variant)"
This sounds reasonable, since virtio-scsi-single uses a separate controller for each disk, which implies adding and removing a PCIe controller each time a disk is hot-plugged. I think adding or removing a controller on a running system is not supported.
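For illustration, the controller type is the scsihw line in the VM config (/etc/pve/qemu-server/<vmid>.conf); a sketch of the two variants:

scsihw: virtio-scsi-pci      # one controller shared by all SCSI disks; hot-plug only adds a LUN
scsihw: virtio-scsi-single   # one controller per disk; hot-plug would also add/remove a PCI device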
 
Well, this doesn't really help me, since FreeNAS is built on FreeBSD 10.3. FreeNAS is what I need to lab various scenarios with, and updating the underlying FreeBSD is seriously a bad idea.

What about with 10.3? I really am out of ideas as to what I could be doing wrong here D:


 
