FreeNAS VM reboots after shutdown

dnut

New Member
May 6, 2016
Hello there,
I'm currently using Proxmox to virtualize a few VMs:
- FreeNAS
- Windows Server 2012 R2
- Windows 10

I configured Windows Server to shut down even when no user is logged in, so that Proxmox can properly shut down the OS when needed.
Windows 10 allows that out of the box, with no additional tweaks.
FreeNAS, however, is giving me problems: it reboots after it has completed the shutdown...
So after a while Proxmox powers the VM off (while the VM is in this unwanted reboot phase) to complete the shutdown/reboot of the whole Proxmox environment.

Any suggestions?

FreeNAS is rock solid and installed on top of a ZFS filesystem, so I'm not too worried about data corruption in either the ZFS root filesystem or the data ZFS volume, but of course I'm looking for a way to fix this.

Thank you

P.S. Proxmox is an impressive virtualization solution!
 

MRosu

New Member
Mar 27, 2016
I had the same issue virtualizing FreeNAS on Proxmox. I fixed it by changing the CPU type from the default 'kvm64' to 'qemu64'. No more kernel panics on shutdown, and I get an actual shutdown instead of a reboot. Good luck!
I had a similar issue with FreeNAS not shutting down when using the shutdown command from the GUI. This solved my problem.
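For anyone wondering where to make this change: the CPU type can be set per VM from the Proxmox GUI or from the CLI. A rough sketch, assuming VMID 100 (substitute your own):

```shell
# Change the virtual CPU type for VM 100 from kvm64 to qemu64
qm set 100 --cpu qemu64

# Equivalently, edit the VM's config file and set the cpu line:
#   /etc/pve/qemu-server/100.conf
#   cpu: qemu64

# The change takes effect on the next full stop/start of the VM.
```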
 

mattlach

Member
Mar 23, 2016
I had the same issue as well. I never ended up finding a fix, because I started discovering massive checksum errors the first time FreeNAS ran a scrub.

I used it with two passed-through LSI controllers. This worked perfectly under ESXi, but I suffered data loss when I tried it under KVM. The data on the pool was actually fine, but the scrub process was somehow misreading it, reporting checksum errors and marking files as corrupt and irreparable.

I exported the pool from my FreeNAS VM and reimported it into ZFS on Linux on the host, and all my problems went away. ZFS is easy to manage from the command line, but managing the shares manually is a bit of a pain. It's a huge win overall though: the pool has never been faster, and resource use is much lower, since the ARC now lives directly on the host and can take better advantage of system memory.
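The move I'm describing is essentially a zpool export/import; roughly like this, where the pool name "tank" is just an example:

```shell
# Inside the FreeNAS VM, before shutting it down for good:
zpool export tank

# On the Proxmox host (ZFS support ships with Proxmox VE).
# List importable pools found on the attached disks, then import:
zpool import
zpool import tank

# Verify the pool and re-run a scrub to confirm the earlier
# checksum errors were an artifact of the virtualized setup:
zpool status tank
zpool scrub tank
```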
 

MRosu

New Member
Mar 27, 2016
mattlach said:
I had the same issue as well. I never ended up finding a fix, because I started discovering massive checksum errors the first time FreeNAS ran a scrub.

I used it with two passed-through LSI controllers. This worked perfectly under ESXi, but I suffered data loss when I tried it under KVM. The data on the pool was actually fine, but the scrub process was somehow misreading it, reporting checksum errors and marking files as corrupt and irreparable.

I exported the pool from my FreeNAS VM and reimported it into ZFS on Linux on the host, and all my problems went away. ZFS is easy to manage from the command line, but managing the shares manually is a bit of a pain. It's a huge win overall though: the pool has never been faster, and resource use is much lower, since the ARC now lives directly on the host and can take better advantage of system memory.
Unfortunately I also need to create complicated shares, such as Apple Time Machine, so I really wanted a GUI.

I'm using a Supermicro JBOD server, and with disk passthrough I've had no problems with scrubs.

I suppose it's different with a RAID controller, even if it's set to a passthrough mode. I don't think I would have had the guts to virtualize FreeNAS without this JBOD-only setup.
 

mattlach

Member
Mar 23, 2016
MRosu said:
Unfortunately I also need to create complicated shares, such as Apple Time Machine, so I really wanted a GUI.

I'm using a Supermicro JBOD server, and with disk passthrough I've had no problems with scrubs.

I suppose it's different with a RAID controller, even if it's set to a passthrough mode. I don't think I would have had the guts to virtualize FreeNAS without this JBOD-only setup.
Don't get me wrong: the LSI controllers are both flashed to IT mode, which means they behave as JBOD HBAs.

I have heard MANY recommendations against passing through individual disks to FreeNAS or any other ZFS-based system. The only appropriate way to virtualize FreeNAS (or any other ZFS filesystem) is to pass through the entire controller. This was very stable in ESXi but did not work for me in KVM. It MAY have been due to this kernel bug, though, which has since been fixed.

I, too, have a client in the house that uses Apple (ugh), but I solved this by creating a dedicated Ubuntu LXC container running Netatalk. To keep things simple I just copied the Netatalk config file contents from FreeNAS and pointed the disk mount at the exact same location. The existing Time Machine backup even survived the process.
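For reference, a Time Machine share in Netatalk 3.x only takes a few lines of afp.conf; the share name, path, and size cap below are examples, not my actual config:

```ini
; /etc/netatalk/afp.conf (the path may differ by distro)
[Global]
  mimic model = TimeCapsule6,106

[Time Machine]
  path = /mnt/tank/timemachine
  time machine = yes
  vol size limit = 500000  ; optional cap, in MiB
```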

I have also done the same thing with Samba (just to keep things sort of isolated): I have a dedicated LXC container that I use just for Samba shares.
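A minimal share definition in the container's smb.conf might look like this (share name, path, and user are examples):

```ini
# /etc/samba/smb.conf, inside the Samba LXC container
[media]
   path = /mnt/tank/media
   browseable = yes
   read only = no
   valid users = alice
```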

It was a little complicated to get everything set up, but even if FreeNAS now works after the kernel bug above was fixed, I wouldn't go back. With the ZFS ARC managed by the host, I don't have to assign an excessive amount of RAM to FreeNAS, and disk access is much faster now.

It was complex to get all the NFS, Samba, and Apple shares set up with the proper permissions and configuration, but once it is done, it is done, and it works very, very well.
 

dnut

New Member
May 6, 2016
MRosu said:
I had the same issue virtualizing FreeNAS on Proxmox. I fixed it by changing the CPU type from the default 'kvm64' to 'qemu64'. No more kernel panics on shutdown, and I get an actual shutdown instead of a reboot. Good luck!
Hi,
thank you for the suggestion.

Unfortunately, if I change the CPU from kvm64 to qemu64, the VM doesn't boot anymore (exit code = 1).
 

dnut

New Member
May 6, 2016
17
6
3
38
mattlach said:
I had the same issue as well. I never ended up finding a fix, because I started discovering massive checksum errors the first time FreeNAS ran a scrub.

I used it with two passed-through LSI controllers. This worked perfectly under ESXi, but I suffered data loss when I tried it under KVM. The data on the pool was actually fine, but the scrub process was somehow misreading it, reporting checksum errors and marking files as corrupt and irreparable.

I exported the pool from my FreeNAS VM and reimported it into ZFS on Linux on the host, and all my problems went away. ZFS is easy to manage from the command line, but managing the shares manually is a bit of a pain. It's a huge win overall though: the pool has never been faster, and resource use is much lower, since the ARC now lives directly on the host and can take better advantage of system memory.
I used this guide to pass the storage disks directly to the FreeNAS VM: https://pve.proxmox.com/wiki/Physical_disk_to_kvm
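The guide boils down to attaching each disk by its stable by-id path; something like this, where the VMID and disk serial numbers are just placeholders:

```shell
# Attach whole physical disks to VM 100 as VirtIO block devices.
# Always use /dev/disk/by-id/ paths, never /dev/sdX (those can change).
qm set 100 -virtio1 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-SERIAL1
qm set 100 -virtio2 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-SERIAL2
qm set 100 -virtio3 /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-SERIAL3
```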

It worked flawlessly. The only issue I have with it is that I can't access the SMART functionality of the disks directly inside FreeNAS; I believe this is because the VirtIO driver doesn't translate/pass SMART commands through to the VM.

Nevertheless, I do SMART checks directly on the host through the smartmontools package and the smartd daemon.
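In case it helps someone, the host-side setup is roughly as follows (the device path and test schedule are examples):

```shell
# On the Proxmox host: install smartmontools for smartctl + smartd
apt-get install smartmontools

# One-off health report for a disk:
smartctl -a /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-SERIAL1

# /etc/smartd.conf -- monitor all disks, run a short self-test
# daily at 02:00, and mail root on problems:
#   DEVICESCAN -a -s (S/../.././02) -m root

systemctl enable --now smartd
```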

I've never had an issue with the ZFS volume managed by FreeNAS; I've run scrubs many times with no problems at all (see the attachment).
Note that I'm not using a dedicated controller: I just passed three WD Red HDDs to the VM and configured them as I would in a physical machine.

This way I can also detach the disks from the Proxmox host, install them in a physical server, load the FreeNAS config file, and everything will start working just as it did in the FreeNAS VM.

That said, I usually push my customers to run FreeNAS on a physical server in tandem with a Proxmox one, when the budget allows it.
The optimal configuration for me is at least one NAS and one Proxmox host, but many times I've had to compromise...
 

Attachments


dnut

New Member
May 6, 2016
An update:
today I tried changing from kvm64 to qemu64 on a FreeNAS VM on another Proxmox server (4.2) and it worked. The VM now works properly, and no kernel panic happens on shutdown.

On another Proxmox server (4.1), however, it doesn't work: the VM doesn't boot if I change the CPU to qemu64.
I stopped testing there, as that is a production server with three separate small offices depending on it, and I can live with the FreeNAS VM occasionally being stopped instead of properly shut down (the ZFS root filesystem makes FreeNAS rock solid even when it's stopped abruptly).

Thank you
 

supergonzo74

New Member
Apr 3, 2020
Hi guys,

this thread surely saved me a lot of pain. My virtualized FreeNAS 11.2-U7 installation always kept rebooting until I changed the CPU type from kvm64 to qemu64. The machine boots and shuts down properly now.

Thanks again...and keep up the good work...
 

victorhooi

Member
Apr 3, 2018
I hit this issue as well (FreeNAS running under Proxmox would always reboot when you clicked Shutdown).

I'm curious what the underlying cause is. Why does changing the CPU from kvm64 to qemu64 fix the issue?

Are there any disadvantages, caveats or performance implications of using qemu64 here?
 

gdmax

New Member
Jun 27, 2020
MRosu said:
I had the same issue virtualizing FreeNAS on Proxmox. I fixed it by changing the CPU type from the default 'kvm64' to 'qemu64'. No more kernel panics on shutdown, and I get an actual shutdown instead of a reboot. Good luck!
Thanks, it works :)
 
