Personally, I preferred OPNsense over pfSense. The lack of BSD support for VirtIO networking hardware made me choose Untangle, though. Untangle has so far given me every feature I expected to use with OPNsense/pfSense, but it is a Linux distro with full VirtIO support.
It would seem I have not properly tuned ZFS. I tried that same command locally on my ZFS server and got similarly poor results.
I'm a bit disappointed in that, as it would seem nearly everything I've done has been in vain, from purchases to setup... sigh
It seems to be affecting Windows systems (maybe Win 10 only?). I opened two noVNC consoles, one Windows 10 and the other Linux Mint, and the Linux VM syncs properly. Win 10 was always the opposite.
I tried your solution with no joy.
1) Turned client desktop Num Lock OFF
2) Started VM
3) Opened VM...
Thanks! Seems everyone is right.
Since @H4R0 gave his suggestion, I am seeing numbers I didn't expect to see on my mirrored-vdev ZFS storage. I must be doing something wrong and will now have to dissect my storage systems, because I would expect much better performance than a single disk in a...
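For anyone who wants to compare numbers, the kind of quick sanity check I'm running against the pool mountpoint is roughly this (the pool name "tank" and the sizes are just placeholders, and /dev/zero will look inflated if compression is on):
root@node05:~# dd if=/dev/zero of=/tank/ddtest.img bs=1M count=16384 conv=fdatasync
root@node05:~# zpool iostat -v tank 5
root@node05:~# rm /tank/ddtest.img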
Haha! Yes, I know.
I ran top before the cluster came back online and noticed a kworker was using a lot of CPU.
My systems are connected to storage via 40Gb InfiniBand.
After getting the cluster back online and storage mounted cluster-wide, that kworker is not...
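If that kworker shows up again, the rough way I'd peek at what it is doing is something like this (the PID 1234 is made up; substitute the busy kworker's PID from top):
root@node02:~# cat /proc/1234/stack
root@node02:~# perf top -p 1234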
Sorry for the delay. I ran into some serious work-stoppage with the latest kernel.
As you requested, @Stoiko Ivanov:
Showing ZFS filesystems/datasets:
root@node05:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
@Alwin Okay, I am starting to get somewhere now.
It does appear to be a kernel issue -- or an issue with pvesm/corosync and the later kernels; of that, I am not sure.
What I did was bring up ONE node (node04) with pve-kernel-5.3.18-3-pve (all other nodes remained on latest...
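In case it helps someone reproduce this, keeping node04 on the older kernel was roughly the following (a sketch assuming a GRUB-booted node; the exact menu entry text varies per install):
root@node04:~# grep "menuentry '" /boot/grub/grub.cfg
Then, in /etc/default/grub, point GRUB_DEFAULT at that entry, for example:
GRUB_DEFAULT="Advanced options for Proxmox VE GNU/Linux>Proxmox VE GNU/Linux, with Linux 5.3.18-3-pve"
root@node04:~# update-grub
root@node04:~# reboot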
Thanks for that suggestion :)
Because it's happening to local storage and not only network-attached storage, it would seem to me something else is at play?
I cannot see the contents of local /var/lib/vz or of any other storage, be it local or networked.
After clicking Summary or Contents of...
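For the record, the checks I ran when the Contents pages failed were along these lines (nothing fancy; the paths are the default local storage directories):
root@node05:~# systemctl status pvestatd pvedaemon pveproxy
root@node05:~# pvesm status
root@node05:~# ls -l /var/lib/vz/dump /var/lib/vz/images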
It's a PVE installation with ZFS exported as NFS.
Of course the other systems access it remotely. Having it join the cluster allows me to monitor the storage through PVE like a PVE Storage Appliance.
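For context, the export side on that box is nothing exotic; roughly this (the pool/dataset name and subnet below are placeholders):
root@node05:~# zfs set sharenfs="rw=@10.10.10.0/24,no_root_squash" tank/vmstore
root@node05:~# zfs get sharenfs tank/vmstore
root@node05:~# exportfs -v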
Thanks for taking the time to respond @Alwin
>>> Both services are running...
@Alwin Local storage is accessible in the console/CLI, but NFS is not.
I am noticing strange things.
NFS remains unmounted, and yet I see PVE claims HA is successfully starting HA containers with resources on NFS storage (that won't mount).
Timestamps in the picture are from minutes ago.
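To show the mismatch, I compared what PVE reports with what the kernel actually has mounted; roughly this (the server IP below is a placeholder):
root@node02:~# pvesm status
root@node02:~# mount | grep /mnt/pve
root@node02:~# showmount -e 10.10.10.5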
Thanks for your replies. It may be related, but it's not exactly what I am experiencing. I tried the previous kernel with no joy.
All of my nodes boot. I can access each node's Summary page in the PVE GUI.
>>> This appears to be storage related.
I am having trouble accessing ALL storage (including...
So I'm in a huge rut. I updated my nodes and everything seems to have broken; the basic checks I've run are below the list.
ZFS won't mount encrypted datasets (separate post for this created)
NFS won't mount
NFS won't export
No syslog (in the GUI)
Local storage won't load (communication failure)
Datacenter shows quorum and active nodes -- all...
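The basic checks mentioned above were roughly these (generic, nothing node-specific):
root@node05:~# systemctl status pve-cluster corosync pvedaemon pveproxy pvestatd
root@node05:~# pvecm status
root@node05:~# journalctl -b -u pve-cluster -u pvestatd --no-pager | tail -n 50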
After an update to ZFS 0.8.4-pve, two storage systems with encrypted datasets will not mount child datasets.
ZFS is treating child/sub datasets as directories. Both systems have an 'encrypted_data' dataset with underlying datasets inheriting encryption details.
root@node05:~# zfs load-key...
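The full sequence I'm attempting is roughly this (the pool name "tank" is a placeholder; encrypted_data is the parent dataset mentioned above):
root@node05:~# zfs load-key -r tank/encrypted_data
root@node05:~# zfs mount -a
root@node05:~# zfs get -r keystatus,mounted tank/encrypted_data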
I am getting miserable speeds while doing backups.
My backup-storage is NFS over RDMA.
Speeds while writing directly to the NFS shared storage are much, much better than with vzdump.
I tested writing directly to backup-storage, twice the system memory:
root@node02:/mnt/pve/backup-storage# time dd if=/dev/zero...
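The shape of the test is roughly this (the count below is just a placeholder for twice RAM; conv=fdatasync so the result isn't only page cache):
root@node02:/mnt/pve/backup-storage# dd if=/dev/zero of=ddtest.img bs=1M count=131072 conv=fdatasync
root@node02:/mnt/pve/backup-storage# rm ddtest.img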
Proxmox works on just about everything.
It seems your needs are fairly minimal.
Almost everything will work in your use case as you described it. If you add the SSD/SAS disks and ensure the CPU meets the demands of the VMs -- you're set.
I could try adding to your configuration what I have below under #my wifi.
I've not got my bridged devices set for auto, but it may help your situation (?); a rough sketch is after the steps below.
Then ifdown each interface.
Bring all interfaces up.
See if you get any debug info in your console.
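Roughly what I mean, as an /etc/network/interfaces sketch (interface names, addresses, and the auto lines are assumptions; adjust to your NICs):
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
Then cycle them and watch the console, for example:
ifdown vmbr0 && ifup vmbr0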
I've had some passthrough devices that work best when not on machine: q35.
Have you tried default/i440fx instead of q35?
Then try again with and without the rom file.
args: -cpu 'host,hv_time,kvm=off,hv_vendor_id=1234567890ab'
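For completeness, that args line goes straight into the VM's config file; a sketch assuming VM ID 100 (the PCI address and romfile name are placeholders, and the romfile has to live in /usr/share/kvm/). Deleting the machine setting falls back to the default i440fx:
qm set 100 --delete machine
Then in /etc/pve/qemu-server/100.conf:
args: -cpu 'host,hv_time,kvm=off,hv_vendor_id=1234567890ab'
hostpci0: 01:00.0,romfile=vbios.bin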
vmbr0 is the VM bridge interface, but that interface is not bridged to any other interface ports.
In your case, you have two networks. Your node is able to ping both ways because it has a connection on both interfaces. But your current vmbr0 network has no outside access. You only think so...
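If the goal is outside access for the VMs, the usual fix is to bridge a physical port into vmbr0; a minimal sketch in /etc/network/interfaces (the NIC name and address are assumptions):
auto vmbr0
iface vmbr0 inet static
        address 192.168.2.10/24
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0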