So I'm in a huge rut. I updated my nodes and everything seems to have broken.
ZFS won't mount encrypted datasets (separate post created for this)
NFS won't mount
NFS won't export
No syslog (in GUI)
Local storage won't load (communication failure)
Datacenter shows quorum and active nodes -- all...
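For anyone in the same spot, the first things I'm checking (these are the standard PVE services; the node name is just an example, and this is a sketch, not a known fix):

root@node01:~# pvecm status
root@node01:~# systemctl status pvestatd pvedaemon pveproxy
root@node01:~# journalctl -b -u pvestatd --since "1 hour ago"

A hung pvestatd could plausibly explain both the 'communication failure' on local storage and the missing syslog in the GUI.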
After an update to ZFS 0.8.4-pve, two storage systems with encrypted datasets will no longer mount their child datasets.
ZFS is treating the child datasets as directories. Both systems have an 'encrypted_data' dataset with underlying datasets inheriting its encryption details.
root@node05:~# zfs load-key...
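For reference, the sequence I would expect to work -- pool and dataset names here are examples, not my real ones:

root@node05:~# zfs load-key -r tank/encrypted_data
root@node05:~# zfs mount -a
root@node05:~# zfs get -r keystatus,mounted tank/encrypted_data

load-key -r should load keys for every child inheriting the encryption root, after which mount -a should mount them all.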
I am getting miserable speeds while doing backups.
My backup-storage is NFS over RDMA.
Speeds while writing directly to the NFS shared storage are much, much better than with vzdump.
I tested writing directly to backup-storage, a file twice the size of system memory:
root@node02:/mnt/pve/backup-storage# time dd if=/dev/zero...
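A test of that shape would look roughly like this (sizes are examples -- 128 GiB against 64 GiB of RAM; note /dev/zero flatters any compression on the target):

root@node02:/mnt/pve/backup-storage# dd if=/dev/zero of=testfile bs=1M count=131072 conv=fdatasync

conv=fdatasync makes dd flush to the share before reporting a rate, so the number isn't just page cache.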
I have one node that reboots on its own.
I haven't pinned down what's causing the system to shut down/reboot.
I've replaced all of the memory (which tested fine before replacing it), temps appear okay on the CPUs; otherwise I have a 10 GbE fiber card and an InfiniBand card that will next...
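Next step on the logging side is persistent journald, so I can read the previous boot after a crash (standard tooling; the node name is an example):

root@node03:~# mkdir -p /var/log/journal
root@node03:~# systemctl restart systemd-journald
root@node03:~# journalctl -b -1 -e

journalctl -b -1 -e jumps to the end of the previous boot's log, which is where an unexpected reset usually leaves its last words.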
noVNC does not sync Num Lock and Caps Lock with the host.
In every situation the state appears to be exactly reversed.
I wonder if this information may be useful?
It's impossible to mount a read-only mount point on NFS storage.
The workaround is removing the read-only option -- which allows the container to boot.
I really wish to mount read-only, and this used to work in PVE 5.x.
● pve-container@20005.service - PVE LXC Container: 20005
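For context, the mount point entry looks like this in /etc/pve/lxc/20005.conf (storage and path names are examples); with ro=1 the container fails to start, and without it, it boots:

mp0: nfs-storage:20005/vm-20005-disk-1.raw,mp=/mnt/data,ro=1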
I have purchased two NVMe disks with PLP that I am using for my ZFS SLOG/ZIL. I have room for several more if it turns out to be beneficial.
While I have successfully added two log devices, my question is whether multiple log devices will stripe and give me the performance of more than...
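For reference, this is the shape of the two options as I understand them (pool and device names are examples, and I'd welcome correction):

root@node01:~# zpool add tank log /dev/nvme0n1 /dev/nvme1n1
(two separate top-level log vdevs, which should stripe)
root@node01:~# zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
(a single mirrored log vdev)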
I have been testing my VM disk performance on NFS synchronous shared storage. The results have left me scratching my head trying to figure out what's going on. They may be the expected behavior, but even so, I am lost as to how that may be.
On my NFS synchronous share I created a VM (Linux...
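For anyone wanting to reproduce, a generic in-guest sync-write test of this shape exercises the same path (file name and sizes are examples):

fio --name=synctest --filename=/root/testfile --size=4G --bs=4k --rw=write --ioengine=libaio --iodepth=1 --fsync=1

With --fsync=1 every write is followed by an fsync, which is exactly what the synchronous share should be honoring.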
I am mounting several NFS shares. For my LXC and QEMU images I want to mount the NFS shares synchronously.
QEMU guests are working well on the NFS sync share.
For LXC, however, I noticed my sync writes dropped below 10 MB/s and would hang for several minutes after writing test files.
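For reference, the share is defined in /etc/pve/storage.cfg roughly like this (server, export, and storage name are examples); the sync behavior comes from the options line:

nfs: vm-storage
        server 10.0.0.10
        export /tank/vm-storage
        path /mnt/pve/vm-storage
        content images,rootdir
        options vers=4.2,sync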
I have a mounted ZFS over iSCSI storage device using the LIO plugin.
It is successfully mounted on my nodes.
I took a look into /Storage/ISCSIPlugin.pm to see how the storage is being mounted; it looks like it is using iscsiadm.
But when I try to see the devices:
# /usr/bin/iscsiadm --mode...
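For comparison, the standard commands I would expect to list the sessions and the attached block devices (nothing here is specific to my setup):

root@node02:~# iscsiadm --mode session -P 3
root@node02:~# lsblk -o NAME,SIZE,TRAN

-P 3 prints the attached disks per session, and lsblk's TRAN column marks which block devices arrived over iSCSI.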
I was testing my ZFS over iSCSI storage with different settings (compression, encryption, etc.) on different datasets on my ZFS pool, and noticed that the disk naming convention numbers the disks per storage appliance.
The first disk you create will be called vm-vmid-disk-0;
attaching another disk...
I was writing a response to another thread, an error occurred, and I can no longer find the post.
This is not a question, but it may be useful for anyone else attempting to add a network share as a mount point within an unprivileged container who wishes to gain write access permissions...
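The short version, for anyone searching later: map the uid/gid the container writes as onto host ids that have write access on the share. A minimal sketch, assuming container uid/gid 1000 should become host uid/gid 1000 (the CT id and values are examples), in /etc/pve/lxc/20005.conf:

lxc.idmap: u 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 0 100000 1000
lxc.idmap: g 1000 1000 1
lxc.idmap: g 1001 101001 64535

plus matching 'root:1000:1' entries in /etc/subuid and /etc/subgid so the host permits the mapping.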
In Proxmox PVE 6.x I've noticed some odd behavior in the GUI when attempting to migrate an NFS-mounted LXC container.
I wonder if someone can reproduce this so I can determine whether it's an issue on my end or a bug I should report.
Note: Migration of the Container is working...
I've just configured my SAN to run as a PVE node/storage appliance with ZFS over iSCSI as a LIO target.
NFS and iSCSI over RDMA are working well, with the exception of adding EFI disks to a VM.
Copying EFI vars image failed: command '/usr/bin/qemu-img convert -n -f raw -O raw...
I've installed a couple of OpenSolaris-derived distributions, including OpenIndiana and OmniOS, both of which install a UEFI bootloader. But after install, OVMF on Proxmox reports that it is not found. Changing to SeaBIOS does boot. I tried both PVE 6 and PVE 5.4 with the same results: no UEFI boot to any...
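One thing I still want to try, on the guess that the installers register their loader somewhere OVMF does not look without persistent NVRAM entries: drop into the OVMF shell and copy the loader to the removable-media fallback path (the source path here is a guess):

Shell> FS0:
FS0:\> cp \EFI\illumos\bootx64.efi \EFI\BOOT\BOOTX64.EFI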
I'm having a heck of a time trying to patch the kernel, which I am doing to support eth_ipoib.
It seems I am missing a package, there's a problem with the kernel's Makefile scripts, or I'm missing a few screws... It's driving me there, anyway.
So I am working in the path...
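For reference, the route I am taking -- the stock pve-kernel build tree, with my patch dropped into its patch queue (the patch file name is mine):

root@build:~# git clone git://git.proxmox.com/git/pve-kernel.git
root@build:~# cd pve-kernel
root@build:~/pve-kernel# cp ~/0099-add-eth_ipoib.patch patches/kernel/
root@build:~/pve-kernel# make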
I have a question concerning the vmware vga setting of a QEMU/KVM guest.
Does this setting require any VMware/third-party software (like virt-viewer for SPICE)?
The reason I ask is that the AMD S7000 server video card is compatible with VMware vSGA, and with it the video card...
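For reference, the setting itself is just one line of VM config (the VMID is an example):

root@node01:~# qm set 100 --vga vmware

My understanding is that the guest only needs a driver that speaks the emulated VMware SVGA device (vmwgfx on Linux), not any actual VMware software, but I would like confirmation.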
I have been troubleshooting Code 43 on a Windows 10 VM with a GTX 1060 for days, and then decided to pass the GPU through to a Linux guest VM to test the Nvidia drivers there. The drivers crash the VM on both systems.
These are the same problems I had earlier when passing through an older-generation card...
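For anyone on the same hunt, the hiding trick I have already applied in the VM config (the commonly suggested value, not a confirmed fix):

cpu: host,hidden=1

hidden=1 should stop the guest from seeing the KVM signature, which is the usual Code 43 trigger on consumer GeForce drivers.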
I would like to use PCI passthrough for my video card with the default i440fx machine instead of q35.
i440fx supports this and is known to have some benefits where q35 and Nvidia have issues on specific cards.
When I start the VM I receive the following error:
q35 machine model...
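For what it's worth, my hostpci line carries pcie=1; my reading of the error is that the PCIe flag itself demands q35, so on i440fx the device has to be exposed as plain PCI (the device address is mine):

hostpci0: 01:00.0,pcie=1     (only valid with q35)
hostpci0: 01:00.0            (what i440fx should accept)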