Search results

  1. Very bad update experience - NFS won't mount

    So I'm in a huge rut. I updated my nodes and everything seems to have broken. ZFS won't mount encrypted datasets (separate post created for this); NFS won't mount; NFS won't export; no syslog (in the GUI); local storage won't load (communication failure). Datacenter shows quorum and active nodes -- all...
  2. ZFS [BUG] in latest 0.8.4-pve1

    After an update to ZFS 0.8.4-pve, two storage systems with encrypted datasets will not mount child datasets: ZFS is treating child/sub datasets as directories. Both systems have an 'encrypted_data' dataset with underlying datasets inheriting encryption details. root@node05:~# zfs load-key...
  3. Miserable backup speeds

    I am getting miserable speeds while doing backups. My backup storage is NFS over RDMA. Speeds while writing to the NFS shared storage are much better than with VZDUMP. I tested writing twice system memory directly to the backup storage: root@node02:/mnt/pve/backup-storage# time dd if=/dev/zero...
  4. Random node shutdown/reboots

    I have one node that reboots on its own. I haven't pinned down what's causing the system to shut down/reboot. I've replaced all of the memory (which tested fine before replacing it), and temps appear okay on the CPUs; otherwise I have a 10 GbE fiber card and an InfiniBand card that will next...
  5. No-VNC numlock not synced

    noVNC does not sync Num Lock and Caps Lock with the host; in every situation it appears to be exactly opposite. I wonder if this information may be useful? https://www.cendio.com/bugzilla/show_bug.cgi?id=400
  6. read-only LXC mount-point fails

    It's impossible to mount a read-only mount point on NFS storage. The workaround is removing the Read-only option, which allows the container to boot. I really wish to mount read-only, and this used to work in PVE 5. ● pve-container@20005.service - PVE LXC Container: 20005 Loaded: loaded...
  7. Multiple ZFS SLOG?

    I have purchased two NVMe disks with PLP that I am using for my ZFS SLOG/ZIL. I have room for several more if it turns out to be beneficial. While I have successfully added two log devices, my question is whether multiple log devices will stripe or give me the performance of more than...
  8. Strange disk caching mode results and questions

    I have been testing my VM disk performance on NFS synchronous shared storage. The results have left me scratching my head trying to figure out what's going on. They may be the expected behavior, but even so I am lost as to how that may be. On my NFS synchronous share I created a VM (Linux...
  9. LXC on NFS (sync) does not work

    I am mounting several NFS shares. For my LXC and QEMU images I wish to mount the NFS share as synchronous. QEMU guests are working well on the NFS sync share. For LXC, however, I noticed my sync writes dropped below 10 MB/s and would hang for several minutes after writing test files. The...
  10. ZFS over iSCSI uses iscsiadm?

    I have mounted a ZFS over iSCSI storage device using the LIO plugin, and it is successfully mounted on my nodes. I took a look into /Storage/ISCSIPlugin.pm to see how the storage is being mounted, and it looks like it is using iscsiadm. But when I try to see the devices # /usr/bin/iscsiadm --mode...
  11. vm-vmid-disk-xx numbering convention

    I was testing my ZFS over iSCSI storage with different settings (compression, encryption, etc.) on different datasets on my ZFS pool and noticed that the disk numbering convention numbers the disks per storage appliance. The first disk you create will be called vm-vmid-disk-0; attaching another disk...
  12. CIFS or NFS mount inside unprivileged container

    I was writing a response to another thread, some error occurred, and I can no longer find the post. This is not a question, but it may be useful for anyone else attempting to add a network share as a mount point within an unprivileged container and wishing to gain write access permissions...
  13. PVE GUI issues with LXC Migration

    In Proxmox PVE 6.x I've noticed some odd behavior within the GUI when attempting to migrate an NFS-mounted LXC container. I wonder if someone can repeat this so I might determine whether it is an issue on my end or a bug I should report. Note: migration of the container is working...
  14. Unable to create EFI Disk on ZFS over iSCSI LIO

    I've just configured my SAN to run as a PVE node/storage appliance with ZFS over iSCSI as a LIO target. NFS and iSCSI over RDMA are working well, with the exception of adding EFI disks to a VM. Copying the EFI vars image failed: command '/usr/bin/qemu-img convert -n -f raw -O raw...
  15. QEMU Tablet pointer out of sync?

    Is anyone else noticing the QEMU tablet pointer not quite syncing through VNC?
  16. OVMF boot UEFI Solaris distros?

    I've installed a couple of OpenSolaris distributions, including OpenIndiana and OmniOS, both of which install a UEFI bootloader. But after install, OVMF on Proxmox reports that it is not found. Changing to SeaBIOS does boot. I tried on both PVE 6 and PVE 5.4 with the same results: no UEFI boot to any...
  17. Kernel compile - encountering issues with Makefile script

    I'm having a heck of a time trying to patch the kernel, which I am doing to support eth_ipoib. It seems I am missing a package, there is a problem with the kernel's Makefile scripts, or I'm missing a few screws... it's driving me there, anyway. So I am working in the path...
  18. A question on vmware vsga

    I have a question concerning the vmware vga setting of a QEMU/KVM guest. Does this setting require any VMware or third-party software (like virt-viewer for SPICE)? The reason I ask is that the AMD S7000 server video card is compatible with VMware vSGA, and with it the video card...
  19. PVE 5.3 (nvidia+) passthrough bug

    I have been troubleshooting Code 43 on a Windows 10 VM with a GTX 1060 for days, and then decided to pass the GPU through to a Linux guest VM to test the Nvidia drivers there. The drivers crash the VM on both systems. These are the same problems I had earlier when passing through an older-generation card...
  20. i440fx GPU passthrough

    I would like to use PCI passthrough of my video card with the default i440fx machine type instead of q35. i440fx supports this and is known to have some benefits where q35 and Nvidia have issues on specific cards. When I start the VM I receive the following error at start: q35 machine model...
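The throughput comparison in item 3 can be reproduced with a simple dd sequential-write sketch. The `TARGET` variable and file name below are illustrative; point `TARGET` at the NFS mount (e.g. the poster's /mnt/pve/backup-storage), and note that a meaningful run should write roughly twice system RAM, as the poster did, while 64 MiB here just keeps the sketch quick:

```shell
# Sequential-write test sketch; paths are illustrative, not from the thread.
TARGET=${TARGET:-/tmp}
# conv=fdatasync forces a flush to stable storage before dd reports its
# throughput, so the figure reflects the share's write path, not page cache.
dd if=/dev/zero of="$TARGET/ddtest.bin" bs=1M count=64 conv=fdatasync
rm -f "$TARGET/ddtest.bin"
```

Without `conv=fdatasync` (or `oflag=direct`), dd can report the speed of writing into RAM rather than to the share, which is one common source of "too good to be true" NFS numbers.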

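The approach item 12 alludes to is commonly done by bind-mounting the host's share into the container and mapping one container uid/gid straight through. A minimal sketch, assuming a hypothetical container ID 20001, an illustrative mount path, and uid/gid 1000 as the in-container user that needs write access:

```
# /etc/pve/lxc/20001.conf -- container ID, paths, and uid/gid are hypothetical.
# Bind-mount the share already mounted on the host into the container:
mp0: /mnt/pve/share,mp=/mnt/share
# Map container uid/gid 1000 directly to host uid/gid 1000 so files on the
# share are writable; all other ids keep the usual unprivileged 100000 offset:
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535
```

For the pass-through mapping to be permitted, the host's /etc/subuid and /etc/subgid must also contain a `root:1000:1` entry.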