Search results

  1. [SOLVED] VM Restore Fails "vma: restore failed - blk_pwrite to failed (-28)"

    I am trying to restore a VM from network storage. This is the error I'm getting: restore vma archive: zstd -q -d -c /Nextcloud.Storage/shared/larger_backups/dump/vzdump-qemu-800-2021_07_05-03_19_07.vma.zst | vma extract -v -r /var/tmp/vzdumptmp2102.fifo - /var/tmp/vzdumptmp2102 CFG...
  2. Detected Hardware Unit Hang: NIC resetting unexpectedly during high throughput

    It looks like the best solution so far has been to disable hardware offloading features and sacrifice performance, which is disappointing. I dug around and found that I'm running an Intel I217-LM card and PVE is running drivers: e1000e v3.2.6-k root@NodeA:~# ethtool -i eno1 | grep -i...
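
    The offload workaround referenced above can be sketched as follows; the interface name eno1 comes from the thread, but the exact set of features worth disabling varies by driver and kernel, so treat this as an illustration rather than a definitive fix:

    ```shell
    # List which offload features are currently enabled on the NIC
    ethtool -k eno1

    # Disable TCP segmentation and generic segmentation offload,
    # the features most often implicated in e1000e "Unit Hang" reports
    ethtool -K eno1 tso off gso off
    ```

    These settings do not survive a reboot; to persist them they would need to go into the interface's configuration (e.g. a post-up hook in /etc/network/interfaces).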
  3. Detected Hardware Unit Hang: NIC resetting unexpectedly during high throughput

    I have noticed in my syslog that during times of high throughput, I am getting this hardware hanging issue. How do I begin to troubleshoot this? Jun 26 21:39:45 TracheNodeA corosync[1828]: [KNET ] link: host: 1 link: 1 is down Jun 26 21:39:45 TracheNodeA corosync[1828]: [KNET ] host...
  4. Migrating VMs between clusters erased some VM disks - also how to qmrestore while keeping disks that were NOT backed up originally

    If adding an NFS entry to storage.cfg auto-mounts everything in /mnt/pve/<storage ID>, then it seems to me that this is the less preferable method. From what I can tell, when you share a storage in a cluster, the node expects the folder to be mounted in the same place on both the node and host...
  5. Throughput issues with recently installed Gig Ethernet card - Dell PowerEdge T320, Proxmox/Debian, Broadcom NetXtreme BCM5722

    Here's the network config on both Proxmox hosts with "Cluster Network" corresponding to the troublemaker
  6. Throughput issues with recently installed Gig Ethernet card - Dell PowerEdge T320, Proxmox/Debian, Broadcom NetXtreme BCM5722

    Hi all, I recently added a node to a Proxmox cluster and was setting up a dedicated server <-> server ethernet connection with some spare NICs but I am getting significant throughput issues on the new card. I am suspicious of the NIC I installed in the T320 and NOT of the NIC in the...
  7. [SOLVED] Cannot backup only LXC to NFS, VM works

    Thanks, that worked! INFO: starting new backup job: vzdump 100 300 500 700 --compress zstd --mailnotification failure --quiet 1 --all 0 --storage NC.VM.Backups.dir --mailto --mode snapshot --node TracheNodeA INFO: Starting Backup of VM 100 (lxc) INFO: Backup started at 2021-06-13 11:57:05 INFO...
  8. [SOLVED] Cannot backup only LXC to NFS, VM works

    LXC UID is mapped to 0 root@AdGuard:~# id uid=0(root) gid=0(root) groups=0(root) Same as its host (Cluster node) root@TracheNodeA:~# id uid=0(root) gid=0(root) groups=0(root) And same as the NFS host root@TracheServ:~# id uid=0(root) gid=0(root) groups=0(root)
  9. [SOLVED] Cannot backup only LXC to NFS, VM works

    This is an unprivileged container, I'm not sure how much that changes this.
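
    A brief note on why the unprivileged status matters for NFS backups: with Proxmox's default idmap, root inside an unprivileged container is shifted to an unprivileged UID on the host (commonly 100000), so writes to an NFS export arrive with that UID rather than 0. A sketch of the host-side mapping, assuming stock /etc/subuid settings:

    ```
    # /etc/subuid and /etc/subgid on the PVE host (stock defaults)
    root:100000:65536
    # => container uid 0 maps to host uid 100000; an NFS export must be
    #    writable by that host uid (or the container made privileged)
    #    for the container's root to create files on it
    ```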
  10. [SOLVED] Cannot backup only LXC to NFS, VM works

    I recently started a cluster and am sharing directories on ZFS via NFS. Here are my config details. I am on 192.168.1.24 and am mounting a share from 192.168.1.129. This is the error I get when backing up my LXC
  11. Migrating VMs between clusters erased some VM disks - also how to qmrestore while keeping disks that were NOT backed up originally

    So for the dir, if I'm going to share it in the cluster with NFS, would I still add an nfs entry to /etc/pve/storage.cfg? Or would I just add it to /etc/fstab and be done with it?
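
    For the storage.cfg route asked about above, a minimal nfs entry looks roughly like this (the storage ID, export path, and options here are placeholders, not values from the thread); PVE then mounts the export under /mnt/pve/<storage ID> on every node the storage is enabled for:

    ```
    # /etc/pve/storage.cfg -- illustrative values only
    nfs: shared-backups
            server 192.168.1.129
            export /tank/backups
            content backup
            options vers=3
    ```

    The /etc/fstab alternative works too, but PVE then sees it as a plain directory storage, and the cluster-wide "shared" semantics have to be declared manually on that directory storage.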
  12. [SOLVED] Permission denied to mkdir ZFS shared over NFS between Proxmox nodes

    Resolved, it was a typo. I turned on debug logs with rpcdebug -m nfsd all, ran mount -a on the node, and then turned off the debug logs with rpcdebug -m nfsd -c all. In the logs I saw rpc.mountd[41775]: refused mount request from 192.168.1.24 for... and realized there was a typo
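
    The debug cycle described in that post generalizes to a small recipe, run as root on the NFS server (note that stock rpcdebug uses -s to set flags and -c to clear them):

    ```shell
    rpcdebug -m nfsd -s all          # turn on nfsd debug logging
    # ...reproduce the failure from the client, e.g. run `mount -a` on the node...
    journalctl -t rpc.mountd -n 50   # look for "refused mount request from ..."
    rpcdebug -m nfsd -c all          # turn debug logging back off
    ```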
  13. Migrating VMs between clusters erased some VM disks - also how to qmrestore while keeping disks that were NOT backed up originally

    The shared flag was set but the storage was not mounted. Where do I find this task history? An interesting observation I made was that when I tried to migrate a VM where the shared flag wasn't declared, sanity checks prevented me from doing so. But if I tried to migrate a VM where the shared is...
  14. [SOLVED] Permission denied to mkdir ZFS shared over NFS between Proxmox nodes

    Here's the reddit discussion about this: https://www.reddit.com/r/Proxmox/comments/nutsqd/permission_denied_zfs_shared_over_nfs_between/h12hz6a/?context=3 I tried to set unique fsid to see if that makes a difference, but it failed: root@Server:~# cat /etc/exports /new_ssd...
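
    For reference, "unique fsid" means each export line carries its own fsid=<n> in its option list; a sketch with placeholder paths and network (the real export paths were truncated above):

    ```
    # /etc/exports -- fsid values must be unique per exported filesystem
    /tank/share-a  192.168.1.0/24(rw,no_root_squash,fsid=101)
    /tank/share-b  192.168.1.0/24(rw,no_root_squash,fsid=102)
    ```

    After editing, exportfs -ra reloads the export table without restarting the NFS server.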
  15. Migrating VMs between clusters erased some VM disks - also how to qmrestore while keeping disks that were NOT backed up originally

    The zoneminder VM had migrated and was still running, but I had NOT shared the storage yet, so I guess it was just running from memory? This was a big oops, but perhaps there's a way for your team to introduce a sanity check into the migration process, preventing migration if the storage isn't...
  16. [SOLVED] Need help configuring ZFS over iSCSI

    I made the key again and just didn't password protect it this time. It worked. Maybe this warrants an update to the Proxmox ZFS over iSCSI docs? Is it still inadvisable to use the Linux setup as per the docs? "A word of caution. For enterprise usecases I would only recommend solaris based platforms...
  17. [SOLVED] Need help configuring ZFS over iSCSI

    That was a great suggestion, now I can see that it's actually the publickey being denied. I'm assuming because the key is password protected. So I tried ssh again to check and it works, but it's password protected and that's not declared in the iSCSI command. Should I just make a new public key...
  18. [SOLVED] Permission denied to mkdir ZFS shared over NFS between Proxmox nodes

    I have added a node to a cluster. Both nodes are running PVE 6.4-8 and I am sharing my disks using ZFS over NFS. I have connected to the respective NFS shares and can see their disks on my node but apparently do not have the right permissions and I'm not certain what those would even entail...
  19. [SOLVED] Need help configuring ZFS over iSCSI

    Here is what the error looks like on my node when attempting to connect to the ZFS host. This is how I configured the storage, where 192.168.1.129 is the IP for the ZFS disk host: zfs: solaris blocksize 4k target...
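
    For comparison, a complete ZFS over iSCSI entry in /etc/pve/storage.cfg has this shape; the storage ID "solaris", the 4k blocksize, and the host IP come from the thread, while the pool name, target IQN, and iSCSI provider are placeholders:

    ```
    # /etc/pve/storage.cfg -- illustrative completion of the truncated entry
    zfs: solaris
            blocksize 4k
            portal 192.168.1.129
            pool tank
            target iqn.2010-08.org.example:target0
            iscsiprovider comstar
            content images
    ```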