Search results

  1. Search for backup containing comment

    When a VM is backed up to PBS, the backup is given a comment containing the machine's hostname. How can I search through all backups, filtering for certain hostnames? I have a backup but do not know the VMID, and I am not able to find where this data is stored.
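That comment appears to be exposed as the notes field of the storage content listing, so one approach is dumping the listing with `pvesh get /nodes/<node>/storage/<storage>/content --content backup --output-format json` and filtering offline. A minimal sketch; the node/storage names, volids, and hostnames below are all hypothetical:

```python
import json
import re

def find_backups_by_note(content_json: str, pattern: str) -> list[str]:
    """Return volids whose notes/comment field matches `pattern`."""
    wanted = re.compile(pattern)
    return [
        entry["volid"]
        for entry in json.loads(content_json)
        if wanted.search(entry.get("notes", ""))
    ]

# Mocked-up API response (hypothetical volids and hostnames):
sample = json.dumps([
    {"volid": "pbs1:backup/vm/101/2023-05-13T16:45:16Z", "notes": "web01"},
    {"volid": "pbs1:backup/vm/102/2023-05-13T17:00:00Z", "notes": "db01"},
])
print(find_backups_by_note(sample, "web"))
# → ['pbs1:backup/vm/101/2023-05-13T16:45:16Z']
```

The same filter works on output from any node in the cluster, since each node's content listing for a shared PBS storage shows the same snapshots.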
  2. NUMA on single socket - Bad idea?

    Thank you; yes, they are the same AMD EPYC models, some in single-socket and some in dual-socket configurations.
  3. NUMA on single socket - Bad idea?

    I have a cluster with a mix of single- and dual-socket machines. Does migrating machines from the dual-socket machines to the single-socket ones with NUMA enabled have any negative effects?
  4. Dual Socket terrible performance on some VMs

    It's not just bad performance on that test; the system overall feels very slow.
  5. Dual Socket terrible performance on some VMs

    True, but that is not the issue: other VMs also perform badly, as does the Proxmox host itself.
  6. Dual Socket terrible performance on some VMs

    This is a dual-socket AMD EPYC system. NUMA is enabled, but only for the single VPS I am testing with, as I did not set it before noticing this issue. Running the test `sysbench --test=memory --memory-block-size=4G --memory-total-size=32G run` on the Proxmox host and on virtual machines shows...
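The invocation above uses sysbench's legacy `--test=` syntax. To tell whether cross-node memory traffic is actually the culprit, a common approach is to pin the same benchmark to one NUMA node's CPUs while forcing memory onto the local and then a remote node. A sketch, assuming sysbench ≥ 1.0 and numactl are installed; node numbers 0/1 are examples:

```shell
# Local memory: CPUs and allocations both on node 0.
numactl --cpunodebind=0 --membind=0 \
    sysbench memory --memory-block-size=1M --memory-total-size=32G run

# Remote memory: CPUs on node 0, allocations forced onto node 1.
numactl --cpunodebind=0 --membind=1 \
    sysbench memory --memory-block-size=1M --memory-total-size=32G run
```

A large throughput gap between the two runs points at cross-node traffic rather than at the VM configuration itself.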
  7. Zenbleed - CVE-2023-20593

    Reboot the VMs and the node, or only the node? The VMs still show the old microcode after live migration.
  8. Zenbleed - CVE-2023-20593

    It's running with or without amd64-microcode; how do I see if it "bleeds"?
  9. Zenbleed - CVE-2023-20593

    Must the VM be rebooted too? It was still showing the old microcode after live migration back, until I rebooted the VM. Or is this optional?
  10. Zenbleed - CVE-2023-20593

    How can I easily test that this has been patched correctly? amd64-microcode is installed.
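One way to check, sketched under the assumption of a Zen 2 host; the MSR fallback comes from the public Zenbleed advisory and needs msr-tools:

```shell
# Microcode revision currently loaded; compare before/after
# installing amd64-microcode and rebooting the node.
grep -m1 microcode /proc/cpuinfo
journalctl -k | grep -i microcode

# If no fixed microcode exists for your part, the advisory's fallback
# is setting the "chicken bit" (bit 9) in MSR 0xC0011029 on all cores:
wrmsr -a 0xc0011029 $(( $(rdmsr -c 0xc0011029) | (1 << 9) ))
```

A revision that changes after the reboot shows the update was loaded; the chicken bit is a workaround with a performance cost, not a replacement for fixed microcode.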
  11. Zenbleed - CVE-2023-20593

    I still have Debian 11 with Proxmox 7; what exact commands do I need to run?
  12. Node keeps crashing

    I took vm.nr_hugepages=72 from https://github.com/extremeshok/xshok-proxmox; I don't know why, to be honest. I don't know where the error is; that's what I am asking. It posts all those messages instantly, nothing before that for several minutes, and then it just shuts down/reboots. Specs: X570D4U Ryzen...
  13. Node keeps crashing

    I seem to have an issue with one of my machines (https://pastebin.com/zK4KZL9r): it keeps crashing. The memory is nearly full, and it's only swapping a few GB. The following settings are in use: vm.swappiness=20, options zfs zfs_arc_min=10737418240, # Set to use 20GB Max options zfs zfs_arc_max=21474836480 ##...
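As a side note on the quoted limits: the zfs_arc_min/zfs_arc_max module options take byte values, and the numbers in the excerpt do decode to the 10/20 GiB the inline comments suggest:

```python
# The zfs_arc_min / zfs_arc_max module options are byte values;
# decode the ones quoted in the excerpt.
GIB = 1024 ** 3

zfs_arc_min = 10_737_418_240   # value from the excerpt
zfs_arc_max = 21_474_836_480   # value from the excerpt ("20GB Max")

print(zfs_arc_min / GIB)   # → 10.0  (GiB floor)
print(zfs_arc_max / GIB)   # → 20.0  (GiB ceiling)
```

So the ARC is capped at 20 GiB; on a RAM-constrained node the remaining pressure has to come from somewhere else.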
  14. How to set ZFS pool as backup storage

    I tried to find a way to do this in the Proxmox GUI too; I don't know why they have not added it. I use a script to do this: https://github.com/Jehops/zap
  15. ZFS - Bad disk? delay pool errors

    I fixed it, but I am not sure exactly which change did it; I presume IOMMU. https://gyazo.com/d0e8c1b60a82550fa237ad6464f15ee6 https://gyazo.com/ba5a87521990efa6aaeda96645e567a2 https://gyazo.com/ab52675e3f8f59bc99dd5a1a831cff9c
  16. ZFS - Bad disk? delay pool errors

    Intel SSD DC P4608, but I have other nodes using this drive too, all the exact same setup in a 3-server cluster. The drive works, too.
  17. ZFS - Bad disk? delay pool errors

    I posted this in the wrong section; can a staff member please move it to Install and configuration?
  18. ZFS - Bad disk? delay pool errors

    Tried a different disk, CPU, and all RAM sticks; same issue. I have three servers, all the exact same setup/hardware. I don't think increasing the RAM will do anything (I tested it too, same issue). I am only transferring some templates after a reboot, currently no production load. arc_summary...
  19. NIC keeps changing interfaces on reboots

    It will boot up as either enp133s0f* or enp129s0f*; what would cause it to keep changing? 4: enp133s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 00:10:18:c3:a1:80 brd ff:ff:ff:ff:ff:ff 5: enp133s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop...
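Names flipping between enp133s0f* and enp129s0f* suggest the card's PCI bus address changes between boots, since predictable interface names are derived from it. A common workaround (not from this thread) is pinning the name to the MAC from the `ip link` output with a systemd .link file; the file path and the chosen name `lan0` are hypothetical:

```ini
# /etc/systemd/network/10-lan0.link (hypothetical path)
[Match]
# MAC address taken from the ip link output in the excerpt
MACAddress=00:10:18:c3:a1:80

[Link]
Name=lan0
```

On Debian-based systems the initramfs may need rebuilding (`update-initramfs -u`) and the node rebooting so the rule applies at early boot; /etc/network/interfaces then references the stable name.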
  20. ZFS - Bad disk? delay pool errors

    One of my servers is taking a very long time to migrate data. As you can see from this screenshot, it does eventually complete. syslog shows the following; is it a sign of a bad disk? nvme2n1p1 & nvme3n1p1 are in the ZFS pool. May 13 16:45:16 HOME1 zed: eid=90 class=delay pool='zfs' vdev=nvme3n1p1...