Search results

  1. NUMA on single socket - Bad idea?

    Thank you, yes, they are the same AMD EPYC models, some in single socket, some dual.
  2. NUMA on single socket - Bad idea?

    I have a cluster mixed of single- and dual-socket machines. Does migrating VMs from the dual-socket machines to the single-socket ones with NUMA enabled have any negative effects?
  3. Dual Socket terrible performance on some VMs

    It's not just bad performance on that test; the system overall feels very slow.
  4. Dual Socket terrible performance on some VMs

    True, but that is not the issue; other VMs perform badly, and so does the Proxmox host itself.
  5. Dual Socket terrible performance on some VMs

    This is a dual-socket AMD EPYC system. NUMA is enabled, but only for the single VPS I am testing with, as I did not set it before noticing this issue. Running the test sysbench --test=memory --memory-block-size=4G --memory-total-size=32G run on the Proxmox host and on virtual machines shows... (a NUMA-pinned benchmark sketch follows after this list)
  6. Zenbleed - CVE-2023-20593

    Reboot the VMs and the node, or only the node? The VMs still show the old microcode after live migration.
  7. Zenbleed - CVE-2023-20593

    It runs with or without amd64-microcode; how do I see if it "bleeds"?
  8. Zenbleed - CVE-2023-20593

    Must the VM be rebooted too? It was still showing the old microcode after live-migrating it back, until I rebooted the VM, or is this optional?
  9. Zenbleed - CVE-2023-20593

    How can I easily test that this has been patched correctly? amd64-microcode is installed. (A microcode check sketch follows after this list.)
  10. Zenbleed - CVE-2023-20593

    I still have Debian 11 with Proxmox 7; what exact commands do I need to run?
  11. Node keeps crashing

    I took vm.nr_hugepages=72 from https://github.com/extremeshok/xshok-proxmox; I don't know why, to be honest. I don't know where the error is, that's what I am asking. It posts all those messages instantly, nothing before that for several minutes, and then it just shuts down/reboots. Specs: X570D4U, Ryzen...
  12. Node keeps crashing

    I seem to have an issue with one of my machines (https://pastebin.com/zK4KZL9r): it keeps crashing, the memory is near full, and it's only swapping a few GB. The following is in use: vm.swappiness=20; options zfs zfs_arc_min=10737418240; options zfs zfs_arc_max=21474836480 # Set to use 20GB Max ##... (see the ARC tuning sketch after this list)
  13. How to set ZFS pool as backup storage

    I tried to find a way to do this in the Proxmox GUI too; I don't know why they have not added it. I use a script to do this: https://github.com/Jehops/zap
  14. ZFS - Bad disk? delay pool errors

    I figured out what was wrong, but I am not sure exactly which change it was. I presume IOMMU, one of these: https://gyazo.com/d0e8c1b60a82550fa237ad6464f15ee6 https://gyazo.com/ba5a87521990efa6aaeda96645e567a2 https://gyazo.com/ab52675e3f8f59bc99dd5a1a831cff9c
  15. ZFS - Bad disk? delay pool errors

    It's an Intel SSD DC P4608, but I have other nodes using this drive too, all with the exact same setup in a 3-server cluster. The drive works too.
  16. ZFS - Bad disk? delay pool errors

    I posted this in the wrong section; can a staff member please move it to Install and configuration?
  17. ZFS - Bad disk? delay pool errors

    Tried a different disk, CPU, and all RAM sticks; same issue. I have three servers, all with the exact same setup/hardware. I don't think increasing the RAM will do anything (I tested that too, same issue). I am only transferring some templates after a reboot, currently no production. arc_summary...
  18. NIC keeps changing interfaces on reboots

    It will boot up as either enp133s0f or enp129s0f; what would cause it to keep changing? (A name-pinning sketch follows after this list.) 4: enp133s0f0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000 link/ether 00:10:18:c3:a1:80 brd ff:ff:ff:ff:ff:ff 5: enp133s0f1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop...
  19. ZFS - Bad disk? delay pool errors

    One of my servers is taking a very long time to migrate data. As you can see from this screenshot, it does eventually complete. syslog shows the following; is it a sign of a bad disk? nvme2n1p1 & nvme3n1p1 are in the ZFS pool. May 13 16:45:16 HOME1 zed: eid=90 class=delay pool='zfs' vdev=nvme3n1p1...
  20. PBS - Total backup sizes just keeps going up and up

    Is logging enabled by default? I can't find anything in /var/log/proxmox-backup/tasks about GC
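
For the dual-socket benchmark thread (item 5), here is a minimal sketch of how the memory test can be pinned to one NUMA node at a time on the Proxmox host, assuming sysbench 1.x and numactl are installed; the 1M block size is a common bandwidth-test choice rather than the 4G block used in the post:

    # Show the host's NUMA topology (nodes, their CPUs and memory)
    numactl --hardware

    # Bandwidth with CPU and memory on the same node (local access)
    numactl --cpunodebind=0 --membind=0 \
        sysbench memory --memory-block-size=1M --memory-total-size=32G run

    # Bandwidth with CPU on node 0 but memory forced onto node 1 (remote access)
    numactl --cpunodebind=0 --membind=1 \
        sysbench memory --memory-block-size=1M --memory-total-size=32G run

A large gap between the two runs points at cross-node memory traffic, which is what enabling NUMA on the VM (and sizing it to fit within one node) is meant to avoid.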

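For the Zenbleed threads (items 6-10), a rough sketch of checking whether updated AMD microcode is loaded on a Debian 11 / Proxmox 7 host; amd64-microcode is a Debian package in the non-free component, but whether it or a BIOS update carries the fix for a given EPYC/Ryzen model is not something these posts settle:

    # Install the microcode package (non-free must be enabled in the APT sources)
    apt update && apt install amd64-microcode

    # Reboot the node so the new microcode loads early in boot, then compare revisions
    grep -m1 microcode /proc/cpuinfo      # reported revision, e.g. "microcode : 0x..."
    journalctl -k | grep -i microcode     # kernel messages about the loaded patch level

As the posts above observed, guests kept reporting the old revision after a live migration and only showed the new one after the VM itself was rebooted.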
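
For the "Node keeps crashing" thread (item 12), a sketch of where those swappiness and ZFS ARC limits usually live on a Proxmox host; the file names under /etc/sysctl.d/ and /etc/modprobe.d/ are conventional choices, not something the post specifies:

    # /etc/sysctl.d/99-swappiness.conf
    vm.swappiness = 20

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_min=10737418240   # 10 GiB minimum ARC size
    options zfs zfs_arc_max=21474836480   # 20 GiB maximum ARC size

    # Apply and verify
    sysctl --system                                # reload sysctl settings now
    update-initramfs -u                            # so the ZFS options also apply at early boot (relevant when root is on ZFS)
    cat /sys/module/zfs/parameters/zfs_arc_max     # check the value after a reboot (or write it here for a runtime change)

Note that 21474836480 bytes is 20 GiB, so the "Set to use 20GB Max" comment in the post belongs to zfs_arc_max; whether capping the ARC at 20 GiB leaves enough headroom depends on how much RAM the VMs themselves are allocated.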
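
For the NIC naming thread (item 18), predictable names like enp133s0f0 encode the PCI bus address, so the name can flip if the card enumerates on a different bus between boots. A sketch of pinning the name by MAC address with a systemd .link file; the MAC comes from the post's ip link output, and the target name lan0 is just an example:

    # /etc/systemd/network/10-lan0.link
    [Match]
    MACAddress=00:10:18:c3:a1:80

    [Link]
    Name=lan0

After creating the file, you may also need to refresh the initramfs (update-initramfs -u -k all) so the rule applies at early boot; then reference lan0 in /etc/network/interfaces (for example as a bridge-ports member) and reboot.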