Search results

  1. AMD Ryzen 9 5950X 8.2.2 Kernel 6.8.4-3-pve crashing/rebooting every 2-3 days

    Yeah, no. There are some NFS-related issues I've seen in this forum, but CIFS is fine; I'm using the storage box too. Sorry, then I don't know, maybe someone else can help.
  2. Proxmox slower in CPU/Memory than VMware

    https://bs.fri.stoss-medica.int:8006/pve-docs/chapter-qm.html#qm_cpu Read the "affinity" section. In the VM settings under CPU, tick "Advanced" and set Affinity. To find out which processors are assigned to which socket on the Proxmox host, install numactl (apt install numactl) and run numactl --hardware...
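
    A minimal sketch of that workflow (VM ID 100 and the core range are made-up examples; the affinity option is available via qm in recent PVE releases):

      # Show which CPUs belong to which NUMA node/socket
      apt install numactl
      numactl --hardware        # note the "node X cpus: ..." lines

      # Pin a hypothetical VM to the cores numactl reported for node 0
      qm set 100 --affinity 0-15
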
  3. Proxmox slower in CPU/Memory than VMware

    NUMA doesn't work on Proxmox. Or let's say there is no logic, like assigning a VM's 4 cores to the same socket to benefit from faster RAM. As long as there is no NUMA support, ESXi will always win on dual-socket systems or single-socket AMD Bergamo/Genoa servers. ESXi handles NUMA pretty...
  4. SSD Samsung 990 Evo Failed Status

    You describe basically exactly the issues I have here with 990 Pros. "Explore the firmware question" -> The newest firmware doesn't change anything; they just maybe fail slightly later. I thought updating would help too. "Indicator is only 1-2 %" -> Correct, same here, they still fail. "Temperature higher than...
  5. SSD Samsung 990 Evo Failed Status

    I have the exact same issues with 990 Pros. The whole 990 line is simply crap; I replace them every 2-3 months. I have 4 of them for a ZFS metadata + small-files cache, and the 4 that are left are stable. They have now passed 4 months of runtime without failing. But to get that far, I had to...
  6. Looking for Cheap and Easy Quorum

    I'm running a 3-node cluster, 2 nodes as VM servers and 1 node as PVE+PBS. Works great, just useless in the end. In the beginning the idea was to run a Trilead VM on the PVE/PBS node that does ESXi backups. But since we got completely rid of ESXi, the Trilead VM doesn't run anymore either, and the...
  7. Safari Only - PVE 8.2.4 - NoVNC Console Error

    You're amazing, thanks! Now we only need a fix lol xD
  8. High iops in host, low iops in VM

    Because the storage for your LXC container is just passed through as a filesystem, while on KVM it's a zvol. The one is a filesystem directly; the other is a block device plus the VM's own filesystem on top of it.
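
    A quick way to see the difference on the host; the pool and guest IDs below are hypothetical, but the subvol-/vm- naming follows the usual PVE ZFS storage scheme:

      # The container disk is a plain ZFS dataset, mounted as a filesystem
      zfs list -o name,type,mountpoint rpool/data/subvol-100-disk-0

      # The VM disk is a zvol, i.e. a block device the guest formats itself
      zfs list -o name,type rpool/data/vm-101-disk-0
      ls -l /dev/zvol/rpool/data/vm-101-disk-0
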
  9. Safari Only - PVE 8.2.4 - NoVNC Console Error

    Hi, either something changed in PVE 8.2.4 or Safari got an update today. ISSUE: the noVNC console doesn't work! The xTerm console works without issues. -> Firefox: no issues -> Edge: no issues -> Chrome: didn't test, sorry -> Safari: doesn't work since today. I have like 8 PVE servers and as stupid...
  10. High iops in host, low iops in VM

    LVM is a lot faster; it's a very well-known fact. However, there is no fix for this on the horizon. Sadly, 2.2.4 made nothing better; zvols are still utter crap. Literally everything else is at least twice as fast; it's hard to find something that is slower xD However, no one will be able to...
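
    For anyone wanting to reproduce the comparison, a hedged fio sketch (pool, paths and sizes are placeholders; use a test pool, not production; --direct=1 is omitted because ZFS datasets may reject O_DIRECT):

      # 4k random writes against a file on a plain ZFS dataset
      fio --name=dataset --filename=/tank/test/fio.bin --size=2G \
          --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
          --runtime=30 --time_based --group_reporting

      # ...and the same workload against a zvol block device
      zfs create -V 4G tank/testvol
      fio --name=zvol --filename=/dev/zvol/tank/testvol --size=2G \
          --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio \
          --runtime=30 --time_based --group_reporting
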
  11. Proxmox ZFS and Intel QAT

    Thanks for the info, but that still wouldn't work with AMD platforms. And Intel CPUs already have QAT integrated (some/most of them).
  12. Problems with GPU Passthrough since 8.2

    Awesome, thanks, the relaxed cmdline solved it here too :-)
  13. [SOLVED] Not working: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]

    No? I tried once with Intel and almost bricked the Intel card; thank god the Intel update tool made a backup. Since then I've never tried to flash an OEM card with original FW. But if you say it works with Mellanox, I'm gonna try that, lol. Thanks
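
    If anyone tries the Mellanox route, the usual tool is mstflint; a rough sketch, where the PCI address and firmware file name are examples and cross-flashing OEM cards stays at your own risk:

      apt install mstflint
      lspci | grep -i mellanox              # find the card, e.g. 04:00.0

      # Back up the current OEM firmware image first
      mstflint -d 04:00.0 query
      mstflint -d 04:00.0 ri backup-oem.bin

      # Burn stock firmware; OEM cards usually need -allow_psid_change
      # because the OEM PSID differs from the stock Mellanox one
      mstflint -d 04:00.0 -i fw-ConnectX3Pro.bin -allow_psid_change burn
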
  14. [SOLVED] Not working: Mellanox Technologies MT27520 Family [ConnectX-3 Pro]

    As far as I remember, all Mellanox ConnectX-3 and ConnectX-4 cards from HPE have a VLAN bug: a vmbr with a ConnectX-3/4 slave port or bond will never be VLAN aware. At least it won't work unless you put the slave ports into promiscuous mode (see the sketch below). I had this on a lot of HPE Mellanox cards; that's why I'm avoiding...
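
    For reference, the promisc workaround would look something like this in /etc/network/interfaces (interface names and addresses are examples):

      auto vmbr0
      iface vmbr0 inet static
              address 192.168.1.10/24
              gateway 192.168.1.1
              bridge-ports enp4s0
              bridge-stp off
              bridge-fd 0
              bridge-vlan-aware yes
              bridge-vids 2-4094
              # Workaround for the HPE ConnectX-3/4 VLAN bug:
              # force the slave port into promiscuous mode
              post-up ip link set enp4s0 promisc on
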
  15. raidz from 4 disk to 3 is it possible?

    Maybe one more tip. People (me in the beginning too) are fixated on buying the exact same drive as a replacement. Let me put it this way: if you can get a larger/better disk, with the same or more cache size... So in general, if the specs are at least the same, or better... Most important is that the new disk is...
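
    As a concrete illustration (pool and device names are hypothetical), swapping a raidz member for a larger disk looks like:

      # Replace the old disk with the new, possibly larger one
      zpool replace tank /dev/disk/by-id/ata-OLD_DISK /dev/disk/by-id/ata-NEW_DISK
      zpool status tank        # wait for the resilver to finish

      # The extra capacity only becomes usable once every member
      # is larger and autoexpand is enabled on the pool
      zpool set autoexpand=on tank
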
  16. Hardware Feedback - Proxmox Ceph Cluster (3-4 nodes)

    You are right, I have to dig into it further. I'm not saying there is no way around that, just no easy one. The easiest is CPU pinning, which is possible on Proxmox without much knowledge, and you don't even need to change BIOS settings for that, like enabling NPS4 or NUMA per L3 cache...
  17. Hardware Feedback - Proxmox Ceph Cluster (3-4 nodes)

    It's called split tables or something like that, and it is available on any Genoa/Milan, but you'll usually get 8 or even 16 NUMA domains per CPU, depending on the core count: one domain per L3 cache, and each domain has only 8 cores. To manage this with CPU pinning, without the support that Proxmox...
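
    A sketch of how one could map cores to L3 domains by hand and keep a VM inside a single domain (the VM ID is a made-up example):

      # With NPS4 / L3-as-NUMA enabled, each L3 complex shows up as its
      # own node; the CACHE column groups cores sharing an L3
      lscpu -e=CPU,NODE,SOCKET,CACHE
      numactl --hardware

      # Pin a hypothetical VM to one 8-core L3 domain
      qm set 102 --affinity 0-7
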
  18. Hardware Feedback - Proxmox Ceph Cluster (3-4 nodes)

    That's not correct. The Phoronix Test Suite is very optimized, edge-case benchmarking; almost all of it has nothing to do with the real world. Proxmox doesn't support NUMA, or it's completely broken with AMD CPUs; as long as that is the case, you cannot get 100% of the performance in any multi-threading...
  19. Hardware Feedback - Proxmox Ceph Cluster (3-4 nodes)

    I have those servers; they are good. BIOS updates are good and the hardware is good. Additionally, the server division has nothing to do with the consumer division: completely different department, support, etc. There are other barebones, from Gigabyte/ASRock and a lot more, even from Supermicro...