Search results

  1. R

    Ram Benchmarking Speed (with processing) declined on moving to Proxmox from ESXI on same hardware

    Didn't see that, you're right. What platform is that? I have strange issues with Genoa and hyperthreading here, mentioned in another thread, but I still had no chance to debug further; it's simply in production (which makes it hard to experiment with). The hyperthreading issue I have doesn't exist on any...
  2. R

    Worse performance with higher specs server

    I would say, for consumer SSDs like 870 EVOs etc., 900 is great. For enterprise SSDs it's probably crap, you're right; I simply skipped enterprise SSDs on my side and went directly to enterprise NVMe's. So that's why I don't have any experience with enterprise SSDs. All new servers that I build...
  3. R

    Proxmox problem with memory limits in ARC (ZFS)

    You wrote you're using 16k blocksize; do you really mean volblocksize? I'm not sure if it will have any downsides for VMs (I don't think so), but it should help with the space needed for metadata. Same for recordsize: usually, the larger the recordsize, the less metadata you need. But 128k (the...
  4. R

    Proxmox problem with memory limits in ARC (ZFS)

    Then you have no other way than using a VM. But replication is not live; what I mean is, if one server goes down and the VM gets started on the other one, you lose 2 hours of data if you set it to sync every 2h, for example. Just as a sidenote.
  5. R

    [SOLVED] NVME disk "Available Spare" problem.

    Let's simply see in a week or so, after he gets his drive and a backup. Then he can do that without any fear and check smartctl again, or in the worst case replace the drive.
  6. R

    Proxmox problem with memory limits in ARC (ZFS)

    120TB is a lot; I don't know of any downsides, but I wouldn't do that personally. Don't get me wrong, it will likely be just fine. However, I would prefer using an LXC container if possible and mounting the storage directly into the LXC container (primarily to avoid the use of zvols). Otherwise...
  7. R

    Ram Benchmarking Speed (with processing) declined on moving to Proxmox from ESXI on same hardware

    Ah, that's a different story. But then I believe that the benchmark itself acts differently; it could be that random on VMware is actually urandom. Can you retest with /dev/urandom on both? /dev/random is known to be slow, and as far as I know, it's not even used anywhere on Proxmox.
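    The retest suggested above can be sketched like this: read a fixed amount from each device and compare the timings. This is a generic sketch, not from the thread; note that on current Linux kernels both devices are fed by the same CSPRNG and should perform similarly, while on older kernels /dev/random could block and come out far slower.

    ```shell
    # Time a 16 MiB read from each random source and print milliseconds.
    for src in /dev/urandom /dev/random; do
        start=$(date +%s%N)
        dd if="$src" of=/dev/null bs=1M count=16 2>/dev/null
        end=$(date +%s%N)
        echo "$src: $(( (end - start) / 1000000 )) ms"
    done
    ```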
  8. R

    [SOLVED] NVME disk "Available Spare" problem.

    /dev/nvme0n1 -> That's a namespace of the NVMe, meaning the actual block device where your data/partitions live. /dev/nvme0 -> That's the raw NVMe device (the controller) itself; you can split it into multiple namespaces, if the disk supports it, for passthrough for example, so imagine it as the PCIe port itself or something, and...
  9. R

    [SOLVED] NVME disk "Available Spare" problem.

    Don't run the dd and rm -f commands separately; run them as one command, exactly as I posted above, because the first command will write zeroes to your drive (into the zeroes file) until there is absolutely no space left, and the second will delete the zeroes file to free the space again. So basically as one...
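    The trick described above can be sketched safely like this: a throwaway file from mktemp and a small count= cap stand in for the real zero-fill, so the sketch can be run without filling a disk. On the actual drive you would drop count= (letting dd run until the filesystem is full) and point of= at a file on the filesystem you want to zero out, exactly as in the post.

    ```shell
    # Write zeroes into a temp file, then delete it in the same command line
    # so the space is reclaimed immediately (count=8 keeps this demo small).
    zeroes=$(mktemp /tmp/zeroes.XXXXXX)
    dd if=/dev/zero of="$zeroes" bs=$((1024*1024)) count=8 2>/dev/null; rm -f "$zeroes"
    # the rm on the same line is what prevents the drive from staying full
    [ ! -e "$zeroes" ] && echo "zeroes file removed, space reclaimed"
    ```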
  10. R

    Ram Benchmarking Speed (with processing) declined on moving to Proxmox from ESXI on same hardware

    The faster one is directly on the host, tested simply with Hiren's Boot CD, so no drivers, probably not max speed, dunno. The second (slower) one is inside a WS2019 VM, with all drivers etc. There is definitely a big difference, but in my case it's all so fast anyway that it simply...
  11. R

    [SOLVED] why does vmbr0 has an IP?

    If you put it onto the NIC that is assigned to vmbr? - I think that only the communication between host<->VM will not work. - But the host should still be reachable from anything else, and VMs should have no issues either. - Example: auto eno1 iface eno1 inet static address...
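    The layout the post seems to describe (static IP on the physical NIC, while vmbr0 keeps the port enslaved but holds no address itself) could look like the following /etc/network/interfaces fragment. The addresses are hypothetical placeholders from the 192.0.2.0/24 documentation range, not taken from the thread:

    ```
    auto eno1
    iface eno1 inet static
        address 192.0.2.10/24
        gateway 192.0.2.1

    auto vmbr0
    iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
    ```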
  12. R

    Worse performance with higher specs server

    PS: I forgot to mention. FSYNCS/SECOND: 909.08 -> is not really bad; looks okay to me for, let's say, 4 SSDs in RAID 10. FSYNCS/SECOND: 2932.92 -> for 6x 10k SAS disks that looks far too high to me, like it's just a cached result. Maybe we should start with that. Because I get: CPU BOGOMIPS...
  13. R

    Worse performance with higher specs server

    Check if there is a multiplexer card; if there is, remove it if possible. Check if you can use HBA mode somehow; firmware updates for the RAID controller often help. In my particular case, I need a RAID card for the old ESXi servers, either a RAID card or an iSCSI/FS storage, because it's ESXi...
  14. R

    LXC inside a Proxmox VM has no network connection

    Try unchecking the Firewall box. Nonetheless, I rather think this has something to do with Unraid.
  15. R

    Performance ESXi Importer

    The ESXi import wizard is slow, that's true. Nonetheless, I migrated roughly 20 VMs with it over a weekend, around 8TB of data in total. So despite the first impression it went relatively fast. What I noticed is that some VMs migrate slowly while others are fast, that...
  16. R

    [SOLVED] NVME disk "Available Spare" problem.

    That's good advice! A shutdown/poweroff/start fixed some NVMe issues I had in the past with consumer NVMe drives as well.
  17. R

    [SOLVED] After you are done laughing at this one, any advice is welcome (no network or graphics)

    The keyboard still works... I would simply boot the machine, blindly type root and your password, press Enter, and then: insmod r8125 systemctl restart networking Then you should get SSH access again :)
  18. R

    [SOLVED] NVME disk "Available Spare" problem.

    dd if=/dev/zero of=/root/zeroes bs=$((1024*1024)); rm -f /root/zeroes
    fstrim -v /
    Wait 5 minutes or so and reboot once, just to make sure, and check with smartctl again :) It only has 778GB written; that SSD is basically brand new, lol
  19. R

    Worse performance with higher specs server

    TBH, they all run very hot, no matter which RAID card I've seen in my life. So I wouldn't worry at all about the heat. Maybe true HBA cards run cool, but even the 9305 that I use now with IT firmware is very hot. Not sure about those new Broadcom Tri-Mode controllers, since they are too expensive to test xD
  20. R

    Worse performance with higher specs server

    Just a note: my backup server (ML350 G9), which has 24 3.5-inch bays, had some sort of HP RAID controller with a multiplexer built in. I don't remember which controller it was, but one that comes as default with the ML350 G9. And both ZFS pools (one consists of 12x 4TB SATA drives, the other...