Search results

  1. Proxmox VE Ceph Benchmark 2020/09 - hyper-converged with NVMe

    Before going for expensive Microns, I wanted to get some numbers first. NVMe drives (cheap Kingston SNVS1000GB): fio --ioengine=libaio --filename=test12345 --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fio --size=10G write: IOPS=434, BW=1739KiB/s...
  2. NVMEs get detected as SSD-drives - ceph crush rule problem

    Just checked the manual. With Luminous you can set device classes per device - it is effectively an override of the detected class: https://ceph.io/community/new-luminous-crush-device-classes/ That should do the trick; I will test it.
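In practice, the device-class override described in that link might look like the following sketch (the OSD ID, rule name, and pool name here are assumptions for illustration, not from the thread):

```shell
# Clear the auto-detected class on the NVMe OSD, then pin the real one
# (available since Ceph Luminous).
ceph osd crush rm-device-class osd.12
ceph osd crush set-device-class nvme osd.12

# Create a replicated CRUSH rule restricted to the nvme device class,
# with host as the failure domain...
ceph osd crush rule create-replicated replicated_rule_nvme default host nvme

# ...and create a new pool that uses it.
ceph osd pool create nvme-pool 128 128 replicated replicated_rule_nvme
```

This keeps the existing SSD pool untouched: only pools assigned the new rule will place data on OSDs classed as nvme.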
  3. NVMEs get detected as SSD-drives - ceph crush rule problem

    Hi, I don't quite get it. You do not just "add" disks to a pool; you create a CRUSH map rule with ceph osd crush rule create-replicated replicated_rule_ssd default host ssd so data only gets replicated to this type of device. I'm not aware of another way to - as you recommend - "Just use the hint...
  4. NVMEs get detected as SSD-drives - ceph crush rule problem

    Hi, we just put our first NVMe drives into the existing Proxmox 6 & Ceph nodes. Unfortunately, the NVMe drives are detected as type SSD. As we want a new pool with only the faster NVMe drives, we cannot distinguish the disks by type (ceph osd crush rule create-replicated ....). What can we do? Thank you.
  5. cluster and VM boot order - race conditions - best practice?

    Well, your cluster resource manager should handle this. For example, with a Pacemaker cluster you simply set constraints between the resources: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-resourceconstraints-haar This could...
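As a rough sketch of the Pacemaker approach, ordering constraints set with pcs could look like this (the resource names fw-vm, dc-vm, and app-vm are assumptions, not from the thread):

```shell
# Start the firewall VM resource before the domain controller resource...
pcs constraint order start fw-vm then start dc-vm

# ...and the domain controller before the application server,
# so dependent services only come up once the network path exists.
pcs constraint order start dc-vm then start app-vm
```

With these in place, a full cold start brings the resources up in dependency order instead of racing.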
  6. cluster and VM boot order - race conditions - best practice?

    This is ignored if HA is active. See your own admin manual ;)
  7. cluster and VM boot order - race conditions - best practice?

    Hi folks, we have several multi-node Proxmox clusters. Sometimes we have to cold-start all nodes (full power outage on site), which causes a race condition between VMs: servers start before the firewall/domain controllers are up (wrong network profiles on Windows), and application servers start before...
  8. [SOLVED] Can not install scsi driver in server2012 r2 - no device found

    Hi, Proxmox 5.4-13, Server 2012 R2. I would like to add another disk and use SCSI for it. Device Manager shows the device, but I cannot install drivers. Tried several ISO files. Any ideas? Thank you, Stefan
  9. PCI-e expander - NVMe - ceph performance?

    Hi folks, has anyone ever tried this kind of card? https://www.delock.com/produkte/G_89835/merkmale.html?setLanguage=en We would like to use NVMe cards in it. Our boards support bifurcation. Ideas?
  10. Node not reachable on vlan, after vlan is used by vm

    Hi, I see strange behavior on my Linux system and cannot explain it. Help would be greatly appreciated. Host proxmox1 has a trunk/bond interface to a Cisco switch; proxmox1 has its management IP in VLAN 100 on the same bond: bond1.100@bond1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP...
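A setup like the one described (management VLAN tagged on top of a bond) is typically configured in /etc/network/interfaces roughly as below; the slave NIC names and addresses are assumptions for illustration:

```shell
# /etc/network/interfaces sketch (Debian/Proxmox ifupdown)
auto bond1
iface bond1 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100

# Tagged management interface: VLAN 100 on top of the bond,
# matching the bond1.100@bond1 interface shown above.
auto bond1.100
iface bond1.100 inet static
    address 192.168.100.11/24
    gateway 192.168.100.1
```

The switch-side port channel would need VLAN 100 allowed on the trunk for the management address to be reachable.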
  11. [SOLVED] Can't install proxmox with "AMD Ryzen 5 2400G with Radeon Vega"

    Even with amdgpu blacklisted it did not work for me. The system dies with a kernel oops right after boot with the latest PVE and the following hardware: vendor_id : AuthenticAMD cpu family : 23 model : 17 model name : AMD Ryzen 5 2400G with Radeon Vega Graphics stepping : 0 microcode ...
  12. i40e Bug - will there be kernel updates soon?

    Hi, we just got hit by a bug with the 4.15.18-10-pve kernel and i40e 10G network cards from Intel. https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1779756 The problem seems to be fixed in kernel versions >= 4.18. Will 4.18 be available in Proxmox in the near future? Any help is greatly...
  13. proxmox slow with ssd disks on lsi 9260-8i

    SOLVED. One of the SSD drives had very poor performance and was just replaced via Samsung RMA.
  14. proxmox slow with ssd disks on lsi 9260-8i

    I now get your point that you do not like these SSD drives and would not use them. There are no SMART IDs that would indicate an issue with the drives at all. I do not have Windows available to test the controller. I will get the same controller (but vendor-branded by Fujitsu) and will check if...
  15. proxmox slow with ssd disks on lsi 9260-8i

    Maybe you missed this in my text: "ssds have no smart errors. did short and long tests." And you may have missed this as well: "Removing ssd drives from raid controller for test and attach to onboard sata-port. write rate is ~ 400MB/s." So the drives are OK and can be fast.
  16. proxmox slow with ssd disks on lsi 9260-8i

    Samsung 750 EVO SSD. Performance is almost the same with fio: root@proxmox:/ssd# fio --max-jobs=1 --numjobs=1 --readwrite=write --blocksize=4M --size=5G --direct=1 --name=fiojob fiojob: (g=0): rw=write, bs=4M-4M/4M-4M/4M-4M, ioengine=psync, iodepth=1 fio-2.16 Starting 1 process fiojob: Laying...
  17. proxmox slow with ssd disks on lsi 9260-8i

    It seems you missed the following text: "Removing ssd drives from raid controller for test and attach to onboard sata-port. write rate is ~ 400MB/s."
