Just checked the manual. With Luminous you can set device classes per device; it essentially overrides the detected class:
https://ceph.io/community/new-luminous-crush-device-classes/
That should do the trick, I will test it.
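Roughly what I have in mind (the OSD IDs are just examples, not tested yet):
ceph osd crush rm-device-class osd.10 osd.11
ceph osd crush set-device-class nvme osd.10 osd.11
ceph osd crush rule create-replicated replicated_rule_nvme default host nvme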
Hi,
I don't quite get it. You do not just "add" disks to a pool; you create a CRUSH map rule with:
ceph osd crush rule create-replicated replicated_rule_ssd default host ssd
So data only gets replicated to this type of device. I'm not aware of another way to, as you recommend, "Just use the hint...
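The rule then gets assigned to a pool, e.g. like this (the pool name is just a placeholder):
ceph osd pool set ssd-pool crush_rule replicated_rule_ssd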
Hi,
we just put our first NVMes into the existing Proxmox 6 & Ceph nodes. Unfortunately, the NVMes are detected as type SSD.
As we want a new pool with only the faster NVMes, we cannot distinguish the disks by type:
(ceph osd crush rule create-replicated ....)
What can we do?
Thank you.
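For reference, the classes Ceph detected can be listed like this (the CLASS column in the tree output shows the per-OSD class):
ceph osd crush class ls
ceph osd tree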
Well, your cluster resource manager should handle this. For example, with a Pacemaker cluster you simply set constraints between the resources:
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-resourceconstraints-haar
This could...
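A minimal sketch of such ordering constraints with pcs (resource names are made up):
pcs constraint order start firewall-vm then start dc-vm
pcs constraint order start dc-vm then start app-vm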
Hi folks,
we have several multi-node Proxmox clusters. Sometimes we have to cold-start all nodes (full power outage on site), which causes race conditions between VMs.
Servers start before the firewall/domain controllers are up (wrong network profiles on Windows).
Application servers start before...
Hi,
Proxmox 5.4-13
Server 2012 R2
I would like to add another disk and use SCSI for it. Device Manager shows the device, but I cannot install the drivers. I tried several ISO files.
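For context, a SCSI disk on the VirtIO SCSI controller is typically attached like this (VMID, storage and size are placeholders), with the matching vioscsi driver taken from the virtio-win ISO:
qm set 100 --scsihw virtio-scsi-pci
qm set 100 --scsi1 local-lvm:32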
Any ideas?
Thank you
Stefan
Hi folks,
has anyone ever tried this kind of card?
https://www.delock.com/produkte/G_89835/merkmale.html?setLanguage=en
We would like to use NVMe cards in it.
Our boards support bifurcation.
Ideas?
Hi,
I am seeing strange behavior on my Linux system and cannot explain it. Help would be greatly appreciated.
Host proxmox1 has a trunk/bond interface to a Cisco switch.
proxmox1 has its management interface in VLAN 100 on the same bond:
bond1.100@bond1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP...
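For context, a bond plus tagged management VLAN in /etc/network/interfaces typically looks roughly like this (NIC names and the address are placeholders, not the real config):
auto bond1
iface bond1 inet manual
    bond-slaves eno1 eno2
    bond-mode 802.3ad
    bond-miimon 100

auto bond1.100
iface bond1.100 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    vlan-raw-device bond1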
Even with amdgpu blacklisted it did not work for me. The system dies with a kernel oops right after boot on the
latest PVE with the following hardware:
vendor_id : AuthenticAMD
cpu family : 23
model : 17
model name : AMD Ryzen 5 2400G with Radeon Vega Graphics
stepping : 0
microcode ...
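For reference, blacklisting amdgpu is usually done via modprobe.d plus an initramfs rebuild (the file name is just a convention):
echo "blacklist amdgpu" > /etc/modprobe.d/blacklist-amdgpu.conf
update-initramfs -u -k all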
Hi,
we just got hit by a bug with the 4.15.18-10-pve kernel and Intel 10G network cards using the i40e driver.
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1779756
The problem seems to be fixed in kernel versions >= 4.18.
Will 4.18 be available in Proxmox in the near future?
Any help is greatly...
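For anyone checking whether they are affected: the running kernel and the NIC driver can be verified like this (the interface name is an example):
uname -r
ethtool -i eno1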
I now get your point that you do not like these SSD drives and would not use them.
There are no SMART IDs that indicate an issue with the drives at all. I do not have Windows available to test the controller. I will get the same controller (but vendor-branded by Fujitsu) and will check if...
Maybe you missed this in my text: "ssds have no smart errors. did short and long tests." And you may have missed this as well: "Removing ssd drives from raid controller for test and attach to onboard sata-port. write rate is ~ 400MB/s."
So the drives are ok and can be fast.
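For reference, short/long SMART self-tests are typically run like this (the device path is a placeholder; behind a RAID controller an extra -d option such as -d megaraid,N may be needed):
smartctl -t long /dev/sdX
smartctl -a /dev/sdX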
Samsung 750 EVO SSD.
Performance is almost the same with fio:
root@proxmox:/ssd# fio --max-jobs=1 --numjobs=1 --readwrite=write --blocksize=4M --size=5G --direct=1 --name=fiojob
fiojob: (g=0): rw=write, bs=4M-4M/4M-4M/4M-4M, ioengine=psync, iodepth=1
fio-2.16
Starting 1 process
fiojob: Laying...