Yes, looks fairly reasonable as the RocksDB on HDD will drastically reduce the performance.
If you want decent speed the RocksDB and WAL of the OSD should reside on SSD devices.
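On Proxmox VE this can be done at OSD creation time; a rough sketch, assuming the HDD is `/dev/sdb` and the SSD is `/dev/nvme0n1` (both device names are placeholders for your setup):

```shell
# Create an OSD on the HDD while placing its RocksDB (and with it the WAL)
# on a volume that pveceph carves out of the faster SSD device.
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1
```

This must run on the Proxmox node itself with Ceph already set up; check `man pveceph` for the sizing options before committing the SSD.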
I used lvm-cache for several years and have since removed it again.
The performance gain ultimately was not what I had hoped for, and the SSD gets written to constantly.
It seems more sustainable to me to add the SSD and HDD as separate storages and move the VM images to...
Proxmox can manage LXC containers. These are more than just Docker containers and are similar to lightweight virtual machines. Therefore you do not just have an "SSH container".
Proxmox offers to download many TurnKey Linux templates when you open a storage and go to "CT Templates".
You can certainly build the cluster and run VMs on it.
There just will not be any high availability for the VMs, since their images sit on local storage, which is gone when the Proxmox node dies.
In normal operation, however, thanks to storage migration the VMs can also be live...
12 HDD OSDs on one SSD (how large are they?) is a bit much, both as a failure domain and in terms of IOPS load on the SSD.
And you should not use these SSDs for the Proxmox installation if they are to serve as RocksDB partitions (Ceph OSDs no longer use a separate journal). HDD OSDs always need their...
It looks like somehow the port number 6789 became part of the IPv6 address.
Check your ceph.conf to see whether the IPv6 addresses for the MONs are correct. You do not need the default port in ceph.conf at all.
Just list the IPv6 addresses in the mon_host line.
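As an illustration (the addresses below are examples from the documentation prefix, not your actual MON addresses), the relevant part of ceph.conf could look like:

```ini
[global]
    # List only the MON addresses; without an explicit port Ceph
    # uses the defaults (3300 for msgr2, 6789 for msgr1) on its own.
    mon_host = 2001:db8::11 2001:db8::12 2001:db8::13
    ms_bind_ipv6 = true
```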
You need at least 7 online nodes in the cluster to form a quorum.
If fewer than 7 nodes can see each other, the cluster will stop working.
Why? Because the cluster logic must assume that the other nodes are connected somewhere else (a network split brain) and could form a majority there...
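The majority rule behind this is simple arithmetic; a quick sketch (the figure of 7 matches a 13-node cluster, which I am assuming is the case here):

```python
def quorum_size(total_nodes: int) -> int:
    """Smallest number of nodes that forms a strict majority."""
    return total_nodes // 2 + 1

# With 13 nodes in the cluster, at least 7 must see each other:
print(quorum_size(13))  # → 7
```

Note that an even-sized cluster gains nothing: 12 nodes also need 7 for a majority, which is why odd node counts are preferred.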
The screenshot only shows 2 NVMe drives.
If you really have 4, use them as DB/WAL devices for the HDDs plus an additional OSD. If possible, use the NVMe controller to create 2 namespaces on each NVMe; otherwise use LVM. Make the DB/WAL volume 70G for each HDD, 6 of these on each NVMe, and use the rest for an...
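The LVM variant could be sketched like this, assuming one NVMe at `/dev/nvme0n1` (device and volume names are placeholders). 6 × 70G = 420G go to DB/WAL volumes, the remainder backs the extra OSD:

```shell
# Put the NVMe under LVM and carve out six 70G DB/WAL volumes,
# one per HDD OSD that this NVMe will serve.
pvcreate /dev/nvme0n1
vgcreate vg_nvme0 /dev/nvme0n1
for i in 1 2 3 4 5 6; do
    lvcreate -L 70G -n db_hdd$i vg_nvme0
done
# Use the remaining space as a fast OSD of its own:
lvcreate -l 100%FREE -n osd_nvme0 vg_nvme0
```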
If you want to automate the Proxmox installation itself, you could install Debian first and then add Proxmox on top.
I am sure a Debian installation can be automated the way you need it.
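Debian's installer supports preseeding for exactly this; a minimal, purely illustrative fragment (the values are placeholders, not a complete answer file) might look like:

```
# Preseed fragment for an unattended Debian install
d-i debian-installer/locale string en_US.UTF-8
d-i partman-auto/method string lvm
d-i passwd/root-password password changeme
d-i pkgsel/include string openssh-server
```

The Proxmox packages would then be added afterwards via the Proxmox apt repository.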
Because the first ARP request for .201 was answered with the MAC address of vmbr0.
Look into your station's neighbor table to see which IP address currently resolves to which MAC.
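On a Linux workstation this means something like the following (the flush needs root, and the address is only an example standing in for the .201 host):

```shell
# Show the current IP-to-MAC mappings the host has learned via ARP/NDP:
ip neigh show
# Drop a stale entry so the next packet triggers a fresh ARP request:
ip neigh flush to 192.0.2.201
```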