Recent content by gurubert

  1. gurubert

    Ceph Rebalance Speed Sanity Check

    Yes, that looks fairly reasonable, as RocksDB on HDD will drastically reduce performance. If you want decent speed, the RocksDB and WAL of the OSDs should reside on SSD devices.
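
    A minimal sketch of creating such an OSD with pveceph, assuming placeholder device names (/dev/sdb for the HDD, /dev/nvme0n1 for the SSD):

      # create the OSD on the HDD, placing its RocksDB/WAL on the SSD (size in GiB)
      pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --db_size 70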
  2. gurubert

    Where to start with SSD tiering in Proxmox?

    I used lvm-cache for several years and have since removed it again. In the end the performance gain was not what I had hoped for, and the SSD gets written to constantly. It seems more sustainable to me to add SSD and HDD as separate storages and move the VM images to...
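
    As an illustration of moving a VM disk between such storages, a hedged example with placeholder VM ID, disk and storage names:

      # move disk scsi0 of VM 101 to the SSD-backed storage and drop the old copy
      qm move-disk 101 scsi0 ssd-storage --delete 1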
  3. gurubert

    Container Templates

    Container templates are kept on a storage. They need a storage with a filesystem type and are stored in its templates directory.
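
    For illustration, a directory storage that can hold CT templates might look like this in /etc/pve/storage.cfg (the default "local" storage is just an example):

      dir: local
              path /var/lib/vz
              content vztmpl,iso,backup
      # templates then end up under /var/lib/vz/template/cache/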
  4. gurubert

    Container Templates

    Proxmox can manage LXC containers. These are more than just Docker containers and are similar to lightweight virtual machines. Therefore you do not just have an "SSH container". Proxmox offers to download many TurnKey Linux templates when you open a storage and go to "CT templates".
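
    The same can be done on the command line with pveam; a short sketch (storage and template names are placeholders):

      pveam update                       # refresh the list of available templates
      pveam available                    # list system and TurnKey Linux templates
      pveam download local <template>    # store the template on the "local" storage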
  5. gurubert

    Bluestore erroring opening db

    You're right, with only SSDs there should be no external RocksDB. Could you paste the output of ceph -s and ceph osd df tree here?
  6. gurubert

    Bluestore erroring opening db

    Does this OSD have an external RocksDB, i.e. is it an HDD with the RocksDB on SSD? Is the external device available?
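
    One way to check is ceph-volume on the OSD host; a sketch:

      # lists the LVs behind each OSD; a [db] section indicates an external RocksDB device
      ceph-volume lvm list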
  7. gurubert

    Proxmox - Network / MAC issue

    If the pfSense VM is supposed to communicate on this network, the MAC address of its vNIC has to be allowed, because that is where the Ethernet frames come from.
  8. gurubert

    Datacenter and/or cluster with local storage only

    You can certainly build the cluster and run VMs on it. There will just be no high availability for the VMs, since their images reside on local storage, which is gone when the Proxmox node dies. In normal operation, thanks to storage migration, the VMs can still be live...
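
    For illustration, such a migration could be triggered like this (VM ID and node name are placeholders):

      # live-migrate VM 101 to node pve2, copying its local disks along
      qm migrate 101 pve2 --online --with-local-disks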
  9. gurubert

    3nodes Ceph with 2xSSD in Raid1 for Proxmox + Journal, 24x1.2TB for OSD

    12 HDD OSDs on one SSD (how large are they?) is a bit too much, both as a failure domain and in terms of IOPS load on the SSD. And you should not use these SSDs for the Proxmox installation if they are to be used as RocksDB partitions (there is no journal any more for Ceph OSDs). HDD OSDs always need their...
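
    If dedicated DB SSDs are available, ceph-volume can spread the HDD OSDs across them; a rough sketch with placeholder device names (6 HDDs per DB device):

      # create OSDs on the HDDs and place their RocksDB volumes on the NVMe/SSD
      ceph-volume lvm batch /dev/sd{c..h} --db-devices /dev/nvme0n1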
  10. gurubert

    ceph client connecting to cephFS - what am i doing wrong?

    It looks like the port number 6789 somehow became part of the IPv6 address. Check in your ceph.conf whether the IPv6 addresses for the MONs are correct. You do not need the default port in ceph.conf at all; just list the IPv6 addresses in the mon_host line.
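
    A hedged ceph.conf fragment, using documentation-prefix placeholder addresses:

      [global]
              mon_host = 2001:db8::11, 2001:db8::12, 2001:db8::13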
  11. gurubert

    Ceph and PVE host crashes when benchmarking on specific hardware

    You will have to talk to the HP support about the compatibility of the components involved.
  12. gurubert

    Quorum and cluster question

    You need at least 7 online nodes in the cluster to form a quorum. If fewer than 7 nodes can see each other, the cluster will stop working. Why? Because the cluster logic has to assume that the other nodes are connected somewhere else (network split brain) and could form a majority there...
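
    The underlying rule of thumb: quorum = floor(N/2) + 1 votes. Assuming the cluster in question has 12 or 13 nodes (the preview does not say), floor(12/2) + 1 = 7 and floor(13/2) + 1 = 7, which matches the 7 nodes mentioned above.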
  13. gurubert

    The disk situation on my server, how to set up Ceph reasonably?

    The screenshot only shows 2 NVMe drives. If you really have 4, use them as DB/WAL devices for the HDDs and as an additional OSD. If possible, use the NVMe controller to create 2 namespaces on each NVMe; otherwise use LVM. Make the DB/WAL volume 70G for each HDD, 6 of these on each NVMe, and use the rest for an...
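
    A rough sketch of the LVM variant, assuming /dev/nvme0n1 is one of the NVMe drives and /dev/sdb one of the HDDs (placeholder names):

      vgcreate ceph-db-0 /dev/nvme0n1
      for i in 0 1 2 3 4 5; do lvcreate -L 70G -n db-$i ceph-db-0; done
      # attach one 70G DB volume to each HDD OSD
      ceph-volume lvm create --data /dev/sdb --block.db ceph-db-0/db-0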
  14. gurubert

    [HELP] How to add a custom user and ssh key during proxmox installation

    If you want to automate the Proxmox installation itself, you could go the route of installing Debian first and then adding Proxmox on top. A Debian installation can certainly be automated the way you need it.
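
    As an illustration only, a debian-installer preseed fragment could create the user and drop in an SSH key (user name, password and key are placeholders):

      d-i passwd/username string admin
      d-i passwd/user-password password changeme
      d-i passwd/user-password-again password changeme
      d-i preseed/late_command string in-target sh -c 'install -d -m 700 /home/admin/.ssh; echo "ssh-ed25519 AAAA... admin@example" > /home/admin/.ssh/authorized_keys; chown -R admin:admin /home/admin/.ssh'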
  15. gurubert

    [SOLVED] Two Bridges One Subnet

    Because the first ARP request for .201 was answered with the MAC address of vmbr0. Look into your station's neighbor table to see which IP address currently resolves to which MAC.
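
    For example, on a Linux station (the full address of the .201 host is not shown in the preview):

      ip neigh show | grep '\.201'    # Linux neighbor table
      arp -a                          # Windows or macOS equivalent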
