Search results

  1. gurubert

    Network problems after kernel update

    It turns off VLAN processing in hardware. There seems to be a problem with the combination of driver and hardware.
  2. gurubert

    Network problems after kernel update

    Have a look at https://forum.proxmox.com/threads/error-i40e_aq_rc_enospc-forcing-overflow-promiscuous-on-pf.62875/
  3. gurubert

    Ceph select specific OSD to form a Pool

    Ceph can only distinguish between rotating and non-rotating devices. It uses the flag in /sys/block/DEVICE/queue/rotational for that purpose (a small sketch of this check follows after the results list). If this file contains 1, the OSD gets "hdd" as its device class. If it contains 0, the device class "ssd" is assigned. But you can change the device class of...
  4. gurubert

    Datacenter and/or cluster with local storage only

    BTW: We also sell our know-how. ;)
  5. gurubert

    Datacenter and/or cluster with local storage only

    The type of connection makes no difference as long as the LUN is the same on all nodes. Yes, the presentation is mine, but new insights have been gained since then. In particular, when formatting the LUN, mkfs.ocfs2 should be called with -T vmstore so that the block size and...
  6. gurubert

    Datacenter and/or cluster with local storage only

    Although unsupported, you can create an OCFS2 filesystem on an FC LUN and mount it on every Proxmox node. Proxmox then sees it as a shared directory storage (like NFS), and with qcow2 images you get thin provisioning and snapshots.
  7. gurubert

    Datacenter and/or cluster with local storage only

    Proxmox then supports only thick-provisioned VM images without snapshots.
  8. gurubert

    Ceph Rebalance Speed Sanity Check

    Yes, looks fairly reasonable as the RocksDB on HDD will drastically reduce the performance. If you want decent speed the RocksDB and WAL of the OSD should reside on SSD devices.
  9. gurubert

    Where to start with SSD tiering in Proxmox?

    I used lvm-cache for several years and have since removed it again. In the end the performance gain was not what I had hoped for, and the SSD gets written to constantly. It seems more sustainable to me to add SSD and HDD as separate storages and the VM images...
  10. gurubert

    Container Templates

    Container templates are stored on a storage. The storage needs to be of a filesystem type, and the templates are kept in its templates directory.
  11. gurubert

    Container Templates

    Proxmox can manage LXC containers. These are more than just Docker containers and are similar to lightweight virtual machines. Therefore you do not just have an "SSH container". Proxmox offers many TurnKey Linux templates for download when you open a storage and go to "CT templates".
  12. gurubert

    Bluestore erroring opening db

    You're right, with only SSDs there should be no external RocksDB. Could you paste the output of ceph -s and ceph osd df tree here?
  13. gurubert

    Bluestore erroring opening db

    Does this OSD have an external RocksDB, i.e. is it an HDD with the RocksDB on SSD? Is the external device available?
  14. gurubert

    Proxmox - network / MAC issue

    If the pfSense VM is supposed to communicate on this network, the MAC of the vNIC must be allowed, because that is where the Ethernet packets come from.
  15. gurubert

    Datacenter and/or cluster with local storage only

    You can certainly build the cluster and run VMs on it. There just will not be any high availability for the VMs, since their images reside on local storage, which is gone when the Proxmox node dies. In normal operation, though, thanks to storage migration the VMs can also be live...
  16. gurubert

    3nodes Ceph with 2xSSD in Raid1 for Proxmox + Journal, 24x1.2TB for OSD

    12 HDD OSDs on one SSD (how large are they?) is a bit too much, both as a failure domain and in terms of IOPS load on the SSD. And you should not use these SSDs for the Proxmox installation if they are to be used as RocksDB partitions (there is no journal any more for Ceph OSDs). HDD OSDs always need their...
  17. gurubert

    ceph client connecting to cephFS - what am i doing wrong?

    It looks like somehow the port number 6789 became part of the IPv6 address. Check in your ceph.conf whether the IPv6 addresses for the MONs are correct. In the end you do not need the default port in the ceph.conf. Just list the IPv6 addresses in the mon_host line.
  18. gurubert

    Ceph and PVE host crashes when benchmarking on specific hardware

    You will have to talk to the HP support about the compatibility of the components involved.
  19. gurubert

    [SOLVED] Quorum and cluster question

    You need at least 7 online nodes in the cluster to form a quorum (the majority rule behind this is sketched after the results list). If fewer than 7 nodes can see each other, the cluster will stop working. Why? Because the cluster logic has to assume that the other nodes are connected somewhere else (network split brain) and can form a majority there...
  20. gurubert

    The disk situation on my server, how to set up Ceph reasonably?

    The screenshot only shows 2 NVMe drives. If you really have 4, use them as DB/WAL devices for the HDDs and as an additional OSD. If possible, use the NVMe controller to create 2 namespaces on each NVMe; otherwise use LVM. Make the DB/WAL volume 70G for each HDD, 6 of these on each NVMe, and use the rest for an...
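
A small illustration of the rotational check described in result 3: the sketch below reads the same sysfs flag and maps it to the device class mentioned in the post. It only approximates the behaviour described there, it is not Ceph's actual code, and the device names are placeholders.

```python
from pathlib import Path

def guess_device_class(device: str) -> str:
    """Read /sys/block/<device>/queue/rotational as described in result 3.

    The kernel reports 1 for rotating disks (-> "hdd") and 0 for
    non-rotating devices (-> "ssd").
    """
    flag = Path(f"/sys/block/{device}/queue/rotational").read_text().strip()
    return "hdd" if flag == "1" else "ssd"

if __name__ == "__main__":
    # "sda" and "nvme0n1" are placeholder device names; adjust to your host.
    for dev in ("sda", "nvme0n1"):
        try:
            print(dev, "->", guess_device_class(dev))
        except FileNotFoundError:
            print(dev, "-> not present on this host")
```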
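
A note on the quorum figure in result 19: the threshold is the usual majority rule, floor(N/2) + 1 out of N votes. The snippet below only evaluates that formula; the 7 quoted in the post is the majority of a 12- or 13-node cluster, which is an inference from the formula rather than something stated in the post.

```python
def quorum(total_nodes: int) -> int:
    """Smallest number of nodes that still holds a strict majority."""
    return total_nodes // 2 + 1

# floor(N/2) + 1: for 12 or 13 nodes this yields 7, matching result 19.
for n in (3, 5, 12, 13):
    print(f"{n} nodes -> quorum of {quorum(n)}")
```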
