gurubert's latest activity

  • gurubert
    Proxmox does not use DHCP for network configuration. You could remove the IP configuration from vmbr0 and add a local IP on vmbr1 and also point the default gateway to the opnsense VM. You just have to make sure that these changes happen so...
  • gurubert
    The packages in all our repositories are maintained by us, not the community. This is also wrong: there are no additional features associated with a subscription other than the enterprise repository and, depending on subscription level, access...
  • gurubert
    PVE is open source. You can use it with all features for free (no artificial limitations and no stripped down version). What you pay for is technical support and/or access to the more stable enterprise repo. There is no way to get that for free...
  • gurubert
    @kartheek.kp If your requirement is only HDD for storage you will struggle with IOPS... However, If you can't afford SSD for main storage, I suggest considering if you can afford 1 or 2 SSDs (per host) to support your HDD storage. Even smaller...
  • gurubert
    gurubert reacted to Falk R.'s post in the thread Backup auf tägliche USB-HDDs with a Like.
    A namespace is only a logical grouping; you still use the same dedup chunks in the backend. Spending no money but wanting ransomware protection does not really work out. Especially solutions with human interaction (USB HDD /...
  • gurubert
    gurubert reacted to powderhorn's post in the thread [SOLVED] Ceph Object RGW with a Like.
    @Drallas, I was frustrated to find this too but there are some reasons why this is the case that I kind of understand. 1 - As much as I really want rgw, prox isn't obliged to support all of the features of ceph just because it supports parts of...
  • gurubert
    gurubert replied to the thread Ceph support - Not Proxmox.
    Ah, I did not know about Ceph nano. It says that it only exposes S3 which is HTTP. You will not be able to access the Ceph "cluster" running inside this container with anything else.
  • gurubert
    gurubert replied to the thread Ceph support - Not Proxmox.
    If a MON has 127.0.0.1 as its IP there is something fundamentally wrong in the setup. The MONs need an IP from Ceph's public_network so that they are reachable from all the other Ceph daemons and the clients.
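    A hedged sketch of what the relevant `/etc/ceph/ceph.conf` lines might look like; the subnet and addresses are illustrative assumptions, not taken from the thread:

    ```ini
    [global]
    # All MONs, OSDs and clients must be able to reach addresses in this
    # subnet (10.0.0.0/24 is an illustrative example).
    public_network = 10.0.0.0/24

    [mon.node1]
    # Never 127.0.0.1 -- other Ceph daemons and clients connect here.
    public_addr = 10.0.0.1
    ```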
  • gurubert
    Please don't; rather take a 24-core CPU, which scales much better than two small CPUs with a QPI or UPI link in between. That is normal these days. Please no bond for Corosync. 1 GBit is easily enough there, and then better 2 IPs for...
  • gurubert
    Do not use HDD only with Ceph.
  • gurubert
    Are these two separate CephFS volumes? What does "ceph fs status" show? If that is the case, you need to specify the fs name with the fs= option to the mount command.
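    For illustration, what such a mount invocation could look like; the monitor address, credential paths, and fs name are assumptions, not from the thread:

    ```
    # List the CephFS volumes first:
    ceph fs status

    # Then mount the second volume by name (illustrative values):
    mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs2 \
        -o name=admin,secretfile=/etc/ceph/admin.secret,fs=cephfs2
    ```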
  • gurubert
    gurubert replied to the thread OSD ghost.
    Ceph is not able to run dual stack. You have to either use IPv4 or IPv6 on both networks.
  • gurubert
    gurubert reacted to itNGO's post in the thread ceph slow_ops occured but fast with a Like.
    When multiple OSDs have slow IOPS at the same time, it might be a network/connection-issue. What does journalctl and ceph-log say?
  • gurubert
    gurubert replied to the thread Ceph total storage/useable.
    Failure domain "host" will not allow two copies on the same host. All PGs of the pool will be degraded in such a situation, as Ceph is not able to find a location for the fourth copy.
  • gurubert
    gurubert replied to the thread Ceph total storage/useable.
    No, the size of the pool defines the number of copies it creates (when being replicated, not erasure coded). It has nothing to do with the number of nodes. A size=3 pool will distribute the three copies over four nodes randomly. If an OSD is...
  • gurubert
    gurubert replied to the thread Ceph total storage/useable.
    Your datastore01 pool has a size of 4. It stores four copies of each object, hence its usable capacity is only 25% of the total capacity.
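    The 25% figure follows directly from the replication factor; a minimal sketch of the arithmetic (the raw capacity is an illustrative number, not from the thread):

    ```python
    # Hypothetical sketch of the capacity math for a replicated Ceph pool.
    def usable_capacity(raw_capacity_tb: float, size: int) -> float:
        """A replicated pool keeps `size` full copies of every object,
        so usable space is the raw space divided by the pool size."""
        return raw_capacity_tb / size

    # size=4 on 100 TB raw leaves 25 TB usable, i.e. 25% of the total.
    print(usable_capacity(100.0, 4))  # -> 25.0
    ```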
  • gurubert
    gurubert reacted to VictorSTS's post in the thread Multipath Ceph storage network with a Like.
    That depends on your corosync configuration and which interfaces are used for corosync and Ceph. Corosync networks are completely different thing than Ceph networks. In fact, you may even have quorum in PVE and not in Ceph and vice versa. Also...
  • gurubert
    gurubert reacted to VictorSTS's post in the thread Multipath Ceph storage network with a Like.
    Use MLAG with both F5 switches and configure an LACP 802.3ad LAG both in PVE and in the switches, and you will get both links in an active/active setup with failover/failback times typically under 400 ms. Remember to add the LAG to the Ceph VLAN. You...
  • gurubert
    gurubert replied to the thread Multipath Ceph storage network.
    This is Ethernet and not Fibre-Channel. You cannot have two separate interfaces in the same VLAN. Create a stack out of the two switches (or MLAG or whatever the vendor calls it) and then a link aggregation group for each Ceph node with one...
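    A minimal `/etc/network/interfaces` sketch of such a link aggregation group on one Ceph node, assuming a stacked or MLAG switch pair; interface names, VLAN ID, and address are illustrative assumptions:

    ```
    auto bond0
    iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-mode 802.3ad
        bond-miimon 100
        bond-xmit-hash-policy layer3+4

    # VLAN 40 carries the Ceph traffic in this sketch.
    auto bond0.40
    iface bond0.40 inet static
        address 192.168.40.11/24
    ```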
  • gurubert
    Since the "device_health_metrics" pool is present, you must be on older versions of Ceph & Proxmox VE. Before you upgrade, create a new pool and move all the disks of the VMs to it. If you check the configuration of the "CephPool01" storage in...