Search results

  1. RokaKen

    Add cephfs storage

Please post logs as text within tags (like your original post) instead of graphics. In any case, it would appear that the remote MONs are not referring clients to the MDS server (for the subdir). Other than that, I can't tell anything.
  2. RokaKen

    Add cephfs storage

My guess would be that you did not add the /etc/pve/priv/ceph/<STORAGE_ID>.secret file for the NEW storage. See Storage:CephFS Also, I notice you have configured the CEPH internal health metrics module pool as image storage in PVE. That is a really bad idea for several reasons, but it will...
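    For illustration only (the storage ID "cephfs-ext", the monitor address, and the key below are placeholders, not values from your setup), the missing pieces would look roughly like:
      # put the client key of the external cluster where PVE expects it
      echo 'AQD...replace-with-the-real-key...==' > /etc/pve/priv/ceph/cephfs-ext.secret
      # then define the storage (or the equivalent via Datacenter -> Storage in the GUI)
      pvesm add cephfs cephfs-ext --monhost 10.0.0.1 --subdir /pve --content backup,iso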
  3. RokaKen

    [SOLVED] 3 node ceph - performance degraded due to bad disk? affecting other pool? crushmap?

    Yes, that should work. I've never changed PGs (pg_num and pgp_num) via the GUI -- always used CEPH CLI. Perhaps @aaron or other staff can confirm the PVE tooling does the same thing.
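    For reference, the CLI form I mean is roughly this (pool name and target count are placeholders; on recent releases pgp_num follows pg_num automatically):
      ceph osd pool set <pool> pg_num 128
      ceph osd pool set <pool> pgp_num 128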
  4. RokaKen

    [SOLVED] 3 node ceph - performance degraded due to bad disk? affecting other pool? crushmap?

    Yes, you just need to create a CRUSH rule for the NVME class (similar to what was done for the SSD class) and then set the existing pools to use that instead of the default replicated_rule. There are details here [0] and in the CEPH documentation [1]. The resulting data migration will cause...
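    A rough sketch of those two steps (the rule name is an example, not something already in your cluster):
      # replicated rule limited to the nvme device class, host failure domain
      ceph osd crush rule create-replicated nvme_rule default host nvme
      # point an existing pool at it instead of replicated_rule
      ceph osd pool set ceph-nvme crush_rule nvme_rule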
  5. RokaKen

    [SOLVED] 3 node ceph - performance degraded due to bad disk? affecting other pool? crushmap?

    So, yes, the poor performance of one SSD is affecting all storage. As you can see from your ceph osd crush tree --show-shadow, the root default includes _both_ NVME and SSD OSDs. Then, the CRUSH "replicated_rule" uses that for the 'device_health_metrics', 'ceph-nvme', 'cephfs_data' and...
  6. RokaKen

    [SOLVED] 3 node ceph - performance degraded due to bad disk? affecting other pool? crushmap?

    What is the output of the following:
      ceph osd pool ls detail
      ceph osd crush rule dump
      ceph osd df tree
  7. RokaKen

    Configuring Dual Corosync Networks

    That's expected behavior. You may use the command corosync-cfgtool -n to see the status of both rings. It should produce something like:
      Local node ID 2, transport knet
      nodeid: 3 reachable
         LINK: 0 udp (172.25.10.72->172.25.10.73) enabled connected mtu: 1397
         LINK: 1 udp...
  8. RokaKen

    [SOLVED] misplaced objects after removing OSD

    Well, warnings have a purpose -- ignoring them is up to you. But, OBJECT_MISPLACED [0] doesn't indicate an immediate problem unless you lose another OSD. [0] https://docs.ceph.com/en/nautilus/rados/operations/health-checks/#object-misplaced Assuming you have a default 3/2 size for your...
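    To watch the recovery while the misplaced objects move back (plain Ceph CLI, nothing cluster-specific assumed):
      ceph health detail              # shows the OBJECT_MISPLACED count and percentage
      ceph status                     # overall recovery/backfill progress
      ceph osd pool get <pool> size   # confirm the 3/2 replication assumed above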
  9. RokaKen

    Expend CEPH pool storage

    Technically, "resize" refers to the number of replicas, so no, don't change size. However, you are asking about PG count -- yes, I would increase PG count to at least 1024 for a total of 24 OSDs. If you use the PG autoscaler [0] (default enabled), it will change the PG count automatically...
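    As a sketch (the pool name is a placeholder; check the autoscaler first so it doesn't immediately undo a manual change):
      ceph osd pool get <pool> pg_autoscale_mode
      ceph osd pool set <pool> pg_autoscale_mode off   # only if you prefer manual sizing
      ceph osd pool set <pool> pg_num 1024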
  10. RokaKen

    Prevent SystemD from renaming after upgrade.

    Yes. As described in section 3.3.2 of the PVE Admin Guide [0] and explained at the included link Predictable Network Interface Names [1], one solution would be using systemd.link [2] to maintain reasonable names of your own choosing. [0]...
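    A minimal systemd.link sketch (the MAC address and the name "lan0" are placeholders for your hardware and preference):
      # /etc/systemd/network/10-lan0.link
      [Match]
      MACAddress=aa:bb:cc:dd:ee:ff
      [Link]
      Name=lan0
    Afterwards, refresh the initramfs (update-initramfs -u -k all) and adjust /etc/network/interfaces to the new name before rebooting.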
  11. RokaKen

    LAG, HP1810 Switch

    If cat /proc/net/bonding/bond0 shows the bond up and the slaves with proper actor / partner LACP PDU entries, then it's just cosmetic -- ignore it. Something like:
      [ 32.447962] bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond
      [ 32.447973] bond0: active...
  12. RokaKen

    CEPH storage confusion

    The calculations are not well documented except in the source code, IMO. AFAIK, the following are true: %USED ~= STORED / (STORED + MAX AVAIL). MAX AVAIL is calculated with respect to the OSD full ratio (95%), the most-full OSD in the pool (%), the WEIGHT, the total weight of the set of OSDs in the pool, and...
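    A quick worked example of that first relation (numbers invented purely for the arithmetic): with STORED = 1 TiB and MAX AVAIL = 3 TiB, %USED ~= 1 / (1 + 3) = 25%.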
  13. RokaKen

    Bridge + NAT + Firewall

    See the wiki on Network Configuration [0], in particular, the note about "conntrack zones". [0] https://pve.proxmox.com/wiki/Network_Configuration#_masquerading_nat_with_tt_span_class_monospaced_iptables_span_tt
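    Roughly the shape of the config from that wiki section (the interface names and the subnet are examples):
      auto vmbr1
      iface vmbr1 inet static
          address 10.10.10.1/24
          bridge-ports none
          bridge-stp off
          bridge-fd 0
          post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
          post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
          post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
          # the "conntrack zones" note: needed so the PVE firewall doesn't break the NAT
          post-up   iptables -t raw -I PREROUTING -i fwbr+ -j CT --zone 1
          post-down iptables -t raw -D PREROUTING -i fwbr+ -j CT --zone 1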
  14. RokaKen

    Tuning of vCPUs for VM when underlying HW has two sockets

    See the answer and related links here: https://forum.proxmox.com/threads/cpu-pinning.73893/post-329969
  15. RokaKen

    Tuning of vCPUs for VM when underlying HW has two sockets

    See https://pve.proxmox.com/wiki/NUMA. I have:
      # numactl --hardware
      available: 2 nodes (0-1)
      node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22
      node 0 size: 72473 MB
      node 0 free: 53427 MB
      node 1 cpus: 1 3 5 7 9 11 13 15 17 19 21 23
      node 1 size: 72571 MB
      node 1 free: 44733 MB
      node distances: node 0...
  16. RokaKen

    Bulk OSD replace in Ceph

    IFF you have sufficient space on the remaining OSDs per node (none would reach near full ratio) and PGs per OSD (mon_max_pg_per_osd), I would drain the OSD(s) to be replaced with ceph osd reweight {ID} 0 and then replace them per node. After OSD replacement, the PGs could rebalance at their...
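    The drain itself is just (OSD ID 12 used as an example):
      ceph osd reweight 12 0    # start moving PGs off the OSD
      ceph osd df tree          # wait until the OSD holds no more PGs/data
      # then stop, destroy and replace the OSD as usual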
  17. RokaKen

    pool error

    Well, I consider the "pg_autoscaler" to be more annoyance than benefit, but see my post here: https://forum.proxmox.com/threads/pve7-ceph-16-2-5-pools-and-number-of-pg.96341/post-417434 for a workaround (and the post linked therein for a deeper discussion/explanation).
  18. RokaKen

    Error creating snapshot: rbd image already exists

    I have none -- would only be conjecture without more information. The image was created at Thu Sep 30 16:18:00 2021 so does that coincide with creating a VM snapshot from GUI? CLI? What do the task logs show? Then, what was done to remove the VM snapshot? GUI? CLI? Task log? You might be...
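    A couple of commands that would help narrow it down (pool and image names are placeholders matching a typical PVE layout):
      rbd -p <pool> ls -l                # lists images together with their snapshots
      rbd snap ls <pool>/vm-100-disk-0   # shows any snapshots left on the image
      # only if it is confirmed stale: rbd snap rm <pool>/vm-100-disk-0@<snapname>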
  19. RokaKen

    Adding second corosync ring best practice

    The procedure is documented in Corosync Redundancy [0], specifically section 5.8.1. There is additional reference information here [1]. [0] https://pve.proxmox.com/pve-docs/pve-admin-guide.html#pvecm_redundancy [1] http://people.redhat.com/ccaulfie/docs/KnetCorosync.pdf
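    The end result in /etc/pve/corosync.conf looks roughly like this (addresses are examples; edit a copy and bump config_version as the guide describes):
      nodelist {
        node {
          name: pve1
          nodeid: 1
          quorum_votes: 1
          ring0_addr: 10.10.1.1
          # the new, second link
          ring1_addr: 10.20.1.1
        }
        # one node {} entry per cluster member, each with both addresses
      }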
