Search results

  1.

    [SOLVED] Ceph cluster network vs. Ceph public network: which data is transferred over which network?

    Hi, I have completed the setup of a 6-node cluster running PVE and Ceph. This is my ceph configuration: root@ld3955:~# more /etc/pve/ceph.conf [global] auth client required = cephx auth cluster required = cephx auth service required = cephx cluster network =... (See the sketch after this results list for a minimal example of the public/cluster network settings.)
  2.

    [SOLVED] Ceph health error: pg current state unknown, last acting [76]

    Yes, there's a pool with ID 4. root@ld3955:~# ceph osd pool ls detail pool 4 'backup' replicated size 3 min_size 2 crush_rule 1 object_hash rjenkins pg_num 1024 pgp_num 1024 last_change 7626 flags hashpspool stripe_width 0 application rbd pool 6 'nvme' replicated size 3 min_size 2 crush_rule 2...
  3.

    [SOLVED] Ceph health error: pg current state unknown, last acting [76]

    Restarting osd.76 fixed the issue. Now, ceph health detail does not report this again. root@ld3955:~# ceph health detail HEALTH_WARN 2 pools have many more objects per pg than average; clock skew detected on mon.ld5506 MANY_OBJECTS_PER_PG 2 pools have many more objects per pg than average...
  4.

    [SOLVED] Ceph health error: pg current state unknown, last acting [76]

    Well, I did not wait until all OSDs were green in the WebUI before rebooting another node. What do you mean by "scaling problem in cluster"? I don't think there's an issue with the usage, though. root@ld3955:~# ceph -s cluster: id: 6b1b5117-6e08-4843-93d6-2da3cf8a6bae health...
  5.

    [SOLVED] Ceph health error: pg current state unknown, last acting [76]

    Hi, I had trouble with my ceph cluster after rebooting the nodes sequentially. This has since been fixed; however, there's an error message when executing ceph health detail: root@ld3955:~# ceph health detail HEALTH_WARN 2 pools have many more objects per pg than average; Reduced data...
  6.

    RBD storage 100% full

    I have modified the crush map and the ceph cluster runs stably again. Please check the attached document for this crush map; if you don't mind, please comment on it in case there's an error. Now, there's only one issue, but this is related to "unknown pgs" and I will open another thread for...
  7.

    [SOLVED] Creating storage type CephFS fails: error with cfs lock 'file-storage_cfg': mount error: exit code 2

    This "something wrong with Ceph" is identified: Crush Map was resetted to some kind of default. But this results in a faulty Ceph Cluster where device classes and rules are expected to be used in the crush map.
  8.

    RBD storage 100% full

    Update: This crush map does not reflect the device classes. Therefore it must be customized. I had already done this before, so my question is: how can the crush map be "reset" after a cluster node reboot? Why does this happen?
  9.

    RBD storage 100% full

    Actually I defined device classes. The output looks strange to me: root@ld3955:~# ceph osd crush tree --show-shadow ID CLASS WEIGHT TYPE NAME -52 nvme 0 root hdd~nvme -60 nvme 0 host ld5505-hdd~nvme -58 nvme 0 host ld5506-hdd~nvme -56 nvme 0...
  10.

    [SOLVED] Creating storage type CephFS fails: error with cfs lock 'file-storage_cfg': mount error: exit code 2

    OK. But this should not take hours. My conclusion is that something is wrong with Ceph.
  11.

    RBD storage 100% full

    root@ld3955:~# ceph -s cluster: id: 6b1b5117-6e08-4843-93d6-2da3cf8a6bae health: HEALTH_WARN 1 MDSs report slow metadata IOs 34080/25026 objects misplaced (136.178%) Reduced data availability: 5115 pgs inactive, 19 pgs peering...
  12.

    [SOLVED] Creating storage type CephFS fails: error with cfs lock 'file-storage_cfg': mount error: exit code 2

    Hi, after creating MDS and CephFS manually in my cluster, I want to create a storage of type cephfs. However, this fails with the error: error with cfs lock 'file-storage_cfg': mount error: exit code 2 This is the complete output: root@ld3955:~# pvesm add cephfs pve_cephfs mount error 2 = No such...
  13.

    RBD storage 100% full

    Hello! I have successfully set up a PVE cluster with Ceph. After creating ceph pools and the related RBD storage, I moved the VM's drive to this newly created RBD storage. Due to some issues I needed to reboot all cluster nodes one after the other. Since then, the PVE storage reports that all RBD is...
  14.

    [SOLVED] Howto setup watchdog?

    Hi, I'm running a PVE cluster on 6 nodes. In total, 2 different server models are used, but all are from Lenovo. In the server configuration I can define 3 types of server timeouts: OS Watchdog, Loader Watchdog, Enable Power Off Delay. I read here that by default all hardware watchdog modules are...
  15.

    [SOLVED] Need some advice for setting up multinode HA cluster

    Right. I managed to add all nodes to the cluster successfully using this command: pvecm add ld3955-corosync1 --ring0_addr 172.16.0.x --ring1_addr 172.16.1.x root@ld3955:~# pvecm status Quorum information ------------------ Date: Wed May 22 14:53:39 2019 Quorum provider...
  16.

    [SOLVED] Need some advice for setting up multinode HA cluster

    Hi, I want to set up a multinode HA cluster. I've completed the OS and PVE installation and configured separate networks; this results in the following /etc/hosts: root@ld3955:~# more /etc/hosts 127.0.0.1 localhost.localdomain localhost 10.97.206.91 ld3955.example.com ld3955 # The...
  17.

    [SOLVED] Inconsistency Grub Menu vs. /boot/grub/grub.cfg

    Hello! I have a question regarding the content of the Grub menu vs. /boot/grub/grub.cfg. In my case the content is inconsistent, and this is causing an issue with booting a BTRFS snapshot, because the required snapshot won't boot with the options displayed in the Grub menu. I'm running these software...
  18.

    Why is storage type rbd only for Disk-Image + Container

    I never said that I want to use the same storage for backup, Disk-Image, and Container. That makes no sense at all. But I want to create a storage of type rbd to be used for backups. This would create another rbd in a specific pool that is only used for backups.
  19.

    Why is storage type rbd only for Disk-Image + Container

    Hm... I don't fully understand your response, but maybe my question was not clear. My use case is this: running a PVE + Ceph cluster, I want to store backups in an RBD. Creating an RBD storage with PVE only allows me to select the content types Disk-Image and Container. As a workaround I created an RBD...
  20.

    Why is storage type rbd only for Disk-Image + Container

    Hello! Can you please share some information on why storage type rbd is only available for Disk-Image and Container? I would prefer to dump a backup to another rbd. THX
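
For the first result above (Ceph cluster network vs. public network), here is a minimal sketch of how the two networks are typically split in /etc/pve/ceph.conf. The subnets are placeholders and not values taken from that thread; the relevant options are "public network" (monitor and client traffic) and "cluster network" (OSD replication, recovery and heartbeat traffic).

    [global]
         # client and monitor traffic (placeholder subnet)
         public network = 10.97.206.0/24
         # OSD replication, recovery and heartbeat traffic (placeholder subnet)
         cluster network = 192.168.228.0/24

Splitting the two keeps replication and recovery traffic off the client-facing network; if no cluster network is defined, Ceph sends all traffic over the public network.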
