Recent content by cyp

  1. Ceph Quincy - rbd: listing images failed after 17.2.4 to 17.2.5 upgrade

    I think you are right, the problem must come from the name (https://docs.ceph.com/en/latest/rados/operations/pools/#pool-names). I don't remember how it was created, probably just by adding storage on top of this already existing default pool (maybe the Proxmox interface should add something to prevent...
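
    A minimal sketch of how this could be checked, assuming the storage really was created on the pre-existing ".mgr" pool; the application tags may help show whether the pool was ever meant for rbd use:

    # list pools with their details, then show which applications are enabled on the suspect pool
    ceph osd pool ls detail
    ceph osd pool application get .mgr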
  2. Ceph Quincy - rbd: listing images failed after 17.2.4 to 17.2.5 upgrade

    I don't find any other pools: root@pve24:~# ceph osd lspools 1 .mgr root@pve24:~# The name with a dot at the beginning is a bit weird, so I also think something happened to the name during the upgrade, but the /etc/pve/storage.cfg backup shows it has been named like that since the install. rbd: global...
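
    For reference, a hypothetical /etc/pve/storage.cfg entry of the kind described might look like this (the storage id "global" and pool ".mgr" come from the thread, the remaining fields are assumptions):

    rbd: global
        content images
        pool .mgr
        krbd 0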
  3. Ceph Quincy - rbd: listing images failed after 17.2.4 to 17.2.5 upgrade

    As I understand it, these are two ways to write the same thing: root@pve22:~# rbd -p .mgr info vm-386-disk-0 rbd: error opening image vm-386-disk-0: (2) No such file or directory root@pve22:~# rbd info .mgr/vm-386-disk-0 rbd: error opening image vm-386-disk-0: (2) No such file or directory
  4. Ceph Quincy - rbd: listing images failed after 17.2.4 to 17.2.5 upgrade

    rbd_id objects do not seem mandatory; according to the doc, the rbd_directory object maps image names to ids (https://docs.ceph.com/en/quincy/dev/rbd-layering/#renaming). So I took a look at the rbd_directory metadata, and it also seems ok, the volume names match the existing headers: root@pve24:~# rados -p .mgr...
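
    A sketch of how that mapping can be inspected with rados (pool and image names from the thread; the "name_<image>" key layout is assumed from the linked doc):

    # dump the whole name <-> id mapping stored in the rbd_directory object
    rados -p .mgr listomapvals rbd_directory
    # look up a single image id by name
    rados -p .mgr getomapval rbd_directory name_vm-386-disk-0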
  5. Ceph Quincy - rbd: listing images failed after 17.2.4 to 17.2.5 upgrade

    Some additional information: I tried to use the rados command to get more details about the pool data. The number of rbd_header objects matches the number of volumes. root@pve24:~# rados -p .mgr ls | grep rbd_header rbd_header.ad6c3c3396b67 rbd_header.e348823bba989 rbd_header.12376b7eef267f...
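
    A rough way to repeat that cross-check, assuming the expected volume count is known from the Proxmox side:

    # count rbd_header objects in the pool and compare with the number of disks the VMs should have
    rados -p .mgr ls | grep -c '^rbd_header\.'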
  6. Ceph Quincy - rbd: listing images failed after 17.2.4 to 17.2.5 upgrade

    Hi, I have upgraded a Ceph Quincy cluster from 17.2.4 to 17.2.5. It seems to run fine after restarting all monitors, managers and OSDs, but I get an error if I try to back up or start a VM: ERROR: Backup of VM 386 failed - no such volume 'global:vm-386-disk-0' I also get an error if I try to look...
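
    The usual post-upgrade sanity checks for this kind of report might look like the sketch below (storage id "global" and pool ".mgr" are taken from the error message and the rest of the thread):

    ceph -s                 # overall cluster health
    ceph versions           # confirm every daemon is on 17.2.5
    rbd ls -p .mgr          # the listing that fails
    pvesm list global       # the same listing through the Proxmox storage layer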
  7. Ceph 16.2.7 Pacific cluster Crash

    @francoisd It's still unresolved for me, but I have opened an issue on the Ceph bug tracker; maybe you can find some useful info in it: https://tracker.ceph.com/issues/53814
  8. Ceph 16.2.7 Pacific cluster Crash

    Hi all, a few days after an Octopus to Pacific upgrade, I have a crashed Ceph cluster. Most of the OSDs are down (6 out of 8) and crash on start. It looks a lot like https://forum.proxmox.com/threads/ceph-16-2-pacific-cluster-crash.92367/ but switching bluestore_allocator and bluefs_allocator to bitmap...
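
    The allocator switch referenced above is typically applied with a snippet like this in /etc/pve/ceph.conf (assumed location on Proxmox), followed by an OSD restart:

    [osd]
        # force the bitmap allocator instead of the default one
        bluestore_allocator = bitmap
        bluefs_allocator = bitmap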
  9. PVE 7.0 BUG: kernel NULL pointer dereference, address: 00000000000000c0-PF:error_code(0x0000) - No web access no ssh

    To give more info on my case, it was two fresh reinstalls on servers previously running fine on Proxmox 6. The servers were removed from a PVE6/Ceph cluster, reinstalled from the PVE7 ISO and joined to another PVE7/Ceph cluster (made of the same hardware servers running fine with the previous kernel...
  10. PVE 7.0 BUG: kernel NULL pointer dereference, address: 00000000000000c0-PF:error_code(0x0000) - No web access no ssh

    Hi, same problem here, reproduced on two different machines (same hardware) with 5.11.22-4-pve. It occurs after a few days. Sep 7 09:11:51 pve12 kernel: [65320.444899] BUG: kernel NULL pointer dereference, address: 0000000000000000 Sep 7 09:11:51 pve12 kernel: [65320.444941] #PF: supervisor...
  11. Proxmox Backup Server 2.0 released!

    I have the Proxmox Backup Server repository, and the Proxmox services have been upgraded to 2.0 and are working fine. I just had a closer look and the Proxmox kernel is available when I do an apt search; it seems it was just not installed during the upgrade. I will try to install it and reboot. Maybe it's come...
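
    A sketch of what that could look like, assuming pve-kernel-5.11 is the matching meta-package for this release:

    apt search pve-kernel        # confirm which Proxmox kernel packages are available
    apt install pve-kernel-5.11  # hypothetical package name, adjust to what apt search shows
    reboot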
  12. Proxmox Backup Server 2.0 released!

    Also, on my upgraded server the kernel version is 5.10. I think it is the regular Bullseye kernel and not a Proxmox (5.11) one. Is that normal? The original install came from a template of my dedicated server provider (Scaleway) and I don't know how it was built (Proxmox ISO, Debian ISO + Proxmox...
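
    A quick check for this, as a sketch (the package name pattern is an assumption):

    uname -r                 # kernel currently running
    dpkg -l 'pve-kernel*'    # whether any Proxmox kernel package is installed at all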
  13. Proxmox Backup Server 2.0 released!

    Thanks a lot for this release. To upgrade my server I needed to add the Bullseye repository key: wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg Maybe it can be added to...
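
    For context, the full Bullseye repository switch might look like this sketch; the key download is the command from the post, and the repository line assumes the pbs-no-subscription repo is the one in use:

    wget https://enterprise.proxmox.com/debian/proxmox-release-bullseye.gpg -O /etc/apt/trusted.gpg.d/proxmox-release-bullseye.gpg
    echo "deb http://download.proxmox.com/debian/pbs bullseye pbs-no-subscription" > /etc/apt/sources.list.d/pbs.list
    apt update && apt dist-upgrade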
  14. Linux Kernel 5.4 for Proxmox VE

    Installed on a few servers and on a cluster of 5 nodes (5 servers like this: https://www.scaleway.com/en/dedibox/pro/pro-5-l/ ) with Ceph. Everything has been running fine and stable for two weeks.
  15. high memory usage during backup

    Thanks for the reply, I will try that on the server using ZFS (it's the most problematic one; the other one only went down once after the upgrade but seems more stable since that crash).