Recent content by Mino

  1. CEPH Health Warn

    Little update: a ceph-mon crashed again: { "os_version_id": "10", "assert_condition": "z >= signedspan::zero()", "utsname_release": "5.3.13-1-pve", "os_name": "Debian GNU/Linux 10 (buster)", "entity_name": "mon.proxmox03", "assert_file"...
  2. CEPH Health Warn

    As timesyncd is installed by default, it was my NTP daemon at the beginning. But I always had warnings from CEPH complaining that there was time drift between the servers; timesyncd wasn't able to keep the servers synchronized. When I switched to chrony, no more problems. Well, at least until I...
  3. CEPH Health Warn

    I have 3 servers like these: - they're synchronized with chrony to public NTP, and I have never noticed any NTP problem. Chrony synchronizes the system clock to the hardware clock every 11 minutes by default. - the hardware is an ASROCKRACK X470D4U motherboard with an AMD Ryzen 5 2600 CPU for each
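The 11-minute hardware-clock sync mentioned above is chrony's `rtcsync` behaviour (it enables the kernel's 11-minute RTC update mode). A minimal `/etc/chrony/chrony.conf` along those lines might look like this — a sketch only; the pool entry is a placeholder, not taken from the thread:

```
# /etc/chrony/chrony.conf -- minimal sketch
# Placeholder pool; substitute your preferred public NTP servers
pool 2.debian.pool.ntp.org iburst

# Step the clock on large offsets during the first few updates
makestep 1.0 3

# Let the kernel sync the hardware (RTC) clock from the system
# clock, roughly every 11 minutes
rtcsync
```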
  4. CEPH Health Warn

    Thanks for the reply, I was able to clear the warning. By the way, I discovered there was another crash for the same reason a few days earlier: { "os_version_id": "10", "assert_condition": "z >= signedspan::zero()", "utsname_release": "5.0.21-2-pve", "os_name": "Debian GNU/Linux...
  5. CEPH Health Warn

    Hello, Yesterday CEPH crashed on one server and now I have a health warning on the dashboard. Please see the logs attached. root@proxmox01 [~] # pveversion pve-manager/6.1-5/9bf06119 (running kernel: 5.3.13-1-pve) What can I do about this problem, and how do I clear the warning?
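For anyone landing on this thread: on Ceph Nautilus (the release shipped with PVE 6) the health warning that follows a daemon crash comes from the crash module, and it is cleared by archiving the crash reports. A sketch of the usual sequence (the crash ID is a placeholder):

```shell
# List recent crash reports (IDs and timestamps)
ceph crash ls

# Inspect a single report in detail -- <crash-id> is a placeholder
ceph crash info <crash-id>

# Acknowledge all crashes and clear the RECENT_CRASH health warning
ceph crash archive-all
```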
  6. Create Ceph OSD on SAN in Proxmox 6

    It doesn't work with `vg_ceph_baie1_01/lv_ceph`; I got the same error. I know CEPH is supposed to work only with physical disks, but as I said, this is a lab environment where I only have SAN storage. So there is no way to install an OSD on a /dev/mapper device?
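For context, `ceph-volume` expects either a whole block device or an LVM logical volume, so the usual workaround for multipath/SAN storage is to build the LVM stack on the /dev/mapper device by hand and pass the volume in `vg/lv` form. A sketch using the device and volume names from this thread — note the poster reports this still failed in their setup:

```shell
# Build an LVM stack on top of the multipath device
pvcreate /dev/mapper/LUN_ceph_baie1_01
vgcreate vg_ceph_baie1_01 /dev/mapper/LUN_ceph_baie1_01
lvcreate -l 100%FREE -n lv_ceph vg_ceph_baie1_01

# Hand the logical volume to ceph-volume as vg/lv, not as a /dev path
ceph-volume lvm create --data vg_ceph_baie1_01/lv_ceph
```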
  7. Create Ceph OSD on SAN in Proxmox 6

    OK, thanks for the reply. I generated the new keyring but still got an error: root@server:~# ceph-volume lvm prepare --data /dev/mapper/LUN_ceph_baie1_01 Running command: /bin/ceph-authtool --gen-print-key Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring...
  8. Create Ceph OSD on SAN in Proxmox 6

    For example, with the multipath device /dev/mapper/LUN_ceph_bay1_01, the underlying paths are /dev/sdc and /dev/sdd. This is how I could create an OSD before, on Proxmox 5 / Ceph 12: ceph-disk prepare /dev/mapper/LUN_ceph_bay1_01 --cluster-uuid CLUSTER_UUID_HERE ceph-disk activate...
  9. Create Ceph OSD on SAN in Proxmox 6

    Hello, With the previous Proxmox 5 version I was able to create a Ceph OSD on a multipath LUN with the ceph-disk command. That command has disappeared in Proxmox 6. I need to test Ceph in my lab, which only has SAN storage. How can I create an OSD with a /dev/mapper/xxxxxx device? The ceph-volume...
  10. [PVE 6.0] Cannot set ZFS arc_min and arc_max

    OK, I found the solution by reading the embedded docs at https://my_ip:8006/pve-docs/chapter-sysadmin.html#sysboot I had to run pve-efiboot-tool init /dev/nvme0n1p2 to initialize the ESP partition. I don't know why it wasn't already initialized during installation. Hope it helps someone ...
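The fix described in that post can be sketched as the following two steps, run as root on the PVE host — the partition path is the one from the post and will differ on other machines:

```shell
# Initialise the EFI System Partition so pve-efiboot-tool manages it
# (device path taken from the post; adjust for your disk layout)
pve-efiboot-tool init /dev/nvme0n1p2

# Copy the current kernels and initrds onto the ESP
pve-efiboot-tool refresh
```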
  11. [PVE 6.0] Cannot set ZFS arc_min and arc_max

    This is not working: root@proxmox02:~# grep 'c_m[ia]' /proc/spl/kstat/zfs/arcstats c_min 4 1053766656 c_max 4 16860266496 root@proxmox02:~# cat /sys/module/zfs/parameters/zfs_arc_min 0 root@proxmox02:~# cat...
  12. [PVE 6.0] Cannot set ZFS arc_min and arc_max

    I even tried removing the comments: root@proxmox02:~# cat /etc/modprobe.d/zfs.conf options zfs zfs_arc_min=536870912 options zfs zfs_arc_max=4294967296 Running update-initramfs in verbose mode shows the file is read: root@proxmox02:~# update-initramfs -uv | grep zfs.conf Adding config...
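The procedure this thread works through can be sketched end to end as follows (ARC values taken from the post: 512 MiB min, 4 GiB max). On a ZFS-root system the initramfs rebuild is what propagates the module options to early boot, which is why the on-ESP boot files must also be in order:

```shell
# /etc/modprobe.d/zfs.conf -- ARC limits from the post
echo "options zfs zfs_arc_min=536870912"  >  /etc/modprobe.d/zfs.conf
echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf

# Rebuild the initramfs so the options are applied at early boot
update-initramfs -u -k all

# After a reboot, verify the limits actually took effect
grep 'c_m[ia]' /proc/spl/kstat/zfs/arcstats
```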
  13. [PVE 6.0] Cannot set ZFS arc_min and arc_max

    Hello, I just installed a fresh PVE 6.0 on 2 servers; the 2 are identical: root@proxmox02:~# pveversion pve-manager/6.0-2/865bbe32 (running kernel: 5.0.15-1-pve) I installed with ZFS for the root partition: root@proxmox02:~# zpool list NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG...