Since timesyncd is installed by default, it was my NTP daemon at the beginning.
But I always had warnings from Ceph complaining about time drift between the servers.
Timesyncd wasn't able to keep the servers synchronized.
When I switched to chrony, no more problems. Well, at least until I...
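In case it's useful, the switch itself is just a couple of commands (a rough sketch; on a default PVE/Debian install chrony takes over once started, but I disabled timesyncd explicitly to be safe):
apt install chrony
systemctl disable --now systemd-timesyncd
chronyc tracking    # check that chrony is actually the one keeping time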
I have 3 servers like this:
- they're synchronized with chrony to public NTP servers, and I never noticed any NTP problem. Chrony synchronizes the system clock to the hardware clock every 11 minutes by default (see the config sketch after this list).
- the hardware is an ASRock Rack X470D4U motherboard with an AMD Ryzen 5 2600 CPU in each
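The 11-minute behaviour comes from the rtcsync directive, which Debian's default /etc/chrony/chrony.conf already contains; the relevant lines look roughly like this (the pool name is just the Debian default, replace with whatever servers you use):
pool 2.debian.pool.ntp.org iburst
# let the kernel copy the system time to the RTC every 11 minutes
rtcsync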
Thanks for the reply, I was able to clear the warning.
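In case someone else hits this, clearing it boils down to archiving the crash reports, something along these lines (the crash IDs come from the ls output):
ceph crash ls
ceph crash archive <crash_id>
# or acknowledge everything at once
ceph crash archive-all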
By the way, I discovered there was another crash for the same reason a few days earlier:
{
"os_version_id": "10",
"assert_condition": "z >= signedspan::zero()",
"utsname_release": "5.0.21-2-pve",
"os_name": "Debian GNU/Linux...
Hello,
Yesterday, Ceph crashed on one server and now I have a health warning on the dashboard.
Please see logs attached.
root@proxmox01 [~] # pveversion
pve-manager/6.1-5/9bf06119 (running kernel: 5.3.13-1-pve)
What can I do about this problem, and how can I clear the warning?
It doesn't work with `vg_ceph_baie1_01/lv_ceph`. I got the same error.
I know Ceph is supposed to work only with physical disks, but as I said, this is a lab environment where I only have SAN storage.
So is there no way to install an OSD on a /dev/mapper device?
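For reference, the VG/LV was created on top of the multipath device more or less like this (names are from my setup):
pvcreate /dev/mapper/LUN_ceph_baie1_01
vgcreate vg_ceph_baie1_01 /dev/mapper/LUN_ceph_baie1_01
lvcreate -l 100%FREE -n lv_ceph vg_ceph_baie1_01
ceph-volume lvm prepare --data vg_ceph_baie1_01/lv_ceph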
OK, thanks for the reply.
I generated the new keyring but still got an error:
root@server:~# ceph-volume lvm prepare --data /dev/mapper/LUN_ceph_baie1_01
Running command: /bin/ceph-authtool --gen-print-key
Running command: /bin/ceph --cluster ceph --name client.bootstrap-osd --keyring...
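(For completeness, the keyring I regenerated is the bootstrap-osd one, roughly like this, assuming the standard path:)
ceph auth get client.bootstrap-osd -o /var/lib/ceph/bootstrap-osd/ceph.keyring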
For example, with the multipath device /dev/mapper/LUN_ceph_bay1_01, the underlying paths are /dev/sdc and /dev/sdd.
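(The mapping between the LUN and its underlying paths can be checked with something like:)
multipath -ll LUN_ceph_bay1_01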
This is how I could create an OSD before, on Proxmox 5 / Ceph 12:
ceph-disk prepare /dev/mapper/LUN_ceph_bay1_01 --cluster-uuid CLUSTER_UUID_HERE
ceph-disk activate...
Hello,
With the previous Proxmox 5 version, I was able to create a Ceph OSD on a multipath LUN with the ceph-disk command.
Now this command has disappeared in Proxmox 6.
I need to test Ceph in my lab, which only has SAN storage.
How can I create an OSD with a /dev/mapper/xxxxxx device? The ceph-volume...
OK, I found the solution by reading your embedded docs at https://my_ip:8006/pve-docs/chapter-sysadmin.html#sysboot
I had to run pve-efiboot-tool init /dev/nvme0n1p2 to initialize the ESP partition.
I don't know why it wasn't already initialized during installation.
Hope it helps someone...
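In case it's useful, the full sequence was roughly this (the ESP is the second partition on my NVMe disk; check yours with lsblk first):
lsblk -o NAME,SIZE,FSTYPE,PARTTYPE
pve-efiboot-tool init /dev/nvme0n1p2
pve-efiboot-tool refresh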
Hello,
Just installed a fresh PVE 6.0 on 2 servers; both are identical:
root@proxmox02:~# pveversion
pve-manager/6.0-2/865bbe32 (running kernel: 5.0.15-1-pve)
I installed with ZFS for the root partition:
root@proxmox02:~# zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG...