I am trying to reduce log writes to my consumer SSDs. Based on the Ceph documentation, I can redirect the Ceph logs to syslog by editing /etc/ceph/ceph.conf and adding:
[global]
log_to_syslog = true
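To cut writes further, I am also considering disabling Ceph's file-based logging at the same time. This is just a sketch of what I understand the fuller [global] section would look like based on the Ceph logging docs; the log_to_file and mon_cluster_log_to_* lines are my assumption, not something I have tested yet:
Code:
[global]
log_to_syslog = true
# stop writing per-daemon logs under /var/log/ceph (assumption from the Ceph logging docs)
log_to_file = false
# the monitors also keep a separate cluster log; route it to syslog too
mon_cluster_log_to_syslog = true
mon_cluster_log_to_file = false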
Is this the right way to do it?
I already have Journald writing to memory with Storage=volatile in /etc/systemd/journald.conf
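For reference, the relevant section of my /etc/systemd/journald.conf looks like this (the commented RuntimeMaxUse cap is just an option I found in man journald.conf, not something I have actually set):
Code:
[Journal]
Storage=volatile
# Optional: cap the size of the RAM-backed journal in /run/log/journal (not set on my system)
#RuntimeMaxUse=64M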
If I run systemctl status systemd-journald I get:
Code:
Dec 05 17:20:27 N1 systemd-journald[386]: Journal started
Dec 05 17:20:27 N1 systemd-journald[386]: Runtime Journal (/run/log/journal/077b1ca4f22f451ea08cb39fea071499) is 8.0M, max 641.7M, 633.7M free.
/run is in RAM (mounted as tmpfs), so /run/log/journal is in RAM as well:
Code:
root@N1:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 0 32G 0% /dev
tmpfs 6.3G 55M 6.3G 1% /run
/dev/mapper/pve-root 450G 21G 410G 5% /
tmpfs 32G 66M 32G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
efivarfs 64K 36K 23K 62% /sys/firmware/efi/efivars
/dev/sda2 1022M 12M 1011M 2% /boot/efi
/dev/fuse 128M 32K 128M 1% /etc/pve
tmpfs 32G 28K 32G 1% /var/lib/ceph/osd/ceph-0
Then, if I run journalctl -n 10 I get the following:
Code:
Dec 06 09:56:15 N1 ceph-mon[1064]: 2024-12-06T09:56:15.000-0500 7244ac0006c0 0 log_channel(audit) log [DBG] : from='client.? 10.10.10.6:0/522337331' entity='client.admin' cmd=[{">
Dec 06 09:56:15 N1 ceph-mon[1064]: 2024-12-06T09:56:15.689-0500 7244af2006c0 1 mon.N1@0(leader).osd e614 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_allo>
Dec 06 09:56:20 N1 ceph-mon[1064]: 2024-12-06T09:56:20.690-0500 7244af2006c0 1 mon.N1@0(leader).osd e614 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_allo>
Dec 06 09:56:24 N1 ceph-mon[1064]: 2024-12-06T09:56:24.156-0500 7244ac0006c0 0 mon.N1@0(leader) e3 handle_command mon_command({"format":"json","prefix":"df"} v 0)
Dec 06 09:56:24 N1 ceph-mon[1064]: 2024-12-06T09:56:24.156-0500 7244ac0006c0 0 log_channel(audit) log [DBG] : from='client.? 10.10.10.6:0/564218892' entity='client.admin' cmd=[{">
Dec 06 09:56:25 N1 ceph-mon[1064]: 2024-12-06T09:56:25.692-0500 7244af2006c0 1 mon.N1@0(leader).osd e614 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_allo>
Dec 06 09:56:30 N1 ceph-mon[1064]: 2024-12-06T09:56:30.694-0500 7244af2006c0 1 mon.N1@0(leader).osd e614 _set_new_cache_sizes cache_size:1020054731 inc_alloc: 348127232 full_allo>
There are plenty of ceph-mon entries, so I think it is safe to assume the Ceph logs are going to syslog/journald and are therefore kept in RAM.
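To double-check, I am planning to confirm the settings the monitor is actually running with and make sure nothing is still being appended under /var/log/ceph (the mon ID matching the hostname N1 is an assumption on my part):
Code:
# show the effective logging settings of the running monitor
root@N1:~# ceph config show mon.N1 | grep -E 'log_to_(syslog|file)'
# check whether anything is still growing in the on-disk log directory
root@N1:~# ls -lh /var/log/ceph/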
Any feedback will be appreciated, thank you