Dedicated zfs dataset for /var/log not working

caskote

New Member
Mar 6, 2022
Hello everyone,

I have a problem after configuring a ZFS dataset for /var/log.
After the change, no new log entries show up in "journalctl -xe".

I followed these links:

My procedure was as follows:

1. Fresh Proxmox installation.
2. Enable the systemd services as explained in the guides linked above:
Code:
systemctl enable zfs-import-cache.service 
systemctl enable zfs-mount.service
systemctl enable zfs.target
systemctl enable zfs-import.target
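For reference, whether these units actually ended up enabled can be checked like this (standard systemctl usage, nothing specific to my setup):
Code:
# verify that all four ZFS units from step 2 are enabled
systemctl is-enabled zfs-import-cache.service zfs-mount.service zfs.target zfs-import.target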
3. Create the ZFS log dataset and copy the existing logs:
Code:
zpool set cachefile=/etc/zfs/zpool.cache rpool
zfs create -p rpool/var/log
zfs set canmount=off rpool/var
zfs set mountpoint=/var/log-tmp rpool/var/log
rsync -ar /var/log/ /var/log-tmp
zfs set mountpoint=/var/log rpool/var/log
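After this step, the dataset state can be verified with standard ZFS and util-linux commands, e.g.:
Code:
# confirm the dataset properties and that it is actually mounted
zfs get mountpoint,canmount,mounted rpool/var/log
# show which filesystem currently backs /var/log
findmnt /var/log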
4. Use "zed" to mount the dataset early enough
Code:
mkdir /etc/zfs/zfs-list.cache
touch /etc/zfs/zfs-list.cache/rpool
zed -F
zfs set canmount=off rpool
zfs set canmount=on rpool
cat /etc/zfs/zfs-list.cache/rpool
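If zed picked up the change, the cache file should contain a tab-separated line for the dataset (the exact set of columns depends on the OpenZFS version), roughly like the example below, and zfs-mount-generator should then produce a mount unit for the path:
Code:
# abridged example line in /etc/zfs/zfs-list.cache/rpool
rpool/var/log	/var/log	on	...
# show the generated mount unit for the /var/log mountpoint
systemctl cat var-log.mount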
5. Restart a systemd service to check whether logging works (it does not):
Code:
# systemctl restart pve-firewall.service
# journalctl -u pve-firewall.service
-- Journal begins at Sun 2023-05-07 10:32:06 CEST, ends at Tue 2023-06-06 13:35:51 CEST. --
Jun 06 10:32:09 node01 systemd[1]: Starting Proxmox VE firewall...
Jun 06 10:32:09 node01 update-alternatives[1244]: update-alternatives: using /usr/sbin/ebtables-legacy to provide /usr/sbin/ebtables (ebtables) in manual mode
Jun 06 10:32:09 node01 update-alternatives[1251]: update-alternatives: using /usr/sbin/iptables-legacy to provide /usr/sbin/iptables (iptables) in manual mode
Jun 06 10:32:09 node01 update-alternatives[1252]: update-alternatives: using /usr/sbin/ip6tables-legacy to provide /usr/sbin/ip6tables (ip6tables) in manual mode
Jun 06 10:32:10 node01 pve-firewall[1258]: starting server
Jun 06 10:32:10 node01 systemd[1]: Started Proxmox VE firewall.
Jun 06 10:37:04 node01 systemd[1]: Stopping Proxmox VE firewall...
Jun 06 10:37:04 node01 pve-firewall[1258]: received signal TERM
Jun 06 10:37:04 node01 pve-firewall[1258]: server shutting down
Jun 06 10:37:04 node01 pve-firewall[1258]: clear PVE-generated firewall rules
Jun 06 10:37:04 node01 pve-firewall[1258]: server stopped
Jun 06 10:37:05 node01 systemd[1]: pve-firewall.service: Succeeded.
Jun 06 10:37:05 node01 systemd[1]: Stopped Proxmox VE firewall.
Jun 06 10:37:05 node01 systemd[1]: pve-firewall.service: Consumed 2.029s CPU time.
-- Boot bed3c4196e9b44e98fcd5cfce7719a80 --
Jun 06 13:35:48 node01 systemd[1]: Starting Proxmox VE firewall...
Jun 06 13:35:48 node01 pve-firewall[1556]: starting server
# journalctl -xe
Jun 06 11:36:08 node01 systemd[1784]: Startup finished in 58ms.
░░ Subject: User manager start-up is now complete
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ The user manager instance for user 0 has been started. All services queued
░░ for starting have been started. Note that other services might still be starting
░░ up or be started at any later time.
░░ 
░░ Startup of the manager took 58061 microseconds.
Jun 06 11:36:08 node01 systemd[1]: Started User Manager for UID 0.
░░ Subject: A start job for unit user@0.service has finished successfully
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ A start job for unit user@0.service has finished successfully.
░░ 
░░ The job identifier is 286.
Jun 06 11:36:08 node01 systemd[1]: Started Session 1 of user root.
░░ Subject: A start job for unit session-1.scope has finished successfully
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ A start job for unit session-1.scope has finished successfully.
░░ 
░░ The job identifier is 349.
Jun 06 11:36:20 node01 systemd[1]: Reloading.
Jun 06 11:36:20 node01 systemd[1]: Reloading.
Jun 06 11:36:20 node01 systemd[1]: Reloading.
Jun 06 11:36:20 node01 systemd[1]: Reloading.
Jun 06 11:36:20 node01 zed[1963]: eid=6 class=config_sync pool='rpool'
Jun 06 11:36:20 node01 systemd[1]: rpool-var-log.mount: Succeeded.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ The unit rpool-var-log.mount has successfully entered the 'dead' state.
Jun 06 11:36:20 node01 systemd[1784]: rpool-var-log.mount: Succeeded.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ The unit UNIT has successfully entered the 'dead' state.
Jun 06 11:36:20 node01 systemd[1784]: rpool-var.mount: Succeeded.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ The unit UNIT has successfully entered the 'dead' state.
Jun 06 11:36:20 node01 systemd[1]: rpool-var.mount: Succeeded.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ The unit rpool-var.mount has successfully entered the 'dead' state.
Jun 06 11:36:20 node01 systemd[1]: rpool-var-log.mount: Succeeded.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ The unit rpool-var-log.mount has successfully entered the 'dead' state.
Jun 06 11:36:20 node01 systemd[1784]: rpool-var-log.mount: Succeeded.
░░ Subject: Unit succeeded
░░ Defined-By: systemd
░░ Support: https://www.debian.org/support
░░ 
░░ The unit UNIT has successfully entered the 'dead' state.
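For additional context: since the rpool-var-log.mount unit above keeps entering the 'dead' state, it may be relevant where journald is actually writing after boot. These are standard checks I can run if needed (a sketch, nothing here is specific to my setup):
Code:
# is the dataset really mounted over /var/log right now?
findmnt /var/log
zfs get mounted,mountpoint rpool/var/log
# which journal files is journald actually using?
journalctl --header | grep -i 'file path'
ls -la /var/log/journal/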


I hope someone has an idea what could be causing this.

Thanks for your help.