Jan 13 19:23:38 pve systemd[1]: Started pve-cluster.service - The Proxmox VE cluster filesystem.

root@pve:/etc/pve/nodes/pve# qm list
  VMID NAME   STATUS   MEM(MB) BOOTDISK(GB) PID
   100 server running  16384   782.00       1756
   101 server running  16384   315.00       1891

Jan 14 00:35:48 pve pmxcfs[1603]: [database] crit: commit transaction failed: database or disk is full
Jan 14 00:35:48 pve pmxcfs[1603]: [database] crit: rollback transaction failed: cannot rollback - no transaction is active
2025-01-14T06:31:47.888316+01:00 pve pve-ha-lrm[1737]: unable to write lrm status file - unable to open file '/etc/pve/nodes/pve/lrm_status.tmp.1737' - Input/output error
2025-01-14T06:31:52.889111+01:00 pve pve-ha-lrm[1737]: unable to write lrm status file - unable to open file '/etc/pve/nodes/pve/lrm_status.tmp.1737' - Input/output error
client_loop: send disconnect: Broken pipe

root@pve:/# cd /tmp
root@pve:/tmp# df -h .
Filesystem        Size  Used Avail Use% Mounted on
rpool/ROOT/pve-1  1.2T  869G  302G  75% /

root@pve:/# cd /etc/pve/nodes/pve
root@pve:/etc/pve/nodes/pve# df -h .
Filesystem      Size  Used Avail Use% Mounted on
/dev/fuse       128M   16K  128M   1% /etc/pve

root@pve:/etc/pve/nodes/pve# ls -la
-rw-r----- 1 root www-data   83 Jan 14 00:35 lrm_status
drwxr-xr-x 2 root www-data    0 Sep 20 13:30 lxc
drwxr-xr-x 2 root www-data    0 Sep 20 13:30 openvz
drwx------ 2 root www-data    0 Sep 20 13:30 priv
-rw-r----- 1 root www-data 1704 Sep 20 13:30 pve-ssl.key
-rw-r----- 1 root www-data 1793 Sep 20 13:30 pve-ssl.pem
drwxr-xr-x 2 root www-data    0 Sep 20 13:30 qemu-server
-rw-r----- 1 root www-data  556 Jan 13 19:23 ssh_known_hosts

root@pve:/etc/pve/nodes/pve# touch testdatei
touch: cannot touch 'testdatei': Input/output error

root@pve:/# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     3.75T   302G   166K  /rpool
rpool/ROOT                 868G   302G   153K  /rpool/ROOT
rpool/ROOT/pve-1           868G   302G   868G  /
rpool/data                2.90T   302G   153K  /rpool/data
rpool/data/vm-100-disk-0   313G   302G   313G  -
rpool/data/vm-100-disk-1   985G   302G   985G  -
rpool/data/vm-101-disk-0   303G   302G   303G  -
rpool/data/vm-101-disk-1  1.34T   302G  1.34T  -
rpool/var-lib-vz           204K   302G   204K  /var/lib/vz

root@pve:/# pvesh ls /cluster/backup
-rw-d backup-42d365f8-fa32

root@pve:/# pvesh get /cluster/backup/backup-42d365f8-fa32
┌────────────────┬──────────────────────┐
│ key            │ value                │
╞════════════════╪══════════════════════╡
│ compress       │ zstd                 │
├────────────────┼──────────────────────┤
│ enabled        │ 1                    │
├────────────────┼──────────────────────┤
│ fleecing       │ {"enabled":"0"}      │
├────────────────┼──────────────────────┤
│ id             │ backup-42d365f8-fa32 │
├────────────────┼──────────────────────┤
│ mode           │ snapshot             │
├────────────────┼──────────────────────┤
│ node           │ pve                  │
├────────────────┼──────────────────────┤
│ notes-template │ {{guestname}}        │
├────────────────┼──────────────────────┤
│ repeat-missed  │ 0                    │
├────────────────┼──────────────────────┤
│ schedule       │ mon..fri 00:00       │
├────────────────┼──────────────────────┤
│ storage        │ usb                  │
├────────────────┼──────────────────────┤
│ type           │ vzdump               │
├────────────────┼──────────────────────┤
│ vmid           │ 100,101              │
└────────────────┴──────────────────────┘

root@pve:/# pvesh delete /cluster/backup/backup-42d365f8-fa32
trying to acquire cfs lock 'file-vzdump_cron' ...
trying to acquire cfs lock 'file-vzdump_cron' ...
trying to acquire cfs lock 'file-vzdump_cron' ...
trying to acquire cfs lock 'file-vzdump_cron' ...
trying to acquire cfs lock 'file-vzdump_cron' ...
trying to acquire cfs lock 'file-vzdump_cron' ...
trying to acquire cfs lock 'file-vzdump_cron' ...
trying to acquire cfs lock 'file-vzdump_cron' ...
trying to acquire cfs lock 'file-vzdump_cron' ...
cfs-lock 'file-vzdump_cron' error: got lock request timeout
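For context: /etc/pve is not a normal directory but the pmxcfs FUSE mount, and its backing store is an SQLite database under /var/lib/pve-cluster on the root filesystem (the failed service start further down names /var/lib/pve-cluster/config.db explicitly). When the root filesystem runs out of space, pmxcfs can no longer commit transactions, and every write to /etc/pve fails with I/O errors even though the 128M /dev/fuse mount itself looks almost empty. A quick way to confirm this on a single node (a sketch, standard paths assumed):

# /etc/pve reports only its own virtual usage; the backing database
# lives on the root filesystem, so that is the one to check:
df -h / /etc/pve
ls -lh /var/lib/pve-cluster/config.db

# pmxcfs logs the "database or disk is full" errors to the journal:
journalctl -u pve-cluster --since today | tail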
root@pve:/# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev               32G     0   32G   0% /dev
tmpfs             6.3G  2.9M  6.3G   1% /run
rpool/ROOT/pve-1  1.2T  869G  302G  75% /
tmpfs              32G   46M   32G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
efivarfs          512K   84K  424K  17% /sys/firmware/efi/efivars
rpool/var-lib-vz  302G  256K  302G   1% /var/lib/vz
rpool             302G  256K  302G   1% /rpool
rpool/ROOT        302G  256K  302G   1% /rpool/ROOT
rpool/data        302G  256K  302G   1% /rpool/data
/dev/fuse         128M   16K  128M   1% /etc/pve
tmpfs             6.3G     0  6.3G   0% /run/user/1000

nano /etc/pve/jobs.cfg

vzdump: backup-42d365f8-fa32
        schedule mon..fri 00:00
        compress zstd
        enabled 1
        fleecing 0
        mode snapshot
        node pve
        notes-template {{guestname}}
        repeat-missed 0
        storage usb
        vmid 100,101

root@pve:/# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev               32G     0   32G   0% /dev
tmpfs             6.3G  1.5M  6.3G   1% /run
rpool/ROOT/pve-1  869G  869G     0 100% /
tmpfs              32G     0   32G   0% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
efivarfs          512K   84K  424K  17% /sys/firmware/efi/efivars
rpool/var-lib-vz  256K  256K     0 100% /var/lib/vz
rpool             256K  256K     0 100% /rpool
rpool/ROOT        256K  256K     0 100% /rpool/ROOT
rpool/data        256K  256K     0 100% /rpool/data
tmpfs             6.3G     0  6.3G   0% /run/user/1000

Jan 16 09:10:49 pve systemd[1]: Starting pve-cluster.service - The Proxmox VE cluster filesystem...
Jan 16 09:10:49 pve pmxcfs[1542]: [database] crit: chmod failed: No space left on device
Jan 16 09:10:49 pve pmxcfs[1542]: [main] crit: memdb_open failed - unable to open database '/var/lib/pve-cluster/config.db'
Jan 16 09:10:49 pve pmxcfs[1542]: [main] notice: exit proxmox configuration filesystem (-1)
Jan 16 09:10:49 pve systemd[1]: pve-cluster.service: Control process exited, code=exited, status=255/EXCEPTION
Jan 16 09:10:49 pve systemd[1]: pve-cluster.service: Failed with result 'exit-code'.
Jan 16 09:10:49 pve systemd[1]: Failed to start pve-cluster.service - The Proxmox VE cluster filesystem.
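With 0 bytes available even pve-cluster itself can no longer start ("chmod failed: No space left on device"). On a completely full ZFS pool it helps to free a small amount of space before anything else, since deletes on a copy-on-write filesystem also need room for new metadata. A minimal recovery sketch, assuming standard Debian/PVE log locations:

# Reclaim a little space so services can start again:
journalctl --vacuum-size=100M   # shrink the persistent systemd journal
rm -f /var/log/*.gz             # drop rotated, compressed logs

# Then see where the pool's space actually went, and retry the service:
zfs list -o space rpool
systemctl start pve-cluster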
Thank you very much for your fast reply.

please post
- storage.cfg
- jobs.cfg
- /etc/vzdump.conf
something must still be misconfigured...
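To keep the posted output short, comments and blank lines can be stripped; the reply further down uses exactly this pattern:

egrep -v '#|^ *$' /etc/pve/storage.cfg /etc/pve/jobs.cfg /etc/vzdump.conf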
# vzdump default settings
#tmpdir: DIR
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#performance: [max-workers=N][,pbs-entries-max=N]
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#stdexcludes: BOOLEAN
#mailto: ADDRESSLIST
#prune-backups: keep-INTERVAL=N[,...]
#script: FILENAME
#exclude-path: PATHLIST
#pigz: N
#notes-template: {{guestname}}
#pbs-change-detection-mode: legacy|data|metadata
#fleecing: enabled=BOOLEAN,storage=STORAGE_ID

root@pve:/# ls -ltra /etc/pve/
total 14
drwxr-xr-x  2 root root   2 Sep 20 13:28 .
drwxr-xr-x 91 root root 186 Dec 12 12:09 ..

root@pve:/# df -h
=> rpool/ROOT/pve-1  869G  869G     0 100% /
   tmpfs              32G     0   32G   0% /dev/shm
=> rpool             256K  256K     0 100% /rpool
   rpool/ROOT        256K  256K     0 100% /rpool/ROOT
   rpool/data        256K  256K     0 100% /rpool/data
   tmpfs             6.3G     0  6.3G   0% /run/user/1000

du -sm /* | sort -n
du -sm /mnt/* | sort -n
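One detail worth adding here: du's -x flag keeps it on a single filesystem, so on a ZFS root that is split into several datasets, anything large that still shows up lives on the root dataset itself and not on a separately mounted disk, e.g.:

# -x does not descend into other mounted filesystems:
du -xsm /* 2>/dev/null | sort -n | tail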
du -sm /* | sort -n
du: cannot access '/proc/1632/task/1632/fd/4': No such file or directory
du: cannot access '/proc/1632/task/1632/fdinfo/4': No such file or directory
du: cannot access '/proc/1632/fd/3': No such file or directory
du: cannot access '/proc/1632/fdinfo/3': No such file or directory
0       /dev
(...)
6       /etc
225     /boot
1040    /var
1971    /usr
=> 885959  /usb

root@pve:/# ll /usb/dump/
-rw-r--r-- 1 root root        10825 Nov 22 01:34 vzdump-qemu-100-2024_11_22-00_00_02.log
-rw-r--r-- 1 root root 930820206030 Nov 22 01:34 vzdump-qemu-100-2024_11_22-00_00_02.vma.zst
-rw-r--r-- 1 root root            6 Nov 22 01:34 vzdump-qemu-100-2024_11_22-00_00_02.vma.zst.notes
-rw-r--r-- 1 root root         3771 Nov 22 02:04 vzdump-qemu-101-2024_11_22-01_34_01.log

root@pve:/# rm -rf /usb/dump/
root@pve:/# df -h
Filesystem        Size  Used Avail Use% Mounted on
udev               32G     0   32G   0% /dev
tmpfs             6.3G  1.5M  6.3G   1% /run
rpool/ROOT/pve-1  869G  298G  571G  35% /
rpool/var-lib-vz  571G  256K  571G   1% /var/lib/vz
rpool             571G  256K  571G   1% /rpool
rpool/ROOT        571G  256K  571G   1% /rpool/ROOT
rpool/data        571G  256K  571G   1% /rpool/data
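That explains the full pool: the ~930 GB vma.zst was written to /usb on the root disk, i.e. the USB target was not mounted when the job ran, so vzdump filled the plain directory underneath the mountpoint. A quick manual check before the next run (path taken from the storage.cfg posted below; the is_mountpoint option discussed there makes PVE perform this check itself):

# findmnt exits non-zero if nothing is mounted at the given path:
findmnt /pladde/usb1 || echo "USB disk is NOT mounted - do not back up here"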
Sorry, I was out of office the last days... but at least, Proxmox is now running without problems (I hope so):

please post
- storage.cfg
- jobs.cfg
- /etc/vzdump.conf
something must still be misconfigured...
1) egrep -v '#|^ *$' /etc/pve/storage.cfg

dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

dir: backup
        path /var/lib/vz/dump
        content backup,images
        prune-backups keep-all=1
        shared 0

dir: usb1
        path /pladde/usb1
        content backup
        is_mountpoint /pladde/usb1
        prune-backups keep-last=2
        shared 0

2) egrep -v '#|^ *$' /etc/pve/jobs.cfg

vzdump: backup-380e378e-5bed
        schedule 21:00
        compress gzip
        enabled 1
        fleecing 0
        mailnotification always
        mailto support@somewhere
        mode snapshot
        node pve
        notes-template {{guestname}}
        notification-mode legacy-sendmail
        repeat-missed 0
        storage usb1
        vmid 100

3) egrep -v '#|^ *$' /etc/vzdump.conf

root@pve:/# cat /etc/vzdump.conf
# vzdump default settings
#tmpdir: DIR
#dumpdir: DIR
#storage: STORAGE_ID
#mode: snapshot|suspend|stop
#bwlimit: KBPS
#performance: [max-workers=N][,pbs-entries-max=N]
#ionice: PRI
#lockwait: MINUTES
#stopwait: MINUTES
#stdexcludes: BOOLEAN
#mailto: ADDRESSLIST
#prune-backups: keep-INTERVAL=N[,...]
#script: FILENAME
#exclude-path: PATHLIST
#pigz: N
#notes-template: {{guestname}}
#pbs-change-detection-mode: legacy|data|metadata
#fleecing: enabled=BOOLEAN,storage=STORAGE_ID
dir: usb
        path /usb
        content backup
        is_mountpoint /pladde/usb1
pvesm set /usb1 --is_mountpoint /pladde/usb1
# Check for free disk space
GETPERCENTAGE='s/.* \([0-9]\{1,3\}\)%.*/\1/'
if $CHECK_HDMINFREE ; then
    KBISFREE=`df /$DATA_PATH | tail -n1 | sed -e "$GETPERCENTAGE"`
    INODEISFREE=`df -i /$DATA_PATH | tail -n1 | sed -e "$GETPERCENTAGE"`
    if [ $KBISFREE -ge $HDMINFREE -o $INODEISFREE -ge $HDMINFREE ] ; then
        logger "Fatal: Not enough space left for rotating backups!"
        exit
    fi
fi

dir: usb
        path /usb
        content backup
        is_mountpoint /pladde/usb1

This "dir: usb" block is storage configuration (storage.cfg syntax) and does not belong in /etc/vzdump.conf at all, and neither do the pvesm command and the shell snippet pasted in after the defaults - /etc/vzdump.conf may only contain the vzdump default settings listed in its header. The storage itself is already defined (as "usb1", with is_mountpoint set) in /etc/pve/storage.cfg.
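A possible cleanup, assuming the file contents quoted above: strip everything below the commented defaults out of /etc/vzdump.conf and rely on the existing "usb1" definition in storage.cfg, whose is_mountpoint setting keeps vzdump from filling the root disk again when the drive is absent:

# Remove everything from the stray 'dir: usb' line to the end of the file:
sed -i '/^dir: usb/,$d' /etc/vzdump.conf
egrep -v '#|^ *$' /etc/vzdump.conf    # should now print nothing

# storage.cfg already sets is_mountpoint for usb1, so PVE refuses the
# storage while /pladde/usb1 is not mounted; verify with:
pvesm status --storage usb1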