(I have moved the German topic here because of the greater reach; origin: https://forum.proxmox.com/threads/b...len-angeblich-nicht-genügend-speicher.156384/ )
Hello everyone,
I would like to restore a backup.
Setup:
Dell OptiPlex Micro, Proxmox VE with kernel 6.8.12-2-pve (2024-09-05T10:03Z).
The following drives have been created:
sda: ISO images & container templates, 17 GB of 100 GB used.
sdb: disk images, containers, snippets, 340 GB of 983 GB used.
NAS1 via SMB/CIFS: VZDump backup files
NAS2 via SMB/CIFS (powered on and connected only once a month): VZDump backup files
Size of the backup (excerpt from the log):
Code:
2024-10-22 03:04:26 INFO: Total bytes written: 69654425600 (65GiB, 262MiB/s)
2024-10-22 03:04:30 INFO: archive file size: 6.01GB
When restoring, the process aborts at some point with an error saying there is no more space for unpacking.
If I monitor the disk usage in parallel in the shell (df -h), I see the following:
Before backup restore:
Code:
Filesystem Size Used Avail Use% Mounted on
udev 3.8G 0 3.8G 0% /dev
tmpfs 781M 2.9M 779M 1% /run
/dev/mapper/pve-root 94G 16G 74G 18% /
tmpfs 3.9G 46M 3.8G 2% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
efivarfs 256K 182K 70K 73% /sys/firmware/efi/efivars
/dev/sdb 916G 318G 553G 37% /mnt/sdb
/dev/sda2 1022M 12M 1011M 2% /boot/efi
//192.168.1.99/proxmox_backup 28T 17T 11T 61% /mnt/pve/NAS
tmpfs 781M 0 781M 0% /run/user/0
/dev/fuse 128M 24K 128M 1% /etc/pve
Abort:
Code:
Filesystem Size Used Avail Use% Mounted on
udev 3.8G 0 3.8G 0% /dev
tmpfs 781M 2.8M 779M 1% /run
/dev/mapper/pve-root 94G 16G 74G 18% /
tmpfs 3.9G 52M 3.8G 2% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
efivarfs 256K 182K 70K 73% /sys/firmware/efi/efivars
/dev/sdb 916G 383G 487G 45% /mnt/sdb
/dev/sda2 1022M 12M 1011M 2% /boot/efi
//192.168.1.99/proxmox_backup 28T 17T 11T 61% /mnt/pve/NAS
tmpfs 781M 0 781M 0% /run/user/0
/dev/fuse 128M 24K 128M 1% /etc/pve
/dev/loop4 69G 65G 0 100% /var/lib/lxc/102/rootfs
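For anyone who wants to reproduce the before/after comparison above: a plain df snapshot on each side of the restore is enough, nothing Proxmox-specific is needed (the file paths here are just examples):

```shell
# snapshot filesystem usage before starting the restore
df -h > /tmp/df_before.txt
# ... start the restore in the GUI or another shell, wait for the abort ...
# snapshot again and compare the two states
df -h > /tmp/df_after.txt
diff /tmp/df_before.txt /tmp/df_after.txt || true   # diff exits non-zero when usage changed
```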
I can recognize the following:
1. Unpacking takes place on /dev/loop4.
2. That loop device (is it a directory, or a "virtual drive"?) is full, which is why the error occurs.
3. The data is actually written to sdb, where there is still enough free space even at the moment of the abort.
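Point 1 can be checked directly: a loop device's backing file shows where the unpacking really lands. Assuming standard util-linux tools (loop4 is taken from the df output above):

```shell
# list all loop devices with their backing files; during the restore,
# /dev/loop4 should point at a raw container image stored below /mnt/sdb
losetup -l
```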
I don't understand why the 69 GB there are not enough. Opened in WinRAR, the program confirms the size of the backup: unpacked size 69,229,176,417 bytes.
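The numbers already hint at where the gigabytes go: the vzdump log counts in GiB while the volume is sized in GB, and ext4 by default additionally reserves 5% of the blocks for root (the 5% is the ext4 default, assumed here, not confirmed for this setup). A quick sanity check on the figures from the log and from WinRAR:

```shell
bytes_written=69654425600   # "Total bytes written" from the vzdump log
unpacked=69229176417        # unpacked size reported by WinRAR
echo "$((bytes_written / 1024 / 1024 / 1024)) GiB written"   # 64 GiB
echo "$((unpacked / 1000 / 1000 / 1000)) GB to unpack"       # 69 GB
# a nominal 69 GB ext4 volume with the default 5% reserve leaves only about:
echo "$((69 * 95 / 100)) GB usable"                          # 65 GB
```

That matches the df output at the abort: /dev/loop4 shows 65G used and 0 available despite its nominal 69G size.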
Storage config:
(screenshot: forum attachment 76713)
How do I fix this? I would be very happy to receive help. If any information is missing, just let me know.
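In case it helps others reading along: if the rootfs volume is simply too small for the unpacked data plus filesystem overhead, one workaround is to restore from the CLI and request a larger rootfs. This is only a sketch, not a verified command for this exact setup; `<storage>` and the dump filename are placeholders that must match your own storage name and backup file:

```shell
# restore container 102 from the CLI, allocating a larger rootfs volume
# (<storage> and the exact vzdump filename are placeholders)
pct restore 102 /mnt/pve/NAS/dump/<vzdump-file>.tar.zst \
    --storage <storage> --rootfs <storage>:80
```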