Good morning everyone,
On 26.11. I upgraded my system from Proxmox 6 to the current 7.3-3. The update went through without any noticeable errors, and everything ran fine until last night.
Last night my ioBroker, which runs in a CT, was shut down, and in the log I could see that its filesystem had suddenly become read-only.
As a first remedy I tried to restore a recent backup of the CT, but that failed:
Code:
recovering backed-up configuration from 'local:backup/vzdump-lxc-101-2022_11_26-10_49_45.tar.zst'
WARNING: Thin pool pve/data is out of data space.
TASK ERROR: unable to restore CT 101 - lvcreate 'pve/vm-101-disk-0' error: Cannot create new thin volume, free space in thin pool pve/data reached threshold.
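If I understand the error correctly, lvcreate refuses to allocate another thin volume once the pool has crossed its full threshold. From what I have read, this relates to the autoextend settings in /etc/lvm/lvm.conf; a minimal sketch of the keys I believe are relevant (the values shown should be the stock defaults, where a threshold of 100 means autoextension is disabled):
Code:
# /etc/lvm/lvm.conf -- activation section (sketch; as far as I know
# these are the defaults, and threshold = 100 disables autoextend)
activation {
    thin_pool_autoextend_threshold = 100
    thin_pool_autoextend_percent   = 20
}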
I then had a look at the volume and was surprised to find that it is full:
Code:
root@pve:~# df -h
Filesystem                    Size  Used Avail Use% Mounted on
udev                           16G     0   16G   0% /dev
tmpfs                         3.2G  1.5M  3.2G   1% /run
/dev/mapper/pve-root          126G   98G   23G  82% /
tmpfs                          16G   46M   16G   1% /dev/shm
tmpfs                         5.0M     0  5.0M   0% /run/lock
omv                           1.4G  128K  1.4G   1% /omv
motioneye                     3.6T  128K  3.6T   1% /motioneye
/dev/fuse                     128M   20K  128M   1% /etc/pve
nextcloud                     640G  128K  640G   1% /nextcloud
nextcloud/basevol-110-disk-0  640G  128K  640G   1% /nextcloud/basevol-110-disk-0
nextcloud/subvol-111-disk-0   674G   35G  640G   6% /nextcloud/subvol-111-disk-0
nextcloud/subvol-104-disk-0   640G  128K  640G   1% /nextcloud/subvol-104-disk-0
nextcloud/subvol-104-disk-1   641G   89M  640G   1% /nextcloud/subvol-104-disk-1
tmpfs                         3.2G     0  3.2G   0% /run/user/0
root@pve:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data          pve twi-aotz-- 559.66g             99.90  22.13
  root          pve -wi-ao---- 127.75g
  swap          pve -wi-ao----   8.00g
  vm-100-disk-0 pve Vwi-aotz--   8.00g data        26.06
  vm-100-disk-3 pve Vwi-aotz--  <3.43t data        12.80
  vm-101-disk-1 pve Vwi-a-tz--  30.00g data        95.61
  vm-102-disk-0 pve Vwi-aotz--  32.00g data        34.67
  vm-103-disk-0 pve Vwi-aotz--   8.00g data        59.91
  vm-104-disk-0 pve Vwi-a-tz--  40.00g data        18.50
  vm-104-disk-1 pve Vwi-a-tz-- 674.00g data         0.22
  vm-105-disk-0 pve Vwi-aotz--  12.00g data        26.97
  vm-106-disk-0 pve Vwi-aotz--  20.00g data        95.89
  vm-107-disk-0 pve Vwi-aotz--   4.00g data        25.71
  vm-108-disk-0 pve Vwi-aotz--   8.00g data        69.59
  vm-111-disk-0 pve Vwi-aotz--  32.00g data        78.88
root@pve:~# dmsetup ls --tree
pve-data (253:5)
└─pve-data-tpool (253:4)
├─pve-data_tdata (253:3)
│ └─ (8:3)
└─pve-data_tmeta (253:2)
└─ (8:3)
pve-root (253:1)
└─ (8:3)
pve-swap (253:0)
└─ (8:3)
pve-vm--100--disk--0 (253:7)
└─pve-data-tpool (253:4)
├─pve-data_tdata (253:3)
│ └─ (8:3)
└─pve-data_tmeta (253:2)
└─ (8:3)
pve-vm--100--disk--3 (253:19)
└─pve-data-tpool (253:4)
├─pve-data_tdata (253:3)
│ └─ (8:3)
└─pve-data_tmeta (253:2)
└─ (8:3)
pve-vm--101--disk--1 (253:10)
└─pve-data-tpool (253:4)
├─pve-data_tdata (253:3)
│ └─ (8:3)
└─pve-data_tmeta (253:2)
└─ (8:3)
pve-vm--102--disk--0 (253:8)
└─pve-data-tpool (253:4)
├─pve-data_tdata (253:3)
│ └─ (8:3)
└─pve-data_tmeta (253:2)
└─ (8:3)
pve-vm--103--disk--0 (253:16)
└─pve-data-tpool (253:4)
├─pve-data_tdata (253:3)
│ └─ (8:3)
└─pve-data_tmeta (253:2)
└─ (8:3)
pve-vm--104--disk--0 (253:11)
└─pve-data-tpool (253:4)
├─pve-data_tdata (253:3)
│ └─ (8:3)
└─pve-data_tmeta (253:2)
└─ (8:3)
pve-vm--104--disk--1 (253:12)
└─pve-data-tpool (253:4)
├─pve-data_tdata (253:3)
│ └─ (8:3)
└─pve-data_tmeta (253:2)
└─ (8:3)
pve-vm--105--disk--0 (253:6)
└─pve-data-tpool (253:4)
├─pve-data_tdata (253:3)
│ └─ (8:3)
└─pve-data_tmeta (253:2)
└─ (8:3)
pve-vm--106--disk--0 (253:9)
└─pve-data-tpool (253:4)
├─pve-data_tdata (253:3)
│ └─ (8:3)
└─pve-data_tmeta (253:2)
└─ (8:3)
pve-vm--107--disk--0 (253:14)
└─pve-data-tpool (253:4)
├─pve-data_tdata (253:3)
│ └─ (8:3)
└─pve-data_tmeta (253:2)
└─ (8:3)
pve-vm--108--disk--0 (253:15)
└─pve-data-tpool (253:4)
├─pve-data_tdata (253:3)
│ └─ (8:3)
└─pve-data_tmeta (253:2)
└─ (8:3)
pve-vm--111--disk--0 (253:13)
└─pve-data-tpool (253:4)
├─pve-data_tdata (253:3)
│ └─ (8:3)
└─pve-data_tmeta (253:2)
└─ (8:3)
root@pve:~# pvs
  PV         VG  Fmt  Attr PSize   PFree
  /dev/sda3  pve lvm2 a--  931.00g <233.59g
root@pve:~#
root@pve:~# lvdisplay
--- Logical volume ---
LV Path /dev/pve/swap
LV Name swap
VG Name pve
LV UUID cgNzYU-XWYn-mvqL-42le-H1Bc-nl7S-0J2mT8
LV Write Access read/write
LV Creation host, time proxmox, 2020-03-22 12:28:23 +0100
LV Status available
# open 2
LV Size 8.00 GiB
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
--- Logical volume ---
LV Path /dev/pve/root
LV Name root
VG Name pve
LV UUID XOvvBK-EH8A-FoXr-P6ED-fjpV-5Nx9-yN2QKt
LV Write Access read/write
LV Creation host, time proxmox, 2020-03-22 12:28:24 +0100
LV Status available
# open 1
LV Size 127.75 GiB
Current LE 32704
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:1
--- Logical volume ---
LV Name data
VG Name pve
LV UUID v1kLba-7m8o-NEdW-wNGK-aUJj-DjRF-IzXdNy
LV Write Access read/write (activated read only)
LV Creation host, time proxmox, 2020-03-22 12:28:24 +0100
LV Pool metadata data_tmeta
LV Pool data data_tdata
LV Status available
# open 0
LV Size 559.66 GiB
Allocated pool data 99.90%
Allocated metadata 22.13%
Current LE 143274
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:5
--- Logical volume ---
LV Path /dev/pve/vm-105-disk-0
LV Name vm-105-disk-0
VG Name pve
LV UUID GQc1Mi-VcvG-HCT3-c7uG-djOh-bU7x-qK8Ow3
LV Write Access read/write
LV Creation host, time pve, 2020-03-22 16:27:51 +0100
LV Pool name data
LV Status available
# open 1
LV Size 12.00 GiB
Mapped size 26.97%
Current LE 3072
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6
--- Logical volume ---
LV Path /dev/pve/vm-102-disk-0
LV Name vm-102-disk-0
VG Name pve
LV UUID 8KakNR-5YGL-MSft-Lu4U-7DdQ-GjVa-kZydaB
LV Write Access read/write
LV Creation host, time pve, 2020-04-01 21:59:19 +0200
LV Pool name data
LV Status available
# open 1
LV Size 32.00 GiB
Mapped size 34.67%
Current LE 8192
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:8
--- Logical volume ---
LV Path /dev/pve/vm-106-disk-0
LV Name vm-106-disk-0
VG Name pve
LV UUID 1VoG2u-9d5j-ds4F-nbK5-5mO9-d7dp-GtS96J
LV Write Access read/write
LV Creation host, time pve, 2020-10-14 14:16:35 +0200
LV Pool name data
LV Status available
# open 1
LV Size 20.00 GiB
Mapped size 95.89%
Current LE 5120
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:9
--- Logical volume ---
LV Path /dev/pve/vm-101-disk-1
LV Name vm-101-disk-1
VG Name pve
LV UUID cM8Uwv-uqpK-zx8M-3DLU-2hZo-KrHm-REBOIk
LV Write Access read/write
LV Creation host, time pve, 2022-02-04 07:59:17 +0100
LV Pool name data
LV Status available
# open 0
LV Size 30.00 GiB
Mapped size 95.61%
Current LE 7680
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:10
--- Logical volume ---
LV Path /dev/pve/vm-104-disk-0
LV Name vm-104-disk-0
VG Name pve
LV UUID 4cXrSW-PM8M-ndxt-h4Sd-eMYQ-5hTB-Com5yD
LV Write Access read/write
LV Creation host, time pve, 2022-09-15 15:42:11 +0200
LV Pool name data
LV Status available
# open 0
LV Size 40.00 GiB
Mapped size 18.50%
Current LE 10240
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:11
--- Logical volume ---
LV Path /dev/pve/vm-104-disk-1
LV Name vm-104-disk-1
VG Name pve
LV UUID qZErlJ-D1lx-S2de-k17r-ovro-Kg0g-2IFgAM
LV Write Access read/write
LV Creation host, time pve, 2022-09-15 15:46:34 +0200
LV Pool name data
LV Status available
# open 0
LV Size 674.00 GiB
Mapped size 0.22%
Current LE 172544
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:12
--- Logical volume ---
LV Path /dev/pve/vm-111-disk-0
LV Name vm-111-disk-0
VG Name pve
LV UUID rJDHAU-m7YN-gOQh-cdb2-4Zxn-NLPu-sZIzCI
LV Write Access read/write
LV Creation host, time pve, 2022-09-16 10:10:50 +0200
LV Pool name data
LV Status available
# open 1
LV Size 32.00 GiB
Mapped size 78.88%
Current LE 8192
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:13
--- Logical volume ---
LV Path /dev/pve/vm-107-disk-0
LV Name vm-107-disk-0
VG Name pve
LV UUID E8FKAl-jpCG-yQGa-sGn5-r5Wh-ffNC-HUmKAJ
LV Write Access read/write
LV Creation host, time pve, 2022-11-21 18:38:49 +0100
LV Pool name data
LV Status available
# open 1
LV Size 4.00 GiB
Mapped size 25.71%
Current LE 1024
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:14
--- Logical volume ---
LV Path /dev/pve/vm-108-disk-0
LV Name vm-108-disk-0
VG Name pve
LV UUID sYojqo-je9N-cNhN-IP0U-a3MW-pw3W-5sHRdq
LV Write Access read/write
LV Creation host, time pve, 2022-11-21 19:11:08 +0100
LV Pool name data
LV Status available
# open 1
LV Size 8.00 GiB
Mapped size 69.59%
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:15
--- Logical volume ---
LV Path /dev/pve/vm-103-disk-0
LV Name vm-103-disk-0
VG Name pve
LV UUID 457FPq-dPPR-X0MK-w9nQ-ef73-11xz-Kvgn6e
LV Write Access read/write
LV Creation host, time pve, 2022-11-26 14:08:38 +0100
LV Pool name data
LV Status available
# open 1
LV Size 8.00 GiB
Mapped size 59.91%
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:16
--- Logical volume ---
LV Path /dev/pve/vm-100-disk-0
LV Name vm-100-disk-0
VG Name pve
LV UUID qKRnPl-UPfE-LSxd-fRMH-FT8Y-SWAd-S7uRiM
LV Write Access read/write
LV Creation host, time pve, 2022-11-27 18:29:30 +0100
LV Pool name data
LV Status available
# open 1
LV Size 8.00 GiB
Mapped size 26.06%
Current LE 2048
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:7
--- Logical volume ---
LV Path /dev/pve/vm-100-disk-3
LV Name vm-100-disk-3
VG Name pve
LV UUID Tcvkqa-6PoR-IhUR-2rQu-eZi7-vaIl-qID06v
LV Write Access read/write
LV Creation host, time pve, 2022-11-27 18:29:31 +0100
LV Pool name data
LV Status available
# open 1
LV Size <3.43 TiB
Mapped size 12.80%
Current LE 898560
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:19
root@pve:~#
Can anyone tell me why the thin pool is suddenly full?
How do I find out what it is actually full of, and what can I do to clean up my system so that I can recreate or restore the CT?
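In case it helps with suggestions, my rough back-of-the-envelope from the lvs output above: each thin volume's real consumption should be about LSize × Data%. A minimal, untested sketch to sum that up per volume (assuming the standard lvs reporting fields lv_size and data_percent, and that --nosuffix yields plain numbers):
Code:
# sum real data usage per thin volume in pool pve/data (sketch, untested)
lvs --noheadings --units g --nosuffix \
    -o lv_name,lv_size,data_percent --select 'pool_lv=data' pve \
| awk '{ used = $2 * $3 / 100
         printf "%-16s %8.2f GiB\n", $1, used
         total += used }
   END { printf "%-16s %8.2f GiB\n", "TOTAL", total }'
If my arithmetic is right, vm-100-disk-3 alone (<3.43 TiB at 12.80 %) accounts for roughly 450 GiB of the 559.66 GiB pool, so that heavily over-provisioned disk looks like the main consumer.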
I am grateful for any hint.
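For context on what I am considering trying: from what I have read so far, two candidate fixes might be growing the pool into the ~233 GiB still free on /dev/sda3, or releasing space that guests have deleted but never discarded. Untested sketches, I have not run either yet:
Code:
# option 1: grow the thin pool's data LV into free PV space
lvextend -L +100G pve/data
# its metadata can be grown separately if it ever runs short
lvextend --poolmetadatasize +1G pve/data

# option 2: reclaim space that guests freed but never discarded
pct fstrim 111   # for containers, via the Proxmox helper
fstrim -av       # inside a VM, with discard enabled on its disks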