local-lvm SUDDENLY full

Bartimaus

Hi,

I need your help tracking down a space hog in my Proxmox "local-lvm".
Yesterday around noon the space usage jumped suddenly, and I don't know why. No containers, VMs, or anything else were created.
Here is the output of lvs:
Code:
root@pve:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  cache         pve Vwi-aotz-- 418.51g data        65.82                                
  data          pve twi-aotz-- 400.00g             93.90  3.53                          
  root          pve -wi-ao----  96.00g                                                  
  swap          pve -wi-ao----   8.00g                                                  
  vm-100-disk-0 pve Vwi-aotz--   2.00g data        50.85                                
  vm-102-disk-0 pve Vwi-aotz--   2.00g data        54.97                                
  vm-103-disk-0 pve Vwi-aotz--   2.00g data        65.17                                
  vm-107-disk-0 pve Vwi-aotz--   8.00g data        36.76                                
  vm-108-disk-0 pve Vwi-aotz--  10.00g data        67.83                                
  vm-109-disk-0 pve Vwi-aotz--   4.00g data        52.44                                
  vm-110-disk-0 pve Vwi-aotz--   9.00g data        98.34                                
  vm-112-disk-0 pve Vwi-aotz-- 820.00m data        56.71                                
  vm-115-disk-0 pve Vwi-aotz--  11.00g data        97.75                                
  vm-116-disk-0 pve Vwi-aotz--   8.00g data        20.18                                
  vm-119-disk-0 pve Vwi-aotz--   2.00g data        87.01                                
  vm-201-disk-0 pve Vwi-a-tz--   4.00m data        14.06                                
  vm-201-disk-1 pve Vwi-a-tz--  16.00g data        28.90                                
  vm-300-disk-0 pve Vwi-a-tz--  32.00g data        0.00                                  
  vm-301-disk-0 pve Vwi-aotz--   4.00m data        14.06                                
  vm-301-disk-1 pve Vwi-aotz--  80.00g data        22.67                                
  vm-301-disk-2 pve Vwi-aotz--   4.00m data        1.56                                  
  vm-400-disk-0 pve Vwi-a-tz--   4.00m data        14.06                                
  vm-400-disk-1 pve Vwi-a-tz-- 128.00g data        30.27                                
  vm-400-disk-2 pve Vwi-a-tz--   4.00m data        1.56                                  
root@pve:~#

The space used by the LXC/VM volumes adds up to around 338 GB in total.

Here is a screenshot of the sudden usage spike yesterday around noon:

[Screenshot attachment: 1747562357005.png]

Why is 406 GB shown as used instead of the calculated 338 GB?
What other information do you need?
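As a sanity check, the per-volume usage can be re-added straight from lvs (a sketch, not authoritative; -S 'segtype=thin' matches the thin volumes but not the pool itself, so nothing is double-counted):

Code:
# sum the space actually allocated by all thin volumes in the pool
lvs --noheadings --units g --nosuffix -o lv_size,data_percent \
    -S 'segtype=thin' pve \
  | awk '{sum += $1 * $2 / 100} END {printf "allocated: %.2f GiB\n", sum}'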

Here is an excerpt from yesterday's syslog:

Code:
May 17 09:15:01 pve CRON[3265811]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 17 09:15:03 pve CRON[3265811]: pam_unix(cron:session): session closed for user root
May 17 09:17:01 pve CRON[3267226]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 17 09:17:01 pve CRON[3267227]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
May 17 09:17:01 pve CRON[3267226]: pam_unix(cron:session): session closed for user root
May 17 10:05:01 pve CRON[3294967]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 17 10:05:01 pve CRON[3294968]: (root) CMD (/usr/bin/bash /usr/local/bin/loeschen.sh >> /var/log/cron/loeschen.log 2>&1)
May 17 10:05:06 pve CRON[3294967]: pam_unix(cron:session): session closed for user root
May 17 10:13:35 pve smartd[1062]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 69 to 68
May 17 10:17:01 pve CRON[3302269]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 17 10:17:01 pve CRON[3302270]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
May 17 10:17:01 pve CRON[3302269]: pam_unix(cron:session): session closed for user root
May 17 10:43:35 pve smartd[1062]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 68 to 59
May 17 11:00:52 pve systemd[1]: Starting man-db.service - Daily man-db regeneration...
May 17 11:00:52 pve systemd[1]: man-db.service: Deactivated successfully.
May 17 11:00:52 pve systemd[1]: Finished man-db.service - Daily man-db regeneration.
May 17 11:13:35 pve smartd[1062]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 59 to 66
May 17 11:17:01 pve CRON[3337626]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 17 11:17:01 pve CRON[3337627]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
May 17 11:17:01 pve CRON[3337626]: pam_unix(cron:session): session closed for user root
May 17 11:43:35 pve smartd[1062]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 66 to 68
May 17 12:01:52 pve systemd[1]: Starting apt-daily.service - Daily apt download activities...
May 17 12:01:52 pve systemd[1]: apt-daily.service: Deactivated successfully.
May 17 12:01:52 pve systemd[1]: Finished apt-daily.service - Daily apt download activities.
May 17 12:17:01 pve CRON[3372693]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 17 12:17:01 pve CRON[3372694]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
May 17 12:17:01 pve CRON[3372693]: pam_unix(cron:session): session closed for user root
May 17 13:17:01 pve CRON[3407756]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 17 13:17:01 pve CRON[3407757]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
May 17 13:17:01 pve CRON[3407756]: pam_unix(cron:session): session closed for user root
May 17 14:13:35 pve smartd[1062]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 68 to 64
May 17 14:17:01 pve CRON[3442711]: pam_unix(cron:session): session opened for user root(uid=0) by (uid=0)
May 17 14:17:01 pve CRON[3442713]: (root) CMD (cd / && run-parts --report /etc/cron.hourly)
May 17 14:17:01 pve CRON[3442711]: pam_unix(cron:session): session closed for user root
May 17 14:38:31 pve dmeventd[594]: WARNING: Thin pool pve-data-tpool data is now 80.10% full.
May 17 14:41:32 pve pvedaemon[2370509]: <root@pam> successful auth for user 'root@pam'
May 17 14:43:33 pve pvedaemon[2370318]: worker exit
May 17 14:43:33 pve pvedaemon[1541]: worker 2370318 finished
May 17 14:43:33 pve pvedaemon[1541]: starting 1 worker(s)
May 17 14:43:33 pve pvedaemon[1541]: worker 3458865 started
May 17 14:43:35 pve smartd[1062]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 64 to 63
May 17 14:44:17 pve pvedaemon[2370509]: worker exit
May 17 14:44:17 pve pvedaemon[1541]: worker 2370509 finished
May 17 14:44:17 pve pvedaemon[1541]: starting 1 worker(s)
May 17 14:44:17 pve pvedaemon[1541]: worker 3459453 started
May 17 14:44:48 pve pvedaemon[3459906]: starting lxc termproxy UPID:pve:0034CB42:0497A964:682884C0:vncproxy:100:root@pam:
May 17 14:44:48 pve pvedaemon[2389108]: <root@pam> starting task UPID:pve:0034CB42:0497A964:682884C0:vncproxy:100:root@pam:
May 17 14:44:48 pve pvedaemon[3459453]: <root@pam> successful auth for user 'root@pam'
May 17 14:44:56 pve pvedaemon[2389108]: <root@pam> end task UPID:pve:0034CB42:0497A964:682884C0:vncproxy:100:root@pam: OK
May 17 14:44:56 pve pvedaemon[3460000]: starting lxc termproxy UPID:pve:0034CBA0:0497AC80:682884C8:vncproxy:102:root@pam:
May 17 14:44:56 pve pvedaemon[2389108]: <root@pam> starting task UPID:pve:0034CBA0:0497AC80:682884C8:vncproxy:102:root@pam:
May 17 14:44:56 pve pvedaemon[3459453]: <root@pam> successful auth for user 'root@pam'
May 17 14:45:00 pve pvedaemon[2389108]: <root@pam> end task UPID:pve:0034CBA0:0497AC80:682884C8:vncproxy:102:root@pam: OK
May 17 14:45:00 pve pvedaemon[3460036]: starting lxc termproxy UPID:pve:0034CBC4:0497AE05:682884CC:vncproxy:103:root@pam:
May 17 14:45:00 pve pvedaemon[2389108]: <root@pam> starting task UPID:pve:0034CBC4:0497AE05:682884CC:vncproxy:103:root@pam:
May 17 14:45:00 pve pvedaemon[3458865]: <root@pam> successful auth for user 'root@pam'
May 17 14:45:06 pve pvedaemon[2389108]: <root@pam> end task UPID:pve:0034CBC4:0497AE05:682884CC:vncproxy:103:root@pam: OK
May 17 14:45:06 pve pvedaemon[3460117]: starting lxc termproxy UPID:pve:0034CC15:0497B078:682884D2:vncproxy:107:root@pam:
May 17 14:45:06 pve pvedaemon[3458865]: <root@pam> starting task UPID:pve:0034CC15:0497B078:682884D2:vncproxy:107:root@pam:
May 17 14:45:06 pve pvedaemon[3459453]: <root@pam> successful auth for user 'root@pam'
May 17 14:45:23 pve pvedaemon[3458865]: <root@pam> end task UPID:pve:0034CC15:0497B078:682884D2:vncproxy:107:root@pam: OK
May 17 14:47:31 pve dmeventd[594]: WARNING: Thin pool pve-data-tpool data is now 85.40% full.
May 17 14:55:50 pve pvestatd[1520]: auth key pair too old, rotating..
May 17 15:13:35 pve smartd[1062]: Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 63 to 6


Regards
 
Hi, maybe a VM started writing a lot of data. Or is there perhaps a forgotten snapshot somewhere?
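A quick way to check for forgotten thin snapshots (a sketch; with --noheadings a snapshot prints two fields, its name and its origin, while a regular volume prints only one):

Code:
# list thin volumes that reference an origin volume, i.e. snapshots
lvs --noheadings -o lv_name,origin pve | awk 'NF==2'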
 
Hi, no, the VMs are all disabled and only booted temporarily. Only LXCs are active, and nothing unusual there either. local-lvm is not active as a backup target, either.
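For reference, the running/stopped state of every guest can be confirmed with the standard PVE CLI tools:

Code:
# list all VMs and all containers with their current status
qm list
pct list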
 
Two weeks ago I successfully shrank "LOCAL-LVM" and put the freed-up space to use elsewhere. That has been running without problems, too.
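After a thin-pool resize it is worth checking that the pool's data and metadata sub-LVs still look sane; a minimal sketch using only standard lvs options:

Code:
# -a also shows the hidden [data_tdata]/[data_tmeta] sub-LVs of the pool
lvs -a -o lv_name,lv_size,data_percent,metadata_percent pve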

 
I have now moved all LXCs/VMs to a newly attached SSD. According to the GUI, neither backups nor LXCs nor VMs are stored on local-lvm any more.
local-lvm still claimed that 70% was in use.

So I wiped local-lvm: lvremove pve/data, then recreated it. After that, local-lvm really was empty. Right now I am moving all the containers etc. back, and so far it looks good.
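For the record, the wipe-and-recreate step corresponds roughly to the following (a sketch; the 400G size matches the lvs output above, and the local-lvm storage entry is assumed to still point at pve/data):

Code:
# WARNING: destroys the thin pool and every thin volume in it
lvremove pve/data

# recreate the thin pool with the same name and size
lvcreate --type thin-pool -L 400G -n data pve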
 
Code:
root@pve:~# df -h
Filesystem             Size  Used Avail Use% Mounted on
udev                    16G     0   16G   0% /dev
tmpfs                  3.1G  3.2M  3.1G   1% /run
/dev/mapper/pve-root    94G   49G   41G  55% /
tmpfs                   16G   34M   16G   1% /dev/shm
tmpfs                  5.0M     0  5.0M   0% /run/lock
efivarfs               128K   11K  113K   9% /sys/firmware/efi/efivars
/dev/nvme0n1p2        1022M  344K 1022M   1% /boot/efi
/dev/mapper/pve-cache  417G   40K  396G   1% /mnt/cache
/dev/sda1              7.3T  5.3T  1.6T  78% /mnt/8TBSSD
/dev/fuse              128M   40K  128M   1% /etc/pve
tmpfs                  3.1G     0  3.1G   0% /run/user/0
/dev/sdb1               57G  7.8G   49G  14% /mnt/backup

Code:
root@pve:~# lvs
  LV            VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  cache         pve Vwi-aotz-- 424.51g data        1.80                                   
  data          pve twi-aotz-- 400.00g             23.74  4.37                           
  root          pve -wi-ao----  96.00g                                                   
  swap          pve -wi-ao----   8.00g                                                   
  vm-100-disk-0 pve Vwi-aotz--   2.00g data        52.91                                 
  vm-102-disk-0 pve Vwi-aotz--   2.00g data        51.72                                 
  vm-103-disk-0 pve Vwi-a-tz--   2.00g data        54.11                                 
  vm-107-disk-0 pve Vwi-aotz--   8.00g data        35.35                                 
  vm-109-disk-0 pve Vwi-a-tz--   4.00g data        53.94                                 
  vm-110-disk-0 pve Vwi-aotz--   9.00g data        87.23                                 
  vm-112-disk-0 pve Vwi-aotz-- 820.00m data        46.76                                 
  vm-115-disk-0 pve Vwi-aotz--  11.00g data        54.44                                 
  vm-116-disk-0 pve Vwi-aotz--   8.00g data        20.93                                 
  vm-119-disk-0 pve Vwi-aotz--   2.00g data        86.70                                 
  vm-201-disk-0 pve Vwi-a-tz--  16.00g data        28.91                                 
  vm-201-disk-1 pve Vwi-a-tz--   4.00m data        14.06                                 
  vm-300-disk-0 pve Vwi-a-tz--  32.00g data        0.00                                   
  vm-301-disk-0 pve Vwi-a-tz--  80.00g data        25.27                                 
  vm-301-disk-1 pve Vwi-a-tz--   4.00m data        14.06                                 
  vm-301-disk-2 pve Vwi-a-tz--   4.00m data        1.56                                   
  vm-400-disk-0 pve Vwi-a-tz-- 128.00g data        28.64                                 
  vm-400-disk-1 pve Vwi-a-tz--   4.00m data        14.06                                 
  vm-400-disk-2 pve Vwi-a-tz--   4.00m data        1.56

Whatever went wrong there, it looks normal again now.
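One common cause of this pattern (pool usage climbing while the guests' own filesystems stay flat) is that freed blocks are never discarded back to the thin pool. A hedged sketch of handing deleted space back, using standard PVE tooling (the VMID 100 is just an example):

Code:
# containers: trim the container's filesystem from the host
pct fstrim 100

# VMs: enable the Discard option on the virtual disk, then inside the guest:
fstrim -av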