[SOLVED] WARNING: You have not turned on protection against thin pools running out of space

Discussion in 'Proxmox VE (Deutsch)' started by Horst Zimmermann, Aug 14, 2019.

  1. Horst Zimmermann (New Member | Joined: May 17, 2019 | Messages: 1 | Likes Received: 0)

    Hello everyone,

    First of all, I am a complete beginner as far as Proxmox is concerned, and I have a problem with my PVE host.

    During a manual backup I get the following errors in the log:
    The backup is pushed to my NAS server and is fine; as a test I pushed it to my second node and everything works as it should.

    What worries me, however, are the lines marked in RED in the log with the error messages.

    This is how I understand it:

    My thin pools are not protected against running full, but I have no idea how to change that. Maybe someone can point me in the right direction...

    Another thing I noticed: my system does not have a /dev/sdg or /dev/sdh at all.

    root@pve01:~# lsblk
    NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    sda 8:0 0 931.5G 0 disk
    ├─sda1 8:1 0 1007K 0 part
    ├─sda2 8:2 0 512M 0 part
    └─sda3 8:3 0 931G 0 part
    sdb 8:16 0 111.8G 0 disk
    ├─SDD03--120GB-SDD03--120GB_tmeta 253:20 0 112M 0 lvm
    │ └─SDD03--120GB-SDD03--120GB-tpool 253:22 0 111.6G 0 lvm
    │   ├─SDD03--120GB-SDD03--120GB 253:23 0 111.6G 0 lvm
    │   ├─SDD03--120GB-vm--200--disk--0 253:24 0 16G 0 lvm
    │   ├─SDD03--120GB-vm--202--disk--0 253:25 0 20G 0 lvm
    │   └─SDD03--120GB-vm--203--disk--0 253:26 0 24G 0 lvm
    └─SDD03--120GB-SDD03--120GB_tdata 253:21 0 111.6G 0 lvm
      └─SDD03--120GB-SDD03--120GB-tpool 253:22 0 111.6G 0 lvm
        ├─SDD03--120GB-SDD03--120GB 253:23 0 111.6G 0 lvm
        ├─SDD03--120GB-vm--200--disk--0 253:24 0 16G 0 lvm
        ├─SDD03--120GB-vm--202--disk--0 253:25 0 20G 0 lvm
        └─SDD03--120GB-vm--203--disk--0 253:26 0 24G 0 lvm
    sdc 8:32 0 111.8G 0 disk
    ├─SDD04--120GB-SDD04--120GB_tmeta 253:13 0 112M 0 lvm
    │ └─SDD04--120GB-SDD04--120GB-tpool 253:15 0 111.6G 0 lvm
    │   ├─SDD04--120GB-SDD04--120GB 253:16 0 111.6G 0 lvm
    │   ├─SDD04--120GB-vm--201--disk--0 253:17 0 25G 0 lvm
    │   ├─SDD04--120GB-vm--204--disk--0 253:18 0 30G 0 lvm
    │   └─SDD04--120GB-vm--205--disk--0 253:19 0 30G 0 lvm
    └─SDD04--120GB-SDD04--120GB_tdata 253:14 0 111.6G 0 lvm
      └─SDD04--120GB-SDD04--120GB-tpool 253:15 0 111.6G 0 lvm
        ├─SDD04--120GB-SDD04--120GB 253:16 0 111.6G 0 lvm
        ├─SDD04--120GB-vm--201--disk--0 253:17 0 25G 0 lvm
        ├─SDD04--120GB-vm--204--disk--0 253:18 0 30G 0 lvm
        └─SDD04--120GB-vm--205--disk--0 253:19 0 30G 0 lvm
    sdd 8:48 0 465.8G 0 disk
    ├─SSD01--500GB-SSD01--500GB_tmeta 253:7 0 120M 0 lvm
    │ └─SSD01--500GB-SSD01--500GB-tpool 253:9 0 465.5G 0 lvm
    │   ├─SSD01--500GB-SSD01--500GB 253:10 0 465.5G 0 lvm
    │   ├─SSD01--500GB-vm--300--disk--0 253:11 0 80G 0 lvm
    │   └─SSD01--500GB-vm--302--disk--0 253:12 0 80G 0 lvm
    └─SSD01--500GB-SSD01--500GB_tdata 253:8 0 465.5G 0 lvm
      └─SSD01--500GB-SSD01--500GB-tpool 253:9 0 465.5G 0 lvm
        ├─SSD01--500GB-SSD01--500GB 253:10 0 465.5G 0 lvm
        ├─SSD01--500GB-vm--300--disk--0 253:11 0 80G 0 lvm
        └─SSD01--500GB-vm--302--disk--0 253:12 0 80G 0 lvm
    sde 8:64 0 465.8G 0 disk
    ├─SSD02--500GB-SSD02--500GB_tmeta 253:0 0 120M 0 lvm
    │ └─SSD02--500GB-SSD02--500GB-tpool 253:2 0 465.5G 0 lvm
    │   ├─SSD02--500GB-SSD02--500GB 253:3 0 465.5G 0 lvm
    │   ├─SSD02--500GB-vm--301--disk--0 253:4 0 16G 0 lvm
    │   ├─SSD02--500GB-vm--303--disk--0 253:5 0 64G 0 lvm
    │   └─SSD02--500GB-vm--304--disk--0 253:6 0 100G 0 lvm
    └─SSD02--500GB-SSD02--500GB_tdata 253:1 0 465.5G 0 lvm
      └─SSD02--500GB-SSD02--500GB-tpool 253:2 0 465.5G 0 lvm
        ├─SSD02--500GB-SSD02--500GB 253:3 0 465.5G 0 lvm
        ├─SSD02--500GB-vm--301--disk--0 253:4 0 16G 0 lvm
        ├─SSD02--500GB-vm--303--disk--0 253:5 0 64G 0 lvm
        └─SSD02--500GB-vm--304--disk--0 253:6 0 100G 0 lvm
    sdf 8:80 0 931.5G 0 disk
    ├─sdf1 8:81 0 1007K 0 part
    ├─sdf2 8:82 0 512M 0 part
    └─sdf3 8:83 0 931G 0 part
    sr0 11:0 1 1024M 0 rom
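
    To see how full the pools actually are, I assume something like the following would show the Data% and Meta% columns (I still have to try this on my host):

    lvs -a SDD03-120GB SDD04-120GB SSD01-500GB SSD02-500GB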




    INFO: starting new backup job: vzdump 204 --mode snapshot --compress lzo --node pve01 --storage NAS-VM-Backups --remove 0
    INFO: filesystem type on dumpdir is 'cifs' -using /var/tmp/vzdumptmp10879 for temporary files
    INFO: Starting Backup of VM 204 (lxc)
    INFO: Backup started at 2019-08-14 16:02:23
    INFO: status = running
    INFO: CT Name: sql
    INFO: backup mode: snapshot
    INFO: ionice priority: 7
    INFO: create storage snapshot 'vzdump'
    /dev/sdg: open failed: No medium found
    WARNING: You have not turned on protection against thin pools running out of space.
    WARNING: Set activation/thin_pool_autoextend_threshold below 100 to trigger automatic extension of thin pools before they get full.
    /dev/sdh: open failed: No medium found
    Logical volume "snap_vm-204-disk-0_vzdump" created.
    WARNING: Sum of all thin volume sizes (115.00 GiB) exceeds the size of thin pool SDD04-120GB/SDD04-120GB and the size of whole volume group (111.79 GiB).
    /dev/sdg: open failed: No medium found
    /dev/sdh: open failed: No medium found
    /dev/sdg: open failed: No medium found
    /dev/sdh: open failed: No medium found

    INFO: creating archive '/mnt/pve/NAS-VM-Backups/dump/vzdump-lxc-204-2019_08_14-16_02_23.tar.lzo'
    INFO: Total bytes written: 1610035200 (1.5GiB, 58MiB/s)
    INFO: archive file size: 628MB
    INFO: remove vzdump snapshot
    /dev/sdg: open failed: No medium found
    Logical volume "snap_vm-204-disk-0_vzdump" successfully removed
    /dev/sdh: open failed: No medium found
    INFO: Finished Backup of VM 204 (00:00:34)
    INFO: Backup finished at 2019-08-14 16:02:57
    INFO: Backup job finished successfully
    TASK OK
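
    If I add this up correctly, the 115.00 GiB in the warning is simply the sum of the thin volumes on SDD04-120GB plus the vzdump snapshot of vm-204 (which, as far as I know, has the same virtual size as the disk it was taken from):

    vm-201-disk-0               25 GiB
    vm-204-disk-0               30 GiB
    vm-205-disk-0               30 GiB
    snap_vm-204-disk-0_vzdump   30 GiB
    ----------------------------------
    total                      115 GiB   (vs. 111.79 GiB for the whole VG)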

    proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
    pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
    pve-kernel-5.0: 6.0-6
    pve-kernel-helper: 6.0-6
    pve-kernel-4.15: 5.4-6
    pve-kernel-5.0.18-1-pve: 5.0.18-3
    pve-kernel-5.0.15-1-pve: 5.0.15-1
    pve-kernel-4.15.18-18-pve: 4.15.18-44
    pve-kernel-4.15.18-17-pve: 4.15.18-43
    pve-kernel-4.15.18-16-pve: 4.15.18-41
    pve-kernel-4.15.18-15-pve: 4.15.18-40
    pve-kernel-4.15.18-14-pve: 4.15.18-39
    pve-kernel-4.15.18-13-pve: 4.15.18-37
    pve-kernel-4.15.18-12-pve: 4.15.18-36
    ceph-fuse: 12.2.12-pve1
    corosync: 3.0.2-pve2
    criu: 3.11-3
    glusterfs-client: 5.5-3
    ksm-control-daemon: 1.3-1
    libjs-extjs: 6.0.1-10
    libknet1: 1.10-pve2
    libpve-access-control: 6.0-2
    libpve-apiclient-perl: 3.0-2
    libpve-common-perl: 6.0-3
    libpve-guest-common-perl: 3.0-1
    libpve-http-server-perl: 3.0-2
    libpve-storage-perl: 6.0-7
    libqb0: 1.0.5-1
    lvm2: 2.03.02-pve3
    lxc-pve: 3.1.0-63
    lxcfs: 3.0.3-pve60
    novnc-pve: 1.0.0-60
    proxmox-mini-journalreader: 1.1-1
    proxmox-widget-toolkit: 2.0-5
    pve-cluster: 6.0-5
    pve-container: 3.0-5
    pve-docs: 6.0-4
    pve-edk2-firmware: 2.20190614-1
    pve-firewall: 4.0-7
    pve-firmware: 3.0-2
    pve-ha-manager: 3.0-2
    pve-i18n: 2.0-2
    pve-qemu-kvm: 4.0.0-5
    pve-xtermjs: 3.13.2-1
    qemu-server: 6.0-7
    smartmontools: 7.0-pve2
    spiceterm: 3.1-1
    vncterm: 1.6-1
    zfsutils-linux: 0.8.1-pve1

    Attachments: Disks.PNG, LVM-Thin.PNG, LVM.PNG


    EDIT: SOLVED - switched to XenServer.


    Regards,
    Horst
     
    #1 Horst Zimmermann, Aug 14, 2019
    Last edited: Aug 16, 2019 at 21:55