PVE drives no longer mount

Georg-EGF

New Member
Nov 8, 2023
Hello everyone, I'm new to the forum, and I'm jumping right in with a huge problem.
Yesterday, after installing a new VM, my whole PVE blew up in my face, supposedly because of missing free space on the LVM partition.
The web interface no longer starts either.
The directory /etc/pve is empty.



Below are some outputs showing the configuration:
Code:
    blkid

    /dev/nvme0n1p2: UUID="8EBB-1FAD" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="14877743-690e-407f-a2df-a87662956814"

    /dev/nvme0n1p3: LABEL="rpool" UUID="10181835309781586700" UUID_SUB="15207051549955602856" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="72744ed6-a883-4413-bc03-8f3c8cbb4cdf"

    /dev/nvme1n1p2: UUID="8EBB-7B1D" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="6e28e99e-e325-4e5c-8ce7-10e8e241b9ce"

    /dev/nvme1n1p3: LABEL="rpool" UUID="10181835309781586700" UUID_SUB="9960068624275807245" BLOCK_SIZE="4096" TYPE="zfs_member" PARTUUID="ae8ab018-01ea-4f7f-8cf8-a281974f5ad4"

    /dev/nvme0n1p1: PARTUUID="211517fe-646e-4d93-b386-549a28024bf3"

    /dev/nvme1n1p1: PARTUUID="e308df5f-d3c3-4c5b-bdde-5197dbecd6d7"

    /dev/zd0p1: UUID="c80ebff0-ab0d-414a-b3b3-c499f1e1f30d" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="b2a13455-01"

    /dev/zd0p5: UUID="77bd514f-e066-49cd-96a6-4886e36123ef" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="b2a13455-05"

    /dev/zd0p6: UUID="3cae5fa9-cd82-4b08-af34-fb106961c533" TYPE="swap" PARTUUID="b2a13455-06"

    /dev/zd0p7: UUID="55a13b10-fa2b-4c7b-8814-128bc5599007" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="b2a13455-07"

    /dev/zd0p8: UUID="f6a52586-1b4f-4cc2-8b96-39110206670b" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="b2a13455-08"

    /dev/zd16p1: UUID="00bae866-1d98-44d0-9946-e69d1ac58664" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="0a5ab846-01"

    /dev/zd16p5: UUID="604cbbf4-bb34-49b7-b488-ffdb1c1bf311" TYPE="swap" PARTUUID="0a5ab846-05"

    /dev/zd32p1: UUID="13b88607-c885-4920-bc88-258f59dd5089" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="fda853e2-01"

    /dev/zd32p5: UUID="407ccb4a-a39a-46ed-b37a-4b081403725e" TYPE="swap" PARTUUID="fda853e2-05"

    /dev/zd48p1: UUID="24c583c3-5732-4ab3-867e-e0af29a9d7f1" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="7e581399-01"

    /dev/zd48p5: UUID="5a8e8909-c531-4b47-a257-22c22e60d7c8" TYPE="swap" PARTUUID="7e581399-05"

    /dev/zd64p1: PARTUUID="bc1b1deb-350b-478e-abcb-8c35d4f1dee8"

    /dev/zd64p2: UUID="1AC8-939B" BLOCK_SIZE="512" TYPE="vfat" PARTLABEL="EFI System Partition" PARTUUID="7874f793-c945-4edd-924f-1f454f3be06c"

    /dev/zd64p3: UUID="80f547fb-26d3-4082-aea9-ad6e5ce162b3" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="1ed5aeab-5076-4bf9-b849-3b999ed7acc0"

    /dev/zd80p1: UUID="4f918a4e-bca0-4436-a670-7570ac3042bb" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="9eaa341d-01"

    /dev/zd80p5: UUID="98d688cf-394f-4175-ae8e-63e2f8de2b35" TYPE="swap" PARTUUID="9eaa341d-05"

    /dev/zd96p1: LABEL="System-reserviert" BLOCK_SIZE="512" UUID="74F8CF99F8CF584E" TYPE="ntfs" PARTUUID="5f45c59d-01"

    /dev/zd96p2: BLOCK_SIZE="512" UUID="C8B4D3A8B4D39772" TYPE="ntfs" PARTUUID="5f45c59d-02"

    /dev/zd96p3: BLOCK_SIZE="512" UUID="86A02CB3A02CAB9D" TYPE="ntfs" PARTUUID="5f45c59d-03"

    /dev/zd112p1: LABEL="System-reserviert" BLOCK_SIZE="512" UUID="AE28F15428F11C51" TYPE="ntfs" PARTUUID="c19b3327-01"

    /dev/zd112p2: BLOCK_SIZE="512" UUID="0C17000B16FFF394" TYPE="ntfs" PARTUUID="c19b3327-02"

    /dev/zd112p3: BLOCK_SIZE="512" UUID="38341F47341F0810" TYPE="ntfs" PARTUUID="c19b3327-03"

    /dev/zd112p4: LABEL="D" BLOCK_SIZE="512" UUID="5816764C16762AE0" TYPE="ntfs" PARTUUID="c19b3327-04"

    /dev/zd128p1: UUID="NBnu1c-0Y7j-5urK-vJtf-04AV-SYwu-3LXkUc" TYPE="LVM2_member" PARTUUID="ae6aa5fe-01"

    /dev/zd144p1: UUID="c4d11bdd-a7ca-4b3f-a0ff-41051db1efd1" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="df4cdb96-01"

    /dev/zd144p5: UUID="86d8ba71-d611-4efd-a98f-f35cfb702c1b" TYPE="swap" PARTUUID="df4cdb96-05"


Output of fdisk -l:
Code:
 fdisk -l
Disk /dev/nvme0n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WDS200T2B0C-00PXH0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: CE08E6F8-1F45-481B-98F9-BDEB68D2CE8A

Device           Start        End    Sectors  Size Type
/dev/nvme0n1p1      34       2047       2014 1007K BIOS boot
/dev/nvme0n1p2    2048    1050623    1048576  512M EFI System
/dev/nvme0n1p3 1050624 3907029134 3905978511  1.8T Solaris /usr & Apple ZFS


Disk /dev/nvme1n1: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: WDC WDS200T2B0C-00PXH0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: BAA47686-201E-46AD-943D-961236CA1189

Device           Start        End    Sectors  Size Type
/dev/nvme1n1p1      34       2047       2014 1007K BIOS boot
/dev/nvme1n1p2    2048    1050623    1048576  512M EFI System
/dev/nvme1n1p3 1050624 3907029134 3905978511  1.8T Solaris /usr & Apple ZFS


Disk /dev/zd0: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0xb2a13455

Device     Boot    Start       End   Sectors  Size Id Type
/dev/zd0p1 *        2048  39452671  39450624 18.8G 83 Linux
/dev/zd0p2      39454718 209713151 170258434 81.2G  5 Extended
/dev/zd0p5      39454720  53254143  13799424  6.6G 83 Linux
/dev/zd0p6      53256192  55255039   1998848  976M 82 Linux swap / Solaris
/dev/zd0p7      55257088  57729023   2471936  1.2G 83 Linux
/dev/zd0p8      57731072 209713151 151982080 72.5G 83 Linux

Partition 2 does not start on physical sector boundary.


Disk /dev/zd16: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x0a5ab846

Device      Boot     Start       End   Sectors  Size Id Type
/dev/zd16p1 *         2048 102856703 102854656   49G 83 Linux
/dev/zd16p2      102858750 104855551   1996802  975M  5 Extended
/dev/zd16p5      102858752 104855551   1996800  975M 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.


Disk /dev/zd32: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0xfda853e2

Device      Boot     Start       End   Sectors  Size Id Type
/dev/zd32p1 *         2048 207714303 207712256   99G 83 Linux
/dev/zd32p2      207716350 209713151   1996802  975M  5 Extended
/dev/zd32p5      207716352 209713151   1996800  975M 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.


Disk /dev/zd48: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x7e581399

Device      Boot     Start       End   Sectors  Size Id Type
/dev/zd48p1 *         2048 207714303 207712256   99G 83 Linux
/dev/zd48p2      207716350 209713151   1996802  975M  5 Extended
/dev/zd48p5      207716352 209713151   1996800  975M 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.


Disk /dev/zd64: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: gpt
Disk identifier: CB03EEBF-1D10-4E28-8626-8F3D4B759EF5

Device        Start       End   Sectors  Size Type
/dev/zd64p1    2048      4095      2048    1M BIOS boot
/dev/zd64p2    4096   1054719   1050624  513M EFI System
/dev/zd64p3 1054720 104855551 103800832 49.5G Linux filesystem


Disk /dev/zd80: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x9eaa341d

Device      Boot    Start      End  Sectors  Size Id Type
/dev/zd80p1 *        2048 65107967 65105920   31G 83 Linux
/dev/zd80p2      65110014 67106815  1996802  975M  5 Extended
/dev/zd80p5      65110016 67106815  1996800  975M 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.


Disk /dev/zd96: 200 GiB, 214748364800 bytes, 419430400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x5f45c59d

Device      Boot     Start       End   Sectors   Size Id Type
/dev/zd96p1 *         2048    104447    102400    50M  7 HPFS/NTFS/exFAT
/dev/zd96p2         104448 418350454 418246007 199.4G  7 HPFS/NTFS/exFAT
/dev/zd96p3      418351104 419426303   1075200   525M 27 Hidden NTFS WinRE


Disk /dev/zd112: 80 GiB, 85899345920 bytes, 167772160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0xc19b3327

Device       Boot    Start       End  Sectors  Size Id Type
/dev/zd112p1 *        2048   1126399  1124352  549M  7 HPFS/NTFS/exFAT
/dev/zd112p2       1126400  80841875 79715476   38G  7 HPFS/NTFS/exFAT
/dev/zd112p3      80842752  81917951  1075200  525M 27 Hidden NTFS WinRE
/dev/zd112p4      81920000 167770111 85850112 40.9G  7 HPFS/NTFS/exFAT


Disk /dev/zd128: 1.46 TiB, 1610612736000 bytes, 3145728000 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0xae6aa5fe

Device       Boot Start        End    Sectors  Size Id Type
/dev/zd128p1       2048 3145727999 3145725952  1.5T 83 Linux


Disk /dev/zd144: 1.17 TiB, 1288490188800 bytes, 2516582400 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0xdf4cdb96

Device       Boot      Start        End    Sectors  Size Id Type
/dev/zd144p1 *          2048 2514581503 2514579456  1.2T 83 Linux
/dev/zd144p2      2514583550 2516580351    1996802  975M  5 Extended
/dev/zd144p5      2514583552 2516580351    1996800  975M 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.

Output of mount:
Code:
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
proc on /proc type proc (rw,relatime)
udev on /dev type devtmpfs (rw,nosuid,relatime,size=32836540k,nr_inodes=8209135,mode=755,inode64)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000)
tmpfs on /run type tmpfs (rw,nosuid,nodev,noexec,relatime,size=6574216k,mode=755,inode64)
/rpool/ROOT/pve-1 on / type zfs (rw,noatime,xattr,noacl)
securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,inode64)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k,inode64)
cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)
pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime)
efivarfs on /sys/firmware/efi/efivars type efivarfs (rw,nosuid,nodev,noexec,relatime)
bpf on /sys/fs/bpf type bpf (rw,nosuid,nodev,noexec,relatime,mode=700)
systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=27720)
hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
mqueue on /dev/mqueue type mqueue (rw,nosuid,nodev,noexec,relatime)
debugfs on /sys/kernel/debug type debugfs (rw,nosuid,nodev,noexec,relatime)
tracefs on /sys/kernel/tracing type tracefs (rw,nosuid,nodev,noexec,relatime)
sunrpc on /run/rpc_pipefs type rpc_pipefs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,nosuid,nodev,noexec,relatime)
configfs on /sys/kernel/config type configfs (rw,nosuid,nodev,noexec,relatime)
/rpool on /rpool type zfs (rw,noatime,xattr,noacl)
/rpool/data on /rpool/data type zfs (rw,noatime,xattr,noacl)
/rpool/ROOT on /rpool/ROOT type zfs (rw,noatime,xattr,noacl)
lxcfs on /var/lib/lxcfs type fuse.lxcfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,allow_other)
tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,size=6574212k,nr_inodes=1643553,mode=700,inode64)


 
Hi,
you can check how full the partitions are with df -h.
/etc/pve being empty is probably because pve-cluster could not start properly.
Would you mind posting the output of journalctl -eu pve-cluster.service?
 
Hi,
the output of df -h:
Code:
Filesystem        Size  Used Avail Use% Mounted on
udev               32G     0   32G   0% /dev
tmpfs             6.3G  699M  5.6G  11% /run
rpool/ROOT/pve-1  3.3G  3.3G     0 100% /
tmpfs              32G     0   32G   0% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
rpool             128K  128K     0 100% /rpool
rpool/data        128K  128K     0 100% /rpool/data
rpool/ROOT        128K  128K     0 100% /rpool/ROOT
tmpfs             6.3G     0  6.3G   0% /run/user/0

and of journalctl -eu pve-cluster.service:

Code:
Nov 07 14:33:19 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-node/VM2: -1
Nov 07 14:33:19 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/2020: -1
Nov 07 14:33:19 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/2002: -1
Nov 07 14:33:19 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/100: -1
Nov 07 14:33:19 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/2021: -1
Nov 07 14:33:19 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/2000: -1
Nov 07 14:33:19 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/2004: -1
Nov 07 14:33:19 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/102: -1
Nov 07 14:33:19 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/2009: -1
Nov 07 14:33:19 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/VM2/DS1: -1
Nov 07 14:33:19 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/VM2/local-zfs: -1
Nov 07 14:33:19 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/VM2/local: -1
Nov 07 14:34:00 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/VM2/local-zfs: -1
Nov 07 14:34:00 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/VM2/DS1: -1
Nov 07 14:34:00 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/VM2/local: -1
Nov 07 14:34:51 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/VM2/local: -1
Nov 07 14:34:51 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/VM2/local-zfs: -1
Nov 07 14:34:51 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/VM2/DS1: -1
Nov 07 14:36:03 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-node/VM2: -1
Nov 07 14:36:04 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/2000: -1
Nov 07 14:36:04 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/2021: -1
Nov 07 14:36:04 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/2004: -1
Nov 07 14:36:04 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/102: -1
Nov 07 14:36:04 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/2009: -1
Nov 07 14:36:04 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/2002: -1
Nov 07 14:36:04 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/2020: -1
Nov 07 14:36:04 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-vm/100: -1
Nov 07 14:37:00 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/VM2/local: -1
Nov 07 14:37:00 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/VM2/DS1: -1
Nov 07 14:37:00 VM2 pmxcfs[2494]: [status] notice: RRDC update error /var/lib/rrdcached/db/pve2-storage/VM2/local-zfs: -1




By the way, rpool/ROOT/pve-1 showing 3.3G 3.3G 0 100% / really shouldn't be possible; usage was at 38% before I set up the new VM, a drive with 1 GByte.
In theory I could delete ID 100 (the freshly set up one) and ID 102 (the old system), though.
I just don't know how that's supposed to work without the web interface.
 
Correct me if I'm wrong, but you installed your system on ZFS, right? Not on LVM.
In that case the output of zfs list and zpool status would be interesting.
According to the df -h output, the root partition was only given ~3 GB.
 
Yes, that's right.
Can't the root simply be enlarged?
For the record, I didn't fiddle with anything; those sizes were set up automatically during installation.


zfs list:
Code:
NAME                        USED  AVAIL     REFER  MOUNTPOINT
rpool                      1.76T     0B       96K  /rpool
rpool/ROOT                 3.23G     0B       96K  /rpool/ROOT
rpool/ROOT/pve-1           3.23G     0B     3.23G  /
rpool/data                 1.76T     0B       96K  /rpool/data
rpool/data/vm-100-disk-0    447G     0B      447G  -
rpool/data/vm-102-disk-0   1.18T     0B     1.18T  -
rpool/data/vm-102-disk-1   1.58G     0B     1.58G  -
rpool/data/vm-102-disk-2   5.02G     0B     5.02G  -
rpool/data/vm-2000-disk-0  39.3G     0B     39.3G  -
rpool/data/vm-2002-disk-0  6.89G     0B     6.89G  -
rpool/data/vm-2004-disk-0  10.9G     0B     10.9G  -
rpool/data/vm-2009-disk-0  3.86G     0B     3.86G  -
rpool/data/vm-2020-disk-0  40.8G     0B     40.8G  -
rpool/data/vm-2021-disk-0  39.9G     0B     39.9G  -

zpool status:
Code:
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 00:22:01 with 0 errors on Sun Oct  8 00:46:02 2023
config:

        NAME                                                 STATE     READ WRITE CKSUM
        rpool                                                ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            nvme-eui.e8238fa6bf530001001b448b45428c14-part3  ONLINE       0     0     0
            nvme-eui.e8238fa6bf530001001b448b45428ae4-part3  ONLINE       0     0     0

errors: No known data errors
 
I just reproduced the scenario for myself. If the VMs share a zpool with the host root fs and it runs full, it does indeed look exactly like your case, with the same symptoms (web interface gone and /etc/pve empty):

Code:
Filesystem        Size  Used Avail Use% Mounted on
udev              1.9G     0  1.9G   0% /dev
tmpfs             391M  688K  391M   1% /run
rpool/ROOT/pve-1  1.3G  1.3G     0 100% /
tmpfs             2.0G     0  2.0G   0% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
rpool             128K  128K     0 100% /rpool
rpool/data        128K  128K     0 100% /rpool/data
rpool/ROOT        128K  128K     0 100% /rpool/ROOT
tmpfs             391M     0  391M   0% /run/user/0

Ideally you'd still have some files lying around that you can simply delete. If not, zfs list gives you a list of VM disks; if one of them is expendable, you can sacrifice it to reclaim space. In my case it was vm-100: zfs destroy rpool/data/vm-100-disk-0

After a reboot, Proxmox came back up without any problems.
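For reference, the whole recovery sequence as a sketch (the disk name is taken from the zfs list output above; pick whichever zvol is actually expendable on your system):

```shell
# List all datasets and zvols with their space usage:
zfs list -t all
# Destroy an expendable VM disk to free space in the full pool.
# This is irreversible -- double-check the name first:
zfs destroy rpool/data/vm-100-disk-0
# With free space back, pmxcfs can start and mount /etc/pve again:
reboot
```

Once /etc/pve is back, the leftover VM config for the destroyed disk can be cleaned up in the GUI or with qm destroy.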
 
Great, that worked.
Output of zfs list now:
Code:
NAME                        USED  AVAIL     REFER  MOUNTPOINT
rpool                       152G  1.61T       96K  /rpool
rpool/ROOT                 10.0G  1.61T       96K  /rpool/ROOT
rpool/ROOT/pve-1           3.23G  1.61T     3.23G  /
rpool/data                  142G  1.61T       96K  /rpool/data
rpool/data/vm-2000-disk-0  39.3G  1.61T     39.3G  -
rpool/data/vm-2002-disk-0  6.89G  1.61T     6.89G  -
rpool/data/vm-2004-disk-0  10.9G  1.61T     10.9G  -
rpool/data/vm-2009-disk-0  3.86G  1.61T     3.86G  -
rpool/data/vm-2020-disk-0  40.7G  1.61T     40.7G  -
rpool/data/vm-2021-disk-0  39.9G  1.61T     39.9G  -

THANK YOU!!!!!
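To keep this from happening again: on ZFS all datasets share one pool, so VM zvols can starve the root dataset. A possible precaution, as a sketch only (the 8G is an arbitrary example value, not a recommendation for this system):

```shell
# Inspect the current space guarantees on the root dataset:
zfs get quota,reservation,refreservation rpool/ROOT/pve-1
# Reserve space for the root dataset so VM disks cannot consume it all.
# Only works while the pool still has that much free space:
zfs set refreservation=8G rpool/ROOT/pve-1
```

Alternatively, a quota on rpool/data would cap the total space the VM disks can take.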
 
