[SOLVED] Problems after upgrading to Proxmox 8

corin.corvus

Active Member
Apr 8, 2020
Hi,

I just upgraded my 3-node cluster to V8.

Unfortunately, I now keep running into problems that I can't quite explain. It feels like the servers are constantly rebooting or losing their connection to each other.

This is what shows up in the logs of every node:
Code:
Jul 10 19:34:38 N-2 corosync[1365]:   [TOTEM ] Retransmit List: 79
Jul 10 19:34:38 N-2 corosync[1365]:   [TOTEM ] Retransmit List: 79
Jul 10 19:34:38 N-2 corosync[1365]:   [TOTEM ] Retransmit List: 79
Jul 10 19:34:38 N-2 corosync[1365]:   [TOTEM ] Retransmit List: 79
Jul 10 19:34:38 N-2 corosync[1365]:   [TOTEM ] Retransmit List: 79
Jul 10 19:34:38 N-2 corosync[1365]:   [TOTEM ] Retransmit List: 79
Jul 10 19:34:38 N-2 corosync[1365]:   [TOTEM ] Retransmit List: 79
Jul 10 19:34:41 N-2 corosync[1365]:   [KNET  ] link: host: 4 link: 0 is down
Jul 10 19:34:41 N-2 corosync[1365]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Jul 10 19:34:41 N-2 corosync[1365]:   [KNET  ] host: host: 4 has no active links
Jul 10 19:34:43 N-2 corosync[1365]:   [KNET  ] rx: host: 4 link: 0 is up
Jul 10 19:34:43 N-2 corosync[1365]:   [KNET  ] link: Resetting MTU for link 0 because host 4 joined
Jul 10 19:34:43 N-2 corosync[1365]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Jul 10 19:34:43 N-2 corosync[1365]:   [KNET  ] pmtud: Global data MTU changed to: 8885
Jul 10 19:34:48 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9
Jul 10 19:34:50 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9
Jul 10 19:34:51 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9
Jul 10 19:34:52 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9
Jul 10 19:34:52 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9
Jul 10 19:34:53 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9
Jul 10 19:34:54 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9
Jul 10 19:34:55 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9
Jul 10 19:34:56 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:34:57 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:34:59 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:35:00 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:35:02 N-2 corosync[1365]:   [KNET  ] link: host: 3 link: 0 is down
Jul 10 19:35:02 N-2 corosync[1365]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Jul 10 19:35:02 N-2 corosync[1365]:   [KNET  ] host: host: 3 has no active links
Jul 10 19:35:03 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:35:04 N-2 corosync[1365]:   [KNET  ] rx: host: 3 link: 0 is up
Jul 10 19:35:04 N-2 corosync[1365]:   [KNET  ] link: Resetting MTU for link 0 because host 3 joined
Jul 10 19:35:04 N-2 corosync[1365]:   [KNET  ] host: host: 3 (passive) best link: 0 (pri: 1)
Jul 10 19:35:04 N-2 corosync[1365]:   [KNET  ] pmtud: Global data MTU changed to: 8885
Jul 10 19:35:05 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:35:06 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:35:08 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:35:10 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab ad
Jul 10 19:35:12 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab ad
Jul 10 19:35:13 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:35:13 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:35:14 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:35:15 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab b0
Jul 10 19:35:16 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab b0 b1
Jul 10 19:35:17 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab b0
Jul 10 19:35:17 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:35:18 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:35:19 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab
Jul 10 19:35:20 N-2 corosync[1365]:   [TOTEM ] Retransmit List: a9 ab


I don't know how to fix this, and I can only intermittently reach the nodes. For now I have shut down N-3 and N-4 so that N-2 stops rebooting constantly.
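Side note: the log above shows a global data MTU of 8885, i.e. jumbo frames on the cluster network, and constant retransmit lists combined with flapping knet links usually point at packet loss or an MTU problem on the network rather than at corosync itself. One quick check is whether jumbo frames actually pass between the nodes end-to-end; a sketch, with 10.0.0.x standing in for another node's cluster address:
Code:
# 8972 bytes payload + 28 bytes of headers = a 9000-byte frame; -M do sets
# the don't-fragment bit, so this fails if the path can't carry jumbo frames
ping -M do -s 8972 10.0.0.x

# for comparison: a standard 1500-byte frame (1472 bytes payload)
ping -M do -s 1472 10.0.0.x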

pvecm status gives me this:
Code:
root@N-2:/etc/pve# pvecm status
Can't use an undefined value as a HASH reference at /usr/share/perl5/PVE/CLI/pvecm.pm line 486, <DATA> line 960.

corosync.conf is empty and cannot be edited (read-only).
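An empty, read-only /etc/pve is actually the expected behaviour of pmxcfs on a node that has lost quorum, so the state of the cluster stack is worth checking before trying to edit anything. A sketch using the standard service names; note that pmxcfs -l forcibly mounts /etc/pve writable in local mode and should only be used as a last resort:
Code:
# the two services behind /etc/pve
systemctl status corosync pve-cluster

# watch corosync's view of the membership live
journalctl -u corosync -f

# last resort on a node that is permanently without quorum:
systemctl stop pve-cluster corosync
pmxcfs -l   # mounts /etc/pve read-write in local mode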

Laut "Cluster" Ansicht habe ich kein Cluster mehr:
[screenshot: the Datacenter → Cluster view shows no cluster information]

The nodes are still visible, though, and once they are up I can also connect between them.

On N-3 the cluster is still there and the config can be viewed. As far as I can tell, N-3 is the master (after I had removed N-1).

Something has gotten itself thoroughly broken here :/


I'd appreciate any help.

Regards
 
Update:
As soon as I power N-4 back on, N-3 and N-4 run for a short while and start their VMs, and then, just like that, N-3 is gone.

Syslog N-3:
Code:
Jul 10 20:00:54 N-3 pvedaemon[1697]: <root@pam> successful auth for user 'root@pam'
Jul 10 20:00:55 N-3 nfsrahead[1845]: setting /mnt/pve/Backup readahead to 128
Jul 10 20:00:55 N-3 pvestatd[1667]: status update time (12.235 seconds)
Jul 10 20:00:55 N-3 corosync[1621]:   [KNET  ] rx: host: 4 link: 0 is up
Jul 10 20:00:55 N-3 corosync[1621]:   [KNET  ] link: Resetting MTU for link 0 because host 4 joined
Jul 10 20:00:55 N-3 corosync[1621]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Jul 10 20:00:55 N-3 pvestatd[1667]: command '/sbin/vgscan --ignorelockingfailure --mknodes' failed: exit code 5
Jul 10 20:00:55 N-3 corosync[1621]:   [KNET  ] pmtud: PMTUD link change for host: 4 link: 0 from 469 to 8885
Jul 10 20:00:55 N-3 corosync[1621]:   [KNET  ] pmtud: Global data MTU changed to: 8885
Jul 10 20:00:55 N-3 kernel: iscsi: registered transport (tcp)
Jul 10 20:00:55 N-3 kernel: scsi host2: iSCSI Initiator over TCP/IP
Jul 10 20:00:55 N-3 kernel: scsi 2:0:0:0: Direct-Access     TrueNAS  iSCSI Disk       370  PQ: 0 ANSI: 6
Jul 10 20:00:55 N-3 kernel: sd 2:0:0:0: Attached scsi generic sg2 type 0
Jul 10 20:00:55 N-3 kernel: sd 2:0:0:0: Power-on or device reset occurred
Jul 10 20:00:55 N-3 kernel: sd 2:0:0:0: [sdc] 8589934590 512-byte logical blocks: (4.40 TB/4.00 TiB)
Jul 10 20:00:55 N-3 kernel: sd 2:0:0:0: [sdc] 4096-byte physical blocks
Jul 10 20:00:55 N-3 kernel: sd 2:0:0:0: [sdc] Write Protect is off
Jul 10 20:00:55 N-3 kernel: sd 2:0:0:0: [sdc] Mode Sense: 83 00 10 08
Jul 10 20:00:55 N-3 kernel: sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
Jul 10 20:00:55 N-3 kernel: sd 2:0:0:0: [sdc] Preferred minimum I/O size 4096 bytes
Jul 10 20:00:55 N-3 kernel: sd 2:0:0:0: [sdc] Optimal transfer size 524288 bytes
Jul 10 20:00:55 N-3 kernel: sd 2:0:0:0: [sdc] Attached SCSI disk
Jul 10 20:00:55 N-3 corosync[1621]:   [QUORUM] Sync members[2]: 3 4
Jul 10 20:00:55 N-3 corosync[1621]:   [QUORUM] Sync joined[1]: 4
Jul 10 20:00:55 N-3 corosync[1621]:   [TOTEM ] A new membership (3.35d) was formed. Members joined: 4
Jul 10 20:00:55 N-3 pmxcfs[1516]: [dcdb] notice: members: 3/1516, 4/1435
Jul 10 20:00:55 N-3 pmxcfs[1516]: [dcdb] notice: starting data syncronisation
Jul 10 20:00:55 N-3 corosync[1621]:   [QUORUM] This node is within the primary component and will provide service.
Jul 10 20:00:55 N-3 corosync[1621]:   [QUORUM] Members[2]: 3 4
Jul 10 20:00:55 N-3 corosync[1621]:   [MAIN  ] Completed service synchronization, ready to provide service.
Jul 10 20:00:55 N-3 lvm[1868]: PV /dev/sdc online, VG iscsi-1 is complete.
Jul 10 20:00:55 N-3 pmxcfs[1516]: [dcdb] notice: cpg_send_message retried 1 times
Jul 10 20:00:55 N-3 pmxcfs[1516]: [status] notice: node has quorum
Jul 10 20:00:55 N-3 pmxcfs[1516]: [status] notice: members: 3/1516, 4/1435
Jul 10 20:00:55 N-3 pmxcfs[1516]: [status] notice: starting data syncronisation
Jul 10 20:00:55 N-3 pmxcfs[1516]: [dcdb] notice: received sync request (epoch 3/1516/00000002)
Jul 10 20:00:55 N-3 pmxcfs[1516]: [status] notice: received sync request (epoch 3/1516/00000002)
Jul 10 20:00:55 N-3 systemd[1]: Started lvm-activate-iscsi-1.service - /sbin/lvm vgchange -aay --autoactivation event iscsi-1.
Jul 10 20:00:55 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c d e
Jul 10 20:00:55 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c d e
Jul 10 20:00:55 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c d e
Jul 10 20:00:55 N-3 pmxcfs[1516]: [dcdb] notice: received all states
Jul 10 20:00:55 N-3 pmxcfs[1516]: [dcdb] notice: leader is 4/1435
Jul 10 20:00:55 N-3 pmxcfs[1516]: [dcdb] notice: synced members: 4/1435
Jul 10 20:00:55 N-3 pmxcfs[1516]: [dcdb] notice: waiting for updates from leader
Jul 10 20:00:55 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c d e
Jul 10 20:00:55 N-3 pmxcfs[1516]: [status] notice: received all states
Jul 10 20:00:55 N-3 pmxcfs[1516]: [status] notice: all data is up to date
Jul 10 20:00:55 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c d
Jul 10 20:00:55 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c d
Jul 10 20:00:55 N-3 lvm[1886]: /dev/dm-6 excluded: device is an LV.
Jul 10 20:00:56 N-3 lvm[1880]:   1 logical volume(s) in volume group "iscsi-1" now active
Jul 10 20:00:56 N-3 systemd[1]: lvm-activate-iscsi-1.service: Deactivated successfully.
Jul 10 20:00:56 N-3 iscsid[1417]: Connection1:0 to [target: s-1.iscsi:iscsi, portal: 10.0.0.17,3260] through [iface: default] is operational now
Jul 10 20:00:56 N-3 iscsid[1417]: connect to 172.17.114.171:3260 failed (Network is unreachable)
Jul 10 20:00:56 N-3 iscsid[1417]: connect to 172.17.0.1:3260 failed (Network is unreachable)
Jul 10 20:00:56 N-3 iscsid[1417]: connect to 172.17.65.143:3260 failed (Network is unreachable)
Jul 10 20:00:56 N-3 iscsid[1417]: connect to 172.17.84.20:3260 failed (Network is unreachable)
Jul 10 20:00:56 N-3 iscsid[1417]: connect to 172.17.28.50:3260 failed (Network is unreachable)
Jul 10 20:00:56 N-3 iscsid[1417]: connect to 172.17.51.161:3260 failed (Network is unreachable)
Jul 10 20:00:56 N-3 iscsid[1417]: connect to 172.16.0.1:3260 failed (Network is unreachable)
Jul 10 20:00:56 N-3 iscsid[1417]: connect to 172.17.7.49:3260 failed (Network is unreachable)
Jul 10 20:00:56 N-3 iscsid[1417]: connect to 172.17.43.189:3260 failed (Network is unreachable)
Jul 10 20:00:56 N-3 iscsid[1417]: connect to 172.17.145.112:3260 failed (Network is unreachable)
Jul 10 20:00:56 N-3 iscsid[1417]: connect to 172.17.4.169:3260 failed (Network is unreachable)
Jul 10 20:00:56 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c d
Jul 10 20:00:56 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c d
Jul 10 20:00:56 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:00:56 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c 12
Jul 10 20:00:56 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c 12
Jul 10 20:00:56 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c 12
Jul 10 20:00:56 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c 12
Jul 10 20:00:56 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:00:56 N-3 pve-guests[1842]: <root@pam> starting task UPID:N-3:00000761:000014D7:64AC4758:startall::root@pam:
Jul 10 20:00:56 N-3 pvesh[1842]: Starting VM 201
Jul 10 20:00:56 N-3 pve-guests[1889]: <root@pam> starting task UPID:N-3:00000762:000014DA:64AC4758:qmstart:201:root@pam:
Jul 10 20:00:56 N-3 pve-guests[1890]: start VM 201: UPID:N-3:00000762:000014DA:64AC4758:qmstart:201:root@pam:
Jul 10 20:00:57 N-3 systemd[1]: Created slice qemu.slice - Slice /qemu.
Jul 10 20:00:57 N-3 systemd[1]: Started 201.scope.
Jul 10 20:00:57 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:00:57 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:00:57 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:00:57 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:00:57 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:00:57 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:00:58 N-3 kernel: device tap201i0 entered promiscuous mode
Jul 10 20:00:58 N-3 kernel: vmbr0: port 2(tap201i0) entered blocking state
Jul 10 20:00:58 N-3 kernel: vmbr0: port 2(tap201i0) entered disabled state
Jul 10 20:00:58 N-3 kernel: vmbr0: port 2(tap201i0) entered blocking state
Jul 10 20:00:58 N-3 kernel: vmbr0: port 2(tap201i0) entered forwarding state
Jul 10 20:00:58 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:00:59 N-3 iscsid[1417]: connect to 172.17.0.10:3260 failed (Network is unreachable)
Jul 10 20:01:00 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:01:00 N-3 chronyd[1451]: Selected source 85.214.98.59 (2.debian.pool.ntp.org)
Jul 10 20:01:00 N-3 chronyd[1451]: System clock TAI offset set to 37 seconds
Jul 10 20:01:01 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:01:01 N-3 iscsid[1417]: connect to 172.17.43.189:3260 failed (Network is unreachable)
Jul 10 20:01:01 N-3 iscsid[1417]: connect to 172.17.145.112:3260 failed (Network is unreachable)
Jul 10 20:01:01 N-3 iscsid[1417]: connect to 172.17.4.169:3260 failed (Network is unreachable)
Jul 10 20:01:01 N-3 pvesh[1842]: Starting VM 202
Jul 10 20:01:01 N-3 pve-guests[1989]: start VM 202: UPID:N-3:000007C5:000016D0:64AC475D:qmstart:202:root@pam:
Jul 10 20:01:01 N-3 pve-guests[1889]: <root@pam> starting task UPID:N-3:000007C5:000016D0:64AC475D:qmstart:202:root@pam:
Jul 10 20:01:01 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:01:01 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:01:02 N-3 systemd[1]: Started 202.scope.
Jul 10 20:01:02 N-3 iscsid[1417]: connect to 172.17.114.171:3260 failed (Network is unreachable)
Jul 10 20:01:02 N-3 iscsid[1417]: connect to 172.17.0.1:3260 failed (Network is unreachable)
Jul 10 20:01:02 N-3 iscsid[1417]: connect to 172.17.65.143:3260 failed (Network is unreachable)
Jul 10 20:01:02 N-3 iscsid[1417]: connect to 172.17.84.20:3260 failed (Network is unreachable)
Jul 10 20:01:02 N-3 iscsid[1417]: connect to 172.17.28.50:3260 failed (Network is unreachable)
Jul 10 20:01:02 N-3 iscsid[1417]: connect to 172.17.51.161:3260 failed (Network is unreachable)
Jul 10 20:01:02 N-3 iscsid[1417]: connect to 172.16.0.1:3260 failed (Network is unreachable)
Jul 10 20:01:02 N-3 iscsid[1417]: connect to 172.17.7.49:3260 failed (Network is unreachable)
Jul 10 20:01:02 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c 14
Jul 10 20:01:04 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c 14
Jul 10 20:01:06 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c 14
Jul 10 20:01:08 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c 14
Jul 10 20:01:08 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c 14
Jul 10 20:01:10 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c 14
Jul 10 20:01:11 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:01:12 N-3 iscsid[1417]: connect to 172.17.43.189:3260 failed (Network is unreachable)
Jul 10 20:01:12 N-3 iscsid[1417]: connect to 172.17.145.112:3260 failed (Network is unreachable)
Jul 10 20:01:12 N-3 iscsid[1417]: connect to 172.17.4.169:3260 failed (Network is unreachable)
Jul 10 20:01:12 N-3 iscsid[1417]: connect to 172.17.0.10:3260 failed (Network is unreachable)
Jul 10 20:01:13 N-3 iscsid[1417]: connect to 172.17.0.1:3260 failed (Network is unreachable)
Jul 10 20:01:13 N-3 iscsid[1417]: connect to 172.17.65.143:3260 failed (Network is unreachable)
Jul 10 20:01:13 N-3 iscsid[1417]: connect to 172.17.84.20:3260 failed (Network is unreachable)
Jul 10 20:01:13 N-3 iscsid[1417]: connect to 172.17.28.50:3260 failed (Network is unreachable)
Jul 10 20:01:13 N-3 iscsid[1417]: connect to 172.17.51.161:3260 failed (Network is unreachable)
Jul 10 20:01:13 N-3 iscsid[1417]: connect to 172.16.0.1:3260 failed (Network is unreachable)
Jul 10 20:01:13 N-3 iscsid[1417]: connect to 172.17.7.49:3260 failed (Network is unreachable)
Jul 10 20:01:13 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
Jul 10 20:01:14 N-3 corosync[1621]:   [TOTEM ] Retransmit List: c
 
So, after removing all replications from the master and then slowly bringing the other nodes up one at a time, two nodes are still churning out retransmit logs, but so far they are no longer rebooting. The VMs are up.

For now it seems to be running, then, although everything is extremely slow (VM migrations are now down to kB/s instead of MB/s).
This also keeps showing up:
Code:
Jul 10 20:52:28 N-2 corosync[1411]:   [TOTEM ] Retransmit List: 4a6
Jul 10 20:52:29 N-2 corosync[1411]:   [TOTEM ] Retransmit List: 4aa
Jul 10 20:52:34 N-2 pve-ha-lrm[1592]: loop take too long (33 seconds)
Jul 10 20:52:34 N-2 corosync[1411]:   [KNET  ] link: host: 4 link: 0 is down
Jul 10 20:52:34 N-2 corosync[1411]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Jul 10 20:52:34 N-2 corosync[1411]:   [KNET  ] host: host: 4 has no active links
Jul 10 20:52:36 N-2 corosync[1411]:   [KNET  ] rx: host: 4 link: 0 is up
Jul 10 20:52:36 N-2 corosync[1411]:   [KNET  ] link: Resetting MTU for link 0 because host 4 joined
Jul 10 20:52:36 N-2 corosync[1411]:   [KNET  ] host: host: 4 (passive) best link: 0 (pri: 1)
Jul 10 20:52:36 N-2 corosync[1411]:   [TOTEM ] Token has not been received in 2737 ms
Jul 10 20:52:40 N-2 corosync[1411]:   [KNET  ] pmtud: Global data MTU changed to: 8885


The retransmits are still coming thick and fast, on and off between the nodes. It seems to me that this is normal?
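Occasional retransmits can happen, but a constant stream of them is not normal; together with the kB/s migrations and the reboots, it points at ongoing packet loss on the corosync link. The per-link state can be inspected with the standard corosync tooling:
Code:
# local node id plus the status of every knet link
corosync-cfgtool -s

# cluster-wide membership and quorum view
pvecm status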

On one node I still have a disk that I cannot remove.
When I try, it says "snapshot is cloned". If I then try it as described here: https://www.claudiokuenzler.com/blog/453/zfs-cannot-delete-destroy-snapshot-is-cloned-dependant

it says "Dataset is busy".

It's a disk from a VM that no longer exists, and the replication jobs have been deleted, so it's a leftover.
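"snapshot is cloned" means another dataset was created as a clone of that disk's snapshot (replication leftovers often look like this), and "dataset is busy" can additionally mean the zvol is still in use somewhere. A sketch of how the clone dependency can be traced and dissolved; the dataset and snapshot names below are made-up examples, and since zfs promote reverses the clone relationship, the output should be read carefully before destroying anything:
Code:
# which dataset is a clone of the stuck snapshot? (names are examples)
zfs get clones rpool/data/vm-100-disk-0@__replicate_100-0_1689012345__
zfs list -t snapshot -o name,clones -r rpool/data

# suppose the output names rpool/data/vm-101-disk-0 as the clone:
# promoting it transfers the snapshot over to the clone...
zfs promote rpool/data/vm-101-disk-0

# ...after which the leftover dataset can be destroyed
zfs destroy -r rpool/data/vm-100-disk-0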

In general, I'd be glad if someone could take a look at all of this and tell me whether there are still underlying problems; given how it is currently running, I assume there are.

I can no longer view the replication configs either, although replication.cfg still exists:
[screenshot: error when opening the replication view]
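The replication state can also be queried from the CLI, which keeps working even when the GUI view fails; pvesr is the standard PVE replication tool:
Code:
# list all replication jobs and their last sync status
pvesr status

# the raw config the GUI reads
cat /etc/pve/replication.cfg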

Regards

Addendum:
Now the web GUI is no longer reachable either, even though all the services are running.
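If the services claim to be running but the GUI is unreachable, restarting the web-facing services is a reasonable first check (a sketch with the standard PVE service names):
Code:
# the GUI is served by pveproxy; pvedaemon does the privileged work behind it
systemctl restart pvedaemon pveproxy

# verify the proxy is actually listening on port 8006
ss -tlnp | grep 8006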

Please help :/
 
Hi,

I was finally able to calm down the whole crash/reboot storm by reverting the SSH settings:
/etc/ssh/sshd_config -> "If you have not changed this file manually, the only differences should be a replacement of ChallengeResponseAuthentication no with KbdInteractiveAuthentication no and some irrelevant changes in comments (lines starting with #)."

I put this back into sshd_config:
ChallengeResponseAuthentication no

and commented out the KbdInteractiveAuthentication line.
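For reference, the relevant part of the file then looks like this (the restart command at the end is illustrative; on Debian-based systems the unit is called ssh):
Code:
# /etc/ssh/sshd_config -- reverted to the pre-upgrade setting
ChallengeResponseAuthentication no
#KbdInteractiveAuthentication no

systemctl restart ssh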

The remaining problems stayed, though, and I have now started rebuilding the cluster completely from scratch. I hope this doesn't happen again.

Regards
 
