Hello,
I've recently run into an issue where rolling back a snapshot never completes; the zfs process is stuck in 'D' (uninterruptible sleep) state:
Code:
root 26279 0.0 0.0 35188 3024 ? D 09:50 0:00 zfs rollback rpool/data/subvol-104-disk-1@initialsetup
I'm trying to figure out what might have caused this, as I've done rollbacks many times on this node without issues. The problem only seems to happen with this CT. There are no errors in dmesg or journalctl. Is there somewhere else I can check?
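If it helps, here is a sketch of what I plan to run next to see where the task is blocked in the kernel (26279 is the PID from the ps output above; reading /proc/&lt;pid&gt;/stack needs root):

```shell
# PID of the stuck 'zfs rollback' process, taken from the ps output above
PID=26279
# Kernel call stack the task is blocked in; fall back to a message if the
# process is gone or we lack permission
STACK=$(cat /proc/$PID/stack 2>/dev/null || echo "no stack available")
echo "$STACK"
# Kernel-wide alternative (as root): log the stack of every blocked
# (D-state) task to the kernel ring buffer, then read it back:
#   echo w > /proc/sysrq-trigger && dmesg | tail -n 60
```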
Here is some information:
Code:
proxmox-ve: 5.2-2 (running kernel: 4.15.17-1-pve)
pve-manager: 5.2-2 (running version: 5.2-2/b1d1c7f4)
pve-kernel-4.15: 5.2-3
pve-kernel-4.13: 5.1-45
pve-kernel-4.15.17-3-pve: 4.15.17-13
pve-kernel-4.15.17-2-pve: 4.15.17-10
pve-kernel-4.15.17-1-pve: 4.15.17-9
pve-kernel-4.13.16-3-pve: 4.13.16-49
pve-kernel-4.13.16-2-pve: 4.13.16-48
pve-kernel-4.13.16-1-pve: 4.13.16-46
pve-kernel-4.13.13-6-pve: 4.13.13-42
pve-kernel-4.13.13-5-pve: 4.13.13-38
pve-kernel-4.13.13-4-pve: 4.13.13-35
pve-kernel-4.13.13-3-pve: 4.13.13-34
pve-kernel-4.13.13-2-pve: 4.13.13-33
pve-kernel-4.13.13-1-pve: 4.13.13-31
pve-kernel-4.13.8-3-pve: 4.13.8-30
pve-kernel-4.13.8-2-pve: 4.13.8-28
pve-kernel-4.13.4-1-pve: 4.13.4-26
corosync: 2.4.2-pve5
criu: 2.11.1-1~bpo90
glusterfs-client: 3.8.8-1
ksm-control-daemon: 1.2-2
libjs-extjs: 6.0.1-2
libpve-access-control: 5.0-8
libpve-apiclient-perl: 2.0-4
libpve-common-perl: 5.0-33
libpve-guest-common-perl: 2.0-16
libpve-http-server-perl: 2.0-9
libpve-storage-perl: 5.0-23
libqb0: 1.0.1-1
lvm2: 2.02.168-pve6
lxc-pve: 3.0.0-3
lxcfs: 3.0.0-1
novnc-pve: 1.0.0-1
proxmox-widget-toolkit: 1.0-19
pve-cluster: 5.0-27
pve-container: 2.0-23
pve-docs: 5.2-4
pve-firewall: 3.0-12
pve-firmware: 2.0-4
pve-ha-manager: 2.0-5
pve-i18n: 1.0-6
pve-libspice-server1: 0.12.8-3
pve-qemu-kvm: 2.11.1-5
pve-xtermjs: 1.0-5
qemu-server: 5.0-28
smartmontools: 6.5+svn4324-1
spiceterm: 3.0-5
vncterm: 1.5-3
zfsutils-linux: 0.7.9-pve1~bpo9
Code:
Name       Type     Status    Total         Used          Available     %
nas-1      dir      active    7779824880    3327663864    4452161016    42.77%
local      dir      active    1318379648    5557632       1312822016    0.42%
local-zfs  zfspool  active    1679255580    366433528     1312822052    21.82%
Code:
[2018-08-20 08:45:46] audit: type=1400 audit(1534768746.657:4687): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=20144 comm="(ionclean)" flags="rw, rslave"
[2018-08-20 09:15:45] audit: type=1400 audit(1534770546.099:4688): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=1748 comm="(ionclean)" flags="rw, rslave"
[2018-08-20 09:15:46] audit: type=1400 audit(1534770546.855:4689): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=1849 comm="(ionclean)" flags="rw, rslave"
[2018-08-20 09:45:46] audit: type=1400 audit(1534772346.284:4690): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=18402 comm="(ionclean)" flags="rw, rslave"
[2018-08-20 09:45:46] audit: type=1400 audit(1534772347.044:4691): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=18707 comm="(ionclean)" flags="rw, rslave"
[2018-08-20 09:50:54] audit: type=1400 audit(1534772654.302:4692): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=4997 comm="(pachectl)" flags="rw, rslave"
[2018-08-20 09:50:54] audit: type=1400 audit(1534772654.806:4693): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxc-container-default-cgns" name="/" pid=5004 comm="(pachectl)" flags="rw, rslave"
[2018-08-20 09:53:23] vmbr1: port 7(veth104i0) entered disabled state
[2018-08-20 09:53:23] device veth104i0 left promiscuous mode
Code:
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0B in 0h57m with 0 errors on Sun Aug 12 01:21:05 2018
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            wwn-0x5000cca05936d10c-part2  ONLINE       0     0     0
            wwn-0x5000cca05934a518-part2  ONLINE       0     0     0
          mirror-1                        ONLINE       0     0     0
            wwn-0x5000cca059359884-part2  ONLINE       0     0     0
            wwn-0x5000cca0593405a8-part2  ONLINE       0     0     0
          mirror-2                        ONLINE       0     0     0
            wwn-0x5000cca059370f48-part2  ONLINE       0     0     0
            wwn-0x5000cca0593530d4-part2  ONLINE       0     0     0
        logs
          mirror-3                        ONLINE       0     0     0
            wwn-0x5002538050002de5-part2  ONLINE       0     0     0
            wwn-0x500253805001ed92-part2  ONLINE       0     0     0
        cache
          wwn-0x5002538050002de5-part3    ONLINE       0     0     0
          wwn-0x500253805001ed92-part3    ONLINE       0     0     0

errors: No known data errors
Code:
NAME                           USED  AVAIL  REFER  MOUNTPOINT
rpool/data/subvol-104-disk-1  1.24G  23.8G  1.18G  /rpool/data/subvol-104-disk-1
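In case the dataset itself is the problem, this is a sketch of the checks I can run on it (dataset and snapshot names taken from the outputs above; `fuser` and the zfs commands may need root):

```shell
# Dataset and snapshot from the outputs above
DS="rpool/data/subvol-104-disk-1"
SNAP="$DS@initialsetup"
# Is anything still using the subvolume's mountpoint?
fuser -vm "/$DS" 2>/dev/null || true
# User holds on the snapshot (shouldn't block a rollback, but worth ruling out)
HOLDS=$(zfs holds "$SNAP" 2>/dev/null || echo "zfs unavailable")
echo "$HOLDS"
# Clones of the snapshot would also keep it referenced
zfs get -H -o value clones "$SNAP" 2>/dev/null || true
```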
Code:
Aug 20 09:40:11 dev-proxmox-2 zed: eid=1654565 class=history_event pool_guid=0x82EE2CB119FC19AD
Aug 20 09:40:11 dev-proxmox-2 zed: eid=1654566 class=history_event pool_guid=0x82EE2CB119FC19AD
Aug 20 09:40:11 dev-proxmox-2 zed: eid=1654567 class=history_event pool_guid=0x82EE2CB119FC19AD
Aug 20 09:40:11 dev-proxmox-2 zed: eid=1654568 class=history_event pool_guid=0x82EE2CB119FC19AD
Aug 20 09:40:11 dev-proxmox-2 zed: eid=1654569 class=history_event pool_guid=0x82EE2CB119FC19AD
Aug 20 09:40:12 dev-proxmox-2 systemd[1]: Started Session 1502302 of user root.
Aug 20 09:40:13 dev-proxmox-2 zed: eid=1654570 class=history_event pool_guid=0x82EE2CB119FC19AD
Aug 20 09:40:13 dev-proxmox-2 ntpd[5566]: Soliciting pool server 144.217.181.221
Aug 20 09:40:17 dev-proxmox-2 systemd[1]: Started Session 1502303 of user root.
Aug 20 09:40:18 dev-proxmox-2 systemd[1]: Started Session 1502304 of user root.
Aug 20 09:40:20 dev-proxmox-2 zed: eid=1654571 class=history_event pool_guid=0x82EE2CB119FC19AD
Aug 20 09:40:20 dev-proxmox-2 zed: eid=1654572 class=history_event pool_guid=0x82EE2CB119FC19AD
Aug 20 09:40:20 dev-proxmox-2 zed: eid=1654573 class=history_event pool_guid=0x82EE2CB119FC19AD
Aug 20 09:40:20 dev-proxmox-2 zed: eid=1654574 class=history_event pool_guid=0x82EE2CB119FC19AD
Aug 20 09:40:20 dev-proxmox-2 zed: eid=1654575 class=history_event pool_guid=0x82EE2CB119FC19AD
Aug 20 09:40:20 dev-proxmox-2 systemd[1]: Started Session 1502305 of user root.
Aug 20 09:40:22 dev-proxmox-2 zed: eid=1654576 class=history_event pool_guid=0x82EE2CB119FC19AD
Aug 20 09:40:22 dev-proxmox-2 systemd[1]: Started Session 1502306 of user root.
Aug 20 09:40:25 dev-proxmox-2 systemd[1]: Started Session 1502307 of user root.
Code:
              total   used   free   shared   buff/cache   available
Mem:            125     58     24        0           42          66
Swap:             7      0      7