I have a virtual machine running Windows Server 2016 with 3 drives attached:
1. LVM, the system is installed on this drive
2. ZFS pool Albus
3. ZFS pool Snake
In Windows I spanned the two ZFS-backed disks into a single dynamic volume. When I try to boot, Windows freezes at the logon screen. Nothing relevant is shown in the message.log. When I try Windows recovery, none of the drives show up.
When I detach both ZFS-backed disks, the virtual machine boots fine.
Is this related to running out of storage space? Did the Windows virtual drive get corrupted? Is there anything I can do to get my data back?
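For reference, the two data disks are zvols from those pools, attached to VM 101 roughly like this (the SCSI slot numbers and storage names are from memory and may not match exactly; the volume names are the ones in the zfs list output below):
Code:
# attach the existing ZFS-backed volumes to VM 101
# (slot numbers and storage names are approximate)
qm set 101 --scsi1 Albus:vm-101-disk-0
qm set 101 --scsi2 Snake:vm-101-disk-0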
message.log
Code:
Sep 4 21:29:08 skynet pvedaemon[19361]: <root@pam> starting task UPID:skynet:00005812:0EE44139:5D6FCA34:qmstart:101:root@pam:
Sep 4 21:29:09 skynet kernel: [2498412.388553] device tap101i0 entered promiscuous mode
Sep 4 21:29:09 skynet kernel: [2498412.400039] vmbr0: port 3(tap101i0) entered blocking state
Sep 4 21:29:09 skynet kernel: [2498412.400041] vmbr0: port 3(tap101i0) entered disabled state
Sep 4 21:29:09 skynet kernel: [2498412.400150] vmbr0: port 3(tap101i0) entered blocking state
Sep 4 21:29:09 skynet kernel: [2498412.400152] vmbr0: port 3(tap101i0) entered forwarding state
Code:
zpool status -v
  pool: Albus
 state: ONLINE
  scan: scrub repaired 14.5M in 23h1m with 0 errors on Sun Aug 11 23:25:44 2019
config:

    NAME                        STATE     READ WRITE CKSUM
    Albus                       ONLINE       0     0     0
      raidz1-0                  ONLINE       0     0     0
        wwn-0x50014ee0af32bba2  ONLINE       0     0     0
        wwn-0x50014ee00487f06f  ONLINE       0     0     0
        wwn-0x50014ee2ba33222a  ONLINE       0     0     0
        wwn-0x50014ee2b841a1ef  ONLINE       0     0     0

errors: No known data errors

  pool: Snake
 state: ONLINE
  scan: scrub repaired 0B in 0h9m with 0 errors on Sun Aug 11 00:33:47 2019
config:

    NAME                        STATE     READ WRITE CKSUM
    Snake                       ONLINE       0     0     0
      mirror-0                  ONLINE       0     0     0
        wwn-0x5000cca098d0e9da  ONLINE       0     0     0
        wwn-0x5000cca098d0e9e9  ONLINE       0     0     0
      mirror-1                  ONLINE       0     0     0
        sdg                     ONLINE       0     0     0
        sdh                     ONLINE       0     0     0

errors: No known data errors
Code:
zfs list
NAME                  USED   AVAIL  REFER  MOUNTPOINT
Albus                 15.3T     0B   140K  /Albus
Albus/vm-101-disk-0   15.3T     0B  15.3T  -
Snake                 5.28T   889G    96K  /Snake
Snake/vm-101-disk-0   5.28T  1.07T  5.07T  -
Code:
df -h
Filesystem            Size  Used  Avail  Use%  Mounted on
udev                   16G     0    16G    0%  /dev
tmpfs                 3.2G  106M   3.1G    4%  /run
/dev/mapper/pve-root   94G   13G    77G   15%  /
tmpfs                  16G   43M    16G    1%  /dev/shm
tmpfs                 5.0M     0   5.0M    0%  /run/lock
tmpfs                  16G     0    16G    0%  /sys/fs/cgroup
Albus                 128K  128K      0  100%  /Albus
Snake                 890G  128K   890G    1%  /Snake
tmpfs                 3.2G     0   3.2G    0%  /run/user/0
/dev/fuse              30M   16K    30M    1%  /etc/pve
UPDATE:
I managed to mount the spanned volume with the help of this blog post (link at the bottom). The steps, written out in full below, were:
1. Install ldmtool
2. Run ldmtool, which opens its interactive ldm shell
3. scan /dev/zvol/Albus/vm-101-disk-0
4. scan /dev/zvol/Snake/vm-101-disk-0
5. create all
6. Exit the ldm shell
7. Mount the mapper device that ldmtool created: mount /dev/mapper/ldm_vol_WIN-HAM484PDCBK-Dg0_Volume1 /mnt
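In case it helps anyone else, here is the whole session written out (the lines at the ldm> prompt are typed inside ldmtool's interactive shell; the mapper name comes from the Windows disk group, so it will differ on other systems):
Code:
# install the tool (Debian/Proxmox package name)
apt install ldmtool

# interactive session matching the steps above
ldmtool
ldm> scan /dev/zvol/Albus/vm-101-disk-0
ldm> scan /dev/zvol/Snake/vm-101-disk-0
ldm> create all
ldm> exit

# mount the device-mapper node that 'create all' reported;
# read-only is safer while copying data off
mount -o ro /dev/mapper/ldm_vol_WIN-HAM484PDCBK-Dg0_Volume1 /mnt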
I'm still wondering what happened to my virtual machine, though. Does anybody have any idea?
https://michael-prokop.at/blog/2013...g-microsoft-windows-dynamic-disks-from-linux/