Hi all,
a friend of a friend has a Proxmox server running, and since I sometimes play around with Proxmox I was asked for help.
The situation is the following.
The server has 3 disks: one SATA-DOM with Proxmox (3.x *argh*) installed, and two spinning drives which are in some kind of RAID. Now one of the spinners has died completely and the SATA-DOM throws I/O errors and can no longer boot. It seems to be set up with ZFS.
So basically a server that appears to have run for many years without any love given to it.
I plugged in another drive and installed the latest Proxmox 5.3 on it, thinking I could perhaps just import the pool.
A normal import was not possible due to the I/O errors, but with my limited skills I managed to import the rpool read-only.
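If I remember correctly, the import went roughly like this (quoting from memory, so the exact flags may differ):
# force-import the old root pool read-only, so nothing more gets written to the failing SATA-DOM
zpool import -f -o readonly=on rpool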
root@pve001:~# zpool status
pool: rpool
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
see: http://zfsonlinux.org/msg/ZFS-8000-8A
scan: scrub repaired 0B in 0h3m with 0 errors on Thu Jan 14 08:46:13 2016
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
sda3 ONLINE 0 0 0
errors: 3 data errors, use '-v' for a list
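I think I just used the verbose flag to see which files are affected:
# list the individual files that hit data errors
zpool status -v rpool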
The problem is that I can't see the "data" disk in the pool, and I have no clue how it was set up. I can only see the data disk with fdisk:
Disk /dev/sdd: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: F53023F3-D600-5240-AF69-4FAEDB529D9A
Device Start End Sectors Size Type
/dev/sdd1 2048 3907012607 3907010560 1.8T Solaris /usr & Apple ZFS
/dev/sdd9 3907012608 3907028991 16384 8M Solaris reserved 1
So there seems to be some ZFS stuff on the disk which holds the VM images. The 3 data errors are from older image files and can be ignored.
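I haven't dared to touch sdd itself yet. I guess something like the following could show whether there is a second, importable pool on that partition, but I'm not sure that is the right approach:
# list pools that ZFS can find on the attached disks but that are not imported yet
zpool import
# dump the ZFS labels of the data partition to see which pool/vdev it belonged to
zdb -l /dev/sdd1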
I also tried to find out how it was configured, but I could not access the PVE installation on the rpool because / is already mounted (by my fresh install). In principle the dataset should be available, I'm just not sure how to mount it under another path:
root@pve001:~# zfs get all | grep mount
rpool mounted yes -
rpool mountpoint /rpool default
rpool canmount on default
rpool/ROOT mounted yes -
rpool/ROOT mountpoint /rpool/ROOT default
rpool/ROOT canmount on default
rpool/ROOT/pve-1 mounted no -
rpool/ROOT/pve-1 mountpoint / local
rpool/ROOT/pve-1 canmount on default
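My only idea so far would be to export the pool and re-import it with an altroot, so that the mountpoint / of rpool/ROOT/pve-1 gets remapped somewhere below /mnt. Something like this (untested, and /mnt/oldroot is just a name I made up):
zpool export rpool
# -R prefixes all mountpoints with the altroot, so pve-1 should end up under /mnt/oldroot
zpool import -f -o readonly=on -R /mnt/oldroot rpool
But I'd rather ask before I break something.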
Long story short: any idea how I can access the data on sdd1?
best regards
Tim