Hello guys, how are you all doing?
I am new to Proxmox, a total newbie, and I am now faced with recovering VMs from a hard drive that is about to die.
The fdisk output is shown below:
		Code:
	
root@server2:~# fdisk -l
Disk /dev/sda: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 3FE654AD-DD15-4E11-A50A-EF90969EEC92

Device      Start        End    Sectors  Size Type
/dev/sda1    2048       4095       2048    1M BIOS boot
/dev/sda2    4096     528383     524288  256M EFI System
/dev/sda3  528384 3907029134 3906500751  1.8T Linux LVM

Disk /dev/sdb: 1.8 TiB, 2000398934016 bytes, 3907029168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: C517119C-156E-46D4-9502-24AE469D504A

Device          Start        End    Sectors  Size Type
/dev/sdb1          34       2047       2014 1007K BIOS boot
/dev/sdb2        2048 3907012749 3907010702  1.8T Solaris /usr & Apple ZFS
/dev/sdb9  3907012750 3907029134      16385    8M Solaris reserved 1

Partition 1 does not start on physical sector boundary.
Partition 9 does not start on physical sector boundary.

Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/pve-root: 96 GiB, 103079215104 bytes, 201326592 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/zd0: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/mapper/pve-vm--101--disk--1: 120 GiB, 128849018880 bytes, 251658240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes

Disk /dev/mapper/pve-vm--102--disk--1: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x6237c657

Device                                 Boot     Start       End   Sectors  Size Id Type
/dev/mapper/pve-vm--102--disk--1-part1 *         2048 102856703 102854656   49G 83 Linux
/dev/mapper/pve-vm--102--disk--1-part2      102858750 104855551   1996802  975M  5 Extended
/dev/mapper/pve-vm--102--disk--1-part5      102858752 104855551   1996800  975M 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.

Disk /dev/mapper/pve-vm--100--disk--1: 620 GiB, 665719930880 bytes, 1300234240 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x000ba059

Device                                 Boot   Start        End    Sectors  Size Id Type
/dev/mapper/pve-vm--100--disk--1-part1 *       2048    1953791    1951744  953M 83 Linux
/dev/mapper/pve-vm--100--disk--1-part2      1953792 1283239935 1281286144  611G 8e Linux LVM

Disk /dev/mapper/pve-vm--103--disk--1: 130 GiB, 139586437120 bytes, 272629760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x000ad852

Device                                 Boot   Start       End   Sectors   Size Id Type
/dev/mapper/pve-vm--103--disk--1-part1 *       2048   1953791   1951744   953M 83 Linux
/dev/mapper/pve-vm--103--disk--1-part2      1953792 246124543 244170752 116.4G 8e Linux LVM

Disk /dev/mapper/pve-vm--104--disk--1: 300 GiB, 322122547200 bytes, 629145600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes
Disklabel type: dos
Disk identifier: 0x000b380a

Device                                 Boot   Start       End   Sectors   Size Id Type
/dev/mapper/pve-vm--104--disk--1-part1 *       2048   5861375   5859328   2.8G 83 Linux
/dev/mapper/pve-vm--104--disk--1-part2      5861376 486354943 480493568 229.1G 8e Linux LVM

Disk /dev/mapper/pve-vm--105--disk--1: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 65536 bytes

Disk /dev/zd16: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 8192 bytes
I/O size (minimum/optimal): 8192 bytes / 8192 bytes
Disklabel type: dos
Disk identifier: 0x0009f951

Device      Boot   Start       End   Sectors Size Id Type
/dev/zd16p1 *       2048   2099199   2097152   1G 83 Linux
/dev/zd16p2      2099200 209715199 207616000  99G 8e Linux LVM

What I see is that the datacenter has set up the new Proxmox install fresh on Linux LVM, not ZFS. So what I am thinking is:
a) Create the VMs (I have 5 of them, numbered 100-105) with the same disk sizes.
b) Replace their storage files from the backup, roughly as in the sketch after this list.
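
What I am picturing for step (b), once the old pool is readable, is a raw copy from each old zvol onto the matching new LVM volume, something like this (the zvol path is my guess at the default Proxmox layout, not verified):

Code:
	# hypothetical example for VM 100; repeat for each VM
	dd if=/dev/zvol/rpool/data/vm-100-disk-1 of=/dev/pve/vm-100-disk-1 bs=1M status=progress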
Is this the right way to proceed in the first place? In any case, I need to mount the ZFS file system. I tried installing zfs-fuse and running zpool import.
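Roughly, these were the commands (zfs-fuse from the standard Debian repos):

Code:
	apt-get install zfs-fuse
	zpool import

And this is what the import attempt gave: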
		Code:
	
	root@server2:~# zpool import
  pool: rpool
    id: 6734591596441369383
 state: UNAVAIL
status: The pool is formatted using an incompatible version.
action: The pool cannot be imported.  Access the pool on a system running newer
        software, or recreate the pool from backup.
   see: 
config:
        rpool                                                            UNAVAIL  newer version
          mirror-0                                                       DEGRADED
            17620338933240927984                                         UNAVAIL  corrupted data
            disk/by-id/ata-WDC_WD2003FZEX-00SRLA0_WD-WMC6N0E0TX6T-part2  ONLINE

How do I go about it?
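Given the "newer version" status, I am guessing zfs-fuse is simply too old for this pool and that I need the native OpenZFS tools and a read-only import instead, something along these lines (untested guesswork on my part; the package name and import flags are assumptions):

Code:
	apt-get install zfsutils-linux
	zpool import -o readonly=on -f rpool

Is that the right direction?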
Looking for an expert opinion. Thanks in advance.