ZFS and error issue?

killmasta93

Renowned Member
Aug 13, 2017
Hi,
I was wondering if someone could assist me with a few questions I have. Normally I do a few Proxmox installs on cloned computers with no issues, but today my first install on an HP ML110 G6 server was a nightmare. First I tried installing and kept getting this error:
Code:
command 'chroot /rpool/ROOT/pve-1 dpkg --force-confold --configure -a' failed with exit code 1 at /usr/bin/proxinstall line 385
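
For completeness, the step that fails there is the dpkg configure run inside the freshly installed root. A minimal sketch of re-running it by hand from the installer's debug shell, assuming the target is still mounted at /rpool/ROOT/pve-1 as in the error above:
Code:
# sketch only: re-run the failed configure step from the installer's debug shell,
# assuming the install target is still mounted at /rpool/ROOT/pve-1
chroot /rpool/ROOT/pve-1 dpkg --force-confold --configure -a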
I looked over the forums and found people saying it was the network cable, or that the domain name/subdomain was too long. I tried all of that with no luck. After a few hours I had an idea, bought a TP-Link network adapter and a D-Link (WAN adapter for pfSense), and bam, it installed. So for some odd reason it won't install using the server's onboard network adapter. For a moment I was fine with that, until I realized the link is coming up at 10/100 even though the adapter is really gigabit; not sure why (a quick speed check is sketched below).
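
In case it helps diagnose the 10/100 issue, this is a sketch of how to check what the NIC actually negotiated (ethtool is my assumption for the tool and may need installing; substitute the real interface name, e.g. enp48s0 from the ip output below):
Code:
# ethtool may need: apt install ethtool
ethtool enp48s0 | grep -E 'Speed|Duplex|Auto-negotiation'
# try forcing a re-negotiation if it came up at 100 Mb/s
ethtool -s enp48s0 autoneg on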

The server has 6 disks: 2x500 GB for the Proxmox OS as ZFS RAID 1, and 4x2 TB as ZFS RAID 10. The odd thing is that when I run zpool status -v I see the vmdata pool (the RAID 10), but I don't see rpool. Even stranger, when I run pveperf vmdata it says it cannot find it (see picture). I also checked that the RAID 1 was working by disconnecting each disk one by one, and Proxmox booted up fine each time. And I'm not sure if it's normal, but the WebGUI shows LVM, which is what I normally see when I install Proxmox on ext4 without ZFS (diagnostic commands sketched below).
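
These are the commands I'm using to check the pools; note that pveperf expects a mountpoint path rather than a pool name, which may be why it can't find vmdata (a sketch, assuming the pool is mounted at /vmdata):
Code:
zpool list                              # should show both rpool and vmdata if both are ZFS
zpool status -v rpool                   # will error out if the OS is actually on LVM/ext4, not ZFS
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT    # shows which disk the root filesystem really landed on
pveperf /vmdata                         # pveperf takes a path, not a pool name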

And lastly, on a clean Windows Server 2012 R2 VM I've been seeing a lot of interrupts. I've read that it has something to do with the drivers? I'm currently using virtio-win-0.1.141 (a quick config check is below).
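
This is a sketch of how to confirm the VM is actually using virtio devices; the VM ID 100 is only my assumption from the tap100i* interfaces below, so adjust as needed:
Code:
# VM ID 100 is an assumption; replace with the Windows VM's ID
qm config 100 | grep -E 'net|ide|sata|scsi|virtio|ostype'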

Thank you

IP addresses (output of ip a):
Code:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp16s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UP group default qlen 1000
    link/ether f4:f2:6d:04:f9:c0 brd ff:ff:ff:ff:ff:ff
3: enp48s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 5c:d9:98:f8:c2:4a brd ff:ff:ff:ff:ff:ff
4: enp30s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 2c:27:d7:87:a0:bc brd ff:ff:ff:ff:ff:ff
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 5c:d9:98:f8:c2:4a brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.5/24 brd 192.168.0.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::5ed9:98ff:fef8:c24a/64 scope link
       valid_lft forever preferred_lft forever
6: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether f4:f2:6d:04:f9:c0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::f6f2:6dff:fe04:f9c0/64 scope link
       valid_lft forever preferred_lft forever
7: tap101i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 2a:f9:0e:01:3d:d9 brd ff:ff:ff:ff:ff:ff
8: tap100i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 22:44:c2:97:89:3c brd ff:ff:ff:ff:ff:ff
9: tap100i1: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UNKNOWN group default qlen 1000
    link/ether 52:0a:23:e2:4b:7b brd ff:ff:ff:ff:ff:ff

https://imgur.com/a/t3FNw
https://imgur.com/a/d02nU


EDIT: I realized what happened. In the installer I'm 100% sure I selected RAID 1 for the 500 GB disks, but for some odd reason it grabbed sdb, the first disk, which was 1 TB, and set it up as ext4, and put the 500 GB disks into the RAID 10. Not sure why or how (layout checks sketched below).
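
To double-check which disk ended up where, this is what I'd run (a sketch; device names may differ on your system):
Code:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT   # see which disk holds the ext4 root
zpool status -v vmdata                      # list the member disks of the RAID 10 pool
pvs && lvs                                  # confirm whether the root is really on LVM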

https://imgur.com/a/QpTfv