[SOLVED] ZFS: why do I have LVM too?

fireon

Hello,

I have a 3.4 to 4.0 upgrade here. Today I saw that I also have LVM for the rpool. I'm confused. What do I need that for? Please have a look at my config:

Code:
NAME                                               USED  AVAIL  REFER  MOUNTPOINT
rpool                                             13.2G  13.6G   144K  /rpool
rpool/ROOT                                        9.65G  13.6G   144K  /rpool/ROOT
rpool/ROOT/pve-1                                  9.65G  13.6G  9.65G  /
rpool/swap                                        3.59G  17.2G  17.6M  -
v-machines                                        2.93T  2.33T   104K  /v-machines
v-machines/home                                   2.53T  2.33T  2.53T  /v-machines/home
v-machines/subvol-109-disk-1                       595M  7.49G   527M  /v-machines/subvol-109-disk-1
v-machines/vm-100-disk-1                          35.1G  2.34T  34.4G  -
v-machines/vm-101-disk-1                          30.7G  2.34T  15.2G  -
v-machines/vm-101-state-vor_nagios4               4.53G  2.34T  1.87G  -
v-machines/vm-102-disk-1                          33.0G  2.36T  3.92G  -
v-machines/vm-103-disk-2                          35.1G  2.34T  34.4G  -
v-machines/vm-104-disk-1                          40.3G  2.34T  39.5G  -
v-machines/vm-104-state-vor_check_mk_install      4.53G  2.34T   212M  -
v-machines/vm-105-disk-1                          71.7G  2.37T  34.4G  -
v-machines/vm-105-state-nach_Datenbank_und_world  8.56G  2.34T  2.36G  -
v-machines/vm-105-state-nach_postfix_admin        8.56G  2.34T  2.50G  -
v-machines/vm-105-state-vor_beginn_Mailserver     8.56G  2.34T   607M  -
v-machines/vm-105-state-vor_config_tine           4.63G  2.34T  1.73G  -
v-machines/vm-106-disk-1                          80.8G  2.37T  39.4G  -
v-machines/vm-106-state-vor_landscape             2.57G  2.34T  1.01G  -
v-machines/vm-107-disk-1                          40.3G  2.34T  39.5G  -
v-machines/vm-108-disk-1                          1.03G  2.34T    64K  -
v-machines/vm-108-state-vor_tests_lam             4.53G  2.34T   542M  -

Code:
  pool: rpool
 state: ONLINE
  scan: none requested
config:


    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        sda3    ONLINE       0     0     0
        sdh3    ONLINE       0     0     0


errors: No known data errors


  pool: v-machines
 state: ONLINE
  scan: resilvered 1.09T in 4h45m with 0 errors on Sat May 23 02:48:52 2015
config:


    NAME                                            STATE     READ WRITE CKSUM
    v-machines                                      ONLINE       0     0     0
      mirror-0                                      ONLINE       0     0     0
        ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D0KRWP  ONLINE       0     0     0
        ata-WDC_WD20EARX-00ZUDB0_WD-WCC1H0343538    ONLINE       0     0     0
      mirror-1                                      ONLINE       0     0     0
        ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D688XW  ONLINE       0     0     0
        ata-WDC_WD2001FFSX-68JNUN0_WD-WMC5C0D63WM0  ONLINE       0     0     0
      mirror-2                                      ONLINE       0     0     0
        ata-WDC_WD20EARX-00ZUDB0_WD-WCC1H0381420    ONLINE       0     0     0
        ata-WDC_WD20EURS-63S48Y0_WD-WMAZA9381012    ONLINE       0     0     0


errors: No known data errors
So why do I have that?

Code:
root@pve-host:~# lvs
  Found duplicate PV p0YFDmzhkDwlUMUxILsj2yKPiPdA1k2W: using /dev/zd288p5 not /dev/zd16p5
  Found duplicate PV p0YFDmzhkDwlUMUxILsj2yKPiPdA1k2W: using /dev/zd48p5 not /dev/zd288p5
  LV   VG   Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel -wi------- 28.27g                                                    
  swap rhel -wi-------  3.20g                                                    
  root vg   -wi------- 11.18g                                                    
  swap vg   -wi-------  1.86g                                                    
root@pve-host:~# vgs
  Found duplicate PV p0YFDmzhkDwlUMUxILsj2yKPiPdA1k2W: using /dev/zd288p5 not /dev/zd16p5
  Found duplicate PV p0YFDmzhkDwlUMUxILsj2yKPiPdA1k2W: using /dev/zd48p5 not /dev/zd288p5
  VG   #PV #LV #SN Attr   VSize  VFree 
  rhel   1   2   0 wz--n- 31.51g 44.00m
  vg     1   2   0 wz--n- 37.20g 24.16g
root@pve-host:~# pvs
  Found duplicate PV p0YFDmzhkDwlUMUxILsj2yKPiPdA1k2W: using /dev/zd288p5 not /dev/zd16p5
  Found duplicate PV p0YFDmzhkDwlUMUxILsj2yKPiPdA1k2W: using /dev/zd48p5 not /dev/zd288p5
  PV           VG   Fmt  Attr PSize  PFree 
  /dev/zd112p2 rhel lvm2 a--  31.51g 44.00m
  /dev/zd48p5  vg   lvm2 a--  37.20g 24.16g


Best Regards
 
Hi,
LVM finds every LVM partition on the system during the boot procedure.
This is because ZFS automount is on, so the zvols are visible as block devices on the host.
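A quick way to see this on the host (a sketch; the `/dev/zd*` names are taken from your output above): each zvol is exposed as a `/dev/zdN` block device, with `/dev/zdNpM` for partitions created inside the guest, and LVM scans all block devices by default.

```shell
# List the zvol block devices ZFS exposes on the host.
# Partitions created inside a guest appear as /dev/zdNpM,
# which is why the host's LVM tools pick them up.
ls -l /dev/zd* 2>/dev/null || echo "no zvols exposed on this machine"
```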
 
Sorry, but I do not understand you. So what does LVM do on ZFS? Does ZFS need an LVM function, or...? I never created any LVM partitions.

Thank you.
 
Hm, OK, I have LVMs in the guests. Must I add exclude rules, or doesn't it matter, just a blemish?
 
Normally there should be no problems, but to be sure, exclude it in lvm.conf as spirit mentions.

Insert the following line in lvm.conf:
Code:
filter = [ "r|/dev/zd*|" ]
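As a side note, LVM filter entries are regular expressions, not shell globs, so `r|/dev/zd*|` rejects any device path matching the regex `/dev/zd*` (`/dev/z` followed by any number of `d` characters, anywhere in the path), which covers `/dev/zd16p5`, `/dev/zd48p5`, and so on. A small shell sketch of the matching (device names taken from the output above):

```shell
# LVM filters use (unanchored) regular expressions, so
# '/dev/zd*' matches '/dev/z' followed by zero or more 'd's.
for dev in /dev/zd16p5 /dev/zd48p5 /dev/zd288p5 /dev/sda3; do
    if echo "$dev" | grep -q '/dev/zd*'; then
        echo "$dev rejected"
    else
        echo "$dev kept"
    fi
done
```

For completeness: the `filter` setting lives in the `devices { }` section of `/etc/lvm/lvm.conf`.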
 
Hello,

I've tested this. It works fine, and I think it is nicer. :)

Best Regards
 
