Proxmox with 2 Raids

TechLineX

Hello,

I have installed the newest version of Proxmox with a ZFS filesystem (RAID1) on 2x2TB drives.
There is also another RAID (0) with 2x1TB drives.

The system seems to run on local (RAID0). All VM disks are stored on local-zfs (RAID1), but it looks like all servers actually run over the RAID0 (sdc and sdd).

I want half of the servers to use the RAID1 and the other half the RAID0. How can I do that?

Regards

Notes:

Code:
root@host11:~# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  1.81T   248G  1.57T         -     9%    13%  1.00x  ONLINE  -

Code:
root@host11:~# lsblk
NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda            8:0    0   1.8T  0 disk
├─sda1         8:1    0  1007K  0 part
├─sda2         8:2    0   127M  0 part
└─sda3         8:3    0   1.8T  0 part
sdb            8:16   0   1.8T  0 disk
├─sdb1         8:17   0  1007K  0 part
├─sdb2         8:18   0   127M  0 part
└─sdb3         8:19   0   1.8T  0 part
  ├─pve-swap 251:0    0    62G  0 lvm
  ├─pve-root 251:1    0    96G  0 lvm
  └─pve-data 251:2    0   1.7T  0 lvm
sdc            8:32   0 931.5G  0 disk
├─sdc1         8:33   0  1007K  0 part
├─sdc2         8:34   0 931.5G  0 part
└─sdc9         8:41   0     8M  0 part
sdd            8:48   0 931.5G  0 disk
├─sdd1         8:49   0 931.5G  0 part
└─sdd9         8:57   0     8M  0 part
zd0          230:0    0     8G  0 disk [SWAP]
zd16         230:16   0    50G  0 disk
zd32         230:32   0   250G  0 disk
├─zd32p1     230:33   0   350M  0 part
└─zd32p2     230:34   0 249.7G  0 part
zd48         230:48   0    50G  0 disk
zd64         230:64   0    50G  0 disk
zd80         230:80   0    50G  0 disk
zd96         230:96   0    50G  0 disk
zd112        230:112  0    50G  0 disk
zd128        230:128  0   100G  0 disk
├─zd128p1    230:129  0   350M  0 part
├─zd128p2    230:130  0  29.7G  0 part
└─zd128p3    230:131  0    70G  0 part
zd144        230:144  0    50G  0 disk
zd160        230:160  0   100G  0 disk
zd176        230:176  0   100G  0 disk
zd192        230:192  0    70G  0 disk
zd208        230:208  0    32G  0 disk
zd224        230:224  0    70G  0 disk
zd240        230:240  0    50G  0 disk
zd256        230:256  0    50G  0 disk
zd272        230:272  0    50G  0 disk
zd288        230:288  0    70G  0 disk
zd304        230:304  0    50G  0 disk
├─zd304p1    230:305  0  47.9G  0 part
├─zd304p2    230:306  0     1K  0 part
└─zd304p5    230:309  0   2.1G  0 part
zd320        230:320  0    50G  0 disk
├─zd320p1    230:321  0   350M  0 part
└─zd320p2    230:322  0  49.7G  0 part
zd336        230:336  0    50G  0 disk
├─zd336p1    230:337  0  49.5G  0 part
├─zd336p2    230:338  0     1K  0 part
└─zd336p5    230:341  0   509M  0 part
zd352        230:352  0    32G  0 disk
zd368        230:368  0    50G  0 disk
zd384        230:384  0    50G  0 disk
├─zd384p1    230:385  0    48G  0 part
├─zd384p2    230:386  0     1K  0 part
└─zd384p5    230:389  0     2G  0 part
zd400        230:400  0    50G  0 disk
├─zd400p1    230:401  0   350M  0 part
└─zd400p2    230:402  0  49.7G  0 part
zd416        230:416  0    50G  0 disk
├─zd416p1    230:417  0   350M  0 part
└─zd416p2    230:418  0  49.7G  0 part
zd432        230:432  0    50G  0 disk
├─zd432p1    230:433  0   350M  0 part
└─zd432p2    230:434  0  49.7G  0 part
zd448        230:448  0    50G  0 disk
├─zd448p1    230:449  0   350M  0 part
└─zd448p2    230:450  0  49.7G  0 part
zd464        230:464  0    50G  0 disk
zd496        230:496  0   100G  0 disk
zd512        230:512  0   100G  0 disk
zd528        230:528  0   8.5G  0 disk
zd544        230:544  0    50G  0 disk
zd560        230:560  0    50G  0 disk
zd576        230:576  0    50G  0 disk
├─zd576p1    230:577  0   350M  0 part
└─zd576p2    230:578  0  49.7G  0 part
zd592        230:592  0    50G  0 disk
zd608        230:608  0   2.5G  0 disk



Code:
root@host11:~# sudo lsblk -o NAME,FSTYPE,SIZE,MOUNTPOINT,LABEL
NAME         FSTYPE            SIZE MOUNTPOINT LABEL
sda          isw_raid_member   1.8T
├─sda1                        1007K
├─sda2       vfat              127M
└─sda3       LVM2_member       1.8T
sdb          isw_raid_member   1.8T
├─sdb1                        1007K
├─sdb2       vfat              127M
└─sdb3       LVM2_member       1.8T
  ├─pve-swap swap               62G
  ├─pve-root ext4               96G
  └─pve-data                   1.7T
sdc                          931.5G
├─sdc1                        1007K
├─sdc2       zfs_member      931.5G            rpool
└─sdc9                           8M
sdd                          931.5G
├─sdd1       zfs_member      931.5G            rpool
└─sdd9                           8M
zd0          swap                8G [SWAP]
zd16                            50G
zd32                           250G
├─zd32p1     ntfs              350M            System-reserviert
└─zd32p2     ntfs            249.7G
zd48                            50G
zd64                            50G
zd80                            50G
zd96                            50G
zd112                           50G
zd128                          100G
├─zd128p1    ntfs              350M            System-reserviert
├─zd128p2    ntfs             29.7G
└─zd128p3    ntfs               70G
zd144                           50G
zd160                          100G
zd176                          100G
zd192                           70G
zd208                           32G
zd224                           70G
zd240                           50G
zd256                           50G
zd272                           50G
zd288                           70G
zd304                           50G
├─zd304p1    ext4             47.9G
├─zd304p2                        1K
└─zd304p5    swap              2.1G
zd320                           50G
├─zd320p1    ntfs              350M            System-reserviert
└─zd320p2    ntfs             49.7G
zd336                           50G
├─zd336p1    ext4             49.5G
├─zd336p2                        1K
└─zd336p5                      509M
zd352                           32G
zd368                           50G
zd384                           50G
├─zd384p1    ext4               48G
├─zd384p2                        1K
└─zd384p5    swap                2G
zd400                           50G
├─zd400p1    ntfs              350M            System-reserviert
└─zd400p2    ntfs             49.7G
zd416                           50G
├─zd416p1    ntfs              350M            System-reserviert
└─zd416p2    ntfs             49.7G
zd432                           50G
├─zd432p1    ntfs              350M            System-reserviert
└─zd432p2    ntfs             49.7G
zd448                           50G
├─zd448p1    ntfs              350M            System-reserviert
└─zd448p2    ntfs             49.7G
zd464                           50G
zd496                          100G
zd512                          100G
zd528                          8.5G
zd544                           50G
zd560                           50G
zd576                           50G
├─zd576p1    ntfs              350M            System-reserviert
└─zd576p2    ntfs             49.7G
zd592                           50G
zd608                          2.5G
 
Did I understand you correctly that you use RAID-0? WHY????

Please also post the output of

Code:
zpool status -v

and PLEASE use CODE-Tags to preformat your output.
 
Code:
zpool status -v
  pool: rpool
state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          sdc2      ONLINE       0     0     0
          sdd       ONLINE       0     0     0

errors: No known data errors

Code:
lvdisplay
  Found duplicate PV hSVcFlh50r6lNjsauCmu4jYNMzxdbk2s: using /dev/sdb3 not /dev/sda3
  --- Logical volume ---
  LV Path                /dev/pve/swap
  LV Name                swap
  VG Name                pve
  LV UUID                ERJUPD-FEHI-Qhhs-yebk-Ayhs-2bFD-G9GRm2
  LV Write Access        read/write
  LV Creation host, time proxmox, 2016-02-09 10:31:45 +0100
  LV Status              available
  # open                 0
  LV Size                62.00 GiB
  Current LE             15872
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:0

  --- Logical volume ---
  LV Path                /dev/pve/root
  LV Name                root
  VG Name                pve
  LV UUID                b7zS3d-p2hK-Iivl-80HN-6FHX-ugSt-gTfgbQ
  LV Write Access        read/write
  LV Creation host, time proxmox, 2016-02-09 10:31:46 +0100
  LV Status              available
  # open                 0
  LV Size                96.00 GiB
  Current LE             24576
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:1

  --- Logical volume ---
  LV Path                /dev/pve/data
  LV Name                data
  VG Name                pve
  LV UUID                lqWbJa-8OxW-66s9-YYT0-479B-Nzs9-AkMtBl
  LV Write Access        read/write
  LV Creation host, time proxmox, 2016-02-09 10:31:46 +0100
  LV Status              available
  # open                 0
  LV Size                1.65 TiB
  Current LE             432357
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           251:2

How can I now add the second RAID in Proxmox? I tried to create an LVM storage with volume group pve. It shows 1.8TB of free space, but when I create a VM, the following error occurs:

Code:
Task viewer: VM 120 - Create
  Found duplicate PV hSVcFlh50r6lNjsauCmu4jYNMzxdbk2s: using /dev/sdb3 not /dev/sda3
  Found duplicate PV hSVcFlh50r6lNjsauCmu4jYNMzxdbk2s: using /dev/sdb3 not /dev/sda3
  Found duplicate PV hSVcFlh50r6lNjsauCmu4jYNMzxdbk2s: using /dev/sdb3 not /dev/sda3
  Found duplicate PV hSVcFlh50r6lNjsauCmu4jYNMzxdbk2s: using /dev/sdb3 not /dev/sda3
TASK ERROR: create failed - lvcreate 'pve/pve-vm-120' error:   Volume group "pve" has insufficient free space (4095 extents): 8192 required.
 
Code:
pvs
  Found duplicate PV hSVcFlh50r6lNjsauCmu4jYNMzxdbk2s: using /dev/sdb3 not /dev/sda3
  PV         VG   Fmt  Attr PSize PFree
  /dev/sdb3  pve  lvm2 a--  1.82t 16.00g

We wanted to create a ZFS storage. It seems that the second one is now LVM. Is this a problem?

Regards
 
On the technical side it should work too, but... when you have ZFS, you have it and nothing else on the local machine. This LVM makes no sense; it is much more difficult and complicates everything.
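
As a side note, the numbers in that error add up: assuming the default 4 MiB extent size, the 8192 extents lvcreate asks for are 32 GiB, while pvs shows only 4095 extents (~16 GiB) free, because pve-data already occupies 1.65 TiB of the VG. You can verify the extent size and free extents like this:

Code:
# check extent size and free extents of VG pve
vgdisplay pve
# requested: 8192 extents * 4 MiB = 32 GiB
# free:      4095 extents * 4 MiB = ~16 GiB (matches PFree in pvs)

The repeated "Found duplicate PV" warnings also suggest that sda3 and sdb3 carry identical LVM metadata, which would fit the isw_raid_member signatures: an Intel firmware fake-RAID mirror that Linux sees as two separate disks.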
 
OK, that's not possible. You have only one ZFS pool for root (rpool). The rest, as far as I can see, is LVM. Strange... a second root "/dev/pve/root". Note also that in your zpool status, sdc2 and sdd appear as two separate top-level vdevs with no mirror entry, so rpool is actually a stripe (RAID0), not the RAID1 you described.
If you are able to move all VMs off the LVM, you can recreate that disk with ZFS. But before you do anything, it is much better to spend some time on a test system first to see how it really works. ZFS is very fine, but you should gain experience with it before going productive.
And then, yes, maybe spend the time to set the server up properly once again. That is how I would do it. Sorry.
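
If some disk images do still live on the LVM, here is a minimal sketch of moving one over to the ZFS storage (the VM ID, disk name, and target storage are placeholders, and it assumes qm's move_disk is available in your version):

Code:
# move VM 100's disk virtio0 to the storage "local-zfs",
# deleting the old copy afterwards
qm move_disk 100 virtio0 local-zfs --delete 1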
 
All VMs are on the ZFS storage. The LVM seems to be empty. Isn't it possible to simply add the LVM storage in Proxmox?

Regards
 
That's nice. So the LVM is empty. OK, then you can create a second zpool for the two other disks. Please use RAID1, not 0. With 0, when one disk is gone, everything is dead. What is this isw_raid_member? Is there a special fake-RAID card in use? How much memory does your server have?

Yes, you can simply add the LVM to your PVE host in the storage tab: https://pve.proxmox.com/wiki/Storage_Model
But here too, use RAID1; and rather LVM-Thin than plain LVM. A rough sketch of creating the second pool follows below.
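
For the record, a rough sketch (not a tested recipe) of what the second pool could look like once the leftover VG pve on sda3/sdb3 is removed. The device names are taken from the lsblk output above and must be double-checked, and if the isw fake-RAID is still active it should be disabled in the BIOS first:

Code:
# DANGER: this wipes sda/sdb - verify the device names first!
# the VG "pve" looks like a leftover install (all its LVs show "# open 0"),
# but make sure nothing is mounted from it before removing it
vgremove pve
pvremove /dev/sda3 /dev/sdb3

# create a mirrored (RAID1) pool across the two 2 TB disks
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb

# register it with Proxmox as a ZFS storage (the ID "tank" is just an example)
pvesm add zfspool tank --pool tank

With a second pool registered, you can also pick the storage per disk when creating a VM, which gives you exactly the "half of the servers here, the other half there" split asked about at the start.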
 
