Please explain LVM Thin: making your existing Proxmox drive a mirror

bossey1

New Member
Jul 15, 2024
Hi
I hope I won't raise anyone's ire, because I have read many pages over the last week on LVM & LVM Thin mirroring. However, all the solutions seem to refer to installing from scratch, or assume you already have a PV created for the new disk.

My setup is a newly installed Proxmox 8 on a PowerEdge R730xd with an H730P RAID controller in HBA mode.
I have two 2.5-inch 600 GB spinners and one has Proxmox installed.
Proxmox doesn't seem to allow creating a mirrored LVM setup or pre-partitioning in the GUI installer, so I was left installing it on one drive in order to attempt a mirror afterwards. From what I've read there are lots of gotchas here, and I haven't quite come across a simple enough (for my brain) explanation or example of how to add the second, unformatted (or formatted) drive to the one Proxmox is installed on so that they act as one mirrored device.
I don't want to use ZFS on the root, but I will use it for the VMs, which I'd like to install on eight 4 TB drives set up as four 2-drive ZFS mirrors.

Can anyone please explain how I would mirror the existing drive which consists of:


Code:
lvm> pvs
  PV         VG  Fmt  Attr PSize    PFree
  /dev/sdj3  pve lvm2 a--  <557.91g 70.00g
lvm>

Code:
lvm> lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  data pve twi-a-tz-- 389.95g             0.00   0.44                           
  root pve -wi-ao----  70.00g                                                   
  swap pve -wi-ao----  20.00g                                                   
lvm>


The second drive is:

Code:
Disk /dev/sdk: 558.91 GiB, 600127266816 bytes, 1172123568 sectors
Disk model: AL14SEB060N     
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 8FDB15F4-5DF5-C747-9057-143050CC8341

My previous install of Proxmox was a few years back, with Proxmox 6 on a Debian server hosted by a German outfit, but I had the luxury of a scripted install onto software RAID that was partitioned before the OS was installed, after which Proxmox was installed on top. With this kit, I am advised I'm better off using LVM-Thin for the Proxmox install and letting ZFS mirrors take care of my RAID.
 
I don't want to use ZFS on the root
Why?! ZFS for the PVE OS only is perfectly fine on spinners; there is no slowdown.
But ZFS on spinners for VM storage will be slow, depending on your guests and workload.
For me, SSDs are mandatory for Win10+ or Win Server 2016+.
 
Hi
I'm mostly running Linux VMs. I may do Windows as I expand my expertise, but I guess the same argument about spinners applies. I may get some SSDs/NVMe later, but cost is an issue at present for enterprise ones at the same capacity as the spinners.
 
Hi
This seems similar to how I did Proxmox on Debian 10 with Hetzner. I'm thinking I may just fold and do ZFS. It's about time this was easily achievable in the Proxmox GUI.
The Proxmox GUI is not ideal for complicated setups like this!
 
Lol, I'm finding that out.
I used vgextend to add the second drive to the pve VG, but when trying this:

lvm> lvconvert -m1 --mirrorlog core -v pve/data

I got an error saying it wasn't allowed on a thin pool.

Operation not permitted on LV pve/data type thinpool.
 
https://unix.stackexchange.com/questions/623346/lvm-type-raid1-thinpool-is-it-possible

Maybe you should install a virtual PVE that matches your physical configuration, add new virtual disks, and try your experiments there.
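For reference, lvmthin(7) documents building a thin pool on top of RAID1 data and metadata LVs rather than converting an existing pool in place. A rough, untested sketch of that approach (sizes are examples, and recreating the data pool destroys anything on it, so only do this on a fresh install or with backups):

Code:
# root and swap are plain LVs, so they can be mirrored in place
# (assumes the second drive was already added to the VG with vgextend)
lvconvert --type raid1 -m1 pve/root
lvconvert --type raid1 -m1 pve/swap

# the thin pool itself cannot be converted; instead, remove the (still empty)
# pool, create RAID1 data and metadata LVs, and combine them into a new pool
lvremove pve/data
lvcreate --type raid1 -m1 -L 380G -n data pve
lvcreate --type raid1 -m1 -L 4G   -n data_meta pve
lvconvert --type thin-pool --poolmetadata pve/data_meta pve/data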


Thanks to all for answering my cry for help. I think I've thrown in the towel. This shouldn't be so difficult. It seems TrueNAS allows this, so Proxmox should at least look at it. I will go the way of ZFS with reluctance, as I just don't have so many sleepless nights to waste. I'm not getting any younger.
 
Hi again all.

I have arrived at my final hardware setup and need some pointers/help to complete things.

Proxmox is installed on mirrored 400 GB drives, which can be seen as /dev/sdj & /dev/sdk.

I'm thinking of 4 mirrored pairs of 4 TB spinners for my pool:
sda - sdb
sdc - sdd
sde - sdf
sdg - sdh

to which I would then add a SLOG on a single 256 GB NVMe and a cache on mirrored 512 GB NVMe drives.

Below is the command I am looking at using:

Code:
zpool create -o asift=12 -0 compression=lz4 -f node1_tank mirror a0 b0 mirror c0 d0 mirror e0 f0 mirror g0 h0 log /dev/nvme2n1 mirror cache /dev/nvme0n1 /dev/nvme1n1

  1. Will this work as I intend and provide RAID10-type benefits?
  2. Can I add -o encryption=aes-256-gcm -o keylocation=prompt -o keyformat=passphrase, and what disadvantages/risks are there to using encryption?
  3. Is there a noticeable benefit to mirroring the log/ZIL?

Any other bits of seasoned advice are welcome.

I also have 4 NIC interfaces, made up of 2x 1 Gb RJ45 and 2x 10 Gb SFP. What is the best way to set these up?

Assuming the PVE management interface is RJ45-1, say 192.168.10.10, I was thinking of using both RJ45 interfaces as LAN and WAN on an OPNsense VM in front of all the services, which will be web servers and application servers like Nextcloud once the basic configuration is right.
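For reference, the sort of bridge layout I was imagining in /etc/network/interfaces, with the OPNsense VM getting one virtual NIC on each bridge (interface names, addresses, and the gateway below are only placeholders):

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual
iface eno2 inet manual

# management/LAN bridge on the first RJ45 port
auto vmbr0
iface vmbr0 inet static
        address 192.168.10.10/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0

# WAN bridge on the second RJ45 port, no host IP; only the OPNsense VM attaches here
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0

The two 10 Gb SFP ports could then be kept for storage or VM traffic on their own bridge or bond.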

EDIT: zpool create won't accept compression or encryption. Another error I get is "invalid vdev specification: mirror requires at least 2 devices".
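Looking at it again, I suspect the problems are: "asift" should be "ashift", compression and encryption are dataset properties so they need -O (capital O) rather than -o at create time, and cache devices can't be mirrored, so the stray "mirror" before "cache" leaves an empty mirror vdev, hence the "at least 2 devices" error. Something like this would probably have been closer (untested):

Code:
zpool create -f -o ashift=12 -O compression=lz4 node1_tank \
    mirror a0 b0 mirror c0 d0 mirror e0 f0 mirror g0 h0 \
    log /dev/nvme2n1 \
    cache /dev/nvme0n1 /dev/nvme1n1
# native encryption could likewise be requested at create time with
#   -O encryption=aes-256-gcm -O keyformat=passphrase -O keylocation=prompt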
 
Okay, I removed the encryption, compression, and cache parts of the command and created the pool.

I then amended the cache part to "zpool add node1_tank cache /dev/nvme0n1p1 /dev/nvme1n1p1".
Note that I created partitions on the NVMe drives, but this wasn't necessary; it was just me messing around while thinking.

The result is:

Code:
:~# zpool status -v
  pool: node1_tank
 state: ONLINE
config:

        NAME         STATE     READ WRITE CKSUM
        node1_tank   ONLINE       0     0     0
          mirror-0   ONLINE       0     0     0
            a0       ONLINE       0     0     0
            b0       ONLINE       0     0     0
          mirror-1   ONLINE       0     0     0
            c0       ONLINE       0     0     0
            d0       ONLINE       0     0     0
          mirror-2   ONLINE       0     0     0
            e0       ONLINE       0     0     0
            f0       ONLINE       0     0     0
          mirror-3   ONLINE       0     0     0
            g0       ONLINE       0     0     0
            h0       ONLINE       0     0     0
        logs
          nvme2n1p1  ONLINE       0     0     0
        cache
          nvme0n1p1  ONLINE       0     0     0
          nvme1n1p1  ONLINE       0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
config:

        NAME                              STATE     READ WRITE CKSUM
        rpool                             ONLINE       0     0     0
          mirror-0                        ONLINE       0     0     0
            scsi-12345678908097191-part3  ONLINE       0     0     0
            scsi-12345678908087ytu-part3  ONLINE       0     0     0

errors: No known data errors

:/# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
node1_tank         624K  14.4T    96K  /node1_tank
rpool             14.0G   250G   104K  /rpool
rpool/ROOT        2.48G   250G    96K  /rpool/ROOT
rpool/ROOT/pve-1  2.48G   250G  2.48G  /
rpool/data          96K   250G    96K  /rpool/data
rpool/var-lib-vz  11.5G   250G  11.5G  /var/lib/vz

Apart from my other queries above, if anyone sees anything wrong here, please let me know.

Edit:
Compression seems to be on by default, and LZ4 appears to be active.

Code:
:/# zfs get compressratio node1_tank
NAME        PROPERTY       VALUE  SOURCE
node1_tank  compressratio  1.00x  -

:/# zpool get feature@lz4_compress node1_tank
NAME        PROPERTY              VALUE   SOURCE
node1_tank  feature@lz4_compress  active  local
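A more direct check, I believe, is the dataset property itself rather than the pool feature flag:

Code:
# shows the effective compression setting for the pool's root dataset
zfs get compression node1_tank

# if it reports "off", it can be enabled after the fact (only new writes are compressed)
zfs set compression=lz4 node1_tank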
 
I have to give a massive shout-out to @Dunuin whose posts have provided a great deal of the answers I've needed so far, in a straightforward and easily digested form. There are others too whose quest for answers has prompted some of these issue-solving responses and this community is ace.

Whilst bedding in the above and awaiting critique and redirection, I've just discovered another thread by the above-mentioned that is my next step.

https://forum.proxmox.com/threads/full-system-encryption-with-network-unlock.125441/

I think this caters for my encryption issues on pool creation.

I also have to mention that in the creation of the mirrors above I used aliases for my drive names. This came from

https://pve.proxmox.com/wiki/ZFS:_Tips_and_Tricks

which was itself a tip from one of the above-mentioned (or another poster) and saved a lot of typing.
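For anyone else following along, the alias setup from that page boils down to something like this (the by-id paths are placeholders for your own disks):

Code:
# /etc/zfs/vdev_id.conf -- map short names to stable by-id device paths
alias a0 /dev/disk/by-id/scsi-35000c500aaaaaaaa
alias b0 /dev/disk/by-id/scsi-35000c500bbbbbbbb
# ...one line per disk...

# reload udev so the /dev/disk/by-vdev/a0, b0, ... symlinks appear
udevadm trigger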
 
In my opinion, this vdev is too big for a SLOG. With a 10 Gbit network, 16 GB should be enough: the SLOG only ever holds a few seconds of in-flight sync writes (roughly two transaction-group intervals), and 10 Gbit/s is about 1.25 GB/s, so that works out to somewhere around 10-16 GB.
Read the ZFS paper that TrueNAS prepared. There are two parts.
https://www.truenas.com/blog/zfs-pool-performance-1/
Thank you, milew. That was a good read and advanced my understanding. My host has 2x 1 Gb interfaces and 2x 10 Gb interfaces. The disks are internal, so the network feed to clients will probably come over the 1 Gb ports, but data on the internal network to the host will run at 10 Gb.

I already partitioned the SLOG device to allow for periodic swapping of its partition, as mentioned somewhere by Dunuin. Do you suggest I make those partitions smaller, or are you saying replace them with smaller NVMe drives? This particular host is a Dell R730xd with 384 GB of RAM. I don't have room for SSDs without re-caging, rewiring, and new interface cards.
 
I don't have much experience with SLOG. On my NVMe disks, when testing via fio, the maximum SLOG usage was 8 GB; I don't have this pool in production yet. Large SLOG sizes will never be used. If some of your disks are platter disks, it would be better if you could reduce the SLOG and add a special vdev for metadata.
You can divide the disk into partitions, or, if your NVMe disks allow it, divide them into namespaces. Namespaces are an additional level of abstraction in NVMe disks: each one behaves as if it were a separate disk. nvme0n1 is what you have now, and you can cut such a disk into more pieces: nvme0n1, nvme0n2, and so on. Not all disks allow this; I think only enterprise ones have this function.
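A rough sketch of the special vdev part (the partition names below are placeholders, and a special vdev only receives metadata written after it is added, so existing metadata isn't moved unless the data is rewritten):

Code:
# add a mirrored special (metadata) vdev -- it should be as redundant as the rest of the pool
zpool add node1_tank special mirror /dev/nvme0n1p2 /dev/nvme1n1p2

# optionally also store small records on it; the threshold is a judgment call
zfs set special_small_blocks=16K node1_tank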
 
Oh Okay! I will look at this. Thank you again
 
Don't forget ZFS is hard on TBW because of write amplification; non-datacenter-grade SSDs can wear out quickly.
 
Thank you, that is well noted. I use 2x WDC SN730 512 GB and a Toshiba (KIOXIA) branded 256 GB. These are cheap enough during the initial setup and testing to be thrown away if necessary.

What is the command needed to move the metadata to another vdev/partition?
 
