New ProxVE 4.2 - Thin LVM - upgrade:deploy:migrate query

fortechitsolutions

Renowned Member
Jun 4, 2008
Hi all! Just wanted to ask: is there any info available on the new Thin LVM feature in the latest release of Proxmox 4.2, announced today?

Specifically:

-- I've got a small Proxmox cluster using a shared fibre SAN storage array with LVM, a pretty classic deployment for shared storage
-- it was set up in Summer 2015, so it currently has a non-thin LVM config; i.e. when I deploy a new VM on this shared LVM storage, the VM disk is thick-provisioned immediately
-- I have the impression the latest Proxmox 4.2 will permit thin-provisioned LVM storage (rough sketch of the two kinds of storage definition below)
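
For context, my understanding is that the classic "thick" shared LVM setup and the new thin variant are two different storage types in /etc/pve/storage.cfg. A rough sketch of what I mean - the storage and volume group names here are invented, and the lvmthin entry is just what I gather the 4.2 default looks like, not something from my own config:

Code:
    lvm: san-lvm
            vgname san_vg
            content images
            shared 1

    lvmthin: local-lvm
            vgname pve
            thinpool data
            content rootdir,images

The 'thinpool' line is, as far as I can tell, the part that did not exist in my Summer 2015 setup.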

However, I'm curious:

- can I just upgrade my Proxmox hosts to the latest version, and then all new VMs deployed will use this feature?
or
- is there specific config work required to make an existing LVM storage pool capable of thin provisioning?
or
- is this a "remove and replace" kind of upgrade, i.e.:
-- backup all the VMs
-- destroy the LVM storage
-- re-create it using the latest Proxmox / latest LVM config
-- the newly created storage pool will be thin-capable
-- then restore the VMs into the LVM storage, and they will be 'thin' on restore? (rough command sketch after these lists)

(OR, could I even do a variant on this? Something like:
--- live-migrate the VMs out of <LVM storage> to <alternate storage that is available, but not LVM>
--- once the VMs are all out, delete / re-create the LVM storage (after first upgrading to the latest Proxmox)
--- re-migrate the VMs back into the re-created LVM pool? Is something else needed to make them "thin-aware"? etc.?)
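
Just to make the "remove and replace" / migrate variants concrete, the sort of thing I imagine is below. The VM ID, storage names and backup filename are all invented, and the thin-pool creation line is only my guess at what would be needed - glad to be corrected:

Code:
    # backup to some other storage, recreate the LVM storage as a thin pool, then restore
    vzdump 100 --storage backup-nfs --mode snapshot
    lvcreate -L 500G --thinpool thindata san_vg
    qmrestore /mnt/pve/backup-nfs/dump/vzdump-qemu-100-2016_04_27-10_00_00.vma.lzo 100 --storage new-thin-store

    # or the migrate variant: move a disk live onto another storage, and back again later
    qm move_disk 100 virtio0 alternate-store --delete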


Any pointers are greatly appreciated!

And, as always, many thanks to the Proxmox team for moving things forward and adding more great features!


Tim Chipman
 
Ah, OK, that is simple / good to know. Do you know if this kind of thing may become possible in future? I had the impression some form of LVM thin has existed for a while now (i.e. not in Proxmox, just in LVM itself), but I suspect I don't fully understand the different implications of LVM being used local-only vs. shared, and how that interacts with thin vs. thick provisioning.

Possibly I just need to get a better handle on NFS (for example) as a storage target for the use case where thin shared storage is required :)
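
For my own notes, the kind of NFS definition I have in mind would be roughly as below (server address, export path and storage name are invented); my understanding is that qcow2 images on such a storage are sparse, so effectively thin-provisioned:

Code:
    nfs: nfs-vmstore
            server 192.168.1.50
            export /tank/vmstore
            path /mnt/pve/nfs-vmstore
            content images
            options vers=3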

Thanks!

Tim
 
Yes, absolutely, Ceph is an option - one I want to test on a test cluster first before doing production. And, from reading the docs, I have the feeling a Ceph deployment will more or less only be best practice (production usable) if your deployment scenario is roughly:
--- more than 3 nodes, and ideally the more the merrier; i.e. a small cluster participating in Ceph will likely give less performance
--- plenty of disks per node for Ceph (i.e. ideally 6 or more disks/spindles per node as Ceph data storage)
--- a dedicated 10-gig network for Ceph data replication
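
If I do get that test cluster going, my reading of the docs suggests the basic per-node setup is roughly as below - the network and device names are placeholders, and this is just my understanding, not a tested recipe:

Code:
    pveceph install
    pveceph init --network 10.10.10.0/24   # once, on the first node
    pveceph createmon                      # on each node that should run a monitor
    pveceph createosd /dev/sdb             # repeat per data disk, on each node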

My current 'main problem' is that I have more projects that need a 'small cluster' rather than a 'bigger cluster'. So while Ceph can be a lovely model, it is not so well suited (I think?) to a 2-3 node cluster that wants shared storage for live migration etc. (and especially quicker recovery in case one node goes offline / crashes out).

I appreciate your feedback on this thread!

Tim
 
So while Ceph can be a lovely model, it is not so well suited (I think?) to a 2-3 node cluster that wants shared storage for live migration etc. (and especially quicker recovery in case one node goes offline / crashes out).
If you can live without redundant storage, you could consider a ZFS-based storage server. Using the ZFS_over_iscsi plugin gives you live migration between nodes, live snapshots, (linked) clones, and thin-provisioned zvols.
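
Something along these lines in /etc/pve/storage.cfg - the pool, portal, target and provider here are just placeholders for whatever your storage box uses, and 'sparse 1' is the part that gives you the thin-provisioned zvols:

Code:
    zfs: zfs-san
            pool tank/vmdata
            portal 192.168.1.20
            target iqn.2010-09.org.example:storage
            iscsiprovider iet
            blocksize 4k
            sparse 1
            content images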
 
Yes, that is true. To be honest, though, I am not sure it addresses the key 'concern' that comes with NFS as a storage model, i.e.:

- a single point of failure, if the NFS storage target (aka the ZFS storage target) crashes / goes offline
- I realize custom builds are possible, e.g. DRBD replication under the hood of an NFS failover cluster, but the config is much too complex IMHO for what I am happy to call 'stable'
- and likewise, a "fault tolerant" ZFS storage device has the same base issue. I can set up a 'secondary replicated ZFS thing' to give me 'pretty good recovery timelines' (i.e. if the primary storage is destroyed by water damage, then the secondary unit in a different room has the ZFS replica data pushed to it asynchronously at intervals - roughly as sketched below). But this isn't the same thing as a 'fault tolerant' storage device.
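
(By "pushed at intervals" I mean roughly the following - the dataset names, snapshot names and hostname are made up, and in practice it would be wrapped in a cron job or a replication script:)

Code:
    # take a new snapshot on the primary, then send only the delta since the last one
    zfs snapshot tank/vmdata@rep-2016-04-27
    zfs send -i tank/vmdata@rep-2016-04-26 tank/vmdata@rep-2016-04-27 | \
        ssh backup-host zfs receive -F tank/vmdata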
And I still have the impression that base performance (of ZFS or NFS) will primarily be rate-limited by NIC bandwidth; otherwise, compared to NFS, ZFS mainly adds icing-on-the-cake features rather than a core functionality difference for the "higher fault tolerance" use case.

At the end of the day, small sites with simpler deployments (i.e. a 2-3 node cluster) are inherently not going to get 'everything' (i.e. modest price, 100% fault-tolerant structure, easy management), I think. They can get 'close', but not quite.


Tim
 
