[SOLVED] LVM or EXT4 on KVM guest?

Sebastian2000

Member
Oct 31, 2017
Hello

Due to the fu$%·$%... price increase of Plesk, I will rebuild all our shared hosting servers, consolidating many smaller servers onto fewer but more powerful ones to save money on licences.
Currently I run CloudLinux on each shared hosting server, on an ext4 filesystem. I have other, smaller VPSes on LVM. I love LVM because it's possible to add disks and increase space without any reboot, but I have read that it can have an impact on performance. Can you give us your opinion on this? Is plain ext4 still better, or does LVM really have only little or no impact on performance?
 
We also use LVM (thin) as default storage for VMs.

Yes, but I am talking about the KVM guest on top of the Proxmox host with LVM... Is it a bad idea, and should it be ext4? Or does LVM not (or only barely) affect the performance of the KVM guest?
 
I don't think so.

Ok, thanks a lot for your answer! I wanted confirmation from an expert about this before choosing the setup of these new servers and deciding to install on LVM... Changing a server's disk size without a reboot is a very great thing with LVM!
 
Changing a server's disk size without a reboot is a very great thing with LVM!

I don't get your point. If you use LVM inside your guest, you have to perform a partition resize at some place anyway, otherwise the physical volume is not increased - so the problem is exactly the same as for a non-LVM setup.
 

Of course, but you can do all of that without any reboot or impact on services; you can do it on a production server... With ext4, I think you have to do it offline with GParted or similar.
 

LVM is a volume manager that has volumes with a filesystem (often ext4) on them, so you have EXACTLY the same thing. Please read up on how a volume manager works. Resizing without a reboot works exactly the same for LVM-based as for non-LVM-based disks. In other words: you do not have problems resizing a disk without a reboot.

Maybe you are thinking of this: resizing a partition is not as easy without LVM, yet resizing a whole disk is.


That shows exactly what I'm talking about: the resize2fs part is the same, except that with LVM it is incorporated into the lvextend command via the --resizefs option. Behind the curtain, the same operation is called.
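To make the comparison concrete, here is a minimal sketch (not from this thread; the device, VG and LV names are assumptions):
Code:
# Growing a root filesystem online after the virtual disk was enlarged.

# Without LVM (assuming /dev/sda1 carries the ext4 root):
growpart /dev/sda 1      # grow partition 1 to fill the disk (cloud-utils)
resize2fs /dev/sda1      # grow the ext4 filesystem, online

# With LVM (assuming the PV sits on /dev/sda2, VG "vg0", LV "root"):
growpart /dev/sda 2      # grow the partition holding the PV
pvresize /dev/sda2       # let the physical volume use the new space
lvextend --resizefs -l +100%FREE /dev/vg0/root   # grows LV and ext4 in one step
Either way, the partition grow and the filesystem grow both happen online; LVM merely bundles the last step into one command.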
 
Of course, but you can do all of that without any reboot or impact on services; you can do it on a production server... With ext4, I think you have to do it offline with GParted or similar.
No, this is not correct. It depends on your VM setup. We never use LVM in a VM anymore; it makes absolutely no sense. If you use the right partition scheme, you can easily grow every partition online on ext4. Follow this scheme (a short sketch follows after the list):
  • Never put more than one partition on one virtual hard drive, for example: one drive for root, one drive for swap, one drive for logs...
  • Choose a filesystem that supports online growing, for example ext4 (we use that)
  • Never use parted on the command line, because the commands change every year and are very complicated
  • Use GParted or another graphical program to edit your partitions (we use that on console-only servers via X11 over SSH)
This way it is just as easy as in Windows VMs :)
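A minimal sketch of that scheme (the VM ID, disk slot and device name are assumptions):
Code:
# On the Proxmox host: grow the virtual disk behind scsi0 of VM 110 by 10 GiB.
qm resize 110 scsi0 +10G

# Inside the guest: the disk carries only this one partition, so grow it
# and the ext4 filesystem on it, both online.
growpart /dev/sda 1
resize2fs /dev/sda1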

Supplement: ugh... @LnxBil was faster this time :)
 

Hello

I did some tests with GParted, PuTTY with X11 forwarding enabled, and VcXsrv on Windows. I get the GParted window, but with completely unreadable characters:
http://hpics.li/6441773
Do you have a solution for this?
 
I never unmount. Everything works online, always.

Which OS do you use exactly in the VM?
Show me your virtual HDDs and your partition table.

For example, here is one of my VMs:
Code:
lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb      8:16   0    4G  0 disk [SWAP]
sr0     11:0    1 1024M  0 rom 
sda      8:0    0   32G  0 disk
└─sda1   8:1    0   32G  0 part /
Code:
qm config 110
...
scsi0: SSD-vmdata-KVM:vm-110-disk-2,discard=on,size=32G
scsi1: SSD-vmdata-KVM:vm-110-disk-1,discard=on,size=4G
...
 

The OS in the guest is CentOS 7.

Code:
qm config 100100100
bootdisk: scsi0
cores: 1
ide2: local:iso/CentOS-7.0-1406-x86_64-NetInstall.iso,media=cdrom,size=362M
memory: 2048
name: TEST
net0: virtio=92:4C:4D:C2:C4:81,bridge=vmbr0
numa: 0
ostype: l26
scsi0: local-lvm:vm-100100100-disk-1,size=12G
scsi1: local-lvm:vm-100100100-disk-2,size=5G
scsi2: local-lvm:vm-100100100-disk-3,size=51G
scsi3: local-lvm:vm-100100100-disk-4,size=4G
scsihw: virtio-scsi-pci
smbios1: uuid=68284be2-4f22-416d-bb4b-9a9d6580d0c3
sockets: 1



Code:
# lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda      8:0    0   12G  0 disk
└─sda1   8:1    0   10G  0 part /
sdb      8:16   0    4G  0 disk
└─sdb1   8:17   0    4G  0 part /tmp
sdc      8:32   0   51G  0 disk
└─sdc1   8:33   0   50G  0 part /var
sdd      8:48   0    5G  0 disk
└─sdd1   8:49   0    5G  0 part [SWAP]
sr0     11:0    1  362M  0 rom

And there is no option to resize the partition in GParted: http://hpics.li/a4a8197

@fireon THANKS A LOT FOR YOUR HELP!!!
 
Online growing always works, whereas online shrinking does not.

I always resize with the new fdisk (the one that can handle GPT), because I never use a GUI, and parted's resizepart does not work (in my versions) with a relative size, only an absolute one, so fdisk is always faster for me. Combine this with partprobe to reread the partition table and you're good to go. I resized multiple disks just this week and it never failed me.
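A minimal sketch of that workflow (device and partition number are assumptions; fdisk itself is interactive):
Code:
fdisk /dev/sda        # in fdisk: delete partition 1, recreate it with the same
                      # start sector and a larger end, keep the existing ext4
                      # signature when asked, then write the table with 'w'
partprobe /dev/sda    # ask the kernel to reread the partition table
resize2fs /dev/sda1   # grow the ext4 filesystem, online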
 

Well, I tried to do it with gdisk, but after doing so and rereading with partprobe:

Error: Partition(s) 1 on /dev/sda have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.

Error: Partition(s) 1 on /dev/sdc have been written, but we have been unable to inform the kernel of the change, probably because it/they are in use. As a result, the old partition(s) will remain in use. You should reboot now before making further changes.

Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
 
Tested here on a fresh Ubuntu install. Here is the filesystem:
Code:
Filesystem      Size    Used Avail  Use% Mounted on
udev            442M       0  442M    0% /dev
tmpfs            93M    4,8M   88M    6% /run
/dev/sda1        32G    5,2G   25G   18% /
tmpfs           464M     96K  464M    1% /dev/shm
tmpfs           5,0M       0  5,0M    0% /run/lock
tmpfs           464M       0  464M    0% /sys/fs/cgroup
tmpfs           400M     16K  400M    1% /run/user/120
tmpfs           400M       0  400M    0% /run/user/0
First I grow the disk in Proxmox. Then I can see the new space in the journal log:
Code:
Jan 09 17:05:14 xtest kernel: sd 2:0:0:0: Capacity data has changed
Jan 09 17:05:14 xtest kernel: sd 2:0:0:0: [sda] 88080384 512-byte logical blocks: (45.1 GB/42.0 GiB)
Jan 09 17:05:14 xtest kernel: sda: detected capacity change from 34359738368 to 45097156608
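If the guest kernel does not notice the resize by itself, a rescan of the SCSI disk can be triggered manually (a sketch; sda is an assumption):
Code:
echo 1 > /sys/class/block/sda/device/rescan   # ask the SCSI layer to re-read the disk capacity
dmesg | tail                                  # should show the 'detected capacity change' line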
Then have a look at the attached photos for growing it online with GParted.
At the end I see the new space.
Code:
Filesystem      Size    Used Avail  Use% Mounted on
udev            442M       0  442M    0% /dev
tmpfs            93M    4,8M   88M    6% /run
/dev/sda1        42G    5,2G   35G   14% /
tmpfs           464M     96K  464M    1% /dev/shm
tmpfs           5,0M       0  5,0M    0% /run/lock
tmpfs           464M       0  464M    0% /sys/fs/cgroup
tmpfs           400M     16K  400M    1% /run/user/120
tmpfs           400M       0  400M    0% /run/user/0
Time for this: one minute. :)
Tomorrow I will test it on CentOS too.
 

Attachments: 01.png, 02.png, 03.png, 04.png, 05.png, 06.png (GParted online-resize screenshots)

It seems that it's not possible on CentOS; I'll wait for your test to confirm it. Thanks again!
 
Nope, it doesn't work on CentOS. Tested with other kernels and programs. ;( Love my Ubuntu :)
Tested on Gentoo and Debian; there it also works fine.
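A workaround that is often suggested for this case (an assumption, not verified in this thread): on CentOS 7 the kernel refuses to reread an in-use partition table via partprobe, but partx can update the entries in place:
Code:
partx --update /dev/sda   # update the kernel's view of the (in-use) partition table
resize2fs /dev/sda1       # then grow the ext4 filesystem online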
 
