Tweaking the OVH default Proxmox configuration

Ovidiu

Renowned Member
Apr 27, 2014
When I first installed Proxmox on a server with 2 x 2TB SATA disks in RAID1, I went with the defaults, but now it looks like I need to change a few things. The system is already running a few LXC containers in production, though.

Code:
#lsblk /dev/sda
NAME         MAJ:MIN RM    SIZE RO TYPE MOUNTPOINT
sda            8:0    0    1.8T  0 disk
├─sda1         8:1    0 1004.5K  0 part
├─sda2         8:2    0   19.5G  0 part /
├─sda3         8:3    0   1023M  0 part [SWAP]
└─sda4         8:4    0    1.8T  0 part
  └─pve-data 252:0    0    1.8T  0 lvm  /var/lib/vz

For one, I need a bigger swap partition, but I don't know much at all about LVM. Any pointers, or maybe keywords to google besides "resize LVM"?

The next issue is that my storage is added as local. I now noticed I could add the same storage (using the volume group pve) as LVM and was wondering what the difference is here? See screenshot.
Also, given a default configuration of 2 HDs would it have been advisable to skip the hardware raid and get ZFS running?
 
The "local" storage is always configured as a dir storage pointing to /var/lib/vz. If you add an LVM volume group as storage using the LVM type, Proxmox will create logical volumes for container file systems and disk images in that volume group. In your case the "pve" volume group will not have much free space available (if any), so this will not be of much use to you. Unfortunately your swap partition is not in your LVM VG (as it would be in the default Proxmox setup), so resizing it will probably be more work than replacing it with a swap LV in your 'pve' VG. You would need to shrink your existing 'pve-data' LV first, and then add a new LV.

Shrinking anything with data on it is always risky business, so make backups first! How you proceed depends on the file system used for /var/lib/vz ('pve-data'), but you will probably need to shrink the file system first, then shrink the 'pve-data' LV using lvresize, and finally add a new swap LV with lvcreate.
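To make the order of operations concrete, the steps above could be sketched like this. This is only a sketch: it assumes ext4 on 'pve-data', and the target sizes (1700G for data, 8G for swap) are example values, not taken from this thread. It is destructive, so take backups before trying anything like it.

```shell
#!/bin/bash
# Sketch only, assuming ext4 on pve-data -- make backups first!
shrink_and_add_swap() {
    umount /var/lib/vz
    e2fsck -f /dev/pve/data            # mandatory check before resize2fs
    resize2fs /dev/pve/data 1700G      # shrink the file system FIRST...
    lvresize -L 1700G /dev/pve/data    # ...then the LV, never below the FS size
    lvcreate -L 8G -n swap pve         # carve a swap LV out of the freed space
    mkswap /dev/pve/swap
    swapon /dev/pve/swap               # add /dev/pve/swap to /etc/fstab to persist
    mount /var/lib/vz
}
# guard: only act on a host that actually has this layout
[ -e /dev/pve/data ] && shrink_and_add_swap
```

The critical ordering: when shrinking, the file system is resized before the LV (the reverse of growing), and the LV must never end up smaller than the file system it holds.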
 
@fabian so apart from the issue with the swap, it seems I can only make snapshots and use the snapshot backup method with my KVMs, but not with my LXCs.

I have read a few threads about snapshots and LXC and I must admit I don't fully understand them.
So in short, will that eventually become available with my current setup yes or no?

If no, what would be the best recommendation: change the LVM setup somehow (I've read something about thin-provisioned LVM), or migrate to a new server without hardware RAID and use ZFS for the whole server (I think that would solve the snapshot issue)?

P.S. Is there a how-to somewhere on setting up Proxmox on ZFS (with Debian, and on OVH / SoYouStart preferably)?
 
The only way I've found to get Proxmox on ZFS @ OVH is to use their KVM/IP (IPMI) and mount the ISO over the WAN.
It's quite painful doing it over the WAN since my upload is only 6Mbps and I'm about 30ms (round trip) away from BHS.

Supposedly you can upload the ISO to them somewhere and mount it more locally, but I haven't been able to figure that out.
 
Thanks, I'll keep that in mind for later. Judging by your answer, I assume that when installing from the ISO one has the option to install the whole system on ZFS.
 
Snapshots for LXC are currently supported with ZFS, LVM-thin and Ceph. The installer ISO lets you set up the whole system on ZFS (instead of the default ext4 on LVM).
 
The time to move is getting closer, so what I have found at the moment is a decently priced server with good specs for what I need, including ECC RAM.

I've read up on how to install to ZFS, found the tips on SWAP + ZFS but am a bit confused about what the wiki says about booting from ZFS: https://pve.proxmox.com/wiki/Storage:_ZFS#Adding_ZFS_root_file-system_as_storage_with_Plugin

specifically: once I have one big pool, do I divide it into datasets or zvols?
If you install Proxmox with ZFS, VM disks on local storage will be simple files, so you will have 2 layers of file system though reducing performance. This is especially true if you choose qcow2 format, because then you will have a "copy on write" image disk (qcow2) that writes on a "copy on write" file system (ZFS).
that doesn't sound right to me, but if I remove this part "This is especially true" it kinda makes sense. Am I on the right track here?

So if I follow these instructions, do I create a block device ONLY for the VM disks? Where does everything else go? Do I need an extra dataset or zvol, or does it all end up in the root?

If I follow these instructions will I be able to use ZFS to:
- use the snapshot backup method for LXC and KVMs?
- create manual snapshots before updating stuff in LXCs and KVMs as a precaution so I can quickly roll-back?
 
If you use your zpool as "dir" storage, you won't get any of the benefits of ZFS integration in proxmox, and instead get the (performance) problems you quoted. If you use the ZFS plugin in Proxmox (by creating a new dataset on your zpool and configuring Proxmox to use it), you can use ZFS directly for containers and zvols for KVM, and can create snapshots of individual VMs/CTs.
 
Thanks Fabian, I'm still struggling with different people using different terms. Please have some more patience and help me understand this.

When you say: "creating a new dataset" you are referring to the wiki saying: "Create a new filesystem" right?

When you say: "you can use ZFS directly for containers" you mean my container disks are now on this dataset created manually and added using the zfs plugin for Proxmox?

But what do you mean by "zvols for KVM"?
 
"Dataset" is ZFS terminology and can mean a ZFS filesystem, a snapshot, or a zvol.

Proxmox offers a ZFS pool plugin, which enables you to use a zpool (in reality, any ZFS filesystem/dataset, but this is not exposed in the GUI) as storage. If you configure this ZFS storage in Proxmox, you can use it for containers and VMs. Containers can simply use ZFS directly (they use the host kernel, which supports ZFS out of the box), so for each mountpoint we create a ZFS filesystem and mount it when starting the container. VMs cannot use ZFS directly (because you can run pretty much arbitrary operating systems inside the VM), so we create a zvol for each VM disk, which means the VM can use whatever file system it wants on top. Zvols could be described as block devices backed by ZFS.

does this explanation clear things up for you?
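To illustrate the distinction, this is roughly what happens behind the scenes for a container mountpoint versus a VM disk. The vmid (100), dataset names and sizes are made-up examples following the usual Proxmox naming, not commands from this thread.

```shell
#!/bin/bash
# Illustration only: dataset names, vmid and sizes are examples.
make_ct_mountpoint() {
    # a container mountpoint is a plain ZFS filesystem, mounted on the host
    zfs create -o refquota=8G rpool/data/subvol-100-disk-1
}
make_vm_disk() {
    # a VM disk is a zvol: a block device backed by ZFS
    # (it shows up under /dev/zvol/, and the guest formats it as it likes)
    zfs create -V 32G rpool/data/vm-100-disk-1
}
# guard: only meaningful on a host that actually has an rpool
zpool list rpool >/dev/null 2>&1 && { make_ct_mountpoint; make_vm_disk; }
```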
 
Yes, thanks, things are becoming clearer. Sorry for all the questions, but I don't have a test system to try this out on, so I had to ask.
 
Managed to become totally confused:

I installed Proxmox 4.2 onto a mirror using zfs for root. I then followed the instructions here: https://pve.proxmox.com/wiki/ZFS#Adding_ZFS_root_file-system_as_storage_with_Plugin
zfs create rpool/zfsdisks

Now add it to the storage (Datacenter -> Storage -> Add, choose "ZFS", as ID let's use, for example, "zfsvols", as "ZFS Pool" choose "rpool/zfsdisks", set "thin provisioning" and you are done. When you create a VM, choose "zfsvols" as storage.)

and then when I check as explained on that same page:
Code:
root@jeeves:~# cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content vztmpl,iso,backup

zfspool: local-zfs
    pool rpool/data
    sparse
    content images,rootdir

zfspool: zfsvols
    pool rpool/zfsdisks
    sparse
    content rootdir,images

"zfsvols" being what I created following the tutorial above; "local-zfs" apparently was set up automatically by Proxmox.

So what is the difference between the two now? Or has Proxmox changed in the meantime and does it do this step automatically, making the wiki obsolete?

Also, it seems Proxmox set itself up with 4GB of swap. How can I change that? I didn't see any options for it during installation; did I miss them, or is this just how it works?
 
Yes, you can just use local-zfs.
 
so the wiki is outdated :-/
any idea about the swap?

I tried:

Code:
zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             5.01G  2.63T    96K  /rpool
rpool/ROOT         778M  2.63T    96K  /rpool/ROOT
rpool/ROOT/pve-1   778M  2.63T   778M  /
rpool/data          96K  2.63T    96K  /rpool/data
rpool/swap        4.25G  2.63T    64K  -

zfs get all rpool/swap | grep reserv
rpool/swap  reservation            none                   default
rpool/swap  refreservation         4.25G                  local
rpool/swap  usedbyrefreservation   4.25G                  -


zfs get volsize rpool/swap
NAME        PROPERTY  VALUE    SOURCE
rpool/swap  volsize   4G       local

root@jeeves:/rpool# zfs set volsize=32G rpool/swap
root@jeeves:/rpool# zfs set refreservation=32G rpool/swap

and a swapoff / swapon, but free still shows 4GB of swap.
Code:
 swapon
NAME     TYPE      SIZE USED PRIO
/dev/zd0 partition   4G   0B   -1
any hint how to change this?
 
I'm getting different values from different tools for my swap device:

Code:
root@jeeves:~# swapon
NAME     TYPE      SIZE USED PRIO
/dev/zd0 partition   4G   0B   -1

root@jeeves:~# fdisk -l /dev/zd0
Disk /dev/zd0: 32 GiB, 34359738368 bytes, 67108864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Looks like I need to resize that partition? Any help?
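One likely explanation (an educated guess, not verified on this setup): growing the zvol does not rewrite the swap signature stored on it, so the device still carries a 4G swap header even though the block device itself is now 32G. Recreating the signature with mkswap should fix the mismatch, along these lines:

```shell
#!/bin/bash
# Sketch of a likely fix: re-run mkswap after growing the swap zvol.
grow_zvol_swap() {
    swapoff /dev/zd0                      # stop using the device first
    zfs set volsize=32G rpool/swap        # grow the backing zvol (already done above)
    zfs set refreservation=32G rpool/swap # keep the space reserved for swap
    mkswap /dev/zd0                       # rewrite the swap header at the new size
    swapon /dev/zd0                       # swapon should now report 32G
}
# guard: only act if the zvol device actually exists
[ -b /dev/zd0 ] && grow_zvol_swap
```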
 

After long tinkering it turns out my ISP provides servers with ECC RAM, but only in Canada, and I cannot move my IPs (from RIPE to ARIN), so I need to stay on the current server for now.

Would you mind sending me a few more pointers/links/keywords to enable me to use snapshots and snapshot backups for LXC and KVM?

I understand I need to shrink my current LV and then create a new one but where does the "thin" part come into play?

Do you guys have a list of recommended support partners? I would rather let someone experienced do this migration.
 
For LXC, there are three storage types that currently support snapshots: ZFS, LVM-thin and Ceph. For Qemu, the same three support snapshots, and there is also the qcow2 image format, which supports snapshots on directory storages. If you installed using the 4.2 ISO and left the default settings, you should already have an LVM-thin storage configured as "local-lvm". Otherwise, you need to create one of the aforementioned storages yourself (either by adding a new empty disk, or by shrinking existing volumes to free up space). The manual setup for ZFS and Ceph is described in the wiki; for LVM-thin you can refer to the pvcreate, vgcreate and lvcreate man pages or the LVM documentation.
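For the LVM-thin route, a minimal manual setup might look like the following. This is a sketch under assumptions: /dev/sdb stands in for a spare disk, and "vmdata", "data" and "local-thin" are placeholder names, not anything from this thread.

```shell
#!/bin/bash
# Sketch of a manual LVM-thin setup; /dev/sdb and all names are placeholders.
make_lvmthin() {
    pvcreate /dev/sdb                      # initialize the spare disk as a PV
    vgcreate vmdata /dev/sdb               # new VG just for guest storage
    # leave some room in the VG for the pool's metadata LV
    lvcreate -l 90%FREE --thinpool data vmdata
    # register it as a storage in Proxmox
    pvesm add lvmthin local-thin --vgname vmdata --thinpool data --content rootdir,images
}
# guard: never touch a disk that is missing or already an LVM PV
[ -b /dev/sdb ] && ! pvs /dev/sdb >/dev/null 2>&1 && make_lvmthin
```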
 
