Questions about Proxmox installation and disks

Saahib

May 2, 2021
Hi,

I have been evaluating Proxmox for the last few days. I did some test installs, followed the docs, and I have a few questions.

1. First and foremost is choosing a disk strategy.
Currently I have the following disks in the server:
200 GB primary SSD
2x 2 TB SSD
1 TB SATA

I was looking at RAID 1 for the 2x 2 TB SSDs using mdadm, but found that only ZFS is supported for software RAID in Proxmox.
I can do the following:
a. Install the OS on the 200 GB primary disk, use the 2x 2 TB in RAID 1 for VM/CT space, and the 1 TB SATA for backups and images/templates.
But in the above setup, if something happens to the primary drive, the whole system goes down.

b. Install the OS on a RAID 1 of the 2x 2 TB SSDs, use the 200 GB SSD as a ZFS cache, and the 1 TB SATA again for backups.
This way, if one drive fails, I can get a new one and resync without downtime.

In short, please suggest the best possible setup with the above disks to avoid downtime in the event of a disk failure.

2a. I am confused by how disk usage is shown in Proxmox. On a fresh test installation, if I go into the Disks section, then under LVM the disk shows as 100% used, while "Storage" shows its actual usage.

2b. How is LVM handled? I added a new disk on a test system, created a PV on it (pvcreate), created a volume group, and finally created a logical volume. But when I looked at it under Disks -> LVM, it showed as 100% used. I then tried removing the LV, and then it showed as empty (actual usage). Does Disks -> LVM show how much of that PV has been allocated, rather than actual disk usage?
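For reference, the sequence I used was roughly the following (the device name /dev/sdb is just an example from my test system):
Code:
# Create a physical volume on the new disk
pvcreate /dev/sdb
# Create a volume group on top of it
vgcreate testvg /dev/sdb
# Create a logical volume taking most of the group
lvcreate -L 100G -n testlv testvg
# vgs shows the allocation of the group, lvs the logical volumes
vgs testvg
lvs testvg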
 
Hi,

a. Install the OS on the 200 GB primary disk (...)
But in the above setup, if something happens to the primary drive, the whole system goes down.
Yes, but that holds for all operating systems.

b. Install the OS on a RAID 1 of the 2x 2 TB SSDs (...)
This way, if one drive fails, I can get a new one and resync without downtime.
Also yes, but for me, not having the hypervisor and guest storage separated is a disadvantage.

In short, please suggest the best possible setup with the above disks to avoid downtime in the event of a disk failure.
Get another small disk and have one small mirror for the hypervisor and one large mirror for the guests.
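If you set the guest mirror up by hand, a rough sketch of the CLI steps could look like this, assuming the 2 TB disks show up as /dev/sdb and /dev/sdc and using example names (double check the device letters, this destroys their contents):
Code:
# Create a ZFS mirror from the two 2 TB SSDs (DESTROYS all data on them)
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
# Register the pool as a storage for guest disks in Proxmox VE
pvesm add zfspool tank-guests --pool tank --content images,rootdir
# Check pool health; a failed disk can later be swapped out with zpool replace
zpool status tank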

2a. I am confused by how disk usage is shown in Proxmox. On a fresh test installation, if I go into the Disks section, then under LVM the disk shows as 100% used, while "Storage" shows its actual usage.
I assume you clicked on Disks -> LVM. Then you should also click on LVM-Thin, because that is what Proxmox VE creates as storage by default.
You can also compare this to the output of the following commands:
Code:
lvs
lvdisplay
vgs
vgdisplay
where you should especially look for logical volumes whose first attribute letter is a t (indicating a thin pool) rather than a V (indicating a thin volume), for example:
Code:
  LV           VG        Attr       LSize   Data%  Meta%
  nvmeThinPool nvmeGroup twi-aotz-- 920.00g 32.93  25.11

2b. How is LVM handled? I added a new disk on a test system, created a PV on it (pvcreate), created a volume group, and finally created a logical volume. But when I looked at it under Disks -> LVM, it showed as 100% used. I then tried removing the LV, and then it showed as empty (actual usage). Does Disks -> LVM show how much of that PV has been allocated, rather than actual disk usage?
If you just want to quickly have some LVM storage available, then I'd suggest wiping your disk like this
DANGEROUS: DOUBLE-CHECK THE DRIVE LETTERS
Code:
wipefs -a /dev/sdc
dd if=/dev/zero of=/dev/sdc bs=1M count=200
and then, in the GUI, going to Disks -> LVM-Thin and using the Create: Thinpool button. There you can select an unused disk and set up a storage with it in a single click.
If you have more time, then I'd suggest reading up on thin pools vs. "regular" LVM and looking closely at the output of the commands above (lvs, ...).
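If you prefer the CLI over the GUI button, a sketch of the equivalent steps, assuming a volume group on the wiped disk called testvg (all names here are just examples):
Code:
# Turn the free space of the volume group into a thin pool
lvcreate --type thin-pool -l 100%FREE -n testpool testvg
# Register it in Proxmox VE as an LVM-Thin storage
pvesm add lvmthin test-thin --vgname testvg --thinpool testpool --content images,rootdir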
 
Thanks,

Since I posted this a while ago, I have been through almost every part of the docs related to my questions.
A lot of things are clearer now compared to when I posted this thread.

The idea of getting a small disk for an OS mirror never struck me; I will look into that. However, I have already got things up and in production.

On the other hand, for Disks -> LVM: as I understand it so far, "used" there means "how much of this volume group is reserved by logical volumes". It is not actual disk usage; that can be seen separately, once the VG is assigned to a storage, in that storage's own section. Please confirm whether that is the case, i.e. that the VG shows how much space is occupied by LVs, not actual disk usage, as this is quite confusing for newcomers.

Another interesting thing I noticed is that not every LV that gets created is mounted, yet Proxmox can still use it, e.g.
Code:
# lsblk
NAME                MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                   8:0    0 186.3G  0 disk
├─sda1                8:1    0  1007K  0 part
├─sda2                8:2    0   512M  0 part
└─sda3                8:3    0 185.8G  0 part
  ├─pve-swap        253:0    0    16G  0 lvm  [SWAP]
  ├─pve-root        253:1    0  46.3G  0 lvm  /
  ├─pve-data_tmeta  253:2    0   1.2G  0 lvm
  │ └─pve-data      253:4    0 113.3G  0 lvm
  └─pve-data_tdata  253:3    0 113.3G  0 lvm
    └─pve-data      253:4    0 113.3G  0 lvm

In the above example, I can create a VM/CT on the pve-data LV even though it is not mounted anywhere. So there is an LV (here pve-data) that Proxmox can use and access for VMs/CTs, but how do I access it from the CLI?
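So far I have only got as far as listing and activating the thin volumes directly (vm-100-disk-0 below is a hypothetical example name):
Code:
# List the guest volumes carved out of the thin pool
lvs pve
# Activate one of them so it appears as a block device (example name)
lvchange -ay pve/vm-100-disk-0
ls -l /dev/pve/vm-100-disk-0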
 
Please confirm whether that is the case, i.e. that the VG shows how much space is occupied by LVs, not actual disk usage, as this is quite confusing for newcomers.
Yes, it shows how much space is assigned to LVs. About once a week I read about people misunderstanding this.
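As an illustration with made-up numbers: the volume group can look completely full while the thin pool inside it is almost empty:
Code:
vgs testvg
#  VG     #PV #LV #SN Attr   VSize    VFree
#  testvg   1   1   0 wz--n- <100.00g    0      <- fully reserved by the pool LV
lvs testvg
#  LV       VG     Attr       LSize   Data%  Meta%
#  testpool testvg twi-aotz-- <99.80g 2.50   1.10 <- actual usage is only 2.5%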

By the way, it is possible to use mdraid if you just don't have enough RAM for ZFS, but then you need to install Debian Buster with mdraid and add Proxmox afterwards. In the wiki there is an article about installing Proxmox on top of Debian. Keep in mind, though, that Proxmox doesn't officially support mdraid.
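Roughly, the steps from that wiki article look like this (a sketch for Debian Buster / Proxmox VE 6; check the article for the exact repository and key for your release):
Code:
# On a Debian Buster system that already boots from the mdraid:
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi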
 
Yes, it shows how much space is assigned to LVs. About once a week I read about people misunderstanding this.

This should be clarified, because it sent me deep into the LVM documentation under the impression that I was missing something. I learned more about LVM in the process, but it is still frustrating when you have a deadline to get things up.

Yes, it shows how much space is assigned to LVs. About once a week I read about people misunderstanding this.

By the way, it is possible to use mdraid if you just don't have enough RAM for ZFS, but then you need to install Debian Buster with mdraid and add Proxmox afterwards. In the wiki there is an article about installing Proxmox on top of Debian. Keep in mind, though, that Proxmox doesn't officially support mdraid.
Yes, I went through the docs and lots of discussions here and on Reddit. This machine has 32 GB of memory, of which 18 GB is being used by ZFS, but I suppose that just acts as a cache, i.e. when another process needs more RAM, it gets adjusted. Meanwhile, I have decided to observe it for a while.
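To observe it, I am using something like this (arc_summary ships with the ZFS tools):
Code:
# Summary of current vs. maximum ARC size
arc_summary -s arc
# Raw kernel counters as an alternative
grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats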

And what about my other question, the LVs that are not mounted but can still be used by Proxmox to store VMs/CTs?
 
This machine has 32 GB of memory, of which 18 GB is being used by ZFS, but I suppose that just acts as a cache, i.e. when another process needs more RAM, it gets adjusted. Meanwhile, I have decided to observe it for a while.
By default ZFS will use up to 50% of the RAM, but you can limit that. You might not want to reduce it below 4 to 8 GB, though, because ZFS really needs it. A rule of thumb is 4 GB + 1 GB of RAM per 1 TB of raw storage (or 4 GB + 5 GB per 1 TB if you enable deduplication).
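To set that limit, the usual way on Proxmox VE is the zfs_arc_max module option, e.g. capping the ARC at 8 GiB (value in bytes, adjust to your pool size):
Code:
# Persist the limit across reboots
echo "options zfs zfs_arc_max=8589934592" > /etc/modprobe.d/zfs.conf
update-initramfs -u
# Or change it immediately at runtime
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max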