Proxmox VE 4.1 released!

Unfortunately, LVM thin support does not work for LXC containers. When trying to restore a container to an lvm-thin pool, the web interface gives the following error:

TASK ERROR: unable to create containers on storage type 'lvmthin'

Any news on when LVM thin will be supported by the web interface, and when it will support containers?
 
We've got a few Proxmox 3.4 nodes, and I've been waiting patiently for an update to the glusterfs-client (3.5.2) in a later Proxmox release, hopefully fixing a terrible "create 1,000,000 open file handles in 2 hours" bug that the 3.5.2 release seems to have.

According to the "pveversion -v" output for Proxmox 4.1 shown here:

http://pve.proxmox.com/wiki/Downloads#Update_a_running_Proxmox_Virtual_Environment_4.x_to_latest_4.1

the 4.1 release is *still* using the glusterfs-client 3.5.2 release. Even worse, the future roadmap here has no mention of glusterfs at all:

http://pve.proxmox.com/wiki/Roadmap#Roadmap

Is there a technical reason Proxmox continues to use a glusterfs release that's now over a year old and has some nasty bugs in it? It should be noted that there are actually three stable glusterfs versions that Proxmox could look at: 3.5.7, 3.6.7, or 3.7.6 - I'm guessing the higher ones are "better", but who knows? It would be nice if the Proxmox folks at least put a glusterfs-client update in the Roadmap, if nothing else...
 
I do not see any problem using a newer gluster version - just use the Debian packages from glusterfs.org.
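For example, pulling the client from the upstream repository might look like the following config fragment. The repository path, key URL, and release names here are assumptions for illustration - check download.gluster.org for the layout matching your glusterfs version and Debian release:

```shell
# /etc/apt/sources.list.d/gluster.list -- hypothetical example entry;
# verify the actual path for your release on download.gluster.org
deb http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/Debian/jessie/apt jessie main

# then (as root), import the signing key and install the client:
#   wget -O - http://download.gluster.org/pub/gluster/glusterfs/3.7/LATEST/Debian/jessie/apt/rsa.pub | apt-key add -
#   apt-get update && apt-get install glusterfs-client
```

Pinning the upstream package may be worthwhile so a later Proxmox update does not silently downgrade it.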
 
During installation on an Intel DC P3600 NVMe PCIe 3.0 SSD, setup aborts with:
"unable to get device for partition 1 on device /dev/nvme0n1"
Tried debug mode, however the keyboard doesn't respond via the iKVM session.
 
Tried installing in UEFI mode, however the "I agree" button is off the screen as shown below ...
Also tried pressing TAB on the first screen, however it didn't give an option to pass a kernel boot parameter such as:
vga=normal

[Attachment: UEFI proxmox screen resolution.png]
 
It's a known issue; workaround: do not use UEFI mode.
 
Hello,

I've created DRBD9 storage as described here: https://pve.proxmox.com/wiki/DRBD9

Everything works well, but on the web interface I see wrong storage size (see attached image).
The correct size should be 500GB.

Code:
# lvs

  LV               VG       Attr       LSize   Pool         Origin Data%  Meta%  Move Log Cpy%Sync Convert
  .drbdctrl_0      drbdpool -wi-ao----   4.00m                                                           
  .drbdctrl_1      drbdpool -wi-ao----   4.00m                                                           
  drbdthinpool     drbdpool twi-aotz-- 500.00g                     19.82  10.11                           
  vm-100-disk-1_00 drbdpool Vwi-aotz--  20.01g drbdthinpool        45.46                                 
  vm-100-disk-2_00 drbdpool Vwi-aotz--  90.02g drbdthinpool        100.00                                 
  data             pve      -wi-ao----  10.00g                                                           
  root             pve      -wi-ao----   5.00g                                                           
  swap             pve      -wi-ao----   8.00g
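For reference, the pool's actual physical usage can be derived from the lvs Data% column. A small sketch of that arithmetic, with the numbers copied from the output above (it also shows why the thin volumes' provisioned sizes don't have to match what the GUI reports as used):

```python
# Derive physical usage of the thin pool from the `lvs` output above.
pool_size_g = 500.00   # LSize of drbdthinpool
data_pct = 19.82       # Data% of drbdthinpool

used_g = pool_size_g * data_pct / 100
free_g = pool_size_g - used_g
print(f"used: {used_g:.1f} GiB, free: {free_g:.1f} GiB")

# Thin volumes are virtually provisioned and may overcommit the pool:
virtual_g = 20.01 + 90.02   # vm-100-disk-1_00 + vm-100-disk-2_00
print(f"provisioned: {virtual_g:.2f} GiB of {pool_size_g:.0f} GiB pool")
```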
 

Attachments

  • Screenshot-Proxmox Virtual Environment - Google Chrome.png
Tried installing using BIOS UEFI mode, however "I agree" button is off the screen as shown below ...
Also tried pressing TAB on first screen, however didn't give an option to pass a kernel boot parameter such as:
vga=normal

No, pressing the Tab key on the first screen does nothing. Only Enter works; that takes you to the next page.
 
Couldn't you install Debian in UEFI mode, then upgrade to Proxmox?
Tried that, Proxmox doesn't seem to recognise the NVMe SSD:
[Attachment: proxmox nvme boot.png]

I have checked that the PV (physical volume), VG (volume group), and LVs (logical volumes) exist in Debian:
Code:
pvs
  PV             VG   Fmt  Attr PSize PFree
  /dev/nvme0n1p2 pve  lvm2 a--  1.09t 904.07g

vgs
  VG   #PV #LV #SN Attr   VSize VFree
  pve    1   2   0 wz--n- 1.09t 904.07g

lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root pve  -wi-ao---- 173.50g
etc. ...
 
Which NVMe disks are you using?

make/model


edit:

nvm, found it:
Intel DC P3600 SSD NVMe PCIe 3.0

We use a bunch of PCIe NVMe drives at work and they run with no issues (we do not use UEFI as far as I know); types are:
Samsung SSD 950 Pro 512GB, M.2 (MZ-V5P512BW)
 
Does proxmox plan to support LXCFS?

If you have 64 CPUs on the host, but you only want to give this container access to four of those CPUs, then you need to make sure that that container can only see four of those CPUs. That's what LXCFS provides: a virtual file system inside the container that provides the necessary portion of /proc.
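The mismatch LXCFS addresses can be observed from inside any container without it: /proc/cpuinfo reports the host's CPUs, while the scheduler affinity (set via the cpuset cgroup) reflects what the container may actually use. A minimal Linux-only sketch:

```python
import os

# CPUs the kernel reports via /proc -- inside a plain container this is
# the *host's* CPU list, because /proc is not virtualized
with open("/proc/cpuinfo") as f:
    proc_cpus = sum(1 for line in f if line.startswith("processor"))

# CPUs this process is actually allowed to run on (cpuset/affinity)
allowed_cpus = len(os.sched_getaffinity(0))

print(f"/proc/cpuinfo reports {proc_cpus} CPUs, "
      f"but only {allowed_cpus} are usable")
# Without LXCFS the two numbers can disagree; LXCFS mounts a virtual
# /proc that makes them match the container's limits.
```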
 
This is only possible when pinning the container to certain cores (which is described in the lxc-users post you linked). This is usually a bad idea, because your load is not balanced over all available cores: if you have three containers A, B, C and pin A and B to core 1, C to core 2, then A and B might have a high load and can only use core 1, while C is doing nothing and core 2 is idling.

Using cpu.shares (what PVE does) allows you to use your host's resources fully, but still limit how much (relatively speaking) of the available resources each container uses.

Even with cpusets, each container does not get its own "virtual cores" - its view is limited to the pinned physical cores, which is probably not what you want anyway.
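Under cpu.shares, each fully loaded container's slice of the CPU is its weight divided by the sum of all competing weights. A quick sketch of that arithmetic (the container names and weights below are made-up examples, not PVE defaults):

```python
# Relative CPU allocation under cgroup cpu.shares when all
# containers are fully loaded (hypothetical example weights).
shares = {"ct-A": 1024, "ct-B": 1024, "ct-C": 2048}

total = sum(shares.values())
fraction = {name: weight / total for name, weight in shares.items()}

for name, frac in sorted(fraction.items()):
    print(f"{name}: {frac:.0%} of total CPU under contention")
# When some containers are idle, the others may use the spare
# capacity -- unlike cpuset pinning, no core sits idle by design.
```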
 
I have a Proxmox VE 4.0 cluster. If I update one server in the cluster, is it possible to live-migrate KVM VMs from a 4.0 server to a 4.1 server?
I want to upgrade my cluster like this:
- migrate all VMs from one server to the others, so that no VM is running on the server to be upgraded
- upgrade it to 4.1
- migrate the VMs back to the upgraded server

--> repeat this for every server in the cluster
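The loop above can be sketched as a dry-run script. The node names and VM IDs are illustrative assumptions; it only prints a plan (`qm migrate <vmid> <target> --online` is the live-migration command) rather than executing anything:

```shell
# Dry-run planner for a rolling cluster upgrade (hypothetical
# node names and VM IDs; prints the commands instead of running them).
NODES="node1 node2 node3"
PLAN=""
for node in $NODES; do
    # VM IDs that would be evacuated from this node (placeholder list)
    for vmid in 100 101; do
        PLAN="$PLAN
qm migrate $vmid other-node --online   # evacuate $node"
    done
    PLAN="$PLAN
apt-get update && apt-get dist-upgrade # upgrade $node to 4.1"
done
echo "$PLAN"
```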

Is this possible?
Thanks for your help!
Greetings Dominik
 
