Proxmox VE 4.0 beta2 released!

When will the upgrade script for 3.4 to 4.0 beta2 be available?

I have the same question, because I would like to test beta2 on my current system (which is not in production yet).

I'm running a 5-node, 2-CPU Proxmox 3.4 cluster (enterprise subscriptions) with Ceph on each node (11x 6 TB OSDs per node), 55 OSDs in total, with about 40 VMs spread over those nodes.

I would like to join the beta program and contribute my experiences. Why do I want to upgrade ASAP?
I was not very happy with the Ceph install on 3.4. Even with the latest hardware I had poor performance and even timeouts that crashed some VMs while recovering an OSD or just during deep scrubbing.

I tuned the configuration like crazy and learned a lot, but the real kickoff in performance, and finally the first HEALTH_OK cluster, came from simply updating the kernel to pve-kernel-3.10.0-11.

Somehow I think it would not be bad to upgrade the kernel to 4.x soon: Ceph is going to love it, and security updates won't need reboots anymore?!

Can't wait for an upgrade script that does this for me without a reinstall.

Thank you so much in advance for all your effort.

@dietmar: how is the PVE GUI developed? There is no webserver anymore. I would love to dig into the GUI a little and contribute some changes. Is it Perl? Python? PHP?
 
@dietmar: how is the PVE GUI developed? There is no webserver anymore. I would love to dig into the GUI a little and contribute some changes. Is it Perl? Python? PHP?

There is no standard webserver like Apache anymore, and hasn't been for quite some time. We use our own proxy, which is in the pve-manager package.
The proxy is written in Perl and the web GUI is written with the ExtJS framework.

https://git.proxmox.com/?p=pve-manager.git;a=summary
http://pve.proxmox.com/wiki/Developer_Documentation
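
If you want to see the proxy in action on a node: it listens on port 8006, and the same REST API the GUI talks to can be queried from the shell (a quick sketch; double-check the commands on the beta):
Code:
# the web GUI and the REST API are both served by the pveproxy daemon on port 8006
systemctl status pveproxy
# query the same API the GUI uses (pvesh ships with pve-manager)
pvesh get /version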
 
Would it be possible to show the current IP address of the VM/container in this status box? i.imgur.com/WpHOMcw.png (I'm sorry, I don't have permission to post the full link.)
 
A container can have more than one IP address, so it is not really clear what to show there...
Oh yes, I forgot about the possibility of multiple IP addresses. I've seen other software solve this by showing the address of the first interface and the rest in a small window: i.imgur.com/SapwmB8.png. In addition, on the network tab only "dhcp" is displayed in the IP address column; how about additionally showing the assigned DHCP IP address? Something like i.imgur.com/vbg3ydS.png
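
In the meantime I look up a container's addresses directly on the host; if I understand the tooling correctly, plain lxc-info can show them (the container ID is just an example from my setup):
Code:
# list the IP addresses reported for container 100
lxc-info -n 100 -i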
 
Just upgraded to 4.0 beta2 from beta1, and I've lost all access to my KVM machines, but I can launch new containers. A KVM machine shows as started, but the console says "Guest has not initiated display yet." I am also unable to ping or SSH into the machine, so maybe a network issue?

Has anybody else experienced this?
 
I experienced the same thing after the upgrade. Disable ACPI on the VM and it will boot.
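
If you prefer the CLI over the GUI option, disabling ACPI should also work with qm (the VMID is a placeholder):
Code:
# turn off ACPI for VM 100, then start it again
qm set 100 -acpi 0
qm start 100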
 
Sorry if this has been discussed, but I couldn't find a definitive answer to the following question.

What kind of local storage will it be possible to use in the stable release in order to live-backup both containers (with quota support) and virtual machines, without the need to stop/suspend them?
Will it be possible, for example, to live-backup an LXC container on ext4, or will I need to use ZFS/LVM?
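
For context, on 3.4 I run snapshot-mode backups like this, and I'd like to know which local storage lets me keep doing so on 4.0 (storage name and VMID are just examples from my setup):
Code:
# live backup without stopping the guest (snapshot mode)
vzdump 100 --mode snapshot --storage backup-nfs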

Thanks!
 
Hello, I am replying to report that the source code on git is not current, and to request a simple .patch for the Linux kernel containing the PVE changes, as well as any advice you may have on compiling all the related projects from scratch.
It would also be nice to have a logic diagram of the topology and requirements.
In any event, I really look forward to the pve-kernel patch for Linux kernel 4.2. Thanks in advance.
 
Will ZFS 0.6.5 reach PVE 4.0 beta2 soon? It looks like they've made lots of improvements and bugfixes.
By the way, why is there a "pve" ZFS package instead of using the ones provided by the ZoL project itself?
 
By the way, why is there a "pve" ZFS package instead of using the ones provided by the ZoL project itself?

We want to have full control, so that we can include patches fast. But we basically use the packages provided by ZoL; we just compile them ourselves and include them in our repository. That way we have a well-defined set of packages, and we can test those packages before we release something (which would be difficult if we used external repositories).
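
You can verify which ZoL build is actually installed and loaded on a node like this (a small sketch):
Code:
# show the ZFS-related packages from our repository
dpkg -l | grep zfs
# show the version of the loaded kernel module
cat /sys/module/zfs/version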
 
...
I was not very happy with the Ceph install on 3.4. Even with the latest hardware I had poor performance and even timeouts that crashed some VMs while recovering an OSD or just during deep scrubbing.
Hi,
this sounds like you need to fine-tune some parameters.

Which values do you have for the following options? (Are they also active? Check e.g. with "ceph --admin-daemon /var/run/ceph/ceph-osd.2.asok config show | grep threads".)
Code:
osd max backfills = 1
osd recovery max active = 1
osd_op_threads = 4
osd_disk_threads = 1
If you use the CFQ scheduler you can also try the following, so that deep scrubbing runs at idle I/O priority:
Code:
osd_disk_thread_ioprio_class  = idle
osd_disk_thread_ioprio_priority = 7
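As far as I know, such values can also be injected at runtime, without restarting the OSDs (values here only as an example):
Code:
# apply recovery/backfill limits to all OSDs on the fly
ceph tell osd.* injectargs '--osd_max_backfills 1 --osd_recovery_max_active 1'
# lower the I/O priority of the disk threads (needs the CFQ scheduler)
ceph tell osd.* injectargs '--osd_disk_thread_ioprio_class idle --osd_disk_thread_ioprio_priority 7'
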
Which network connection do you use (1 Gbit/s, 10 Gbit/s)?
Do you have a separate network for the Ceph traffic?
What kind of CPU do you use on the hosts?
Which Ceph version is active?
How full are your OSDs? How many PGs are in the pools?
Which filesystem do you use on the OSDs?
How does "ceph osd perf" look?
Do you use journals on SSDs? If yes, what kind of SSDs?


Udo

PS: since this is off-topic, you should open a new thread about it!
 
Hello,

I am trying to get a couple of things going here and I seem to be stuck, or maybe I am just missing something.
Here's my setup:
OS: VE 4.0 beta2
Nodes: 8 (clustered)
ISOs/templates: Stored on NFS
Backups: Stored on NFS
HA groups: set with varying nodes
Current Containers: 4 (w/HA enabled and set to the groups)

I hope that is enough information. What I am trying to figure out is how to keep these containers highly available. I checked the fencing documentation and it says to use watchdog-based fencing, but when I restarted one of the nodes, the containers did not start on another node. Do I need an iSCSI drive for syncing?
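
For reference, this is how I added one of the containers to HA, the syntax as I understand it from the docs (IDs and group name are from my setup):
Code:
# add container 104 as an HA resource restricted to one of my groups
ha-manager add ct:104 --group group1
# check what the HA manager currently sees
ha-manager status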
 
Hi,

thank you for the great job!

Would it be possible to add a choice of disk format for LXC, to save containers not only as raw images but alternatively in some dynamically growing format, e.g. qcow2?

Best regards,
yarick123
 
We want to have full control, so that we can include patches fast. But we basically use the packages provided by ZoL; we just compile them ourselves and include them in our repository. That way we have a well-defined set of packages, and we can test those packages before we release something (which would be difficult if we used external repositories).

Can you push 0.6.5.1? 0.6.4[.x] has at least two data corruption issues (the first one includes losing a pool):
- https://github.com/zfsonlinux/zfs/issues/3757
- https://github.com/zfsonlinux/zfs/pull/3798
 
ZFS 0.6.5 is already in our pvetest repo (4.0 beta).
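
If you want to test it right away, the pvetest repository for the 4.0 beta can be enabled like this (Jessie-based entry; see the wiki for the authoritative instructions):
Code:
# add the pvetest repository and pull in the updated packages
echo "deb http://download.proxmox.com/debian jessie pvetest" > /etc/apt/sources.list.d/pvetest.list
apt-get update && apt-get dist-upgrade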
 
