Proxmox VE 4.1 released!

Normally yes, but ideally you test it on your specific setup once (if it's really critical).

Be sure that on the upgraded node the kvm/qemu and qemu-server packages are installed in at least the following versions:
  • pve-qemu-kvm >= 2.5-7
  • qemu-server >= 4.0-59
That should always be the case if you do a full dist-upgrade, as packages with those or newer versions are available in all our repositories.
Then it should work.
It won't work with older packages on the upgraded host!
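To verify this on the upgraded node from the shell, you can list the installed versions and compare them; `version_ge` below is a hypothetical helper (not a Proxmox tool), just a sketch using `sort -V` for Debian-style version ordering:

```shell
# On the upgraded node, list the relevant package versions, e.g. with:
#   pveversion -v | grep -E 'pve-qemu-kvm|qemu-server'
# or:
#   dpkg-query -W -f='${Package} ${Version}\n' pve-qemu-kvm qemu-server

# Hypothetical helper: compare Debian-style version strings via sort -V.
version_ge() {
    # Succeeds if $1 >= $2.
    [ "$(printf '%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

version_ge "2.5-7" "2.5-7" && echo "pve-qemu-kvm is new enough"
version_ge "2.4-12" "2.5-7" || echo "2.4-12 is too old, dist-upgrade first"
```

If a version is too old, a full `apt-get update && apt-get dist-upgrade` brings the node up to the repository versions.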
 
Ok, just to clarify: if I update a host and the new versions are newer than ...
  • pve-qemu-kvm >= 2.5-7
  • qemu-server >= 4.0-59
I can live-migrate from older versions, for example a server with qemu-server 4.0-35 and pve-qemu-kvm 2.4-12, to the new server.

Greetings
Dominik
 
I can live-migrate from older versions, for example a server with qemu-server 4.0-35 and pve-qemu-kvm 2.4-12, to the new server.

That can get problematic. Those aren't the latest package versions from 4.0. You can also upgrade qemu-server on the source host; this should improve the chances.
But with pve-qemu-kvm 2.4-12 you probably won't succeed, I'm afraid.

You can create a test VM and try it, so you see if it works in your setup.
Otherwise a maintenance window would be needed; given the VM size, it should not be offline for more than a few minutes (approx. 1 min to shut down + a few seconds to relocate + 1-3 min to fully start).
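The maintenance-window fallback can be sketched as a small shell helper; `relocate_offline` is a made-up name, the VM ID and node name are placeholders, and the ssh step is an assumption (after an offline `qm migrate`, the VM belongs to the target node, so the start has to happen there):

```shell
# Hypothetical helper for an offline relocation inside a maintenance window.
# qm is the Proxmox VE VM management CLI; vmid and target are placeholders.
relocate_offline() {
    vmid=$1
    target=$2
    qm shutdown "$vmid" &&           # approx. 1 min for a clean guest shutdown
    qm migrate "$vmid" "$target" &&  # a few seconds for the offline relocation
    ssh "$target" qm start "$vmid"   # 1-3 min until the guest is fully up again
}

# Example (placeholder values):
#   relocate_offline 100 pve2
```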
 
When I update a Proxmox server (qemu/pve-qemu), do I have to restart the whole server to get this working?!
At the moment I have four Proxmox servers in a cluster with PVE 4.0-22 and one with PVE 4.0-16, so I also cannot live-migrate between them.
So I thought I have to upgrade the old one to the new version and then migrate to it? Otherwise I have no chance to update my cluster without downtime? (Because I can only get to the newest 4.x version with downtime?)

Is there any common way to stay up to date with live migrations and without downtime?

Greetings and many thanks so far!!
Dominik
 
When I update a Proxmox server (qemu/pve-qemu), do I have to restart the whole server to get this working?!

If the update includes a kernel update then yes.

Is there any common way to stay up to date with live migrations and without downtime?

Your idea is the common way and normally it works, but version jumps of qemu/kvm can sometimes get you into problems.

But I've found the solution for you; I had forgotten about it, sorry. Do the following:

On all nodes, install qemu-server in version >= 4.0-59. This won't affect the running VMs and is needed to successfully live-migrate a VM to an updated host!

Then do as you proposed: move all VMs away from one node, update this node (with pve-qemu-kvm 2.5-8), reboot it (no problem with no VMs running), then migrate the VMs from the next node to the updated host, and start over with that node.

Again, I'd test it with a test VM to be sure, but I successfully simulated your scenario on my test cluster.
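The per-node procedure above can be sketched as follows; `rolling_update_node` is just an illustrative wrapper (an assumption, not a Proxmox command) around the real `qm` and `apt-get` calls, and the target node name is a placeholder:

```shell
# Hypothetical sketch of one round of a rolling cluster update.
# Run on the node being updated, after qemu-server >= 4.0-59 is installed cluster-wide.
rolling_update_node() {
    target=$1

    # 1. Live-migrate every VM away from this node
    #    (the first column of 'qm list' is the VMID).
    for vmid in $(qm list | awk 'NR > 1 { print $1 }'); do
        qm migrate "$vmid" "$target" --online
    done

    # 2. Update the now-empty node.
    apt-get update && apt-get dist-upgrade -y

    # 3. Reboot (needed for a new kernel); afterwards, continue with the next node.
    reboot
}

# Example (placeholder node name):
#   rolling_update_node pve2
```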
 
Just now, fresh install :)
Code:
root@pve:/var/log# pct help
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
    LANGUAGE = (unset),
    LC_ALL = (unset),
    LC_PAPER = "bg_BG.UTF-8",
    LC_ADDRESS = "bg_BG.UTF-8",
    LC_MONETARY = "bg_BG.UTF-8",
    LC_NUMERIC = "bg_BG.UTF-8",
    LC_TELEPHONE = "bg_BG.UTF-8",
    LC_IDENTIFICATION = "bg_BG.UTF-8",
    LC_MEASUREMENT = "bg_BG.UTF-8",
    LC_TIME = "bg_BG.UTF-8",
    LC_NAME = "bg_BG.UTF-8",
    LANG = "en_US.UTF-8"
    are supported and installed on your system.
perl: warning: Falling back to a fallback locale ("en_US.UTF-8").
USAGE: pct <COMMAND> [ARGS] [OPTIONS]
       pct create <vmid> <ostemplate> [OPTIONS]
       pct destroy <vmid>
       pct list
       pct migrate <vmid> <target> [OPTIONS]
       pct resize <vmid> <disk> <size> [OPTIONS]
       pct restore <vmid> <ostemplate> [OPTIONS]
       pct template <vmid>

       pct config <vmid>
       pct set <vmid> [OPTIONS]

       pct delsnapshot <vmid> <snapname> [OPTIONS]
       pct listsnapshot <vmid>
       pct rollback <vmid> <snapname>
       pct snapshot <vmid> <snapname> [OPTIONS]

       pct resume <vmid>
       pct shutdown <vmid> [OPTIONS]
       pct start <vmid>
       pct stop <vmid>
       pct suspend <vmid>

       pct console <vmid>
       pct enter <vmid>
       pct exec <vmid> [<extra-args>]
       pct fsck <vmid> [OPTIONS]
       pct unlock <vmid>

       pct help [<cmd>] [OPTIONS]
 
Hi,

please make a new thread for this topic.
 
We use a bunch of PCIe NVMe drives at work and they run with no issues (we do not have UEFI as far as I know). The type is:
Samsung SSD 950 Pro 512GB, M.2 (MZ-V5P512BW)

No issues even when installing? I tried this with UEFI enabled and disabled on an MSI H110I and the installer does not recognise the NVMe.
 
