Proxmox VE 5.0 released!

Yes, that's the way you should test it in your test lab.
 
Not sure what you are really doing in our community. Your posting style is not what we want here. Please accept the valid answer, and do not ask again and again; this is just a big waste of (my) time.

I'm here to ask.
I asked a perfectly legitimate question.
It seems that anything different from your thinking or your ideas is simply stupid and should not be asked.
(Like with mdadm: I asked a question, but you replied by saying that mdadm leads to data loss. Keep in mind that the current ZFS implementation can lead to data loss much more than mdadm, as also stated on the zfsonlinux GitHub. Between two scrubs you could hit a URE and, in case of disk failure, you lose data.)

Not everything is black and white.

You can't justify an issue with the installer with "USB sticks are cheaper", forcing all users to buy a dedicated stick.
For multiple reasons, not everyone is able (or comfortable enough) to use multiple sticks.
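For context, the supported method today writes the ISO image over the entire stick, which is why it ends up dedicated to PVE until reformatted. A minimal sketch, where the ISO filename and /dev/sdX are placeholders for your actual download and USB device:

# WARNING: this overwrites the whole stick
dd if=proxmox-ve_5.0-xxx.iso of=/dev/sdX bs=1M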
 
Please read my answer again:

"This is on our todo list, but a lot of work and not top priority... "
 
In general, it is always possible to live-migrate from a lower version of QEMU to a higher version, so your step-by-step guide should work.
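A minimal sketch of that per-node workflow, with VM ID 100 and node names node-a/node-b as placeholders (add --with-local-disks if the guest uses local storage):

# on node-a: move the guest to an already-upgraded node
qm migrate 100 node-b --online
# upgrade and reboot node-a; then, on node-b, migrate back if desired
qm migrate 100 node-a --online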
 
Alessandro: I understand that Tom can be a bit irritated. Saying that the lack of multiboot support is known, identified, and on the todo list is de facto an answer. Maybe not the one you wanted to hear, sorry for that. Personally, I have been following the PVE product since the beginning and they have implemented a ton of functionality :) But they have a company to run, and can't spend time on a secondary problem (I think it is secondary, maybe not for you) concerning 10% of customers.
Anyway, when I first hosted my PVE server at online.net, I wanted to do the installation myself, and I ran into exactly the problem you describe. I managed to use multi-boot software to install PVE: Easy2Boot.
Example of use here: http://rmprepusb.blogspot.fr/2014/03/add-proxmox-isos-to-easy2boot.html
==> run the installation script by hand
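Roughly, the preparation step from that guide boils down to the following; the E2B mount point and the _ISO/LINUX folder are assumptions based on the stock Easy2Boot layout and may differ:

# copy the Proxmox ISO onto the Easy2Boot stick
cp proxmox-ve_5.0-xxx.iso /media/E2B/_ISO/LINUX/
# then boot the stick, pick the Proxmox entry,
# and start the installation script by hand as described above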
BR
Gautier.
 
Please read my answer again:

"This is on our todo list, but a lot of work and not top priority... "

I've read this and it's OK; I'm waiting.
But I've also read the proposal to buy a dedicated stick due to a bug in the installer. That isn't really a useful response...
 
So soon after the Stretch release… it surpassed my expectations! https://forum.proxmox.com/threads/proxmox-ve-5-0-beta2-released.34853/page-3#post-172535.
Among the new goodies, I enjoy Storage Replication. Since there is a table at
https://pve.proxmox.com/wiki/Storage_Replication#_supported_storage_types, I guess you will add more backends in the future.

Is BTRFS on the radar? I guess this is a general framework that could be adapted to the snapshot and incremental send/receive workflow of btrfs.
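For anyone who wants to try the new replication from the shell, a minimal sketch; the VM ID 100, job ID 100-0, and target node pve2 are placeholders:

# replicate VM 100 to node pve2 every 15 minutes, capped at 10 MB/s
pvesr create-local-job 100-0 pve2 --schedule "*/15" --rate 10
# check the state of all jobs on this node
pvesr status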
 
Dear Proxmox Team,

first of all, thanks for the great work, as always!

Is it normal that live-migration performance is way slower than before? I'm using local disks and, for example,
"qm migrate 107 hostnamehere --online --with-local-disks" to migrate a VM. The copy itself runs at full speed (GBit) as before, but the downtime seems to have increased a lot.

2017-07-05 04:52:15 starting online/live migration on unix:/run/qemu-server/107.migrate
2017-07-05 04:52:15 migrate_set_speed: 8589934592
2017-07-05 04:52:15 migrate_set_downtime: 0.1
2017-07-05 04:52:15 set migration_caps
2017-07-05 04:52:15 set cachesize: 53687091
2017-07-05 04:52:15 start migrate command to unix:/run/qemu-server/107.migrate
2017-07-05 04:52:17 migration status: active (transferred 235529221, remaining 56500224), total 554508288)
2017-07-05 04:52:17 migration xbzrle cachesize: 33554432 transferred 0 pages 0 cachemiss 0 overflow 0
2017-07-05 04:52:19 migration speed: 2.02 MB/s - downtime 71 ms
2017-07-05 04:52:19 migration status: completed
drive-virtio0: transferred: 42954326016 bytes remaining: 0 bytes total: 42954326016 bytes progression: 100.00 % busy: 0 ready: 1
all mirroring jobs are ready
drive-virtio0: Completing block job...
drive-virtio0: Completed successfully.
drive-virtio0 : finished
2017-07-05 04:52:31 # /usr/bin/ssh -o 'BatchMode=yes' -o 'HostKeyAlias=hostnamehere' root@internal-ip-here pvesr set-state 107 \''{}'\'
2017-07-05 04:52:36 migration finished successfully (duration 00:04:34)

It says 71 ms but it's more like 5-10 seconds.
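For reference, the downtime the migration aims for can be raised per VM; a minimal sketch, assuming VM ID 107 as in the log above:

# allow up to 1 s of downtime instead of the 0.1 s default visible in migrate_set_downtime
qm set 107 --migrate_downtime 1

Whether that explains the gap between the reported 71 ms and the observed 5-10 s pause is another question.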

Kind regards
 
The ability to install directly from the ISO without burning it to a CD (or fully writing it to the USB drive) is still unsupported.

This forces users to dedicate a USB stick to PVE while all other major distros are able to install from the ISO file.

Any chance to get this, and a working console, in the coming days? If you change the terminal, the whole install crashes because it can't find the X screen on the new terminal.

Stop it. Immediately.
 
The news about the new Replication system GUI is great, but neither this announcement, the Storage Replication wiki page, nor the Upgrade from 4.x to 5.0 wiki page mentions how to handle the situation where you are already using pve-zsync. Will pve-zsync jobs get converted to the new replication-type jobs automatically? Will pve-zsync keep working the same post-upgrade, with manual intervention needed if you want those to become the new type of replication job? Or will pve-zsync jobs fail completely and have to be removed as part of the upgrade?

Dive in, let us know what you find out, and contribute to the Wiki.

You could buy a license that includes support and go that route, too: https://www.proxmox.com/en/proxmox-ve/pricing
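One quick way to dive in is to look at the job definitions themselves; pve-zsync jobs are plain cron entries, so a minimal sketch would be:

# list the configured pve-zsync jobs
pve-zsync list
# the underlying cron entries live here
cat /etc/cron.d/pve-zsync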

-RB
 
I am trying to configure an HA cluster on Debian Stretch and Proxmox VE 5.0-10/0d270679, but I cannot find examples (or docs) of fencing-device configuration. I need to configure an IPMI (iLO) fencing device, but I only have the old info. How can I test it, if I have no idea how to configure it?
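For what it's worth, since PVE 4.x, fencing is watchdog-based by default and the old per-device fence configuration no longer applies; HA resources are managed with ha-manager. A minimal sketch, with vm:100 as a placeholder:

# put VM 100 under HA control, then check the cluster-wide HA state
ha-manager add vm:100
ha-manager status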
 
Hi all

Thanks for the new PVE release.

Just want to report that setting a maximum swap size (e.g. 4 for 4 GB) in the installation GUI is not working. When you click Next, nothing happens.

Regards,
 
The news about the new Replication system GUI is great, but neither this announcement, the Storage Replication wiki page, nor the Upgrade from 4.x to 5.0 wiki page mentions how to handle the situation where you are already using pve-zsync. Will pve-zsync jobs get converted to the new replication-type jobs automatically? Will pve-zsync keep working the same post-upgrade, with manual intervention needed if you want those to become the new type of replication job? Or will pve-zsync jobs fail completely and have to be removed as part of the upgrade?

I will update the pve-zsync wiki to address this kind of question.
 
Hi, thanks for Proxmox V5!

I've upgraded from 4.4 to 5.0; everything looked fine during the upgrade, but after restart I can't access the GUI, though SSH still works.

THX!
 
Hi PVE team, thanks for this new version!

I'll just point out that your download page title still cites the 4.1 ISO version... (see attached screenshot)

Marco
 

Attachment: pvetitle.png (69.7 KB)
Hi PVE team, thanks for this new version!

I'll just point out that your download page title still cites the 4.1 ISO version... (see attached screenshot)

Marco

thx for reporting, fixed.
 
Hi, thanks for Proxmox V5!

I've upgraded from 4.4 to 5.0; everything looked fine during the upgrade, but after restart I can't access the GUI, though SSH still works.

THX!

pls open a new thread with more details about your problem.
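In the meantime, a first check over SSH; a minimal sketch, assuming the standard PVE service names:

# the web GUI is served by pveproxy; check it and the API daemon
systemctl status pveproxy pvedaemon
# restarting pveproxy may be enough to get the GUI back
systemctl restart pveproxy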
 