Proxmox VE 6.0 released!

spirit

Well-Known Member
Apr 2, 2010
3,428
143
63
www.odiso.com
The short story:
In Proxmox 6: "fallocate: fallocate failed: Operation not supported"

The long story:
Sometimes I cannot start my virtual machine because Proxmox says there is not enough memory, which is very strange because the server has 16 GB of RAM and the KVM guest is configured to use only 8 GB.

I installed Proxmox VE 6 on ZFS in RAID-0 and noticed that no swap was created (which is very, very strange!).

I thought of creating one in the hope of solving the problem, but I ran into another error: the "fallocate" command cannot be used.

To reproduce the problem of the KVM not starting, it is sufficient to:

  1. boot Proxmox;
  2. copy a KVM backup file of at least 10 GB from one server to another with the "scp" command;
  3. restore the KVM on the server in use;
  4. start the KVM and you will hit the error.
Best regards,
E. Bruno.
ZFS will by default use half of the memory for its buffer cache (the ARC), and the Linux kernel cannot reclaim it like regular Linux buffer/cache memory.

https://pve.proxmox.com/wiki/ZFS_on_Linux#_limit_zfs_memory_usage
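
For reference, a minimal sketch of limiting the ARC as that wiki page describes, assuming a cap of 4 GiB (the value is only an example, adjust it to your workload):

Code:
# /etc/modprobe.d/zfs.conf
# Limit the ZFS ARC to 4 GiB (value given in bytes)
options zfs zfs_arc_max=4294967296

If the root file system is on ZFS, refresh the initramfs afterwards ("update-initramfs -u") and reboot; /sys/module/zfs/parameters/zfs_arc_max should then show the new limit.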
 

Tommmii

Member
Jun 11, 2019
32
2
8
48
Until someone more knowledgeable answers: that particular file is not of much importance for your install. I believe it is responsible for displaying the welcome message on the console after reboot (something like: "You can now connect to the web interface at https://192.168.0.1:8006").
It is fine to keep the existing file.
 

gyq

New Member
Aug 10, 2019
2
0
1
22
Unable to install Proxmox VE 6.0
When I installed Proxmox 6.0 on a Huawei RH2288 V3 server, the installer always got stuck on the "END USER LICENSE AGREEMENT (EULA)" screen. I could not click the "I agree" button, so the installation could not continue. I have collected some error messages. What should I do?
By contrast, version 5.4 installs normally on the same machine!
 

Attachments

shantanu

Member
Mar 30, 2012
101
6
18
@gyq Just an idea: can you try using the keyboard to see if that works, i.e. Alt+(underlined letter)?

Of course you will have to run the installer somewhere else first to see which key you have to press on the real server.
 

kulimboy

New Member
Aug 14, 2019
4
0
1
38
I installed a fresh Proxmox VE 6 but imported a ZFS pool from 5.4. When I run zpool status -t it shows "trim unsupported" on my pool "pool1", which consists of Crucial MX500 SSDs. Why is TRIM not supported?

Code:
root@pve1:~# zpool status -t
  pool: pool1
 state: ONLINE
  scan: scrub repaired 0B in 0 days 01:03:32 with 0 errors on Sun Aug 11 01:27:33 2019
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0  (trim unsupported)
            sdc     ONLINE       0     0     0  (trim unsupported)
          mirror-1  ONLINE       0     0     0
            sdd     ONLINE       0     0     0  (trim unsupported)
            sde     ONLINE       0     0     0  (trim unsupported)
          mirror-2  ONLINE       0     0     0
            sdf     ONLINE       0     0     0  (trim unsupported)
            sdg     ONLINE       0     0     0  (trim unsupported)

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME                                                   STATE     READ WRITE CKSUM
        rpool                                                  ONLINE       0     0     0
          mirror-0                                             ONLINE       0     0     0
            ata-INTEL_SSDSC2BB160G4T_BTWL409602P5160MGN-part3  ONLINE       0     0     0  (100% trimmed, completed at Wed 14 Aug 2019 09:20:59 AM +08)
            ata-INTEL_SSDSC2BB160G4T_BTWL411200WV160MGN-part3  ONLINE       0     0     0  (100% trimmed, completed at Wed 14 Aug 2019 09:20:59 AM +08)

errors: No known data errors
 

Stoiko Ivanov

Proxmox Staff Member
Staff member
May 2, 2018
1,806
172
63
I installed a fresh Proxmox VE 6 but imported a ZFS pool from 5.4. When I run zpool status -t it shows "trim unsupported" on my pool "pool1", which consists of Crucial MX500 SSDs. Why is TRIM not supported?
Please open a fresh thread for specific problems!

Apart from that, see this thread for a potential explanation: https://zfsonlinux.topicbox.com/groups/zfs-discuss/Tcc78c13dd9db7ae8-M5d8b18dcd2b271fa9975888a
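
As a quick, ZFS-independent check (a sketch; /dev/sdb is just one of the pool members from your output), you can see whether the kernel reports discard/TRIM support for a device at all:

Code:
# Non-zero DISC-GRAN / DISC-MAX columns mean the kernel sees discard support
# on that device; zeros usually indicate the HBA/disk combination hides it.
lsblk --discard /dev/sdb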
 
  • Like
Reactions: kulimboy

kulimboy

New Member
Aug 14, 2019
4
0
1
38
@Stoiko Ivanov Thanks for the explanation. It seems that consumer SSDs will not have TRIM enabled behind an LSI HBA in IT mode; only enterprise SSDs will work.

Sorry for not opening a fresh thread.
 

Celestialdeath99

New Member
Apr 11, 2019
3
0
1
20
Since upgrading to 6.0 I have noticed that my 4-node cluster has become unstable. At random times, up to 3 of the 4 nodes suddenly drop all connections and appear as if they have rebooted; however, the iDRAC logs show no reboots or power issues. It appears this may be related to https://bugzilla.proxmox.com/show_bug.cgi?id=2326, as what is described there is exactly what is happening to my cluster. Does anyone know if there are any fixes for this yet? I need my cluster to be stable again; it is really starting to annoy me that it keeps randomly restarting!

All 4 of my nodes have the following packages installed:

Code:
proxmox-ve: 6.0-2 (running kernel: 5.0.18-1-pve)
pve-manager: 6.0-5 (running version: 6.0-5/f8a710d7)
pve-kernel-5.0: 6.0-6
pve-kernel-helper: 6.0-6
pve-kernel-4.15: 5.4-6
pve-kernel-5.0.18-1-pve: 5.0.18-3
pve-kernel-5.0.15-1-pve: 5.0.15-1
pve-kernel-4.15.18-18-pve: 4.15.18-44
pve-kernel-4.15.18-12-pve: 4.15.18-36
ceph: 14.2.1-pve2
ceph-fuse: 14.2.1-pve2
corosync: 3.0.2-pve2
criu: 3.11-3
glusterfs-client: 5.5-3
ksm-control-daemon: 1.3-1
libjs-extjs: 6.0.1-10
libknet1: 1.10-pve2
libpve-access-control: 6.0-2
libpve-apiclient-perl: 3.0-2
libpve-common-perl: 6.0-3
libpve-guest-common-perl: 3.0-1
libpve-http-server-perl: 3.0-2
libpve-storage-perl: 6.0-7
libqb0: 1.0.5-1
lvm2: 2.03.02-pve3
lxc-pve: 3.1.0-63
lxcfs: 3.0.3-pve60
novnc-pve: 1.0.0-60
openvswitch-switch: 2.10.0+2018.08.28+git.8ca7c82b7d+ds1-12
proxmox-mini-journalreader: 1.1-1
proxmox-widget-toolkit: 2.0-5
pve-cluster: 6.0-5
pve-container: 3.0-5
pve-docs: 6.0-4
pve-edk2-firmware: 2.20190614-1
pve-firewall: 4.0-7
pve-firmware: 3.0-2
pve-ha-manager: 3.0-2
pve-i18n: 2.0-2
pve-qemu-kvm: 4.0.0-5
pve-xtermjs: 3.13.2-1
qemu-server: 6.0-7
smartmontools: 7.0-pve2
spiceterm: 3.1-1
vncterm: 1.6-1
zfsutils-linux: 0.8.1-pve1
 

zaxx

New Member
Dec 28, 2018
5
0
1
37
After upgrading to 6.0, most of the VMs started going into HA error status on different hosts every 2-3 days.
 

Stoiko Ivanov

Proxmox Staff Member
Staff member
May 2, 2018
1,806
172
63
After upgrading to 6.0, most of the VMs started going into HA error status on different hosts every 2-3 days.
Check the logs (especially for messages from corosync and pve-cluster/pmxcfs). If this does not lead to a solution, please open a new thread (with the logs attached or pasted in code tags). Thanks!
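
For example, a minimal sketch of pulling those messages with journalctl (the time range is only an illustration, adjust it to an actual incident):

Code:
# Messages from corosync and the cluster filesystem since the last boot
journalctl -b -u corosync -u pve-cluster

# Or around the time of an incident
journalctl -u corosync -u pve-cluster --since "2019-08-20 00:00" --until "2019-08-21 00:00"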
 

Stoiko Ivanov

Proxmox Staff Member
Staff member
May 2, 2018
1,806
172
63
Hi, I have a problem with the installation: at the end of the install I get an error.
Does someone have any idea?
Sorry for my English.
Thanks
Please create a new thread. Thanks.
Try to run the installer in debug mode and post the output from the last debug shell.
 

Kaboom

New Member
Mar 5, 2019
4
0
1
48
We need to update xx nodes, so that will take a long time (currently running Debian, Proxmox 5.4, Corosync 2 with Ceph).

I was thinking of updating Corosync to v3 on all nodes first (at the same time); this will keep everything running, right? Then start the Proxmox update per node, including Ceph; updating Ceph feels a bit dangerous.

It is not a problem if we are down at night for, let's say, an hour.

Is this the best way to proceed? What do you recommend if we are talking about a large number of nodes?

Thanks in advance!
 

t.lamprecht

Proxmox Staff Member
Staff member
Jul 28, 2015
1,388
201
63
South Tyrol/Italy
I was thinking of updating Corosync to v3 on all nodes first (at the same time); this will keep everything running, right?
Yes, if the network backing corosync is stable and not loaded too much, which should be the case, as otherwise the current setup would already show issues. But still, test it first, e.g., in a (virtual) test setup.

Then start the Proxmox update per node, including Ceph; updating Ceph feels a bit dangerous.
Do not update Proxmox VE and Ceph in one go. First Proxmox VE, and only then, once PVE has been updated to 6.0 on the whole cluster, all nodes have been restarted, and everything has been healthy for a while, do the update from Ceph Luminous to Ceph Nautilus.

Is this the best way to proceed? What do you recommend if we are talking about a large number of nodes?
Just be sure to follow our Upgrade Guide: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0#In-place_upgrade
Do not skip steps or change their order if you are not 100% sure.
There is not much to change for big clusters. Just be sure to go slowly, one node at a time, and first do a test upgrade on a test machine or a virtual PVE cluster to get a feeling for it.
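
The upgrade guide also describes a checklist script that is worth running on every node while it is still on 5.4; a rough sketch of that pre-flight step:

Code:
# Reports things that would break the upgrade (corosync version,
# outdated packages, guest/HA state, ...); re-run after each fix
pve5to6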
 
  • Like
Reactions: Kaboom

Marco Trevisan

New Member
Nov 6, 2018
5
3
3
45
Hi Proxmox,

I just want to say THANKS TO YOU for the new 6.0 release!

Yesterday I successfully upgraded a 5.4 cluster (3 nodes) to 6.0-7.
Followed the upgrade instructions closely and had ZERO issues. That cluster has gone through all the upgrades from 5.0 to 6.0 and never went down because of an upgrade.
We are using local ZFS replication and were suffering from the missing TRIM function.
After upgrading to 6.0, I issued "zpool upgrade <poolname>" and "zpool trim <poolname>".
Hopefully the attached screenshot comes through; anyway, IO wait time dropped to almost zero. The whole cluster is benefiting a lot from the upgrade.
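
As a side note (a sketch, not part of my original upgrade steps; "rpool" is just an example pool name): with ZFS 0.8 the trimming can also be made automatic on the pool instead of issuing it by hand:

Code:
# Let ZFS trim freed blocks continuously instead of manual "zpool trim" runs
zpool set autotrim=on rpool
zpool get autotrim rpool    # verify the property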

Best regards,
Marco

Screenshot 2019-09-13 11.44.39.png
 
