Proxmox VE 6.0 released!

Hashwagon

New Member
Jul 26, 2019
Hello,

I did a fresh install of proxmox-ve_6.0-1 with ZFS on root as a RAID1 (two SSDs). After installation, upon boot, I'm met with a "no bootable device available" message. UEFI boot is enabled in the system settings. After reading this thread I see suggestions to run 'pve-efiboot-tool refresh'. How am I supposed to do this if I can't boot into the system? I'm going to attempt to run it from a TTY before completing the install. Any better suggestions?
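One way to run the refresh without a bootable system is from the installer's debug shell (or any live environment with ZFS support). A rough sketch, untested, assuming the default pool name rpool and that the ESPs were initialized by the installer:

Code:
# sketch only; pool name and paths are assumptions for a default install
zpool import -f -R /mnt rpool
mount -o bind /dev  /mnt/dev
mount -o bind /proc /mnt/proc
mount -o bind /sys  /mnt/sys
chroot /mnt pve-efiboot-tool refresh
# clean up before rebooting
umount /mnt/dev /mnt/proc /mnt/sys
zpool export rpool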

Thank you to everyone who put this release together!
 
May 9, 2017
Hello!

Following the wiki at: https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0

I got an error at this step:
"For the no-subscription repository see Package Repositories. It can be something like:"

Code:
sed -i -e 's/stretch/buster/g' /etc/apt/sources.list.d/pve-install-repo.list

The error:
sed: can't read /etc/apt/sources.list.d/pve-install-repo.list: No such file or directory

Any idea how to fix that?

Also, I'm not using Ceph; do I still need to execute this step?
"(Ceph only) Replace ceph.com repositories with proxmox.com ceph repositories"
 

udo

Well-Known Member
Apr 22, 2009
Ahrensburg, Germany
Lucas said:
"sed: can't read /etc/apt/sources.list.d/pve-install-repo.list: No such file or directory ... Any idea to fix that?"
Hi Lucas,
the trick is in "can be something like"... meaning: you have source lists which still reference the old stable release, stretch, and those references must be replaced with buster.

To find the affected files, use:
Code:
grep -r stretch /etc/apt/
sed is a stream editor, and the command simply replaces "stretch" with "buster" in the file (like the vi command s for substitute, with g for global, i.e. all occurrences).
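For example, to apply the substitution to every file that grep reports in one go (a sketch; review the grep output first, since this rewrites the files in place):

Code:
grep -rl stretch /etc/apt/ | xargs sed -i 's/stretch/buster/g'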

Udo
 
May 9, 2017
udo said:
"to find your files use: grep -r stretch /etc/apt/"
OK, but my 5.4 sources already point to buster, so I can't understand why I need to replace that again. My sources.list:

Code:
deb http://ftp.br.debian.org/debian buster main contrib

# security updates
deb http://security.debian.org buster/updates main contrib
 


Tommmii

Member
Jun 11, 2019
Hello everyone!
Yesterday I did the upgrade from 5.4 to 6, with no errors during the process.
But after the reboot I got:

Code:
root@pve:~# systemctl status zfs-mount.service
● zfs-mount.service - Mount ZFS filesystems
   Loaded: loaded (/lib/systemd/system/zfs-mount.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Sun 2019-07-28 13:36:09 CEST; 25s ago
     Docs: man:zfs(8)
  Process: 1429 ExecStart=/sbin/zfs mount -a (code=exited, status=1/FAILURE)
 Main PID: 1429 (code=exited, status=1/FAILURE)

Jul 28 13:36:09 pve systemd[1]: Starting Mount ZFS filesystems...
Jul 28 13:36:09 pve zfs[1429]: cannot mount '/zfs-pool': directory is not empty
Jul 28 13:36:09 pve systemd[1]: zfs-mount.service: Main process exited, code=exited, status=1/FAILURE
Jul 28 13:36:09 pve systemd[1]: zfs-mount.service: Failed with result 'exit-code'.
Jul 28 13:36:09 pve systemd[1]: Failed to start Mount ZFS filesystems.

I had a further look:

Code:
root@pve:~# zfs list -r -o name,mountpoint,mounted
NAME                                      MOUNTPOINT                   MOUNTED
zfs-pool                                  /zfs-pool                         no
zfs-pool/iso                              /zfs-pool/iso                     no
zfs-pool/share                            /zfs-pool/share                   no
zfs-pool/subvol-100-disk-0                /zfs-pool/subvol-100-disk-0       no
zfs-pool/subvol-102-disk-0                /zfs-pool/subvol-102-disk-0       no
zfs-pool/subvol-103-disk-0                /zfs-pool/subvol-103-disk-0       no
zfs-pool/subvol-104-disk-0                /zfs-pool/subvol-104-disk-0       no
zfs-pool/subvol-300-disk-0                /zfs-pool/subvol-300-disk-0       no
zfs-pool/vm-disks                         /zfs-pool/vm-disks                no
zfs-pool/vm-disks/vm-101-disk-0           -                                  -
zfs-pool/vm-disks/vm-105-disk-0           -                                  -
zfs-pool/vm-disks/vm-200-disk-0           -                                  -
zfs-pool/vm-disks/vm-200-state-stable     -                                  -
zfs-pool/vm-disks/vm-201-disk-0           -                                  -
zfs-pool/vm-disks/vm-206-disk-0           -                                  -
zfs-pool/vm-disks/vm-333-disk-0           -                                  -
zfs-pool/vm-disks/vm-334-disk-1           -                                  -
zfs-pool/vm-disks/vm-335-disk-0           -                                  -
zfs-pool/vm-disks/vm-335-state-preupdate  -                                  -
Did Proxmox manage to write to the mount points before ZFS was active? How do I recover from this?

EDIT: this looks like a solution.
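For reference, the usual recovery for "cannot mount: directory is not empty" is to clear out whatever was written into the mountpoint while the dataset was unmounted. A sketch, assuming the stray files under /zfs-pool are disposable duplicates (verify before deleting anything):

Code:
# see what is occupying the mountpoint while the dataset is unmounted
ls -la /zfs-pool
# move it aside (or delete it if it is clearly stale), then mount again
mv /zfs-pool /zfs-pool.stale
systemctl restart zfs-mount.service
# confirm the datasets are mounted now
zfs list -r -o name,mountpoint,mounted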
 

skhristich

New Member
Jun 19, 2019
To help you with this question we need a little more information. Please create a ticket here (CL zendesk) and technical experts will help you ASAP.
Marketing coordinator, KernelCare
 

matten

New Member
Jan 19, 2013
Proxmox Release 6:

Our cluster with 4 servers worked perfectly at first.

But since yesterday we have had the following problem on 3 of the 4 servers:

Code:
starting apt-get update
Hit:1 ftp.de.debian.org/debian buster InRelease
Hit:2 ftp.de.debian.org/debian buster-updates InRelease
Hit:3 download.proxmox.com/debian/ceph-nautilus buster InRelease
Hit:4 security.debian.org buster/updates InRelease
Err:5 enterprise.proxmox.com/debian/pve buster InRelease
401 Unauthorized [IP: 212.224.123.70 443]
Reading package lists...
E: Failed to fetch enterprise.proxmox.com/debian/pve/dists/buster/InRelease 401 Unauthorized [IP: 212.224.123.70 443]
E: The repository enterprise.proxmox.com/debian/pve buster InRelease is no longer signed.
TASK ERROR: command 'apt-get update' failed: exit code 100

What shall we do?
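The 401 Unauthorized comes from the pve-enterprise repository, which needs a valid subscription key on each host. If these hosts have no subscription, a common workaround (a sketch; file names as in a default install) is to disable the enterprise list and use the no-subscription repository instead:

Code:
# disable the enterprise repository (it requires a subscription key)
mv /etc/apt/sources.list.d/pve-enterprise.list \
   /etc/apt/sources.list.d/pve-enterprise.list.disabled
# add the no-subscription repository
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
   > /etc/apt/sources.list.d/pve-no-subscription.list
apt-get update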
 

Kage

New Member
Mar 29, 2016
Hi,

after upgrading from the latest 5.x to 6.x, the VMs on the hosts have no network access. Everything runs on OVS bridges. Any clue?

Best Regards
 

Alwin

Proxmox Staff Member
Staff member
Aug 1, 2017
@Kage, please open up a new thread. This will increase the chances for help.
 

emanuelebruno

Member
May 1, 2012
Catania
emanuelebruno.it
The short story:
In Proxmox 6: fallocate: fallocate failed: Operation not supported

The long story:
Sometimes I can't start my virtual machine because Proxmox says there is not enough memory (which is very strange, because the server has 16 GB of RAM and my KVM is configured to use only 8 GB).

I installed Proxmox v6 on raid-0 ZFS and noticed that no swap was created (which is very, very strange!).

I thought of creating a swap file in the hope of solving the problem, but I ran into another error: it is not possible to use the "fallocate" command.
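fallocate cannot preallocate a file on ZFS, which is why the command fails, and swap files on ZFS datasets are generally discouraged anyway; the usual alternative is a dedicated swap zvol. A sketch, assuming the default pool name rpool and an 8 GB size (both placeholders):

Code:
# create a zvol sized for swap; properties follow common ZFS-swap guidance
zfs create -V 8G -b $(getconf PAGESIZE) \
    -o primarycache=metadata -o sync=always -o logbias=throughput \
    rpool/swap
mkswap /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap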

To reproduce the KVM start failure it is sufficient to do the following:

  1. launch Proxmox;
  2. copy a KVM backup file of at least 10 GB from another server with the "scp" command;
  3. restore the KVM on the server in use;
  4. start the KVM and you'll hit the error;
Best regards,
E. Bruno.
 
Jan 17, 2019
emanuelebruno said:
"Sometimes I can't start my virtual machine because Proxmox says that there is not enough memory space..."

Could be memory fragmentation. KVM needs a contiguous piece of memory; if it's fragmented, you'll need to defragment.

You could try "sysctl vm.compact_memory=1".
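To check whether fragmentation really is the problem, the kernel's per-order free-block counts can be compared before and after compaction (a sketch; the rightmost, high-order columns are the largest contiguous blocks):

Code:
# few or zero entries in the rightmost columns means little
# contiguous memory is available
cat /proc/buddyinfo
sysctl vm.compact_memory=1
cat /proc/buddyinfo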
 
