Would you please include RAID-Z2 in this test? My configuration is a 6-disk RAID-Z2 with a Xeon 4110. Also, I'm seeing no difference on regular striped volumes, but the RAID-Z2 fio seqwrite performance is less than half.

We internally measured the difference between a 5.0 kernel with the "FPU/SIMD symbol exports" and one without them; they both were pretty similar (i.e., the difference was mostly measurement noise), but were in general somewhat noticeably slower than with the 4.15 kernel. We continuously check ZFS and the kernel for possible improvements regarding this. Note also that the upcoming ZFS 0.8.2 will again have SIMD support, so any possible performance loss from that area should soon be gone again.
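For reference, a sequential-write comparison like the one mentioned above can be reproduced with fio; the target directory, file size and job count below are arbitrary assumptions, so adjust them to your pool and dataset:

Code:
# sequential write test against a dataset on the pool in question
# (directory, size, job count and runtime are placeholder values)
fio --name=seqwrite --directory=/tank/fio-test \
    --rw=write --bs=1M --size=4G --numjobs=4 \
    --ioengine=libaio --runtime=60 --group_reporting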
Thanks. I tried that but it doesn't do anything.

Assuming your system is still in this state, you can kill PID 3535 and see if the rest of the update proceeds. It will likely tell you that (at least) lxc-pve failed to be completely upgraded, which should be done with "apt install -f".
If you don't have any containers running, can you try stopping the two lxc services (lxc-monitord and lxc-net) with 'systemctl stop' and then repeating the 'dpkg --configure -a'?

Thanks. I tried that but it doesn't do anything.
I killed apt and dpkg, but trying to finish it with dpkg --configure -a runs into the exact same problem again.
I do.

If you don't have any containers running, can you try stopping the two lxc services (lxc-monitord and lxc-net) with 'systemctl stop' and then repeating the 'dpkg --configure -a'?
Errors were encountered while processing: lxc-pve pve-container pve-manager proxmox-ve pve-ha-manager qemu-server
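Pulling the suggestions from this exchange together, the recovery sequence looks roughly like the following; the PID is specific to the system above, and whether "apt install -f" is needed depends on how far dpkg gets:

Code:
# stop the lxc services that block the package postinst scripts
systemctl stop lxc-monitord lxc-net
# kill the process stuck from the interrupted upgrade (PID taken from ps/top)
kill 3535
# let dpkg finish configuring the half-installed packages
dpkg --configure -a
# fix anything apt still considers incompletely upgraded
apt install -f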
Figured out the problems, and it was mostly my fault for not paying close attention to certain issues after the Proxmox upgrade from version 5.4 to 6. My test environment upgraded without issues, but it didn't have the 10-gig network cards that are used for the Ceph cluster. I still haven't upgraded Ceph to Nautilus on my test environment.

Please edit your ceph.conf (/etc/pve/ceph.conf), it still has the "keyring" entry in the global section. Remove it from there and move it down to the client section so that it looks like:
Code:
[client]
keyring = /etc/pve/priv/$cluster.$name.keyring

This is something we warn about if you use our "pve5to6" checklist script.
Please, all of you who upgrade: read the upgrade docs closely and use the script to check basic stuff!
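To illustrate the keyring move described above, a before/after of the relevant ceph.conf sections might look like this (other entries omitted; treat it as a sketch, not a complete config):

Code:
# /etc/pve/ceph.conf -- before: keyring still in the global section
[global]
    keyring = /etc/pve/priv/$cluster.$name.keyring

# after: keyring entry removed from [global] and moved to [client]
[global]

[client]
    keyring = /etc/pve/priv/$cluster.$name.keyring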
I am running Intel DA X520. I went from 5.4 -> 6.0-1 Beta -> 6.0-4 Release and do not have any issues or firmware warnings in my syslog. Though I do run firmware checks and updates quarterly, and had done so just before the install.

I'm using X520 in my nodes - not upgraded yet though. Inclined now to wait until you work this out!
https://pve.proxmox.com/wiki/Upgrade_from_5.x_to_6.0#Actions_step-by-step
Do I need to do both of these intermediate update steps?
1. Cluster: always upgrade to Corosync 3 first
2. Upgrade Ceph Luminous to Nautilus
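Assuming this is about a clustered, hyper-converged setup, a quick way to see where each node currently stands before starting either step is something like the following; the grep pattern is just a convenience and the exact output varies between versions:

Code:
# show the installed corosync and ceph package versions on this node
pveversion -v | grep -Ei 'corosync|ceph'
# on a hyper-converged Ceph cluster, also check the daemons' running versions
ceph versions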
Second question: my firewall is a VM (VyOS). If I stop it (the checker shows a warning if a VM is running), I have no internet and cannot update. Is it a big problem to update the system with a running VM?

No, this is normally only an issue if you really want to keep it running and have a cluster, as then you can (live) migrate it away. The node needs to be rebooted after the upgrade, so sooner or later the VM will stop through the host node reboot if it's not moved. If this is a single node, it's no issue.
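For the clustered case mentioned in the reply, the live migration would look roughly like this; the VMID and target node name are placeholders:

Code:
# move the firewall VM to another node while it keeps running
qm migrate 100 othernode --online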
Just installed a clean 6.0 root on ZFS with UEFI boot and am trying to limit the ARC size. The way I did it before (add "options zfs zfs_arc_max=2147483648" to "/etc/modprobe.d/zfs.conf", then "update-initramfs -u", then reboot) does not work anymore with UEFI boot.
The command "echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max" does its job, but only after some hours of uptime.
In the beta thread I was advised to implement it via /etc/rc.local. The question is how to do that? I am not a linux guru and am afraid to break something =)
What is the right way to limit the ARC size on PVE 6 with root on ZFS and UEFI boot?

Please do not use "/etc/rc.local", that's not our recommendation (if it even works).
/etc/rc.local is a very old startup script

I know, and that's fine if it works for you. But as it's only executed at the end of the multi-user "runlevel" (i.e., the compat runlevel systemd target), it can still be a bit late if VMs or CTs have already started and made heavy use of the ARC; with my linked way the ZFS ARC is already limited from the initial boot stage.
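A sketch of that early-boot approach for a root-on-ZFS UEFI install: set the limit in the modprobe config and then regenerate the boot images the system actually loads. The refresh tool name (pve-efiboot-tool) is an assumption about the systemd-boot setup the PVE 6 installer creates, so check your own setup before relying on it:

Code:
# /etc/modprobe.d/zfs.conf -- limit the ARC to 2 GiB (value in bytes)
options zfs zfs_arc_max=2147483648

# rebuild the initramfs so the parameter is applied from the initial boot stage
update-initramfs -u -k all
# on root-on-ZFS UEFI installs the kernel/initramfs copies live on the ESPs,
# so refresh them as well (assumption: systemd-boot setup from the PVE 6 installer)
pve-efiboot-tool refresh

After the next reboot the effective value can be checked with "cat /sys/module/zfs/parameters/zfs_arc_max".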
I switched from pvetest to pve-no-subscription, no new package updates.
My version is 6.0-4/2a719255, is this the actual final version?

Yes, currently all three repos are in sync. Also, pvetest will always be newer (or at least as new) as no-subscription, so this is expected in general.
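For completeness, the repository switch referred to above is a one-line change in the APT sources, and the running version can be confirmed with pveversion; the file name below is just a common convention, not a fixed requirement:

Code:
# /etc/apt/sources.list.d/pve-no-subscription.list (example path)
deb http://download.proxmox.com/debian/pve buster pve-no-subscription

# confirm the installed Proxmox VE version
pveversion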