no pve command there ....
There is not, and never was, a pve command, at least nothing shipped by us...
What did that command do? Maybe we can help you remember the correct name.
We internally measured the difference between a 5.0 kernel with the "FPU/SIMD symbol exports" and without them; the two were pretty similar (i.e., the difference was mostly measurement noise), but in general both were somewhat noticeably slower than the 4.15 kernel. We continuously check ZFS and the kernel for possible improvements in this area. Note also that the upcoming ZFS 0.8.2 will again have SIMD support, so any possible performance loss from that area should soon be gone again.
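If you want to see which fletcher4 checksum implementation your ZFS module currently selects, the module parameter below may help. This is just an illustration, not an official check; the path assumes ZFS on Linux 0.7/0.8.
Code:
# lists the available fletcher4 implementations; the currently selected one
# is shown in [brackets]. SIMD variants such as sse2/avx2 only show up when
# the kernel exports the needed FPU/SIMD symbols.
cat /sys/module/zcommon/parameters/zfs_fletcher_4_impl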
"Thanks. I tried that but it doesn't do anything."
Assuming your system is still in this state, you can kill PID 3535 and see if the rest of the update proceeds. It will likely tell you that (at least) lxc-pve failed to be completely upgraded, which should then be fixed with "apt install -f".
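A rough sketch of that recovery sequence, in case it helps others hitting the same hang (the PID is specific to that system, check ps for yours):
Code:
# find the process blocking dpkg (3535 was the PID in this particular case)
ps aux | grep -E 'apt|dpkg|lxc'
kill 3535
# then let dpkg finish configuring the half-installed packages
dpkg --configure -a
apt install -f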
Thanks. I tried that but it doesn't do anything.
I killed apt and dpkg, but trying to finish it with dpkg --configure -a runs into the exact same problem again.
"I do."
If you don't have any containers running, can you try stopping the two lxc services (lxc-monitord and lxc-net) with 'systemctl stop' and then repeating the 'dpkg --configure -a'?
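In command form, that suggestion would look roughly like this (only do this if no containers are running):
Code:
systemctl stop lxc-monitord.service lxc-net.service
dpkg --configure -a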
Errors were encountered while processing:
lxc-pve
pve-container
pve-manager
proxmox-ve
pve-ha-manager
qemu-server
Please edit your ceph.conf (/etc/pve/ceph.conf); it still has the "keyring" entry in the global section. Remove it from there and move it down to the client section so that it looks like:
Code:
[client]
     keyring = /etc/pve/priv/$cluster.$name.keyring
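To illustrate the edit, a rough before/after of the relevant sections; only the keyring line moves, all other settings stay as they are:
Code:
# before: keyring defined in the global section
[global]
     keyring = /etc/pve/priv/$cluster.$name.keyring

# after: keyring moved to the client section
[global]

[client]
     keyring = /etc/pve/priv/$cluster.$name.keyring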
This is something we warn you about if you use our "pve5to6" checklist script.
Please, all of you who upgrade: read the upgrade docs closely and use the script to check basic stuff!
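For reference, the checklist script ships with PVE and can simply be run on each node; the exact output wording may vary between versions:
Code:
# run on every node before and after each upgrade step and
# resolve any warnings/failures it reports before continuing
pve5to6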
I am running Intel DA X520. I went from 5.4 -> 6.0-1 Beta -> 6.0-4 Release and do not have any issues or firmware warnings in my syslog. Though I do run firmware checks and updates quarterly and had done so just before the install.
I'm using X520 in my nodes - not upgraded yet though. Inclined now to wait until you work this out!
So I need to do two extra update steps?
1. Cluster: always upgrade to Corosync 3 first
2. Update Ceph Luminous to Nautilus
Second question: my firewall is a VM (VyOS). If I stop it (the checker shows a warning if a VM is running), I have no internet and cannot update. Is it a big problem if I update the system with a running VM?
Just installed a clean 6.0 root on ZFS with UEFI boot and am trying to limit the ARC size. The way I did it before (add "options zfs zfs_arc_max=2147483648" to "/etc/modprobe.d/zfs.conf", then "update-initramfs -u", then reboot) does not work anymore with UEFI boot.
The command "echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max" does its job, but only after some hours of uptime.
In the beta thread I was advised to implement it via /etc/rc.local. The question is how to do that? I am not a Linux guru and afraid to break something =)
What is the right way to limit the ARC size on PVE 6 with root on ZFS and UEFI boot?
pve-efiboot-tool refresh
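Putting the pieces together, a minimal sketch of the full procedure on a root-on-ZFS system booted via systemd-boot (the 2 GiB value is just the example from the question):
Code:
# set the ARC limit, rebuild the initramfs and copy it to the
# EFI system partition(s) used by systemd-boot, then reboot
echo "options zfs zfs_arc_max=2147483648" > /etc/modprobe.d/zfs.conf
update-initramfs -u
pve-efiboot-tool refresh
reboot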
/etc/rc.local is a very old startup script.
I switched from pvetest to pve-no-subscription, but there are no new package updates.
My version is 6.0-4/2a719255, is this the current final version?
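If you want to double-check the repository switch, the pve-no-subscription entry for PVE 6 on Debian Buster should look roughly like this (the list file name below is just an example):
Code:
# /etc/apt/sources.list.d/pve-no-subscription.list (example path)
deb http://download.proxmox.com/debian/pve buster pve-no-subscription

# then refresh the package index and check for updates
apt update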