Hi,
we would like to update our pve-cluster to pve7 and want to reduce the reboots to one per node.
Ceph is already on 14.2.22.
The howto says: "We assume that all nodes are on the latest Proxmox VE 6.3 (or higher)", which isn't the case on most nodes in our cluster.
Is it still possible to upgrade ceph to...
Hi,
this bug is back in pve7!
qm start 100
memory size (51200) must be aligned to 2048 for hotplugging
echo "51200/2048." | bc
25
pveversion
pve-manager/7.0-11/63d82f4e (running kernel: 5.11.22-4-pve)
If I add 1024, I can start the VM…
I've tested this to find out whether we need "options vhost...
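As a side note, a quick modulo check in plain shell arithmetic underlines the point: 51200 is an exact multiple of 2048, so the value should already be aligned:
echo $((51200 % 2048))
0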
Hi,
it depends on the kind of test device.
pveperf
CPU BOGOMIPS: 102194.56
REGEX/SECOND: 2689264
HD SIZE: 12988.55 GB (rpool/ROOT/pve-1)
FSYNCS/SECOND: 6859.94
DNS EXT: 57.96 ms
DNS INT: 1.17 ms
zfs create -V 32GB rpool/iotest
mkfs.ext4...
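As a rough sketch, such a zvol test can then continue along these lines (the mount point and the fio parameters are only examples, not necessarily what was used here):
mkfs.ext4 /dev/zvol/rpool/iotest
mkdir -p /mnt/iotest && mount /dev/zvol/rpool/iotest /mnt/iotest
fio --name=iotest --filename=/mnt/iotest/testfile --size=4G --bs=4k --rw=randwrite --direct=1 --iodepth=1 --numjobs=1 --runtime=60 --time_based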
Hi,
just a guess - do you have any volumes connected via the iDRAC (or remote console)?
Have you disabled SATA* and iSCSI in the BIOS?
* I assume you don't use the internal SATA port because of the PERC?!
Is your BIOS up to date?
Udo
Hi,
could it be that you actually have a memory problem? I would suspect that because of the ksm message.
By default ZFS takes half of the RAM, unless you set zfs_arc_min + zfs_arc_max in /etc/modprobe.d/zfs.conf (and afterwards run "update-initramfs -u" + reboot).
Here is an example...
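A minimal sketch of such a file, with placeholder values in bytes (here 2 GiB min / 8 GiB max - adjust to your RAM):
# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=2147483648
options zfs zfs_arc_max=8589934592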
Hi,
you can boot a rescue system and rename the LVM volume group (e.g. vgrename pve pve-old).
If you want your installation back, you can rename pve-old back.
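Roughly like this, as a sketch (assuming the volume group is visible from the rescue system):
vgscan                  # detect the volume groups
vgrename pve pve-old    # put the current installation aside
# ... install / test whatever you need ...
vgrename pve-old pve    # later: bring the old installation back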
Udo
Hi Wolfgang,
unfortunately this doesn't help in every case.
I have a system with a raid1 (very simple 24/7 SSDs on a Dell PERC H730 Mini), and after the issue occurred I rebooted, assigned the lvm-profile and rebooted again.
11 days later the same issue happened again…
Looks like I must switch from...
Hi,
I had a strange effect today. On a new system (Supermicro AMD server) with an Intel quad-port X710, the NIC doesn't appear after today's updates (but the updates have nothing to do with kernel/firmware?!)
Commandline: apt dist-upgrade
Install: libyaml-libyaml-perl:amd64...
Hi Wolfgang,
any news on this topic?
I now have two hosts with pinned qemu-kvm (both hosts have the same hardware/config).
With pve-qemu-kvm 5.0.0-9 it's stable, but I don't think that's a long-term solution.
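For reference, such a pin can be done e.g. with a plain apt preferences file along these lines (e.g. /etc/apt/preferences.d/pve-qemu-kvm - the file name is arbitrary):
Explanation: keep pve-qemu-kvm at 5.0.0-9 until the issue is fixed
Package: pve-qemu-kvm
Pin: version 5.0.0-9
Pin-Priority: 1001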
Udo
Hi,
yes, this works, but you can have issues with live migration due to the different CPU types.
But mostly it works with the kvm64 CPU for the VM too.
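E.g. switching a guest to kvm64 (VMID 100 only as an example):
qm set 100 --cpu kvm64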
Udo
Hi,
yes I can do that this evening.
Then boot the current pve-kernel-5.4.65-1-pve?
I assume it's safe to downgrade to pve-qemu-kvm_5.0.0-9 with VMs still running, before the reboot?
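The downgrade itself would then be something like this, assuming the old version is still available in the repository or the local apt cache:
apt install pve-qemu-kvm=5.0.0-9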
Udo
Hi Wolfgang,
after the raid extension was done, I updated the BIOS yesterday evening and booted the "old" kernel 5.3.13-3-pve, and everything looked fine.
But only until 6:50h today - because at that time (6:30) many daily jobs start and do IO.
Curiously, most of the IO is done by the zfs-raid1 (nvme)...
Hi Wolfgang,
not really - the special thing was a 100GB raid volume for the proxmox system and the remaining space as one big lvm-storage. But due to the extension I had to migrate and delete the system raid (although the issue started right after a reboot, while the system raid was still on the raid group).
The...