ok thanks.
do we have any problems with the 5.13 kernel? All VMs are running fine.
our microcode is updated: "microcode updated early to new patch_level=0x0830107a"
so can we still stay on kernel 5.13?
we would prefer not to reboot our Proxmox host again right now; many customers ;)
we have run:
apt update
apt upgrade
and now:
Linux 5.13.19-6-pve #1 SMP PVE 5.13.19-15 (Tue, 29 Mar 2022 15:59:50 +0200)
PVE-Manager-Version
pve-manager/7.4-16/0f39f621
that seems like an old kernel (from 2022). Is there no newer one?
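For reference, a quick sketch of what could be checked here (as far as we know, the Proxmox docs recommend apt dist-upgrade / full-upgrade rather than plain apt upgrade, since new kernel packages have new names; the package name pve-kernel-5.15 below is an assumption for PVE 7.x, and a reboot is still needed to actually boot a newer kernel):

apt update
apt full-upgrade                 # pulls in new pve-kernel-* packages, unlike plain "apt upgrade"
apt search pve-kernel-5.15       # list the opt-in 5.15 kernel packages (name is an assumption)
apt install pve-kernel-5.15      # optional: install the newer opt-in kernel
uname -r                         # after a reboot, verify the running kernel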
ok, to be clear: with the Proxmox "default" it is not enough to do "apt upgrade" and reboot the Proxmox host.
We first have to install the amd64-microcode package?
thanks but:
Ok, so microcode updates are NEVER applied by Proxmox by default? So older microcode updates were not installed, although we rebooted the host?
"it was only disabled by default in kernel 5.19"
Ok, we have Linux version 5.13.19, so "/sys/devices/system/cpu/microcode/reload" would work...
ok, thanks.
what are the steps for that?
there is no amd64-microcode package available,
so we cannot "apt install amd64-microcode"
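For reference, a minimal sketch of how this is usually solved, assuming Proxmox VE 7.x on Debian 11 "bullseye": amd64-microcode lives in Debian's non-free component, which is not enabled by default. Example entries for /etc/apt/sources.list (adjust mirror and suite to your installation):

deb http://deb.debian.org/debian bullseye main contrib non-free
deb http://deb.debian.org/debian bullseye-updates main contrib non-free
deb http://security.debian.org/debian-security bullseye-security main contrib non-free

apt update
apt install amd64-microcode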
btw: we do have the path /sys/devices/system/cpu/microcode/reload on Proxmox 7.1
if we update the microcode, do we have to reboot the hypervisor (Proxmox host) and/or any VM?
or is it enough to run
echo 1 > /sys/devices/system/cpu/microcode/reload
on the Proxmox hypervisor?
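Just a sketch of how to verify whether a late reload actually took effect on the host (this checks the host side only; whether guests need a restart depends on the mitigation in question):

grep microcode /proc/cpuinfo | sort -u     # revision before the reload
echo 1 > /sys/devices/system/cpu/microcode/reload
grep microcode /proc/cpuinfo | sort -u     # revision after the reload
dmesg | grep -i microcode | tail           # kernel log should show the new patch_level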
We have successfully migrated from cluster26 -> cluster27.
But it does not work in the other direction:
2022-03-22 10:51:19 starting migration of VM 100000 to node 'cluster26' (192.168.0.26)
2022-03-22 10:51:19 found local disk 'zfs_local:vm-100000-disk-0' (in current VM config)
2022-03-22 10:51:19 copying local...
"du musst von allen nodes zu allen anderen nodes ohne interaktion ssh machen koennen: ssh <IP> oder ssh <hostname> sollte direkt einloggen."
I can do that. I can log in without any problem from e.g. cluster26 to cluster27 via ssh, both by IP and by HOSTNAME, and the other way round too.
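For reference, a small check that can be run in both directions (hostnames/IP are taken from this thread; BatchMode makes ssh fail instead of asking for a password):

ssh -o BatchMode=yes cluster27 hostname      # run on cluster26
ssh -o BatchMode=yes cluster26 hostname      # run on cluster27
ssh -o BatchMode=yes 192.168.0.26 hostname   # same check by IP (IP from the migration log)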
We have added a node cluster27 to our cluster (Proxmox 7.1-10).
If we log in to the GUI via cluster27, everything works.
But if we log in via the GUI of cluster26 (Proxmox 6.3) and then try to open a console of a VM on cluster27, we get the...
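Only a guess for this particular error, not a confirmed fix: in mixed 6.x/7.x clusters it is often worth checking that the node certificates and known_hosts entries are in sync, roughly like this:

pvecm updatecerts              # refresh cluster certificates / known_hosts on the affected node
systemctl restart pveproxy     # restart the GUI proxy so it picks the certificates up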
we lost one server of a 6-node cluster. After rebooting the node:
root@cluster24:~# pvecm status
Cluster information
-------------------
Name: cluster
Config Version: 29
Transport: knet
Secure auth: on
Quorum information
------------------
Date: Tue Nov 23...
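As a generic sketch for this kind of situation (diagnostics only, no fix implied):

pvecm status                              # quorum / membership as the cluster sees it
corosync-quorumtool -s                    # same information directly from corosync
systemctl status corosync pve-cluster     # service state on the lost node
journalctl -b -u corosync -u pve-cluster  # log entries since the reboot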
We have the same problem with moving the datastore.
how can we fix it?
TASK ERROR: zfs error: cannot create 'zfs/subvol-122-disk-0': use 'none' to disable quota/refquota
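A few things that could be checked here (a sketch only): the error usually indicates that a quota/refquota of 0 is being requested for the new dataset, so it may help to look at the configured disk size in the container config and at the quota properties of the existing source dataset; the source pool name below is a placeholder.

cat /etc/pve/lxc/122.conf                                # look for a size=0 on the rootfs/mount point
zfs get quota,refquota <source_pool>/subvol-122-disk-0   # quota settings on the existing source dataset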
Can a "root user" (if unpatched) use this to break out of his LXC and thus compromise the hypervisor (Proxmox)?
We run our VMs (LXC) as unprivileged containers.
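Just as a quick check (a sketch; 122 is only an example VMID borrowed from another post here), whether a given container actually runs unprivileged can be read from its config:

pct config 122 | grep -i unprivileged
# an unprivileged container shows: unprivileged: 1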
ok, we changed the SCSI controller type to LSI and back to VirtIO; now with a rescue boot we can run
vgchange -ay
but it says "refusing activation of partial LV centos/root"
a normal reboot of CentOS 7 still hangs in dracut with
/dev/centos/root not found
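"partial LV" means LVM believes a physical volume of the centos VG is missing. A sketch of what could be run from the rescue environment to see what LVM actually finds (partial activation is only for inspection/recovery, not a fix):

lsblk                                         # which block devices does the rescue system see?
pvs; vgs; lvs -a                              # what LVM finds (look for "missing PV" warnings)
pvscan --cache                                # rescan for physical volumes
vgchange -ay --activationmode partial centos  # try activating despite the missing PV (recovery only)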
same error. We just rebooted Proxmox and then this error suddenly occurred.
what can we do?
the CentOS 7 rescue system did not find any disk.
we moved the disk to qcow2 format and mounted it from the Proxmox host (/var/lib/vz/....); the partitions are there and the data is there.
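For reference, a sketch of the usual way to inspect a qcow2 image directly on the Proxmox host (paths/VMID are placeholders; the VM must be shut down first):

modprobe nbd max_part=16
qemu-nbd --connect=/dev/nbd0 /var/lib/vz/images/<vmid>/vm-<vmid>-disk-0.qcow2
lsblk /dev/nbd0                 # partitions appear as /dev/nbd0p1, /dev/nbd0p2, ...
mount -o ro /dev/nbd0p2 /mnt    # example: mount one partition read-only
umount /mnt
qemu-nbd --disconnect /dev/nbd0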
btw:
many thanks
in rescue
blkid did...