Search results

  1. T

    Zenbleed - CVE-2023-20593

    Thanks. Sure, but we have KernelCare, which helps with some of the updates. We will upgrade the "dist" ASAP.
  2. T

    Zenbleed - CVE-2023-20593

    OK, thanks. Do we have any problems with the 5.13 kernel? All VMs are running fine. We have the microcode updated: "microcode updated early to new patch_level=0x0830107a". So can we stay on kernel 5.13? We would prefer not to reboot our Proxmox host again right now; many customers ;)
  3. T

    Zenbleed - CVE-2023-20593

    We ran apt update and apt upgrade and now have: Linux 5.13.19-6-pve #1 SMP PVE 5.13.19-15 (Tue, 29 Mar 2022 15:59:50 +0200), PVE manager version pve-manager/7.4-16/0f39f621. That kernel seems old (2022). Is there no newer one?
  4. T

    Zenbleed - CVE-2023-20593

    OK, to be clear: with the Proxmox "default" it is not enough to do "apt upgrade" and reboot the Proxmox host. We first have to install the amd64-microcode package?
  5. T

    Zenbleed - CVE-2023-20593

    Thanks, but: OK, so microcode updates are NEVER done by Proxmox by default? So older microcode updates were not installed, although we have rebooted the host? "it was only disabled by default in kernel 5.19" - OK, we have Linux version 5.13.19, so "/sys/devices/system/cpu/microcode/reload" would work...
  6. T

    Zenbleed - CVE-2023-20593

    OK, thanks. What are the steps for that? There is no amd64-microcode yet, and then "apt install amd64-microcode"? BTW: we have the path /sys/devices/system/cpu/microcode/reload using Proxmox 7.1.
  7. T

    Zenbleed - CVE-2023-20593

    If we update the microcode, do we have to reboot the hypervisor (Proxmox host) and/or any VM? Or is it enough to run echo 1 > /sys/devices/system/cpu/microcode/reload on the Proxmox hypervisor? (See the microcode sketch after these search results.)
  8. T

    offline migration fails: failed: got signal 13

    We successfully migrated from cluster26 -> cluster27. But it does not work in the other direction: 2022-03-22 10:51:19 starting migration of VM 100000 to node 'cluster26' (192.168.0.26) 2022-03-22 10:51:19 found local disk 'zfs_local:vm-100000-disk-0' (in current VM config) 2022-03-22 10:51:19 copying local...
  9. T

    VNC console throws "Host key verification failed."

    "You must be able to ssh from every node to every other node without interaction: ssh <IP> or ssh <hostname> should log in directly." I can do that. I can log in without any problem from e.g. cluster26 to cluster27 via ssh by IP and by HOSTNAME, and vice versa. (See the SSH sketch after these search results.)
  10. T

    VNC console throws "Host key verification failed."

    We added a node cluster27 to our cluster (Proxmox 7.1-10). When we log in to the GUI via cluster27, everything works. But when we log in via the GUI of cluster26 (Proxmox 6.3) and then want to open a console of a VM on cluster27, we get the...
  11. T

    cluster crashed / cpg_send_message retried 100 times one node is red

    cluster24:~# pveversion -v proxmox-ve: 6.3-1 (running kernel: 5.4.78-1-pve) pve-manager: 6.3-2 (running version: 6.3-2/22f57405) pve-kernel-5.4: 6.3-2 pve-kernel-helper: 6.3-2 pve-kernel-5.4.78-1-pve: 5.4.78-1 pve-kernel-5.4.55-1-pve: 5.4.55-1 pve-kernel-4.15: 5.4-19 pve-kernel-4.15.18-30-pve...
  12. T

    cluster crashed / cpg_send_message retried 100 times one node is red

    Nobody? We set this cluster to standalone to keep the VMs up. But it seems this node is corrupt and we cannot get it into the cluster again.
  13. T

    cluster crashed / cpg_send_message retried 100 times one node is red

    We lost one server of a 6-node cluster. After rebooting the node: root@cluster24:~# pvecm status Cluster information ------------------- Name: cluster Config Version: 29 Transport: knet Secure auth: on Quorum information ------------------ Date: Tue Nov 23...
  14. T

    Restore LXC from PBS fails: Use 'none' to disable quota/refquota

    Thanks. OK, we set refquota and quota for the original datastore; then it was possible to move the volume. (See the ZFS quota sketch after these search results.)
  15. T

    Restore LXC from PBS fails: Use 'none' to disable quota/refquota

    We have the same problem when moving the datastore. How can we fix it? TASK ERROR: zfs error: cannot create 'zfs/subvol-122-disk-0': use 'none' to disable quota/refquota
  16. T

    [SOLVED] Security vulnerability in the kernel CVE-2021-33909

    Can a "root user" - if unpatched - break out of their LXC with this and thus compromise the hypervisor (Proxmox)? We run our VMs (LXC) as unprivileged containers.
  17. T

    [SOLVED] /dev/centos/root does not exist after migrating CentOS7 from vmware to proxmox VE

    We got it: we ran vgextend --restoremissing <volume group> <physical volume>, and this works. Confusing, because lvdisplay shows no missing PV. (See the LVM sketch after these search results.)
  18. T

    [SOLVED] /dev/centos/root does not exist after migrating CentOS7 from vmware to proxmox VE

    OK, we changed the type of the SCSI controller to LSI and back to VirtIO; now with a rescue boot we can do vgchange -ay, but it says "refusing activation of partial LV centos/root". A normal CentOS 7 reboot still hangs in dracut with /dev/centos/root not found.
  19. T

    [SOLVED] /dev/centos/root does not exist after migrating CentOS7 from vmware to proxmox VE

    Same error. We just rebooted Proxmox and then this error suddenly occurred. What can we do? The CentOS 7 rescue did not find any disk. We moved the disk to qcow2 format and mounted it from the Proxmox host (/var/lib/vz/....); the partitions are there and the data are there. BTW: many thanks. In rescue, blkid did...
  20. T

    Linux VLAN interface without restart (reboot)

    Thanks. With ifup and ifdown it works without a reboot. (See the VLAN sketch after these search results.)
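
A minimal sketch of the microcode-update steps discussed in the Zenbleed thread above, assuming a Debian 11 based Proxmox VE 7 host with the "non-free" APT component enabled; apart from the sysfs path quoted in the posts, the package and verification commands are assumptions, not the thread's confirmed procedure:

    # Install the AMD microcode package (in Debian's non-free component)
    apt update
    apt install amd64-microcode

    # Late-load the new microcode without a full reboot, via the sysfs path
    # quoted in the thread (the thread notes this was only disabled by
    # default from kernel 5.19 onwards):
    echo 1 > /sys/devices/system/cpu/microcode/reload

    # Check the patch level that is actually running:
    dmesg | grep -i microcode

The amd64-microcode package also hooks into the initramfs, so a later host reboot picks the update up via early loading; the microcode itself is applied on the physical host, not inside the VMs.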
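
A minimal sketch for the "Host key verification failed" console issue, assuming standard Proxmox VE cluster tooling; the node name and IP are the ones mentioned in the posts, and running this on every node is an assumption, not the thread's confirmed fix:

    # Regenerate the node certificates and the cluster-wide known_hosts:
    pvecm updatecerts

    # Then confirm that non-interactive SSH works from each node to every
    # other node, by IP and by hostname (BatchMode fails instead of prompting):
    ssh -o BatchMode=yes root@192.168.0.26 true
    ssh -o BatchMode=yes root@cluster26 true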
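
A minimal sketch for the quota/refquota error when moving or restoring the LXC volume, using the dataset name from the error message; the explicit size is a placeholder, and whether to set values or 'none' depends on the setup (the post above set both properties on the original datastore):

    # Inspect the current settings on the source dataset:
    zfs get quota,refquota zfs/subvol-122-disk-0

    # Option 1: disable both properties, as the error message suggests:
    zfs set quota=none zfs/subvol-122-disk-0
    zfs set refquota=none zfs/subvol-122-disk-0

    # Option 2: set explicit values before moving the volume
    # (8G is only a placeholder for the container's disk size):
    zfs set refquota=8G zfs/subvol-122-disk-0
    zfs set quota=8G zfs/subvol-122-disk-0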
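
A minimal sketch of the fix reported in the CentOS 7 migration thread, assuming the volume group is "centos" (as in /dev/centos/root) and that the commands run from a rescue environment; the physical-volume path is only a placeholder:

    # See which PVs, VGs and LVs the rescue system detects:
    pvs
    vgs
    lvdisplay centos

    # Re-attach the PV that the VG still records as missing
    # (the command reported as the solution in the thread):
    vgextend --restoremissing centos /dev/sda2

    # Activate the logical volumes, then reboot normally:
    vgchange -ay centos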
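
A minimal sketch for bringing up a Linux VLAN interface without a reboot, as confirmed in the last thread; the interface name vmbr0.100 is only an example, and ifreload is only available where ifupdown2 is installed:

    # After adding the VLAN stanza to /etc/network/interfaces,
    # bring the new interface up in place:
    ifup vmbr0.100

    # If the interface already existed and its configuration changed:
    ifdown vmbr0.100 && ifup vmbr0.100

    # With ifupdown2, the whole file can be re-applied in one go:
    ifreload -a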
