Search results

  1. offline migration fails: failed: got signal 13

    We successfully migrated from cluster26 -> cluster27, but going back does not work: 2022-03-22 10:51:19 starting migration of VM 100000 to node 'cluster26' (192.168.0.26) 2022-03-22 10:51:19 found local disk 'zfs_local:vm-100000-disk-0' (in current VM config) 2022-03-22 10:51:19 copying local...
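
    Signal 13 is SIGPIPE, i.e. the receiving end of the copy pipe closed mid-transfer. A minimal way to reproduce the disk copy by hand and surface the underlying error, assuming the storage ID 'zfs_local' maps to a ZFS dataset of the same name (the snapshot and target dataset names are placeholders):

        # Reproduce the copy outside the migration task. The dataset path is an
        # assumption (storage 'zfs_local' mapped 1:1 to a dataset of that name);
        # the target IP is taken from the migration log above.
        zfs snapshot zfs_local/vm-100000-disk-0@migtest
        zfs send zfs_local/vm-100000-disk-0@migtest \
          | ssh root@192.168.0.26 zfs recv zfs_local/vm-100000-disk-0-copy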
  2. VNC Console throws Host key verification failed.

    "du musst von allen nodes zu allen anderen nodes ohne interaktion ssh machen koennen: ssh <IP> oder ssh <hostname> sollte direkt einloggen." das kann ich. ich kann mich problemlos von zb cluster26 auf cluster27 per ssh IP und HOSTNAME einloggen und umgekehrt.
  3. VNC Console throws Host key verification failed.

    We added a node cluster27 to our cluster (Proxmox 7.1-10). When we log into the GUI via cluster27, everything works. But when we log into the GUI of cluster26 (Proxmox 6.3) and then want to open a console of a VM on cluster27, we get the...
  4. cluster crashed / cpg_send_message retried 100 times one node is red

    cluster24:~# pveversion -v proxmox-ve: 6.3-1 (running kernel: 5.4.78-1-pve) pve-manager: 6.3-2 (running version: 6.3-2/22f57405) pve-kernel-5.4: 6.3-2 pve-kernel-helper: 6.3-2 pve-kernel-5.4.78-1-pve: 5.4.78-1 pve-kernel-5.4.55-1-pve: 5.4.55-1 pve-kernel-4.15: 5.4-19 pve-kernel-4.15.18-30-pve...
  5. cluster crashed / cpg_send_message retried 100 times one node is red

    Nobody? We set this node to standalone to keep the VMs up, but it seems the node is corrupt and we cannot get it into the cluster again.
  6. cluster crashed / cpg_send_message retried 100 times one node is red

    We lost one server of a 6-node cluster. After rebooting the node: root@cluster24:~# pvecm status Cluster information ------------------- Name: cluster Config Version: 29 Transport: knet Secure auth: on Quorum information ------------------ Date: Tue Nov 23...
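
    When a single node shows red and cpg_send_message keeps retrying, a common first step is to inspect and restart the cluster stack on that node; a generic sketch, nothing thread-specific assumed:

        # Check the cluster services on the affected node:
        systemctl status corosync pve-cluster

        # Scan corosync's log since boot for membership/link errors:
        journalctl -b -u corosync --no-pager | tail -n 50

        # Restart the stack; running VMs are not touched, but note that
        # with HA enabled a node that stays without quorum may self-fence:
        systemctl restart corosync pve-cluster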
  7. Restore LXC from PBS fails: Use 'none' to disable quota/refquota

    Thanks. OK, we set refquota and quota for the original datastore; then it was possible to move the volume.
  8. Restore LXC from PBS fails: Use 'none' to disable quota/refquota

    We have the same problem with moving the datastore. How can we fix it? TASK ERROR: zfs error: cannot create 'zfs/subvol-122-disk-0': use 'none' to disable quota/refquota
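
    The fix described in the previous result amounts to giving the source dataset explicit quota/refquota values before moving it. A minimal sketch; the dataset name follows the error message (on the source pool the path will differ, and the sizes are placeholders):

        # Show what is currently set on the subvolume:
        zfs get quota,refquota zfs/subvol-122-disk-0

        # Per the reply above, set explicit values (or 'none') on the
        # source dataset before moving the volume:
        zfs set refquota=8G zfs/subvol-122-disk-0
        zfs set quota=8G zfs/subvol-122-disk-0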
  9. [SOLVED] Security vulnerability in the kernel CVE-2021-33909

    If unpatched, can a "root user" use this to break out of their LXC and thus compromise the hypervisor (Proxmox)? We run our VMs (LXC) as unprivileged containers.
  10. [SOLVED] /dev/centos/root does not exist after migrating CentOS7 from vmware to proxmox VE

    We got it: we ran vgextend --restoremissing <volume group> <physical volume> and this works. Confusing, because lvdisplay shows no missing PV.
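
    A concrete sketch of that recovery, assuming the VG is named 'centos' (consistent with /dev/centos/root from the thread title); the PV path is a placeholder for the actual device:

        # Re-add the PV that LVM still records as missing:
        vgextend --restoremissing centos /dev/sdb1

        # Activate the LVs and verify the root LV is complete again:
        vgchange -ay centos
        lvs -o +devices centos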
  11. [SOLVED] /dev/centos/root does not exist after migrating CentOS7 from vmware to proxmox VE

    OK, we changed the type of the SCSI controller to LSI and back to VirtIO; now with rescue boot we can do vgchange -ay, but it says refusing activation of partial LV centos/root. A normal CentOS 7 reboot still hangs in dracut with /dev/centos/root not found.
  12. [SOLVED] /dev/centos/root does not exist after migrating CentOS7 from vmware to proxmox VE

    Same error. We just rebooted Proxmox, and then suddenly this error occurred. What can we do? The CentOS 7 rescue did not find any disk. We moved the disk to qcow2 format and mounted it from the Proxmox host (/var/lib/vz/....)...the partitions are there and the data is there. BTW: many thanks. In rescue, blkid did...
  13. Linux VLAN interface without a restart (reboot)

    Thanks. With ifup and ifdown it works without a reboot.
  14. Linux VLAN interface without a restart (reboot)

    Dear community, we have a bare-metal server with 2 NICs. One, eno1, is the uplink, and the second one, eno2, is a further network. In Proxmox we have eno1 / vmbr0 for the former and eno2 / vmbr1 for the latter. Now we also need an interface eno2.11 in order to create a vmbr2 on it...
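
    A sketch of the stanza that setup calls for, using the names from the post (eno2.11, vmbr2); the address is a placeholder. With this in /etc/network/interfaces, the ifup/ifdown approach confirmed in the previous result activates it without a reboot:

        # /etc/network/interfaces (excerpt) -- interface names from the
        # post, the address is a placeholder:
        auto eno2.11
        iface eno2.11 inet manual

        auto vmbr2
        iface vmbr2 inet static
            address 192.168.11.2/24
            bridge-ports eno2.11
            bridge-stp off
            bridge-fd 0

    Bring it up with ifup eno2.11 && ifup vmbr2 (or ifreload -a where ifupdown2 is installed).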
  15. After configuring a metric server, VMs show status unknown.

    OK, thanks. This "?" issue still exists. OK, now we know. Thanks a lot.
  16. After configuring a metric server, VMs show status unknown.

    That's the case. We would like to add an external InfluxDB server, but we do not want to "crash" the cluster if the InfluxDB server is down (e.g. due to maintenance or other errors).
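
    For reference, a sketch of such an entry in /etc/pve/status.cfg (also manageable via Datacenter -> Metric Server in the GUI on recent PVE versions); the section name, address, and port are placeholders. The InfluxDB plugin sends its stats over UDP by default, so an unreachable server should only lose metrics rather than take the cluster down:

        # /etc/pve/status.cfg -- 'external-influx' is an arbitrary ID;
        # server and port are placeholders (8089 is the usual UDP port):
        influxdb: external-influx
            server 192.168.1.100
            port 8089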
  17. Install Proxmox in an OVH Vrack

    Thanks, but that's not the case. We can use VLANs, but we cannot use them via public IPs.
  18. Install Proxmox in an OVH Vrack

    Thanks. Yes, we do. As described, it works. BUT: Proxmox node: eth0 -> vmbr0 with the Proxmox node's unique public IP, e.g. 100.100.100.5 (not connected to the vRack; the OVH MAC binding is tied directly to this hardware server). eth1 -> vmbr1 is the vRack interface, IP 192.168.1.5. The other public IP block...
  19. Install Proxmox in an OVH Vrack

    Thanks. Our setup works, but we would like to put our additional public IPs on VLANs. The Proxmox cluster is 192.168.1.0/24 with 6 nodes connected to each other via the vRack; Corosync runs on that network. "The problem" is: we create a VM with NIC eth0 on vmbr1; normally the customer gets an IP from our public...
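
    One way to carry tagged VLANs for the additional public IPs over the same vRack port is a VLAN-aware bridge; a sketch assuming the vRack sits on eth1 as described above:

        # /etc/network/interfaces (excerpt) -- vmbr1 made VLAN-aware; port
        # and cluster address follow the posts above:
        auto vmbr1
        iface vmbr1 inet static
            address 192.168.1.5/24
            bridge-ports eth1
            bridge-stp off
            bridge-fd 0
            bridge-vlan-aware yes
            bridge-vids 2-4094

    A guest NIC is then tagged per VM, e.g. qm set <vmid> -net0 virtio,bridge=vmbr1,tag=11 (VLAN ID 11 is a placeholder; the same VLAN must of course be configured consistently on all nodes and in the vRack).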
