Hi,
thank you for your reply. We already tested these options without success. We found the main reason yesterday.
The master unit of our Juniper virtual chassis seems to have had an issue with some ports. We could "fix" this by switching the routing engine to the 2nd member and back again...
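For anyone finding this later: the mastership change was nothing special, just the regular Junos switchover command (treat this as a sketch and check it against the docs for your own VC first, since it triggers a failover):

request chassis routing-engine master switch

After the backup member had taken over and we switched mastership back, the affected ports worked again.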
Hi,
we have installed a fresh Proxmox 8.1 server with 4 x 10Gbit NICs and 2 x 1Gbit NICs:
2 x 10Gbit as bond for storage
2 x 10Gbit as bond for VMS IP
2 x 1Gbit as bond for Proxmox-MGMT
cat /etc/network/interfaces
# network interface settings; autogenerated
# Please do NOT modify this file...
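For completeness, the bond part of that file looks roughly like this (just a sketch; the NIC names, the LACP settings and the addresses are placeholders, not the real autogenerated content):

auto bond0
iface bond0 inet static
        address 10.10.10.11/24
        bond-slaves enp65s0f0 enp65s0f1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
# storage bond

auto bond1
iface bond1 inet manual
        bond-slaves enp65s0f2 enp65s0f3
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
# bond for VM traffic, used by vmbr0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond1
        bridge-stp off
        bridge-fd 0
# bridge for the VM IPs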
Hi,
after upgrading our 7.2 node to 7.4 with kernel pve-kernel-5.15.102-1-pve, the node is not able to boot any more.
It breaks while initializing the network tasks on bond0; the messages on screen show problems with the bonding links.
The mgmt IP of the node is pingable, but there is no access via ssh or GUI...
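In case it helps others debugging the same thing, a few standard commands to check the bond from the local console (generic Linux tooling, nothing specific to our setup):

cat /proc/net/bonding/bond0    # state of the bond and of each slave link
ip -br link                    # quick check whether the NIC names changed with the new kernel
dmesg | grep -i bond           # bonding driver messages from this boot
journalctl -b -u networking    # log of the networking service for this boot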
Hi,
I have one question. We have a Proxmox cluster installed on a server with 1 disk for the OS (we didn't notice that one disk was missing on delivery).
Now we have installed a second disk in the Proxmox PVE host and want to add it as a RAID1.
Current situation:
Device Start End...
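If the OS disk was installed with ZFS (rpool), the rough plan would be something like this (a sketch only; sdX/sdY and the partition numbers are placeholders that depend on the actual layout, and for an LVM/ext4 install this does not apply):

# copy the partition layout from the existing OS disk (sdX) to the new disk (sdY), then give the copy new GUIDs
sgdisk /dev/sdX -R /dev/sdY
sgdisk -G /dev/sdY

# attach the ZFS partition of the new disk to the existing one -> the single disk becomes a mirror
zpool attach rpool /dev/sdX3 /dev/sdY3

# make the new disk bootable as well (the ESP is usually partition 2 on a PVE install)
proxmox-boot-tool format /dev/sdY2
proxmox-boot-tool init /dev/sdY2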
Hi Robert,
it's possible, even if you already have the other two Ceph clusters configured. If you need some assistance with this, don't hesitate to contact me with more details.
Best regards,
Volker
We could shut down the VMs, migrate the configs and restart them. A power cycle of the node also works; we will investigate the issue and observe the behaviour over the next few days.
Best regards,
Volker
Yes, I know about moving the configs. The hope was that there is a way to do a live migration in such a state.
The other plan is to shut down the instances and restart them on another node after moving the configs.
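For reference, the offline move itself is just relocating the config inside /etc/pve (VMID 100 and the node names are placeholders; only do this once the VM is definitely no longer running on the old node):

mv /etc/pve/nodes/node05/qemu-server/100.conf /etc/pve/nodes/node06/qemu-server/
qm start 100    # run on node06 after the config has moved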
Hi,
in our cluster, PVE node05 is marked with a "?" in the Proxmox GUI.
The VMs on that node are still running and available, but I can't even access that server via ssh or the GUI.
Is there a way to migrate running VMs to another node while the server is not available via ssh or the mgmt GUI?
icmp to...
Hi, we have the same problem here:
Before reboot:
6: eno1: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
link/ether 04:42:1a:1a:ae:18 brd ff:ff:ff:ff:ff:ff
altname enp130s0f0
After reboot:
6: eth4: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group...
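A possible workaround (not yet tested here) would be pinning the old name to the MAC with a systemd link file; the file name is arbitrary, the MAC is taken from the output above:

# /etc/systemd/network/10-eno1.link
[Match]
MACAddress=04:42:1a:1a:ae:18

[Link]
Name=eno1

followed by update-initramfs -u -k all and a reboot, so the rename is already in place when the devices come up early in boot.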
Our thought was to easily move VMs between both clusters without changing too much in the config files.
Perhaps we should wait for the Proxmox multi-cluster tool :-)
Hmm, it seems I can't reproduce it at the moment and it works as it should. It was some weeks ago, perhaps I'm missing something. I think this post can be closed. Sorry :-)