No, first restart pve-03 and then wait around 30 seconds; pve-01 and pve-02 then restart automatically... It is a little bit annoying and drives me round the bend... because this is a production system...
If you only restart pve-01, everything is fine and all other nodes stay online. Same happens...
The nodes are connected to a 10GbE switch with configured trunks, so the cabling shouldn't be the problem. Ceph is working without issues; only the monitors seem to restart the nodes.
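Since all HA nodes rebooting at once usually points to fencing rather than Ceph itself, here is a sketch of what I would check on the surviving node (node names as above, the time range is just an example):
pvecm status
corosync-cfgtool -s
ha-manager status
journalctl -u pve-ha-lrm -u watchdog-mux --since "today"
That shows cluster quorum, the corosync link state, and whether the HA stack or the watchdog actually triggered the reboots.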
Changing from
iface bond1 inet static
address 10.10.10.1/24
to
iface bond1 inet static
address 10.10.10.1
netmask 255.255.255.0
has no effect: all nodes reboot, same as before. Any other suggestions?
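To rule out that the change never got applied, a quick sanity check after editing /etc/network/interfaces might help (ifreload only exists if ifupdown2 is installed; otherwise ifdown/ifup or a reboot is needed):
ifreload -a
ip -br addr show bond1
If bond1 reports 10.10.10.1/24 either way, the notation change is purely cosmetic and the reboots come from somewhere else.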
Hmm, really? It was working for more than a year until the upgrade. I do not think that is really the issue, because Proxmox uses Debian Buster under the hood.
Here is the network config of pve-01 (same for pve-02/pve-03)
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or...
Hello,
after updating the Proxmox nodes a few days ago to the latest Ceph version, something strange happens: if a node is rebooted, all HA cluster nodes are rebooted.
In the log I saw something like this on a node that was not rebooted:
Jul 23 13:36:28 hyperx-01 ceph-mon[2793]: 2020-07-23...
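To get the full monitor log around that timestamp, something like this should work (the ceph-mon unit instance is normally the node's hostname, assumed here to be hyperx-01):
journalctl -u ceph-mon@hyperx-01 --since "2020-07-23 13:30" --until "2020-07-23 13:45"
ceph -s
ceph quorum_status
That at least shows whether the monitors lose quorum before the nodes go down.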
Hello,
thanks for the hint, but the following line causes an error:
cpu: host,flags=-vmx
Error:
flags: value does not match the regex pattern vm xxx - unable to parse value of 'cpu' - format error
Also, the UI does not show all the CPU types that the following command lists:
qemu-system-x86_64 -cpu ?
So I ended...
Thanks, but I just want to pass the Intel Xeon Gold 5138 into the VMs. Is there a way to disable vmx for VMs, like an additional parameter cpu=host -vmx or something else?
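If the goal is only that the guests do not see vmx, one workaround (my assumption, not something confirmed in this thread) is to switch nested virtualization off on the hosts instead of per VM:
cat /sys/module/kvm_intel/parameters/nested          # current setting (Y/1 = nesting on)
echo "options kvm_intel nested=0" > /etc/modprobe.d/kvm-intel.conf
After a reboot of the node (or unloading and reloading kvm_intel with no VMs running), CPU type host should no longer expose vmx to the guests.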
Hello,
on a 3-node Ceph cluster with Proxmox 6, I get the following error while live migrating a VM with CPU type host and nested virtualization enabled on the physical nodes. On Proxmox 5.4.x we had no problems. All physical servers are identical. On Proxmox 6.x we get the following error:
start migrate...
Hello,
I use a USB-to-IP converter, and it works wonderfully. The Silex devices are trouble-free to integrate and are ideally suited for the DATEV dongles. I use the SILEX 510 myself for the DATEV USB dongles.
Hello,
we migrated a 3-node Ceph Proxmox cluster from 5.4.1 to 6.0. Everything works fine, but live migration no longer works with nested virtualization activated and VMs with CPU type host. The three nodes are identical physical machines: the same NVMes, CPUs, RAM and so on.
When I migrate a vm...
No, I think it is on your Proxmox side. Go to Datacenter in the left sidebar menu, select Firewall -> Options, then check that the following is set:
Input Policy -> ACCEPT
Output Policy -> ACCEPT
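To double-check the same thing from the CLI (a sketch; as far as I know the datacenter-wide settings live in /etc/pve/firewall/cluster.fw):
pve-firewall status
grep -A 5 '\[OPTIONS\]' /etc/pve/firewall/cluster.fw
The status command shows whether the firewall is active at all, and the [OPTIONS] section should contain policy_in / policy_out if they were changed from the defaults.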