Well, a payload of 8972 bytes should work. It seems like something else in the network might not be able to handle larger payloads. Are you in a VLAN? Maybe there are other layers in the infrastructure that you don't control or know about that add additional headers, pushing the...
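One way to narrow it down is a don't-fragment ping with exactly that payload size against another node (the target IP is just a placeholder):

ping -M do -s 8972 <target IP>

If some hop in between cannot handle the full frame, the ping will fail or report that fragmentation is needed, which tells you where to look.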
This has been discussed a few times already: a Proxmox VE cluster needs to be able to use SSH with the root user between the nodes. Therefore, the most you can do is set it to prohibit-password.
Live migration is one of the use cases as you experienced.
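In /etc/ssh/sshd_config that is roughly the following line, followed by a restart of the ssh service:

PermitRootLogin prohibit-password
systemctl restart ssh

With that, root can still log in with keys (which is what the cluster uses), but not with a password.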
We currently have a feature request open to make it possible to use the hostname provided by the DHCP server: https://bugzilla.proxmox.com/show_bug.cgi?id=5811
Thanks for bringing this up again. No, I haven't gotten around to looking more into that setting.
Do you have any rough numbers? Also, to put them into context, the disks and network speed would be nice to know :)
You have VMs (qm list) and LXC containers (pct list). VMs use ZVOLs for their disk images. Containers use "normal" filesystem datasets, which are mounted in the usual way. You can see the mountpoint in the last column of the zfs list output.
With ZFS you can see the difference...
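If you want to make the difference explicit, something like this shows the dataset type directly:

zfs list -t filesystem,volume -o name,type,mountpoint

VM disks show up as type "volume" without a mountpoint, container datasets as type "filesystem" with one.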
According to zfs list there are no volume (zvol) datasets; all of them have a mountpoint. So there is no need to expose volume datasets under /dev/zvol/{pool}/...
Do the other Proxmox VE servers have VMs on ZFS? Then they will have zvols. This one only has a CT, which on ZFS...
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#_ct_change_detection_mode
In our tests it took a fraction of the time of the old option, e.g. ~10 min -> ~3 min on one of my personal hosts. In other situations, the gain might be even larger.
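This only applies to container backups going to a Proxmox Backup Server. If you want to try it node-wide, and if I remember the option name correctly, it can be set in /etc/vzdump.conf, for example:

pbs-change-detection-mode: metadata

Please double-check the exact option name and values in the admin guide section linked above; it can also be set per backup job in the job's advanced options in the UI.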
If Corosync had all three networks configured as links, chances are low that all of them became unusable. But you can check the journal for the Corosync logs to see what happened.
journalctl -u corosync
It will log when it loses the connection to another host on a link.
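You can also check the current link state directly with:

corosync-cfgtool -s

which prints the status of each configured (knet) link towards the other nodes.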
There will be a rebalance. If your cluster is already running at its performance limit, this might push it over to the point where you see performance issues. But if you are at that point, then it is high time to upgrade the cluster or reduce load, as it should be able to handle such situations...
I have never had any experience with HDDs. With SSDs, one definitely wants to use the disk's cache if they have PLP, since they can then ACK sync writes a lot faster.
I remember watching a talk about the experience CERN gained running Ceph, and they do mention enabling the cache of the disk...
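On Linux, the write cache of a SATA disk can usually be checked and toggled with hdparm (the device path is just a placeholder; for SAS disks sdparm would be the tool instead):

hdparm -W /dev/sdX    # show the current write-cache setting
hdparm -W1 /dev/sdX   # enable the write cache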
Can you please post the zfs list output inside code blocks? The editor has buttons for that as well. Otherwise it is barely readable, as the spacing is all messed up.
Did you enable "Thin provision" in the storage config?
The output of zpool status could also be of use.
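To have everything in one go, this should cover it; if I remember correctly, the "Thin provision" checkbox corresponds to "sparse 1" in the zfspool section of the storage config:

zfs list
zpool status
cat /etc/pve/storage.cfg   # look for "sparse 1" under the zfspool storage entry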
Can you try manually creating a disk for a VM on that storage? I'm not sure right now how well qm disk import handles it if the target storage is in a strange state.
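For example with pvesm (storage name, VM ID and size are placeholders), or simply by adding a small disk to the VM in the GUI:

pvesm alloc <storage> <vmid> vm-<vmid>-disk-1 4G

If that already fails, the problem is with the storage itself rather than with qm disk import.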
The old servers don't have any OSDs, right?
Then these nodes should also have their network configured accordingly, so that they have a fast connection to the Ceph public network. Other than that, yes: install them, join the Proxmox VE cluster, and ideally install the Ceph packages. Then they can...
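The Ceph packages can be installed via the web UI (Node -> Ceph) or on the CLI:

pveceph install

During the install you can pick the Ceph repository that matches your subscription situation.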
Well, changing the SSH port will break more than just access to the host shell via the web UI. Live migration is one thing I can think of.
Can you work around that in some situations and override the default port to connect to? Probably yes. Will it be supported from our side if you run...
Change it back to the expected default?
Changing the port to a non-default one is not a great addition to security. Not exposing it to the open internet would be the better approach, if that was the reason you changed it in the first place.
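Reverting it is roughly: remove the custom Port line (or set it back to 22) in /etc/ssh/sshd_config and restart the ssh service:

Port 22
systemctl restart ssh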