AFAIK this is safe. Best would be to remove the VM/CT config file from /etc/pve of the old cluster.
You may encounter some issues with the virtual hardware version on the new cluster. VMs (especially Windows) may be picky.
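For reference, a rough sketch of where those leftover guest configs live on a node of the old cluster; the VMID 100 is just a placeholder and the paths assume a stock Proxmox VE install:

```python
#!/usr/bin/env python3
# Sketch: remove a stale guest config from the OLD cluster so the guest
# cannot accidentally be started there after it was moved.
from pathlib import Path

vmid = 100  # hypothetical VMID that was migrated to the new cluster

candidates = [
    Path(f"/etc/pve/qemu-server/{vmid}.conf"),  # QEMU/KVM VM config
    Path(f"/etc/pve/lxc/{vmid}.conf"),          # LXC container config
]

for conf in candidates:
    if conf.exists():
        print(f"removing stale config {conf}")
        conf.unlink()
```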
At this point it may be worthwhile to look at how your network is set up.
Do you want to post the content of your /etc/network/interfaces for your nodes, and describe how they are physically interconnected?
Looking at your layout... you are BRAVE. I wouldn't go to production with such a lopsided deployment, and without any room to self-heal.
Brave is a... diplomatic word.
This is not unusual in such a small cluster with such a low number of PGs.
The CRUSH algorithm just does not have enough pieces to distribute the data evenly.
You should increase the number of PGs so that you have at least 100 per OSD.
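As a rough illustration of the usual rule of thumb (my own sketch, not figures from this thread): total PGs per pool is about OSD count times target PGs per OSD, divided by the replica size, rounded to a power of two. The numbers below are placeholders:

```python
# Sketch of the common PG sizing rule of thumb:
# total PGs ~= (OSD count * target PGs per OSD) / replica size,
# rounded to a power of two, which is what Ceph expects for pg_num.

def suggested_pg_num(osd_count: int, pgs_per_osd: int = 100, pool_size: int = 3) -> int:
    raw = osd_count * pgs_per_osd / pool_size
    power = 1
    while power * 2 <= raw:
        power *= 2
    # pick whichever power of two is closer to the raw value
    return power if raw - power < power * 2 - raw else power * 2

print(suggested_pg_num(osd_count=9))  # e.g. 9 OSDs at 3x replication -> 256
```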
Ditched the drive, subbed in a couple smaller HDDs to scrape by until I get a replacement drive, and everything eventually balanced back out beautifully. Pools now use only one type of drive instead of a mix. Thank you.
SSH access for root is disabled by default on most distributions! But of course it can be enabled.
Which template are you using?
Well, now we can all see your machine right in front of us through our crystal ball, look over your network configuration, and get a picture of your dynamic setup. There is no better way to document a Proxmox VE setup. :cool:
Thanks for taking the time to review my questions and providing the additional clarity, I will go back to the drawing board, learn some more and rethink the approach.
Your config is unworkable.
While you didn't provide your actual CRUSH rules, I can already see they can never be satisfied.
Consider: you have 3 nodes.
node pve2 15.25TB HDD, 1.83TB SSD
node pve3 7.27TB HDD, 0.7TB SSD
node pve4 0.5TB HDD, 0.9TB SSD...
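To put rough numbers on why that can never balance: with 3x replication and the usual one-copy-per-host failure domain, every placement group needs room on all three nodes, so usable capacity is effectively capped by the smallest node. A back-of-the-envelope sketch (the replica count of 3 is my assumption, since the actual CRUSH rules weren't posted):

```python
# Back-of-the-envelope: with replica size 3 and one copy per host,
# every PG needs space on all three nodes, so usable capacity is
# roughly limited by the smallest node's HDD capacity.
hdd_tb = {"pve2": 15.25, "pve3": 7.27, "pve4": 0.5}
replicas = 3  # assumed pool size; not taken from the actual CRUSH rules

raw_hdd = sum(hdd_tb.values())       # ~23 TB raw across the cluster
usable_hdd = min(hdd_tb.values())    # ~0.5 TB actually usable at 3x

print(f"raw HDD: {raw_hdd:.2f} TB, usable at {replicas}x: ~{usable_hdd:.2f} TB")
```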