AFAIK this is safe. Best would be to remove the VM/CT config file from /etc/pve on the old cluster.
You may encounter some issues with the virtual hardware version on the new cluster. VMs (especially Windows) may be picky.
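For reference, the guest configs live under /etc/pve on the node that owns them; a rough sketch (VMID 100 is just an example, adjust IDs and paths to your setup):

```bash
# On the node of the OLD cluster that still owns the guest.
# VMID 100 is a placeholder.
mkdir -p /root/retired-configs
mv /etc/pve/qemu-server/100.conf /root/retired-configs/   # VM config
mv /etc/pve/lxc/100.conf /root/retired-configs/           # container config
```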
Hi all,
We are currently running an outdated PVE cluster (version 6.4) consisting of 5 nodes. All VMs and containers are using an external NFS share for both disk storage and backups, mounted at /mnt/pve/NFS.
We’ve recently acquired 2 new nodes...
Then it's even easier. Just assign suitable IDs when creating the guests. If you know your dependencies, you can set up matching ID ranges for yourself.
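As a sketch of what such ID ranges could look like (all IDs, names and the template below are made up):

```bash
# Hypothetical scheme: 1xx = databases, 2xx = web frontends, 9xx = test guests
qm create 101 --name db01 --memory 8192 --cores 4
qm create 201 --name web01 --memory 2048 --cores 2
pct create 901 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname test01 --rootfs local-zfs:8
```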
You need at least 3 nodes, but you can attach as many more as you like.
Enterprise SSDs with *real* Power Loss Protection are a must-have! For example Micron 7400 or Samsung PM9A3. PLP is important for latency - non-PLP drives will have a latency...
Carefully choose your SSDs. We had a case with non-enterprise SSDs which had to be replaced to guarantee a stable setup.
In addition: give Ceph its own separate network to avoid problems during backups or other high-load situations.
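On current PVE releases the split can be declared when initializing Ceph; a minimal sketch, with placeholder subnets:

```bash
# Subnets are placeholders -- substitute your own.
# The public network carries client/MON traffic, the cluster network
# carries OSD replication and recovery traffic.
pveceph init --network 10.10.10.0/24 --cluster-network 10.10.20.0/24
```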
One advantage of Ceph is its flexibility. The goal of my "FabU" was to mention some aspects and pitfalls, nothing more.
That "38" is the sum of Ram of my example in the cluster. My point was that each and every daemon - be it OSD/MON/MGR or MDS -...
Basically, Ceph works like a software RAID controller spanning several nodes. That means no extra layer (hardware RAID) should sit between the drives and the OS.
Ceph automatically uses all OSDs for data and redundancy. As soon as you add more...
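For illustration, OSDs are created directly on the raw block devices, one per disk (device names are examples):

```bash
# No hardware RAID in between -- Ceph gets each whole disk as an OSD.
pveceph osd create /dev/sdb
pveceph osd create /dev/sdc
```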
Hi @Sarlis Dimitris ,
It’s reassuring to hear “this is what you should do,” but reality is rarely that simple. There are companies running multi-petabyte Ceph clusters without issues, while others have had their "weekends ruined" by Ceph...
Why not test the concrete behavior? Just create a test container, add some storage, run backup/restore, "zfs rename" the virtual disk, start the container, and so on. This approach comes for free and teaches you the behavior of your actual...
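A rough sketch of such a test run (all IDs, storage names, dataset paths and the template are placeholders for your own values):

```bash
pct create 999 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
    --hostname testct --rootfs local-zfs:8
vzdump 999 --storage local --mode snapshot                 # back it up
zfs rename rpool/data/subvol-999-disk-0 rpool/data/subvol-999-disk-0-renamed
# ...point the rootfs entry in /etc/pve/lxc/999.conf at the new dataset...
pct start 999                                              # does it still boot?
pct stop 999 && pct destroy 999
pct restore 999 /var/lib/vz/dump/vzdump-lxc-999-*.tar.zst --storage local-zfs
```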
While we appreciate people creating educational content for our projects in general, please do not "pitch" your videos in every thread, especially not such as this one here. It's old and fully answered, so really no need for necro-bumping and not...
You're on the right track with a 3-node cluster setup. Based on my experience with similar Proxmox HA deployments, here are a few key points to consider:
Ceph is great for HA, but only when it's used with JBOD disks (not hardware RAID). If...
Can you give this script a go? There are certain edge cases where the VM may shut down (e.g. manually shutting down the VM via a command within the VM) which cause the hook not to run. When the VM starts, the script below will always unbind the vfio...
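To be clear, the following is not the script being referred to (that one is not included in this excerpt); purely as an illustration, a minimal hookscript that releases a device from vfio-pci on VM start could look roughly like this (the PCI address is a placeholder):

```bash
#!/bin/bash
# NOT the script referenced above -- only a sketch of the general idea.
# Proxmox calls hookscripts as: <script> <vmid> <phase>
VMID="$1"; PHASE="$2"
DEV="0000:01:00.0"   # placeholder PCI address, adjust to your device

if [ "$PHASE" = "pre-start" ]; then
    # If vfio-pci still claims the device from a previous run, release it
    # and let the kernel re-probe a driver before the VM starts.
    if [ -e "/sys/bus/pci/drivers/vfio-pci/$DEV" ]; then
        echo "$DEV" > /sys/bus/pci/drivers/vfio-pci/unbind
        echo "$DEV" > /sys/bus/pci/drivers_probe
    fi
fi
exit 0
```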