I am with @guruevi; for the AMD system I would recommend adding some more disks (HDDs plus two small but high-quality SSDs for a Special Device) and installing a PBS. Backups from the beginning are often an oversight...
What do you need exactly? That is some ancient hardware, but you can still run Proxmox on them, use Ceph and be on your way. You’ll need more RAM to actually run a modern VM on some of those and spinning disks aren’t going to give you great...
You can only go one of two ways: either build two independent pools or add a Special Device. For rotating rust I really, really, really do recommend having a Special Device.
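For illustration, a rough sketch of how such a special vdev could be attached to an existing pool; the pool name "tank" and the device paths are placeholders, and the special vdev should be mirrored, because losing it loses the whole pool:
~# zpool add tank special mirror /dev/disk/by-id/<ssd-1> /dev/disk/by-id/<ssd-2>
~# zpool status tank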
What I would not do is to install the OS on a non-redundant device...
You should not overcommit RAM. It just does not work well.
If you need to mitigate that problem a little bit you may look at zram; I prefer this over a static swap file:
~# apt show zram-tools
Description: utilities for working with zram
zram...
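A rough sketch of how I would set it up with zram-tools; the config file path and the variable names are from memory of the Debian package, so double-check them on your system:
~# apt install zram-tools
~# editor /etc/default/zramswap      # e.g. ALGO=zstd and PERCENT=25
~# systemctl restart zramswap.service
~# swapon --show                     # the /dev/zram0 swap device should show up here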
That's another FAQ, definitely.
The first figure is the 8 * 8 = 64 TB your devices bring into the game.
You have configured RAIDZ2, so two of those disks are used for redundancy. This leaves 8 * 6 = 48 TB, theoretically.
From 48 down to 45.37 is not really far...
No, that's not what is happening. It just works perfectly fine with 2 nodes, because it still has quorum (PVE as well as CEPH), yet you are in a degraded state, because you don't have 3 copies of your data, just 2. In the 4 nodes left out of 5...
As far as I understand it, if you have a 3-node Ceph cluster and you lose one node, Ceph will go into read-only mode because there is no node left to copy data to (if you use 3/2) to keep the redundancy up.
At least that's what some community...
Correct, the resources in the cluster should not stop working.
Are MONs running on each node? That's the requirement for two of them to stay up --> "OK".
Three is the absolute minimum. As soon as anything "bad" happens to one node you are degraded. And...
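If you want to double-check what your pools are actually set to, something like this (the pool name is a placeholder):
~# ceph osd pool get <pool> size
~# ceph osd pool get <pool> min_size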
64 cores / 100% * 14% usage = 9, so as long as you have a load that corresponds to your CPU usage it's perfectly fine; your running machines (VM or LXC) are simply consuming your resources as usual. Even your I/O delay is just fine at 0.32%...
You could copy the file /etc/pve/qemu-server/150.conf to /etc/pve/qemu-server/107.conf and then run qm disk rescan --vmid 107. That will add the vm-107-disk-0 to VM 107 as unused. You can then use the Proxmox web GUI to connect the unused drive...
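Spelled out, roughly:
~# cp /etc/pve/qemu-server/150.conf /etc/pve/qemu-server/107.conf
~# qm disk rescan --vmid 107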
A QDevice can be removed at any time with the command (the corosync-qdevice package does not need to be uninstalled):
pvecm qdevice remove
and it can be reinstalled with: pvecm qdevice setup <QDEVICE-IP> -f
Check afterwards with the commands:
pvecm status...
Hello, this question is answered in the documentation.
https://docs.ceph.com/en/squid/rados/operations/stretch-mode/ :
When stretch mode is enabled, PGs will become active only when they peer across CRUSH `datacenter`s (or across whichever CRUSH...
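For reference, a very rough sketch of the commands from that page for enabling it; the MON names, the CRUSH rule name and the datacenter names here are only placeholders, please follow the full procedure in the docs:
~# ceph mon set election_strategy connectivity
~# ceph mon set_location mon1 datacenter=dc1        # repeat for every MON
~# ceph mon enable_stretch_mode mon5 stretch_rule datacenter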
From everything you sent, one thing stands out on that screenshot you attached: The Peer RTT (bottom right) for your third node spikes up to over 3 seconds all of a sudden, correlating with the increase in traffic and the Raft proposal...
Basically, what happened there sounds a bit strange.
Check whether you still have /etc/pve/nodes/{old nodes}/qemu-server directories and whether the configs are still in there. If so, you can move them into the correct one with mv.
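A minimal sketch, assuming the old node was called pve-old, the current one pve-new and the VM ID is 101 (adjust everything to your cluster):
~# ls /etc/pve/nodes/pve-old/qemu-server/
~# mv /etc/pve/nodes/pve-old/qemu-server/101.conf /etc/pve/nodes/pve-new/qemu-server/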
No, must be something on your end. I would suggest to check your network.
That depends not on the guest but on the storage technology used. Please share what you used there: screenshots of the UI or command line output (CODE tags) of the contents of...
Okay, little update after a bit more testing. I found out that my system breaks after the post script runs. After a bit more digging I found out that I had some really old community scripts installed that I totally forgot about. After cleaning up and a...
Maybe another reader has a better idea, but I can't think of one. What might possibly work, however, is this:
ps auxww | grep kvm
That shows the long, long command line of your still-running VM. If nothing at all helps...
dd'ing the first MB of the disk may not be enough. With GPT, there is also a copy of the partition table at the end. If possible, use blkdiscard on the block device (e.g. blkdiscard /dev/sda)
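If the device does not support blkdiscard, a rough fallback with dd (the device /dev/sdX is a placeholder, double-check it first, this is destructive): wipe the first MiB and the last 34 sectors, where the GPT backup lives.
~# dd if=/dev/zero of=/dev/sdX bs=1M count=1
~# dd if=/dev/zero of=/dev/sdX bs=512 count=34 seek=$(( $(blockdev --getsz /dev/sdX) - 34 ))
Alternatively, sgdisk --zap-all /dev/sdX (from the gdisk package) should remove both GPT copies as well.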