Hi, the thing that scares us the most about using HA is not being sure about the question below:
Let's say a VM fails for some reason but all 5 PVE nodes are active, or the VM tries to move to another PVE node for some reason but fails to do so.
Will HA kill the node that is correctly serving, let's say, all the other VMs...
During a verification job, the PVE hosts have a hard time accessing the datastore of a PBS server, as PBS takes all the available I/O.
In case a customer wants to restore a VM, we would probably have to manually kill the verification job; per our observation, we already
have a hard time just connecting to see the backups...
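For reference, a minimal sketch of how a running verification task could be located and stopped from the PBS shell, assuming the stock proxmox-backup-manager CLI; the UPID shown is only a placeholder, not a real task ID.

# List running tasks on the PBS server and note the UPID of the verification job
proxmox-backup-manager task list
# Stop that task by its UPID (placeholder value shown here)
proxmox-backup-manager task stop 'UPID:pbs:...:verificationjob:...'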
yes that is correct
Let's say, for example:
*/5 * * * * root qm status 104 | grep -q stopped && qm start 104
I don't want to put this job on each node if possible, as the VM may move to another host over time.
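One possible workaround, sketched below under the assumption of VMID 104 and the standard /etc/pve layout: the same crontab line can be deployed on every node, but it only acts on the node that currently owns the VM, because /etc/pve/qemu-server only lists the local node's guest configs.

# Only act if this node currently owns VM 104 (its config is visible under the
# local /etc/pve/qemu-server symlink) and the VM is stopped
*/5 * * * * root [ -e /etc/pve/qemu-server/104.conf ] && qm status 104 | grep -q stopped && qm start 104

That said, the built-in HA manager is the usual way to have the cluster itself restart a failed guest rather than per-node cron.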
Hi, currently we use:
/etc/crontab
to configure a cron job on each host, and the file is in /etc/pve/cron/cronxyz.cron.
Is there a way to have the PVE cluster service run this job rather than each individual host?
It's not desired.
Put an MX record priority of 10 for server 1 and 20 for server 2, etc.
Otherwise, configure load balancing on your gateway, but make sure the traffic of each server goes out with the same HELO hostname and outbound IP.
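As an illustration only, a pair of MX records with different preference values could look like this in a zone file (example.com and the mail host names are placeholders; a lower value is preferred first, while equal values spread delivery across both hosts):

example.com.    IN  MX  10  mail1.example.com.
example.com.    IN  MX  20  mail2.example.com.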
Ceph speed depends on the network capacity between each of the nodes that host OSDs.
We need more details about the tested cluster to help.
If, for example, you have VM traffic running on the same network as the cluster storage and monitor traffic, you lose capacity there too.
I suggest having at least four 10 Gb...
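To put numbers behind that, a rough sketch of how the inter-node link and Ceph itself could be measured (the IP address, pool name and durations are placeholders):

# Raw network throughput between two nodes: start the server side on node A...
iperf3 -s
# ...then run the client on node B against node A's cluster-network IP
iperf3 -c 10.0.0.1 -t 30
# Ceph-level write benchmark against a throwaway pool, then clean up the objects
rados bench -p testpool 60 write --no-cleanup
rados -p testpool cleanup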
Per my reading, erasure coding seems interesting and allows preserving more usable storage. Do you have any advice on using it with Proxmox?
Otherwise, a 3/2 pool will always cost around 60-70% of the raw storage (roughly 33% usable) no matter the size of the cluster, per my calculation. Am I wrong?
I also know a lot of...
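For what it's worth, a minimal sketch of creating an erasure-coded pool with the plain Ceph CLI (the profile and pool names are made up). A k=4, m=2 profile keeps roughly 67% of raw capacity usable versus about 33% with 3-way replication, but it needs at least k+m hosts when the failure domain is host:

# Define an EC profile with 4 data chunks + 2 coding chunks, spread across hosts
ceph osd erasure-code-profile set ec-4-2 k=4 m=2 crush-failure-domain=host
# Create the pool with that profile and allow overwrites so RBD/VM disks can use it
ceph osd pool create vm-ec-data erasure ec-4-2
ceph osd pool set vm-ec-data allow_ec_overwrites true

RBD images still keep their metadata in a replicated pool and only place the data objects on the EC pool (the --data-pool option of rbd create).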
Thx for explanations.
Does a size of 3 mean 3 copies plus the active data, or a total of 3?
So if I have a pool of 4 OSDs configured with a size of 3, does a copy exist on all 4 OSDs or only on 3 of the 4?
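One way to see where the copies actually land, assuming a pool called testpool and a sample placement group ID (both are placeholders):

ceph osd pool get testpool size        # total number of copies kept
ceph osd pool get testpool min_size    # copies required for I/O to continue
ceph pg map 2.1a                       # lists the OSDs holding that PG's copies
ceph osd tree                          # shows which host each OSD belongs to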
Sorry in advance
I read that Ceph can enforce having copies on different hosts, not only on different OSDs.
Is this rule the default in Proxmox? (Does the number refer to OSDs and/or hosts?)
Or is it a rule we need to add to the config file?
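A hedged pointer for checking this on an existing cluster: the failure domain comes from the CRUSH rule assigned to the pool, which can be dumped with the standard Ceph CLI (replicated_rule is the stock default rule name):

# A "chooseleaf ... type host" step in the output means each copy
# is placed on a different host, not just a different OSD
ceph osd crush rule dump replicated_rule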
@gurubert I think I was wrong in my understanding of Ceph.
I thought a pool was replicated 100% of the time across all OSDs by default.
That is incorrect, right?
If I have 2/2, is the data on 3 OSDs and not 4? (The active data split across 2 OSDs plus 2 copies on 2 other OSDs, for a total of 3?)