Hi, we have a large 48 TB 2U server that we would like to use as an NFS target for Veeam, but we would also like to have PBS running on it for some servers.
Since PBS does not allow running an NFS server service, and we don't want to spend on two units for now, do you think running PBS as a VM on a ZFS TrueNAS SCALE array will do it?
Hi, thanks for the excellent explanation.
Is there a way to define a preferred master?
That way I could select nodes that are not part of an HA group as the preferred master, or would that not improve any potential failure scenario?
Hi Fabian, I will try to send you the Node 2 details today. Node 2 was restarted an hour earlier to test the Ceph drive installed on it, which worked successfully. I was about to install Ceph on Node 1, so I moved 2 small MikroTik routers and a 20 GB VM that is a virtual gateway we use only for testing.
You mention not...
Hi Alex,
actually we think we have an issue similar to this one:
https://forum.proxmox.com/threads/whats-the-actual-official-way-to-reboot-nodes-in-an-ha-enabled-cluster.97575/
We think Node 1 failed due to a RAM issue, but Node 3 started acting weird. Based on your comment at 11:13 AM, Node 3...
Hi Thomas
Can you give me some quick advice on this topic:
https://forum.proxmox.com/threads/is-nodes-inside-a-ha-group-the-only-ones-to-fences.122428/
We faced a similar situation and are trying to see if we can "half use HA" without being scared that an HA bug could crash 15 nodes.
Hi Fabian,
I seem to have faced the same situation here as in the thread we are discussing. Is that possible?
https://forum.proxmox.com/threads/3-node-cluster-crash-and-sent-more-than-1000-email-in-5-minutes.122354/#post-531832
Hi, let's say we have a 16-node cluster,
and we have an HA group for a few VMs on 3 nodes.
One of those nodes fails and, similar to what we faced recently, another crashes for no apparent reason.
Will the HA fencing mechanism only try to fence the nodes that are part of this specific group, or might HA try...
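To make the scenario concrete, the group in question would be created more or less like this (the group and node names here are placeholders, scripted only so it is easy to reproduce):

```python
import subprocess

# Hypothetical restricted HA group limited to 3 of the 16 nodes.
# "restricted" means resources in this group may only run on the listed nodes;
# the numbers after the colons are node priorities.
subprocess.run(
    [
        "ha-manager", "groupadd", "ha3nodes",
        "--nodes", "node1:2,node2:2,node3:1",
        "--restricted", "1",
        "--nofailback", "0",
    ],
    check=True,
)
```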
@fabian
I suspect that Node 1 crashed during a VM transfer because of a RAM error,
but Node 3 went crazy, as you will see, and kept sending emails and logging attempts to fence the VM again and again, all of that within 3-5 minutes.
My 3-node cluster then completely crashed because, for some reason...
@fabian Can I send you the logs in a private message? I have never seen anything like that. One blade crashed and the other returned thousands of lines of HA activity, kept retrying its commands and sending emails in an infinite loop until, I think, the OS crashed and that blade rebooted too, pretty much like it was because of a bug...
Hi, one of our small clusters crashed and I can't explain why yet, even after reading the logs.
We are running 7.3.3.
I was looking to do maintenance on Node 1, so I moved a VM to Node 2, local ZFS to local ZFS.
Our HA was active, so it tried to move the VM back to Node 1 right away after the task.
That...
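In case it matters for the diagnosis: this looks like plain automatic fail-back to me, and what I suspect I should have done before the maintenance is something like the sketch below (the group name and VM ID are placeholders from my side, not the real ones):

```python
import subprocess

# Hypothetical group/resource names. Two possible knobs before maintenance:
# 1) stop the HA group from pulling resources back to the higher-priority node,
subprocess.run(["ha-manager", "groupset", "ha-group1", "--nofailback", "1"], check=True)

# 2) or take the VM out of active HA management entirely for the duration.
subprocess.run(["ha-manager", "set", "vm:100", "--state", "ignored"], check=True)
```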
We are actually working at the file level, so your idea of snapshots is not bad at all.
So we could rsync daily and keep 7 days of snapshots on an NFS target, plus the full image?
Is there a way to schedule automated snapshots?
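To make it concrete, something like the sketch below run once a day from cron is what I have in mind (the dataset name and retention are assumptions on my side; tools like zfs-auto-snapshot or sanoid do the same thing more robustly):

```python
#!/usr/bin/env python3
"""Daily ZFS snapshot with a 7-day retention. Minimal sketch, assuming the
dataset is called tank/backup and the script runs once a day from cron."""
import subprocess
from datetime import datetime, timedelta

DATASET = "tank/backup"   # hypothetical dataset name
KEEP_DAYS = 7
PREFIX = "daily-"

# Create today's snapshot, e.g. tank/backup@daily-2024-01-01.
today = datetime.now().strftime("%Y-%m-%d")
subprocess.run(["zfs", "snapshot", f"{DATASET}@{PREFIX}{today}"], check=True)

# List existing snapshots and destroy the "daily-" ones older than 7 days.
out = subprocess.run(
    ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", DATASET],
    capture_output=True, text=True, check=True,
).stdout
cutoff = datetime.now() - timedelta(days=KEEP_DAYS)
for name in out.splitlines():
    snap = name.split("@", 1)[1]
    if not snap.startswith(PREFIX):
        continue
    try:
        stamp = datetime.strptime(snap[len(PREFIX):], "%Y-%m-%d")
    except ValueError:
        continue
    if stamp < cutoff:
        subprocess.run(["zfs", "destroy", name], check=True)
```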
I will try to set up a lab, as I think a few years ago we faced an issue trying to get GFS2 working on a shared SAS enclosure. DLM, I think, was not able to handle the two networks, and I believe that was documented as a limitation too, but I can't find anything related.
The issue with ZFS replication is that once you have more than 2-3 nodes, it gets ridiculous to lose that much storage (1:3).
With Ceph we lose the benefit of having VMs access the data through shared SAS connectivity.
So far the best way we have found is shared SAS with GFS2. But we now...
@fiona Thanks for your feedback.
Let's say a fresh copy has been restored from a 500 GB VM and is now sitting idle: will the next daily scripted restores re-upload the VM from scratch, or might we manage to use the incremental backups instead?
Can we pay for that kind of script your...
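To make the question concrete, the naive nightly version we have in mind is just a full restore over a standby VMID, roughly like this (the archive volid and VM IDs below are placeholders, not our real ones):

```python
import subprocess

# Hypothetical values; the real archive volid would come from the backup listing.
ARCHIVE = "pbs:backup/vm/104/2024-01-01T01:00:00Z"
TARGET_VMID = "990"  # standby VM that gets overwritten every night

# Plain full restore over the existing standby VM; this is the part that
# re-uploads the whole 500 GB and that we would like to avoid.
subprocess.run(["qmrestore", ARCHIVE, TARGET_VMID, "--force", "1"], check=True)
```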