In essence, the issue was that when mounting the datastore it was set up at the root level, and not on the intended namespace.
Corrected this, and adapted the sync job to reflect the new source namespace.
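For anyone hitting the same thing, a minimal sketch of re-pointing a PBS sync job at a specific namespace. The job ID and namespace names below are examples, and the option names are assumed from PBS 2.x - verify them with `--help` on your version:

```shell
# List existing sync jobs to find the job ID
proxmox-backup-manager sync-job list

# Re-point the job: pull from the remote's namespace into the matching
# local namespace instead of syncing from/to the datastore root.
# (job ID and namespace names are examples; check option names on your version)
proxmox-backup-manager sync-job update s-local-pull \
    --remote-ns pve-local \
    --ns pve-local
```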
- Glowsome
Hi, thank you for your analysis.
With your pointers I have gone through the complete setup again and found some discrepancies in my setup.
Those should now be corrected.
The jobs will be running tonight/overnight, so if all goes well it should be resolved.
- Glowsome
Have you tried GFS2 as the filesystem?
I have it in use for shared storage on my setup - not iSCSI, but SAS shared-attached storage via an MSA2040 storage array.
Somewhere in the how-tos I documented my findings about this.
But as iSCSI is just a different (shared) way of offering storage to a node, the...
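As an illustration, a rough sketch of putting GFS2 on a shared LUN. The device path, cluster name, filesystem name, and mount point are placeholders, and the cluster's DLM and fencing must already be configured:

```shell
# Format the shared LUN with GFS2: lock_dlm gives cluster-wide locking,
# "cluster:fsname" must match the running cluster name, and you need
# one journal (-j) per node that will mount the filesystem
mkfs.gfs2 -p lock_dlm -t mycluster:shared0 -j 3 /dev/mapper/shared-lun

# Mount it on each node (noatime reduces needless metadata writes)
mount -t gfs2 -o noatime /dev/mapper/shared-lun /mnt/shared
```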
[ Personal opinion ]
It's hard to give truly tailored advice on what hardware you need to run VM X and LXC Y in combination with requirements Z.
For general sizing guidelines (apart from what you are planning to run on Proxmox), please see the Proxmox documentation itself.
Not to sound...
Hello all,
On my PBS I have a sync job running that does its job; however, it also creates an underlying namespace which I cannot seem to get rid of.
The scenario as I originally set it up might have caused this, but now (from my point of view) I can no longer get rid of it.
for your...
So, to come back to how I've set it up now with the suggestions that were made (not honoring all of them):
- Both PVE instances will back up each day (3-day history) in the traditional way.
- Both PVE instances will back up each day (7-day history) via the local PBS on their local backup store.
- PBS remote will be...
Bare metal is not an option IMHO, as that would only introduce a SPOF.
Now, to be exact: in my experience I have done storage pass-through on LXCs, but never on a VM, as the underlying storage is there and I want to utilize it as such. (We are talking about a cluster here, tested...
So, after having met PBS and being wildly excited about it, I have a question as to the "best setup".
Currently I have one PBS box, but two separate PVE installations, one remote and one local.
I have separated the backups with namespaces, so there is no collision possible regarding...
Hi there,
Again, going top-down through your questions:
If a node crashes, or is poison-pilled/STONITH'ed, the rest functions without issues afterwards.
The crashed node gets removed from the lockspace, and is thus no longer a part of it.
I have tested this by just hard-resetting a node, and...
Hi there,
To go top-down in answering your questions:
No
It's stable as far as I can tell; I have not had any issues with it going down on me, nor locks, nor FS corruption.
(I mean, if I had run into any of the above I would have searched for solutions and reported it/updated the tutorial I wrote.)
As you...
Turns out it was the FUSE feature being enabled on the LXC.
As soon as I disabled it - or shut down the affected host - after reading forum posts, the docs, and the known issues with it, backups went fine.
For reference as to where I got my answers: this forum post.
In my case we are talking about a...
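If someone wants to check or toggle this on their own container, a sketch using `pct` (container ID 101 is an example; verify the exact `--features` syntax against your PVE version):

```shell
# Show whether the fuse feature is set on the container
pct config 101 | grep ^features

# Turn the FUSE feature off again; the container needs a restart afterwards
pct set 101 --features fuse=0
```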
For your info, I have not yet gone in depth regarding actual guest management.
I just need (for now) to manage the nodes of my cluster.
- Glowsome
I am experiencing the same on a newly created LXC container.
Running the latest PVE 8:
proxmox-ve: 8.0.2 (running kernel: 6.2.16-15-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
pve-kernel-5.15: 7.4-4
pve-kernel-5.13...
Conditions:
- PVE8.04 / latest
- ISOs/templates are mounted on a separate LVM volume under /data/iso (seen as ISO storage in the PVE UI)
- Logged in as a federated OIDC user (with Enterprise admin privileges)
Behavior experienced:
- Uploaded a new ISO to /data/iso
- The ISO was correctly added
- temp...
Reading your situation, I do not see a risk, as you are placing the firewall outside of / in front of the whole cluster setup.
Meaning you do not run into a circular dependency where your pfSense (as a cluster resource) is down, but needs to be up for all nodes to reach quorum.
I will file a report about this, as IMHO if a field/option is optional, then it should be optional in the GUI too, rather than confusing someone (less skilled) into making these changes manually on the command line.
They should be able to create a Linux bond via the GUI.
As you point out ... if one has...
The thing I stumbled over is the fact that the GUI treats the field as mandatory rather than optional, if I interpret your reply correctly.
IMHO (and that is why I opened the thread) it should be an optional field there as well.
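For completeness, the command-line alternative I was referring to is editing /etc/network/interfaces directly; a minimal bond stanza looks like this (interface names and mode are examples):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup
```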