In my (pre-)PMG setup I had Postfix do header checks via main.cf:
header_checks = regexp:/etc/postfix/header_checks
And every time a mail originating from an unwanted list makes it through, I add it to this file in the format below:
/^List-Unsubscribe:.*thefightback\.info.*/ REJECT...
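If you want to verify a new pattern before it goes live, a quick sketch (the sample header value below is made up for illustration):
# test which pattern a given header would hit; regexp: tables are read
# at runtime, so a reload is enough - no postmap build step is needed
postmap -q "List-Unsubscribe: <https://thefightback.info/unsub>" regexp:/etc/postfix/header_checks
postfix reload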
I recently installed PMG (an LXC template is available on PVE), and whenever a restart/reload of Postfix is performed, the following is seen in the log:
2024-09-06T23:18:31.639328+02:00 pmg postfix[1673160]: Postfix is using backwards-compatible default settings
2024-09-06T23:18:31.639808+02:00 pmg...
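For context: that is the standard Postfix compatibility notice, and once the settings have been reviewed it can usually be silenced by pinning the level. A sketch only - pick the level matching the installed Postfix version, and note that PMG regenerates main.cf from its own templates, so a direct edit may be overwritten:
postconf -e compatibility_level=3.6
postfix reload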
I have not changed my setup since the beginning of this post.
So the answer is: no.
The reason I used GFS2 in combination with LVM is that it lets me offer an actual filesystem to Proxmox.
In essence the issue was that when mounting the datastore, it was set up at the root level and not in the wanted namespace.
I corrected this and adapted the sync job to reflect the new source namespace.
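For reference, the namespace is selected on the PVE side when the PBS datastore is added as storage; a minimal sketch, where server, datastore, namespace and user names are placeholders:
# /etc/pve/storage.cfg (sketch)
pbs: pbs-local
    server pbs.example.com
    datastore store1
    namespace pve-local
    username sync@pbs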
- Glowsome
Hi, thank you for your analysis.
With your pointers I have gone through the complete setup again and found some discrepancies in my setup.
Those are now (or should be) corrected.
Jobs will be running tonight/overnight, so if all goes well it should be resolved.
- Glowsome
Have you tried GFS2 as the filesystem?
I have it in use for shared storage in my setup - not iSCSI, but SAS shared-attached storage via an MSA2040.
Somewhere in the how-tos I documented my findings about this.
But as iSCSI is just a different (shared) way of offering storage to a node, the...
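As a rough sketch of what creating such a filesystem involves (cluster name, lock-table name, journal count and device are placeholders):
# one journal per node that will mount the filesystem
mkfs.gfs2 -p lock_dlm -t pvecluster:shared0 -j 3 /dev/vg_shared/lv_shared
mount -t gfs2 /dev/vg_shared/lv_shared /mnt/pve/shared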
[ Personal opinion ]
It's hard to give truly tailored advice on what hardware you need to run VM X and LXC Y in combination with requirements Z.
For general sizing guidelines (apart from what you are planning to run on Proxmox), please see the Proxmox documentation itself.
Not to sound...
Hello all,
On my PBS I have a sync job running that does its job; however, it also creates an underlying namespace which I cannot seem to get rid of.
The scenario as I originally set it up might have caused this, but now (from my point of view) I can no longer get rid of it.
for your...
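For context, the namespace scoping of a sync job can be set explicitly so it does not pull in child namespaces; a sketch, with the job ID and namespace names as placeholders:
proxmox-backup-manager sync-job update sync-remote --remote-ns pve-remote --ns pve-remote --max-depth 0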
So to come back to how I've set it up now with the suggestions that were made (not honoring all of them):
- Both PVE instances will back up each day (3-day history) in the traditional way.
- Both PVE instances will back up each day (7-day history) via the local PBS on their local backup store.
- PBS remote will be...
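A sketch of what such a daily job with a 7-day history can look like (the storage ID is a placeholder):
vzdump --all --storage pbs-local --mode snapshot --prune-backups keep-daily=7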
Bare metal is not an option IMHO, as that would only introduce a SPOF.
Now to be exact: in my experience I have done storage pass-through on LXCs, but never on a VM, as the underlying storage is there and I want to utilize it as such. (We are talking about a cluster here, tested...
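For illustration, storage pass-through to an LXC is typically just a bind mount; a sketch, with the container ID and paths as placeholders:
pct set 101 -mp0 /mnt/pve/shared,mp=/mnt/shared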
So after having gotten acquainted with PBS, and being wildly excited about it, I have a question as to the "best setup".
Currently I have one PBS box, but two separate PVE installations: one remote, one local.
I have separated the backups with namespaces, so no collision is possible regarding...
Hi there,
Again going top-down through your questions:
If a node crashes, or is poison-pilled/STONITH'ed, the rest function without issues afterwards.
The crashed node gets removed from the lockspace and is thus no longer a part of it.
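One way to watch that membership change is via the DLM tooling (a sketch; run on a surviving node):
dlm_tool ls        # list lockspaces and their members
dlm_tool status    # overall dlm daemon state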
I have tested it by just hard-resetting a node, and...
Hi there,
To go top-down in answering your questions:
No
It's stable as far as I can tell; I have not had any issues with it going down on me, nor locks, nor FS corruption.
(I mean, if I had experienced the above I would have searched for solutions and reported it/updated the tutorial I wrote.)
As you...
Turns out it was the FUSE feature being in use on the LXC.
As soon as I took it off - or downed the affected host - after reading forum posts and docs about the issues with it, backups went fine.
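A sketch of turning the feature off (the container ID is a placeholder):
pct set 101 -features fuse=0
# or drop the features line entirely
pct set 101 --delete features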
For reference as to where I got my answers: this forum post.
In my case we are talking about a...
For your info, I have not yet gone in depth regarding actual guest management.
For now I just need to manage the nodes of my cluster.
- Glowsome
I am experiencing the same on a newly created LXC container.
Running the latest PVE 8:
proxmox-ve: 8.0.2 (running kernel: 6.2.16-15-pve)
pve-manager: 8.0.4 (running version: 8.0.4/d258a813cfa6b390)
pve-kernel-6.2: 8.0.5
proxmox-kernel-helper: 8.0.3
pve-kernel-5.15: 7.4-4
pve-kernel-5.13...