I run lm-sensors in a container. I can read the host sensors fine but cannot set thresholds from within the container (I currently have to do that on the pve host).
What lxc device node permissions would be needed to run `sensors -s` successfully within the container?
The board's sensor...
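For context, this is the sort of thing I imagine might be needed in the container config (/etc/pve/lxc/<CTID>.conf); the i2c device major and the writable sysfs bind are guesses on my part, not a working config:

# guesses only -- allow the i2c character devices (major 89) inside the container
lxc.cgroup2.devices.allow: c 89:* rwm
lxc.mount.entry: /dev/i2c-0 dev/i2c-0 none bind,optional,create=file
# and/or a writable bind of the hwmon tree, since sensors -s writes via sysfs
lxc.mount.entry: /sys/class/hwmon sys/class/hwmon none rbind,rw,optional,create=dir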
I use PMG in a homelab situation for personal domains. I have played with various all-in-one mail solutions (and learnt much from them) as well as built a mailserver stack from scratch. I now know I prefer the least coupled, modular approach. PMG fits this preference quite well. As dedicated...
I can confirm the suggested solution works for me.
I thought of defining the To: header match object as a wildcard <.*@gmail.com>. But then it occurred to me, would that make my PMG installation an open relay for gmail destination addresses?!
I may be wrong, but I believe the Match Field object only operates on message headers, of which 'body' is not one.
For filtering based on the message body, you would need to create some custom spamassassin rules.
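For example, something along these lines in the custom SpamAssassin config (I believe PMG reads /etc/mail/spamassassin/custom.cf, but check the docs; the rule name and phrase are just placeholders):

# custom body rule: bump the score when a phrase appears in the message body
body     MY_MARKETING_PHRASE   /limited time offer/i
score    MY_MARKETING_PHRASE   2.0
describe MY_MARKETING_PHRASE   Body contains a typical marketing phrase
# restart the filter afterwards so the rule takes effect
systemctl restart pmg-smtp-filter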
Thanks. I will try that.
The email in question comes from a low-traffic gmail mailbox, so no blatant spam as such. One recent & quite important mail got scored high though. Incidentally, it was an insurance renewal reminder and the email was full of marketing crap and other spammers' tricks like...
Something I did, after adding a new disk as lvm-thin backed storage, was to disable vm/ct disk storage for the default local storage (/var/lib/vz) and only use it for iso, container templates, snippets etc. This way, you don't inadvertently create disk volumes in unexpected places.
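From the CLI that is roughly the following (the web UI under Datacenter -> Storage -> local -> Content does the same thing):

# keep only ISO images, CT templates and snippets on the local directory storage,
# so no guest disks end up under /var/lib/vz
pvesm set local --content iso,vztmpl,snippets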
Are you confusing the thin provisioned pool `data` with individual volumes created within it? This is why I said I like to think of the thin pool (e.g. the `data` volume in this case) as being like a volume group.
That `data` volume is not a regular LV but a thin pool. For want of a better word, it is a container of other LVs. You cannot mount the entire contents of `data` to a single directory.
If you add an additional disk, you can create a new lvm-thin pool and assign it as storage, all from the PVE UI...
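If you prefer the CLI, it is roughly this (the device name /dev/sdb and the storage id are made up; the UI does the equivalent for you):

# whole new disk as a thin pool, then register it as PVE storage
pvcreate /dev/sdb
vgcreate vgdata /dev/sdb
# leave a little headroom in the VG for the pool's metadata
lvcreate -l 95%FREE --thinpool data vgdata
pvesm add lvmthin thin-data --vgname vgdata --thinpool data --content images,rootdir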
`data` is a particular type of logical volume (lvm-thin), whereas `root` and `swap` are regular logical volumes that have been formatted and mounted.
All are within the same volume group `pve`. I look at it as `data` being, in effect, a volume group inside a logical volume, within which PVE creates the LVs assigned...
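The distinction shows up in `lvs` too; on a stock install the output is along these lines (columns trimmed, and `vm-100-disk-0` is just an example guest volume):

lvs pve
#  LV            VG  Attr       Pool
#  data          pve twi-aotz--        <- the thin pool (leading "t" in the attr column)
#  root          pve -wi-ao----        <- regular LV, mounted as /
#  swap          pve -wi-ao----        <- regular LV, used as swap
#  vm-100-disk-0 pve Vwi-aotz-- data   <- thin volume carved out of the pool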
The root volume (proxmox/debian OS) requires very little space and will be formatted ext4.
See this. It explains how to control the data volume (guest storage), if any, that you want on the system disk.
I am not sure where xfs would be more desirable than ext4. Maybe a further logical volume...
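For reference, the knobs for this are the installer's advanced LVM options on the target-disk screen (these are my recollection of the names, so check the current installer):

hdsize    # total space on the system disk the installer may use
swapsize  # size of the swap LV
maxroot   # upper limit for the root LV
maxvz     # upper limit for the `data` thin pool; 0 should skip creating it entirely
minfree   # space left unallocated in the `pve` VG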
Don't overlook lvm-thin. It looks like you intend to pass real block devices through to a guest (NAS?), so you may not have large guest storage requirements in the immediate term. You can take atomic snapshots with lvm-thin backed storage. Every time I have revisited ZFS, I have never been able to...
@anoo222 Re config and non-default packages. In the absence of good notes, I have had some success doing something like this:
# on the first system
# backup etc
tar cvzf etc.tgz /etc
# list of installed packages that placed files in etc
dpkg -S /etc | tr ', ' '\n' | sort >packages.old
# On new...
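The rest of it is roughly the following (from memory, so treat it as a sketch; the package list may need a little manual clean-up, and the `etc/someapp/*` path is just an example of restoring selectively):

# on the new system: same listing, then diff to find what is missing
dpkg -S /etc | tr ', ' '\n' | sort >packages.new
comm -23 packages.old packages.new >packages.missing
# install the missing packages, then selectively restore config from the tarball
xargs -a packages.missing apt install -y
tar xvzf etc.tgz -C / --wildcards 'etc/someapp/*'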
Consider keeping the existing disk's rootfs & swap. Retain or repurpose the existing vz volume on that drive.
Set up the new disk, using the whole drive as an lvm-thin pool, and add that as dedicated guest storage. Restore your guests to the new storage.
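Restoring onto the new storage is just a matter of pointing the restore at it, e.g. (storage id and backup filenames are placeholders):

# VM restore from a vzdump backup onto the new lvm-thin storage
qmrestore /mnt/backups/vzdump-qemu-100.vma.zst 100 --storage thin-data
# container restore
pct restore 101 /mnt/backups/vzdump-lxc-101.tar.zst --storage thin-data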
If you are not delivering outgoing mail directly but instead using a smarthost, you can configure that directly in the PMG web UI.
If that smarthost requires authentication, then AFAIK, you must add the auth config directly in postfix config /etc/pmg/templates/main.cf.in and a supplementary lookup...
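The additions are along these lines (the lookup file location is your choice; host, user and password are obviously placeholders):

# /etc/pmg/templates/main.cf.in -- append
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt

# /etc/postfix/sasl_passwd
[smtp.example.com]:587  username:password

# build the .db map and re-render the postfix config from the templates
postmap /etc/postfix/sasl_passwd
pmgconfig sync --restart 1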
@fabian many thanks for identifying the cause.
I'm staying with my snippets on the rootfs for the time being. But if I apply the suggested change, it is presumably:
systemctl edit pve-storage.target
#/etc/systemd/system/pve-storage.target.d/override.conf
[Unit]
After=blk-availability.service
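And a quick check that the drop-in took effect:

# confirm the extra ordering dependency is in place
systemctl show pve-storage.target -p After | tr ' ' '\n' | grep blk-availability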
Yes, that makes sense! I'd thought it too large to post.
So I disabled all but two VMs and a container.
Lo and behold, the pve-vz volume is unmounted before the host begins shutting down guests. I've no guest volumes on that store, BTW.