@Symbol that is what was understood from @Klaus Steinberger's first reply, but a later post suggested that having the discard option set on the VirtIO adaptor was alone enough.
Hence the testing. The confusion appears to be in the PE option and the fstab option both using the word...
I just did some tests, and with the VirtIO SCSI discard option on, deleting a (large) file from the guest DOES NOT reduce the usage on the host.
When fstrim is run in the guest, the usage is then reduced in the host.
As such, the recipe in #4 is the approach I'm taking.
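For reference, the test I ran can be reproduced roughly as follows; the thin-pool name `pve/data` and the file path are placeholders, not details from the thread:

```shell
# In the guest: write a large file, delete it, then trim
dd if=/dev/zero of=/root/bigfile bs=1M count=4096   # grow usage
sync
rm /root/bigfile      # host thin-pool usage does NOT shrink yet
fstrim -v /           # now the freed blocks are discarded to the host

# On the host: compare thin-pool usage before and after the trim
lvs -o lv_name,data_percent pve/data
```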
EDIT: Running fstrim...
implies that the discard option on the VirtIO SCSI disk is independent of the OS program run. This one:
shows that fstrim...
Thank you for the informative reply.
So a potential recipe can be:
- Schedule fstrim to run weekly on PE hosts
- Use the VirtIO SCSI driver for VMs when possible, enabling the discard option each time
- Schedule trim (e.g. fstrim on Linux) to run weekly in guest VMs
Schedule fstrim to run weekly in...
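A minimal sketch of the scheduling steps above, assuming systemd and the fstrim.timer unit that util-linux ships; the VM id "100" and disk slot "scsi0" are placeholders:

```shell
# Host and guests alike: enable the stock weekly trim timer
systemctl enable --now fstrim.timer

# Enable discard when attaching a disk to a VM (VirtIO SCSI assumed)
qm set 100 --scsi0 local-lvm:vm-100-disk-0,discard=on

# Without systemd, an equivalent weekly cron entry might be:
# 0 3 * * 0  root  /sbin/fstrim -av
```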
I have read all posts marked with trim but am still a little confused. Currently I have two PE nodes. Both have PE installed to an SSD. One has two further SSDs for VM disk images (using LVM-thin, discard always on); the other has HDDs which are passed through directly to the respective VMs...
@Dorin @drdownload I didn't realise there were replies, so I'm not sure which bits you still need help with. FWIW I noticed a Windows 7 boot screen at some point; I would point out that later operating systems use UEFI and related firmware features, which will definitely make a difference, and is possibly why a romfile had to...
As above really - email is working great, but I just don't understand how.
As far as I can remember I didn't put in any SMTP settings during the Proxmox install... and I see no relevant config in postfix. Googling suggests that the Proxmox ISO install "makes it work out of the box" so I'm...
On Proxmox 5, I appear to already be able to see SMART data via the web GUI. Further, I can send email from the CLI:
echo "testing" | mail -s "test" firstname.lastname@example.org
So I suspect that email smart alerts are already working. My question is if there's a way to test it? Or perhaps get a monthly SMART...
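One way to test the alert path end to end is smartd's test directive; a sketch, assuming smartmontools is installed (the address reuses the test one above):

```shell
# /etc/smartd.conf: mail a test alert for every device at smartd startup
#   DEVICESCAN -m firstname.lastname@example.org -M test
systemctl restart smartmontools   # Debian service name; "smartd" elsewhere

# A crude monthly SMART report could be a cron entry like:
# 0 6 1 * *  root  smartctl -a /dev/sda | mail -s "SMART report" root
```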
Thank you for the tip. I had a look and although it seems nifty, it also seems like overkill - this is for a home lab so doesn't require monitoring per se, just notifications (email will do) if certain usages go above certain thresholds.
My question is mainly if Proxmox has anything built in...
I'm looking for this too - in particular for things like diskspace, CPU, RAM thresholds and the like.
Is this kind of thing built in or do we have to roll our own? Most searches lead to SMART alerts and backup notifications only.
Thank you all for the insight.
I've actually stumbled across a solution here:
when mounting the share (in the host) will present files there with the above uid and gid. These...
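The mount invocation itself is elided above, but the kind of command meant presumably looks like this; server, share path, and credentials file are placeholders:

```shell
# Force ownership of all files on the CIFS mount to foo:users
# (uid= and gid= accept either names or numeric ids)
mount -t cifs //server/share /mnt/share \
    -o uid=1002,gid=users,credentials=/root/.smbcredentials
```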
It was at creation yes - I included it above for context.
I also heeded the advice here, to set up the mappings before user creation:
Well, the idea was to abstract storage away from the containers, so they don't even know they're on a CIFS mount. That way the host would log in once, but configure access via bind mounts.
Seems a little more trouble than it's worth though, so I might go with the container mounts, or maybe NFS via the...
I have mounted a CIFS share on my Proxmox host that presents files as owned by foo:users. Foo's id is 1002.
I want to present this share to an unprivileged container, I'm assuming using a bind mount.
The user in the container has id 1000:1000, and creates files like so.
I have added the...
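For anyone hitting the same problem: the usual shape of the fix is an idmap in the container config plus matching /etc/subuid and /etc/subgid entries. The values below are illustrative only, mapping container uid 1000 onto host uid 1002 (foo) and container gid 1000 onto host gid 100 (users); the vmid and mount path are placeholders:

```
# /etc/pve/lxc/<vmid>.conf
mp0: /mnt/share,mp=/mnt/share
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1002 1
lxc.idmap: g 1000 100 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535

# root must be allowed to map the host ids:
#   root:1002:1   (in /etc/subuid)
#   root:100:1    (in /etc/subgid)
```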
For completion: I've stuck with the default behaviour (both nodes need to be up for anything to work), and use the following command to "force" a quorum in the rare situations I need to start something up:
pvecm expected 1
This seems to be a decent middle ground.