It works if I revert to "Default (Auto)", remove the email address, and then change back to "Notification system".
Should probably be considered a UI bug. Changing to "Notification system" should remove the previous email config from /etc/pve/jobs.cfg.
So it turns out that if you previously had email options set in the UI and then change "Notification mode" from "Default (Auto)" to "Notification system", the old email address and the "mailnotification always" line are left behind in the /etc/pve/jobs.cfg config file.
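For illustration, the leftover entry in /etc/pve/jobs.cfg ends up looking roughly like this after the switch (the job ID and address below are placeholders, not copied from my actual config):

vzdump: backup-1a2b3c4d-e5f6
    schedule 03:00
    storage local
    mailto admin@example.com
    mailnotification always
    notification-mode notification-system
    enabled 1

Deleting the mailto and mailnotification lines by hand (or using the revert-and-reapply workaround above) cleans it up.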
I have set up an SMTP notification target (Google Workspace) that uses the smtp-relay.gmail.com host and no authentication (source IP verification). When I test using the PVE > Notifications > Test button, the test email comes through as "Author <email@address.com>". When I use the same notification...
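For context, the target is defined roughly like this in /etc/pve/notifications.cfg (endpoint name, addresses, port and mode below are placeholders; there is no username line because the relay does source-IP verification, and if I understand it right the author line is what shows up as the display name on the test mail):

smtp: gmail-relay
    server smtp-relay.gmail.com
    port 587
    mode starttls
    from-address pve@example.com
    mailto admin@example.com
    author Proxmox VE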
I changed an existing backup schedule and set Schedule to 03:00.
I selected "Every day 21:00" from the drop-down and changed it to 03:00 so I had the correct format.
The backup is now running every 3 minutes instead of daily at 03:00. What have I done wrong here?
/etc/pve/jobs.cfg...
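For reference, a daily run should boil down to a single "schedule 03:00" line on the vzdump job. The schedule field uses a calendar-event syntax close to systemd's, so systemd-analyze calendar gives a rough preview of when a given expression fires:

root@pve01:~# grep -n 'schedule' /etc/pve/jobs.cfg
root@pve01:~# systemd-analyze calendar "03:00"     # should report the next elapse at 03:00 tomorrow
root@pve01:~# systemd-analyze calendar "*:0/3"     # by contrast, an expression like this fires every 3 minutes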
At this point I'm just talking to myself, but I have ended up laying it out like this:
root@pve01:~# zpool list -v
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
pool0...
So I have now verified that zfs set special_small_blocks=0 rpool/SSD does not send VM disk blocks to the SSDs. By setting special_small_blocks=1M on rpool/SSD I was able to send any data copied to /rpool/SSD to the SSDs.
Unfortunately adding the rpool/SSD ZFS storage to Proxmox in the Datacenter >...
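For the record, the same dataset can also be added as PVE storage from the CLI; a rough sketch (the storage ID "ssd" is just a placeholder):

root@pve01:~# pvesm add zfspool ssd --pool rpool/SSD --content images,rootdir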
How is this working exactly? I created a RAID10 array on spinning SAS drives and added the special device (on enterprise SSDs) with special_small_blocks=0 on a child dataset (pool0/SSD). I created a VM on pool0/SSD but zpool list -v did not show the special device grow by the size of the VM...
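If it helps anyone reading later: VM disks on a ZFS storage are zvols, and a data block only lands on the special vdev when its size is at or below the dataset's special_small_blocks value. With special_small_blocks=0 the special vdev only ever receives metadata, which is why it barely grows when a VM is created. A quick way to check the relevant values (vm-100-disk-0 is just an example zvol name):

root@pve01:~# zfs get volblocksize pool0/SSD/vm-100-disk-0
root@pve01:~# zfs get -r special_small_blocks pool0
root@pve01:~# zpool list -v pool0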
So I have rebuilt the PVE server. I created a 128GB ZFS mirror for rpool on the SSDs during install. Post-install I created the RAID10 array (pool0) on the 6 SAS drives, plus 2 additional partitions on the SSDs, and added those as a special device on the pool0 RAID10 array.
root@pve01:~#...
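For completeness, the layout described above boils down to something like the following; device and partition names are only illustrative, and /dev/disk/by-id paths would be safer in practice:

root@pve01:~# zpool create -o ashift=12 pool0 mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
root@pve01:~# zpool add pool0 special mirror /dev/sdg2 /dev/sdh2
root@pve01:~# zfs set special_small_blocks=4K pool0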
Thanks - this helps a lot. Because I had installed PVE on the RAID10 as per the first response, I will need to rebuild the machine: as you said, I added the whole SSD devices as the special device mirror.
Re. this.
Does this mean I could create a new dataset on my spinning rust RAID10...
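The trick generally suggested for this is to give a child dataset a special_small_blocks value at or above its recordsize, so every block written to that dataset is stored on the special vdev; a rough sketch, reusing the SSD dataset name from earlier in the thread:

root@pve01:~# zfs create -o special_small_blocks=1M pool0/SSD

With the default 128K recordsize, every block of that dataset falls under the 1M threshold and ends up on the SSD mirror, while the other datasets on the pool keep their data on the spinning disks.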
Can anyone give me a hand with this please? I’m at a loss as to what to do next. Before I build my VMs I need to get the disk layout correct.
From what I’ve read the SSDs can be partitioned to allow use as a special device mirror as well as general ZFS data storage, but I’m not sure what...
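For the partitioning itself, something along these lines is the usual approach; the sizes, type codes and device names below are only illustrative:

root@pve01:~# sgdisk -n1:0:+128G -t1:bf01 /dev/sdg    # partition for general ZFS storage / rpool
root@pve01:~# sgdisk -n2:0:0 -t2:bf01 /dev/sdg        # rest of the disk for the special device mirror
root@pve01:~# sgdisk -R /dev/sdh /dev/sdg             # copy the layout to the second SSD
root@pve01:~# sgdisk -G /dev/sdh                      # give the copy new GUIDs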
It is the latter, 3 x 2-disk mirrors. I am stuck at the moment trying to understand how to lay out the disks. I have 6 x 1.2TB SAS drives (spinning rust) and 2 x 800GB SATA SSDs (enterprise drives):
root@pve01:~# zpool list -v
NAME SIZE ALLOC FREE CKPOINT...
So I have rebuilt PVE using the 6 x 1.2TB SAS drives in a RAID10 array.
I have added the ZFS special device to the rpool ZFS pool (that's all I had):
root@pve01:~# zpool add rpool special mirror /dev/sdg /dev/sdh
root@pve01:~# zfs set special_small_blocks=4K rpool/data
Is this correct? Should...
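A quick way to sanity-check that the special mirror was added and the property stuck:

root@pve01:~# zpool status rpool
root@pve01:~# zfs get special_small_blocks rpool/data
root@pve01:~# zpool list -v rpool    # watch the ALLOC column of the special mirror over time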
Thanks - the first link is the one I have been reading; there is still a lot to understand about this. For example:
So if my pool is a RAID10 consisting of 3 x mirrors, can my special volume be a single mirror or does it need to be 3 x mirrors?
There is also no info in the Proxmox doco about what happens if...
Thanks - I've been searching for a few hours now for an easy-to-understand grounding in special devices. I'll go through the Proxmox doco on the topic again.