Based on the comments, this bug should only affect a small number of servers, since it takes a large number of heavy disk workloads running in parallel for it to appear. I can only imagine something like a heavily used database server on ZFS getting hit by it. I do understand that this bug...
I'm trying to get ProtonMail-Bridge working with PMG. The bridge requires username and password authentication, and mail has to be handed off to it on 127.0.0.1 port 1025 for the SMTP portion to work.
I know I can edit Postfix's main.cf, but that config file gets overwritten every time there...
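In case it helps, this is the approach I'm going to try, based on my reading of the pmgconfig docs: PMG regenerates main.cf from its templates, so the change has to go into a template override rather than main.cf itself. Treat this as a sketch and double-check the paths on your version; the bridge credentials are placeholders.

    # copy the shipped template so PMG uses the override from now on
    cp /var/lib/pmg/templates/main.cf.in /etc/pmg/templates/main.cf.in

    # append the relay settings to /etc/pmg/templates/main.cf.in:
    #   relayhost = [127.0.0.1]:1025
    #   smtp_sasl_auth_enable = yes
    #   smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
    #   smtp_sasl_security_options = noanonymous

    # /etc/postfix/sasl_passwd (bridge username/password are placeholders):
    #   [127.0.0.1]:1025    bridge-user:bridge-password
    postmap /etc/postfix/sasl_passwd

    # regenerate main.cf from the templates and restart the services
    pmgconfig sync --restart 1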
Also, you can update the PBS fingerprint at the datacenter level, and it will be pushed to all the PVE nodes in the cluster. I recently ran into this when I started using Let's Encrypt certs, only to find out I don't need to worry about the fingerprint anymore.
I ran into this scenario today on one of our 14 nodes: a degraded ZFS array. I've made a habit of checking them daily until I got a hit today, and there was no e-mail notification of the event, despite getting notifications from the nodes about updates and so on.
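For anyone searching later: the storage entry lives in the cluster-wide /etc/pve/storage.cfg, which is why a change there reaches every node. Something like this should do it from the CLI; the storage name and fingerprint are placeholders, and I'd double-check the delete syntax on your version.

    # update the fingerprint on the cluster-wide PBS storage entry
    pvesm set my-pbs --fingerprint AA:BB:CC:...:FF
    # with a publicly trusted cert (e.g. Let's Encrypt) the fingerprint
    # can be dropped from the entry entirely
    pvesm set my-pbs --delete fingerprint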
Looks like I will have to do some digging on...
I agree. ZFS is very robust for storage systems, offering dedupe, snapshots for backups, and data integrity. Even if you don't have additional drives to properly create a ZFS array, you can still use it on a single drive; you just won't have redundancy.
Gotta love these video card issues. I wonder if plugging a dummy load into the video port would fix it until the driver is fixed? Those are cheap on Amazon.
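Rough example of what I mean, assuming a spare disk at /dev/sdb (placeholder, double-check the device name first since this wipes the disk):

    # single-disk pool: no redundancy, but you still get checksums,
    # compression and snapshots
    zpool create -o ashift=12 tank /dev/sdb
    zfs set compression=lz4 tank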
The dummy load would simulate the monitor being on.
I too would love to see a way to selectively suppress certain notifications, especially the replication failure notices, which aren't really a problem since they usually resolve themselves after a while.
Guess for now I'll set up an e-mail filtering rule to put them in a different folder to be...
You can run your backups every day, and when the prune schedule runs it will prune according to whatever retention you set. In your case, if you only want weekly backups, use the keep-weekly setting and set it to the number of weekly backups you want to keep. It will prune the rest.
I set my prune to run on Saturday nights so...
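If you want to see what the schedule will do before it actually runs, a dry run against one backup group should show it. A sketch only; the group name, repository, and keep count are placeholders:

    # simulate keeping 4 weekly backups for VM 100; drop --dry-run to actually prune
    proxmox-backup-client prune vm/100 --keep-weekly 4 --dry-run \
        --repository user@pbs@pbs.example.lan:datastore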
Pretty cool to see that kind of uptime, but patches and security updates are more important to me than uptime. Plus, rebooting the servers is a good way to clear out the RAM, especially with ZFS.
I am using local ZFS with scheduled replication to simplify my setup. I've tried Ceph in the past and always had issues with VMs running very slowly whenever something major was going on with Ceph. With local ZFS an issue only impacts one node instead of several. Granted, I lose HA, but ZFS...
For my Windows 2019 VMs I've left practically everything at defaults, including KVM64 for the CPU and IDE with default settings for the disk. Keep in mind I am using ZFS, so a lot of the speed enhancements are handled there. You can clone your VM to test out different settings.
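The replication jobs themselves are easy to script if you don't want to click through the GUI. A sketch, with the VM ID, target node, and schedule as placeholders (the storage has to be ZFS-backed on both nodes):

    # replicate VM 100 to node pve2 every 15 minutes
    pvesr create-local-job 100-0 pve2 --schedule '*/15'
    pvesr status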
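For example, something like this gets you a throwaway copy to experiment on; the IDs, name, and storage are placeholders:

    # full clone of VM 100 as VM 999 for testing different disk/CPU settings
    qm clone 100 999 --name win2019-test --full --storage local-zfs
    # then try e.g. a different CPU type on the clone only
    qm set 999 --cpu host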
I've migrated several Windows 2019 VMs over from VMware to Proxmox, and most booted up without issues. Strangely, 3 VMs out of the bunch refused to boot properly. For testing I switched them to OVMF (UEFI) and they booted up without issues. The strange part is they had never been set up to use...
When I was running Proxmox version 6 four years ago with Ceph, it ran fine but the performance wasn't great. I'm sure that with the improvements since then it's a lot better now. I will revisit this later on, as I have a second cluster running 7.4 to test it on. I will upgrade to 8 sometime this year.
I've tried Ceph in the past and it killed the Windows VMs' performance. So I switched them to ZFS, and they perform practically the same as on ESXi with vSAN. They load slowly at first, but once started up, reboots of the Windows VMs are almost instant and the performance is...
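If anyone needs to do the same switch from the CLI, roughly this should do it; the VM ID and storage are placeholders, and I'd try it on a clone first since the EFI vars disk is created fresh:

    # switch VM 100 from SeaBIOS to OVMF and give it an EFI vars disk
    qm set 100 --bios ovmf
    qm set 100 --efidisk0 local-zfs:1,efitype=4m,pre-enrolled-keys=1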