Again, don't conclude anything without facts.
A compromised VM cannot access the host; that's the point of virtualization.
Reinstalling the host isn't required.
Check journalctl / the event log of the VM/VPS/VDS; perhaps there was just a simple shutdown.
Here is...
As said, it's only bots trying credentials.
Don't worry if you have a robust password.
As SSH on port 22 is the most commonly used, providers enable "Fail2ban" to mitigate the attempts, but nowadays bots have many other IPs to continue the job.
As said, only...
What does that mean?
You need to post facts, like real messages, logs or errors.
It's expected that the firewall shows scans and login attempts on exposed hosts; it doesn't mean an attack.
No, I mean a self-hosted VPN.
It's another way, instead of self...
Mixed drives = worst drive wins → ZFS RAIDZ is slowed down to the slowest SSD (the Blue).
Consumer SSDs (Blue) are not made for VMs → low endurance, unstable latency, no power-loss protection.
RAIDZ is already bad for VM I/O → combine that with...
That will not solve the problem, which is your cheap disks; it will just delay it a bit more.
I doubt that would be enough to mitigate these bad disks for ZFS; it's perhaps more effective for the WD Red model, but as many posts have said:
ZFS requires enterprise flash...
As written, it's unsafe for VMs, not only in case of a power cut, but also on an OOM kill or a physical hard reset.
CrystalDiskMark, by default, benchmarks 1 GB of data, which stays in cache, so the data never hits the disk; try with 32 GB and the problem should...
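If you want to double-check this outside CrystalDiskMark, here is a rough Python sketch (the path, the 32 GiB size and the 4 MiB block size are just assumptions to adapt to your setup) that writes more data than the cache can hold and forces it to disk with fsync, so the number you get reflects the drive rather than the cache:

import os, time

path = "testfile.bin"                 # assumption: put this on the datastore you want to test
size_gib = 32                         # large enough to exceed RAM / write cache
block = os.urandom(4 * 1024 * 1024)   # 4 MiB of random data, so compression can't fake the result

start = time.monotonic()
with open(path, "wb") as f:
    for _ in range(size_gib * 256):   # 256 x 4 MiB = 1 GiB
        f.write(block)
    f.flush()
    os.fsync(f.fileno())              # force everything out of the page cache onto the disk
elapsed = time.monotonic() - start

print(f"{size_gib} GiB in {elapsed:.1f}s ≈ {size_gib * 1024 / elapsed:.0f} MiB/s sustained")
os.remove(path)

With only 1 GB the same loop looks impossibly fast right up to the fsync, which is exactly the effect the default CrystalDiskMark run shows.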
As always, it depends; many GB written daily do indeed require DC drives.
But if it's not ZFS, a regular SSD is often enough for PBS, as only new data is written; existing data is never rewritten.
the originally attached VM config:
scsi0: local-lvm2:vm-121-disk-1,size=1T
The VM configuration you attached contains only one virtual disk/image - "scsi0". It is located on storage called "local-lvm2". You subsequently provided your storage...
Yesterday I noticed that running backups without fleecing is actually a VERY BAD idea, since I/O-active VMs stall, experience I/O timeouts, and eventually stop working.
I think the Backup Job assistant can be improved by showing a clear...
Yeah this seems to be an issue in glibc, fixed 11 days ago:
https://sourceware.org/git/?p=glibc.git;a=commit;h=7107bebf19286f42dcb0a97581137a5893c16206
https://debbugs.gnu.org/cgi/bugreport.cgi?bug=79139
Okay, I managed to reproduce the issue by producing a file with enough holes and large-enough filled extents. This seems to be an interaction between ZFS and glibc (and from initial analysis, the buggy behavior seems to be in glibc).
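For anyone who wants to try something similar, a minimal sketch along those lines (the path, extent and hole sizes are illustrative assumptions, not the exact layout used in my test) is to seek past large ranges to create holes and write non-zero data in between, on a ZFS dataset:

import os

path = "/tank/sparse-test.bin"       # assumption: any file on a ZFS dataset
extent = 8 * 1024 * 1024             # 8 MiB of real data per filled extent
hole = 64 * 1024 * 1024              # 64 MiB hole before each extent

with open(path, "wb") as f:
    for i in range(8):
        f.seek(hole, os.SEEK_CUR)         # seek past a hole; ZFS allocates nothing here
        f.write(bytes([i + 1]) * extent)  # then a large-enough filled extent
    f.flush()
    os.fsync(f.fileno())

Comparing 'du -h' with 'ls -lh' confirms the file is sparse; reading it back through the affected glibc code path is presumably where the misbehavior shows up.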
If you intend...