No, it is not. My "offload storage" is on a different server that requires separate credentials. On top of that, it can be configured so that it is not reachable from the primary PBS at all. That is what I currently do...
Hello,
this can be done with qm remote-migrate, see https://pve.proxmox.com/pve-docs/qm.1.html
If the documentation is not enough, here is an example: https://www.thomas-krenn.com/en/wiki/Proxmox_Remote_Migration
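For reference, a typical invocation looks roughly like this. All concrete values (VM ID, host, API token, fingerprint, storage and bridge names) are placeholders, so check qm help remote-migrate on your version for the exact options:

```shell
# Sketch: migrate VM 100 to a remote, non-clustered PVE node.
# Host, token secret, fingerprint, storage and bridge below are placeholders.
qm remote-migrate 100 100 \
  'apitoken=PVEAPIToken=root@pam!mytoken=aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee,host=192.0.2.10,fingerprint=AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99:AA:BB:CC:DD:EE:FF:00:11:22:33:44:55:66:77:88:99' \
  --target-storage local-zfs \
  --target-bridge vmbr0 \
  --online
```

The API token on the target side needs sufficient privileges, and --online is only relevant if the VM is running.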
Is this just wishful thinking? Or is it a procedure you've verified, backed by evidence and proven results, that anyone would find convincing?
I, for one, would not entrust my systems to the company that proposed this.
Forget btrfs and go with a ZFS mirror: it's supported in the installer, and there is actual official documentation on it. Tested and works.
https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_zfs_change_failed_dev...
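In practice, replacing a failed mirror disk boils down to something like the following sketch (pool name and device paths are placeholders; for a boot mirror you additionally have to restore the partition layout and bootloader as described in the linked guide):

```shell
# Placeholder pool name "rpool" and placeholder disk paths.
zpool status rpool                      # identify the failed device
zpool replace rpool \
  /dev/disk/by-id/old-failed-disk \
  /dev/disk/by-id/new-replacement-disk
zpool status rpool                      # watch the resilver progress
```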
You are confusing LXCs (Linux containers) with LXD (software for managing VMs and LXCs). Both LXD (and its fork Incus) and Proxmox VE manage VMs and LXCs. LXCs are more lightweight than VMs, but they only work for Linux applications since they run directly...
What? Pruning is actually quite fast. Garbage collection and verify jobs, however, will always take a long time on HDDs, because PBS splits the stored data into countless small files (chunks), and every (!) chunk has to be read for...
In principle that is not a wrong assumption, since strictly speaking ZFS is not true shared storage (precisely because you always have some data loss there). But if it is sufficient for your own purposes, that does not hurt ;)
No direct help, only a hint: you need to search for "setup Postfix with authentication on Debian" or similar. You will find a zillion articles; I have no specific one to recommend.
Many of my homelab mainboards allow date/time-controlled power-on directly via a BIOS setting. Other boxes I can wake via WOL, from a crontab. (Or via a switchable power outlet, either "classic" or a "smart socket"...
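As a sketch, waking a machine from cron might look like this (the MAC address and the schedule are made up, and the wakeonlan package has to be installed; etherwake works similarly):

```shell
# crontab entry: send a Wake-on-LAN magic packet every weekday at 07:00.
# The MAC address below is a placeholder for the target machine's NIC.
0 7 * * 1-5  /usr/bin/wakeonlan AA:BB:CC:DD:EE:FF
```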
Yeah, that's one of the manifold pitfalls everybody enjoys; glad you made it work again!
There is only one countermeasure: exercise restore from time to time, not just run a backup ;-)
There are situations where stop-mode is recommended, yes...
I moved a lot of data off the pool and used zfs rewrite to rewrite all the files, hoping it would recover the reported capacity, but none of it worked. The only interesting thing was that when I copied files to another pool, the reported size of...
I wrote a little bit about swap monitoring here. If you use disk-based swap anyway, I'd recommend zswap.
Note that swapon is basically the "human readable" version of cat /proc/swaps, and with sysctl vm.swappiness you don't have to use a pipe and...
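To illustrate the equivalence (assuming a Linux box; the exact output columns may differ slightly between util-linux versions):

```shell
# /proc/swaps is the raw kernel view that swapon reads from:
cat /proc/swaps
# swapon --show prints the same information as a friendlier table:
swapon --show
# and sysctl reads the tunable directly, no pipe or manual cat needed:
sysctl vm.swappiness
```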
What I don't see mentioned here is that monitoring swap usage alone is not as good as monitoring the swap-in AND swap-out activity. You want to monitor both, because if you only swap out stuff without reading it back in soon, that is actually...
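One way to watch the actual in/out rates is the pswpin/pswpout counters in /proc/vmstat (vmstat 1 shows the same numbers in its si/so columns). A minimal sketch, assuming a Linux system:

```python
# Sketch: sample the kernel's cumulative swap-in/out page counters
# from /proc/vmstat and report the delta over an interval.
import os
import time


def read_swap_counters(text):
    """Parse the pswpin/pswpout counters out of /proc/vmstat-style text."""
    counters = {}
    for line in text.splitlines():
        key, _, value = line.partition(" ")
        if key in ("pswpin", "pswpout"):
            counters[key] = int(value)
    return counters


def sample_swap_activity(interval=1.0):
    """Return pages swapped in/out during the given interval."""
    with open("/proc/vmstat") as f:
        before = read_swap_counters(f.read())
    time.sleep(interval)
    with open("/proc/vmstat") as f:
        after = read_swap_counters(f.read())
    return {key: after[key] - before[key] for key in before}


if __name__ == "__main__":
    # Guarded so the sketch does nothing on systems without /proc/vmstat.
    if os.path.exists("/proc/vmstat"):
        deltas = sample_swap_activity(interval=0.5)
        print(f"pages swapped in:  {deltas.get('pswpin', 0)}")
        print(f"pages swapped out: {deltas.get('pswpout', 0)}")
```

A steadily high swap-out rate together with a high swap-in rate is the thrashing signal; swap-out alone with near-zero swap-in usually just means cold pages got evicted.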
the relevant questions to ask:
1. Would you want the failover to occur automatically, or with user intervention?
2. What is an acceptable outage period?
3. What is acceptable in terms of minimum performance (specifically, disk performance)?
IF...
Are you sure you know exactly what this does? I am sure I do not ;-)
My recommendation: return to a default setup, then it will react with the default behavior.
One very low-end approach, with these assumptions:
you have two nearly identical PVE servers, not clustered
one is doing its job, the other is turned off
you have a sane PBS (with SSDs only, at least for the latest backup) - as you need to have...
How can you know this without trying? Nobody knows your requirements except you.
It's version 1 of the Datacenter Manager, i.e. their equivalent of vSphere. It already allows migration of VMs between different clusters or...