You need a "special" vdev first. And make sure to mirror it, as losing it means ALL data is lost. Existing metadata can't be moved on its own; you will have to rewrite all the data as well. See:
https://forum.level1techs.com/t/zfs-metadata-special-device-z/159954
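Roughly, adding a mirrored special vdev and then forcing the data to be rewritten could look like this (pool, dataset, and device names are only placeholders):

zpool add tank special mirror /dev/disk/by-id/nvme-SSD1 /dev/disk/by-id/nvme-SSD2
# existing blocks stay where they are; only newly written data uses the special vdev,
# so rewrite the data, e.g. by replicating the dataset:
zfs snapshot -r tank/data@move
zfs send -R tank/data@move | zfs receive tank/data_new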
See the ZFS manual on HW raid controllers for details: https://openzfs.github.io/openzfs-docs/Performance and Tuning/Hardware.html#hardware-raid-controllers
No, those are perfectly ordinary consumer SSDs that are just a bit more expensive because they are marketed for NAS use (keyword: "prosumer").
If you have something decent, the SSD also has power-loss protection (PLP) built in, because without it sync writes can't be cached in DRAM. And if you have one with a...
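If you want to see the difference PLP makes, a quick sync-write test with fio shows it nicely (the path is just an example); consumer SSDs without PLP often drop to a few hundred IOPS here:

fio --name=syncwrite --filename=/tank/fio-test --size=1G --bs=4k --rw=write --ioengine=sync --fsync=1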
Yes, if you want something that simply works, offers a GUI for all tasks, and will prevent you from doing something stupid, then it might be better to choose an appliance (like TrueNAS/UnRaid) rather than a Linux server (like PVE), which has a much steeper learning curve.
Then you probably want...
If you don't mind that they will then be 100x or more slower for certain operations and that you may have to replace them several times as often, then yes. ;)
If you don't want this to turn into a naive back-of-the-envelope calculation, then when buying you should look at the price per TB of TBW (rated write endurance) rather than the price per TB of capacity...
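As a made-up example: a 1 TB consumer SSD for €100 rated for 600 TB TBW comes out to 100 / 600 ≈ €0.17 per TB written, while a 1 TB enterprise SSD for €180 rated for 2,000 TB TBW comes out to 180 / 2000 = €0.09 per TB written, so the "expensive" drive is actually the cheaper one per terabyte written.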
Yes, it's best to mirror that as well. And no RAID replaces proper backups, so you should still back up your important files, such as the /etc/pve folder, either manually or with third-party scripts.
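A minimal sketch of such a manual backup as an /etc/cron.d entry (the target path /mnt/backup and the schedule are just examples):

0 3 * * * root tar czf /mnt/backup/pve-etc-$(date +\%F).tar.gz /etc/pve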
You can do that too. Just preferably no ZFS on top of ZFS.
ZFS/PVE requires you to fix things via the CLI, where a single typo can wipe all data on that pool. I would recommend using something else like TrueNAS if you don't want to spend some hours or days learning how to administrate it properly.
There are some old Supermicro MiniPCs that have ECC, but you probably...
Via the CLI, or over SSH with a tool like WinSCP as a GUI. PVE has no GUI at all for browsing filesystems. It's pretty much like navigating your folders in DOS or Windows with CMD/PowerShell ("cd .." etc.) ;)
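For example (paths and filenames are just examples):

cd /etc/pve/lxc      # change into a folder
ls -l                # list what's in it
cp 101.conf /root/   # copy a file somewhere else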
There too, a filesystem gets mounted at a mountpoint in exactly the same way...
No. One is a mobile (laptop) CPU, the other a desktop CPU.
For ZFS, datacenter/enterprise-grade SSDs are the best choice. With M.2 and 4 TB you don't have much of a selection (only the Samsung PM983, PM9A3, or Micron 7450 PRO), and they are all in the 22110 form factor and won't fit in every MiniPC...
The built-in backup function (VZDump): https://pve.proxmox.com/pve-docs/chapter-vzdump.html
Add an empty disk as a directory storage (PVE will wipe it!) or an NFS/SMB share, set "VZDump backups" as a content type for that storage, and then that storage should have a "Backup" tab that allows you to...
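Via the CLI this would roughly be (storage name, path, and VMID are just examples):

pvesm add dir backup1 --path /mnt/backup --content backup
vzdump 100 --storage backup1 --mode snapshot --compress zstd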
Do you have a backup of the /etc/pve/lxc folder of the PVE system disks or an old PBS/VZDump backup of these LXCs? Otherwise you only have the filesystems of the LXCs on that disk, but not the configs that define them.
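If you still have those configs somewhere, putting one back is roughly this (VMID and paths are examples; the storages referenced inside the config must still exist under the same names):

cp /path/to/backup/etc/pve/lxc/101.conf /etc/pve/lxc/101.conf
pct start 101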
That depends on how much your data changes. With only minimal changes it wouldn't hurt, for example, to run that GC only once per day or week, even if that means 18 or 162 hourly backups are waiting for removal.
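Assuming PBS and a datastore called store1, relaxing the GC schedule would look something like this (calendar events such as "daily" or "weekly" are accepted):

proxmox-backup-manager datastore update store1 --gc-schedule daily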
You could use it as another datastore and make use of a local sync job to copy data between datastores. And then use the maintenance mode when the drive isn't in use. But no, external disks or drive rotation isn't implemented yet. The common way to have offsite backups would be to get another...
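Before unplugging such a disk you would put its datastore into maintenance mode, something like this (the datastore name is just an example):

proxmox-backup-manager datastore update external1 --maintenance-mode offline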
It's even better to run two PVE nodes and have one Pi-hole running on each of them. That way not only the storage but the whole server is redundant. ;) See for example here: https://forum.proxmox.com/threads/pi-hole-lxc-with-gravity-sync.109881/#post-645646
Add an SBC as a QDevice for the cluster...
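Rough outline, following the PVE docs on external vote support (the IP is a placeholder):

# on the SBC
apt install corosync-qnetd
# on every PVE node
apt install corosync-qdevice
# then from one PVE node
pvecm qdevice setup 192.168.1.50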
That "Block Size: 16k" is the volblocksize (meanwhile changed defaults from 8K to 16K) and only effects VMs. LXCs will use the recordsize which has to be set via CLI.
Yes, that's normal. HDDs are terribly slow, on the order of 1/500th to 1/25,000th of the IOPS a good NVMe SSD can handle. Once you hit HDDs with random reads/writes (which is what you'll usually do when using them as VM storage), the IO delay will go up and your system performance will go down.
And for good performance with lots of small files and many users, plus reasonable resilvering times with such big and slow drives, you probably want a striped mirror for those HDDs.
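For four disks that would be something like this (device names are examples):

zpool create tank mirror /dev/disk/by-id/ata-HDD1 /dev/disk/by-id/ata-HDD2 mirror /dev/disk/by-id/ata-HDD3 /dev/disk/by-id/ata-HDD4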