That depends on the data access pattern ;-)
The system drive "C:" will be somewhere else? I would not expect 15 Windows Servers to run well entirely on rotating rust nowadays. Disclaimer: I am really happy to have ZERO experience with...
Untick the 'Thin provision' setting of your storage (in the PVE web GUI under Datacenter) and newly created virtual disks will not be thin. If you need more specific help, you'll need to share more details about your storage and VM.
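For reference: on a ZFS storage that checkbox maps to the "sparse" flag in /etc/pve/storage.cfg; a minimal sketch, assuming a storage named "local-zfs" on a dataset "rpool/data" (names are just examples):

    # /etc/pve/storage.cfg -- ZFS storage without thin provisioning
    zfspool: local-zfs
            pool rpool/data
            content images,rootdir
            # no "sparse 1" line -> new zvols get a full (thick) reservation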
As far as I recall, that is the new compile-time default.
Just to be sure: you know you can add that /etc/modprobe.d/zfs.conf yourself, right? Just don't forget to run "update-initramfs -u" afterwards.
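A minimal sketch, assuming you want to cap the ARC at 8 GiB (adjust the value for your RAM):

    # /etc/modprobe.d/zfs.conf -- limit the ZFS ARC to 8 GiB (8 * 1024^3 bytes)
    options zfs zfs_arc_max=8589934592

    # rebuild the initramfs so the new limit is applied early at boot
    update-initramfs -u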
Go for mirrored drives. A single RaidZ2 gives you the IOPS of a SINGLE drive; two five-drive RaidZ2 vdevs --> ... of two drives.
Five mirrors give you... the IOPS of five of them. Of course this is still slow!
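For illustration, a striped-mirror pool would be created roughly like this (pool and device names are hypothetical):

    # five two-way mirror vdevs, striped -> roughly the IOPS of five single disks
    zpool create tank \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd \
        mirror /dev/sde /dev/sdf \
        mirror /dev/sdg /dev/sdh \
        mirror /dev/sdi /dev/sdj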
Whatever you put into a ZFS pool the...
Sure.
I am not sure if I would recommend the following, but technically this works too: during installation, install on all three devices, configured as RaidZ.
- all three disks are bootable!
- you max out overall capacity
- you get all the goodies...
I have looked into this further. It is probably my own slip-up.
I had indeed installed xxd afterwards to analyze WOL packets.
After reinstalling the host, xxd obviously does not fall from the sky by itself. A copied-back...
It seems I was misunderstood here.
"You can install PVE + Ceph and just not run any VMs." was meant to be a reply to "Option 3: Add 3 nodes of the smallest NVMe/ECC devices".
Some nodes (1 or 2 or even 3) would extend the current cluster, give...
For this you don't even need shared storage; you can migrate VMs even if the storage isn't shared at all (e.g. LVM-Thin). With ZFS storage replication you would reduce the migration time further:
https://pve.proxmox.com/wiki/Storage_Replication...
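A rough sketch of both, assuming VM 100 and a target node called "pve2" (both hypothetical):

    # migration with local disks, no shared storage required
    qm migrate 100 pve2 --online --with-local-disks

    # ZFS storage replication: copy VM 100 to pve2 every 15 minutes,
    # so a later migration only needs to transfer the remaining delta
    pvesr create-local-job 100-0 pve2 --schedule "*/15"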
Does the file exist? And can it be read completely inside the container? (cat /lib/x86_64-linux-gnu/security/pam_mkhomedir.so > /dev/null)
Otherwise: is the disk broken? Is there ZFS with redundancy?
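Something along these lines would show pool and disk health (pool/device names are just examples):

    # only prints pools that have problems
    zpool status -x
    # detailed status including data errors
    zpool status -v rpool
    # SMART data of the underlying disk
    smartctl -a /dev/sda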
Disclaimer: I use (almost) no containers.
Check the status of the interfaces: corosync-cfgtool -s
If you're unsure which IP belongs to Link0 or Link1 - cat /etc/corosync/corosync.conf
You could also do a tcpdump on each physical adapter and check for corosync traffic on UDP 5405...
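Roughly like this, interface names are just examples:

    # watch each physical link for corosync traffic
    tcpdump -ni eno1 udp port 5405
    tcpdump -ni eno2 udp port 5405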
According to the docs any filesystem should work as long as it supports the needed POSIX attributes and is supported by the Linux kernel. So BTRFS or ZFS would in fact work, but then you would need to do a manual zfs import/export before and...
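Roughly like this, assuming a pool called "backup" on the removable disk (hypothetical name):

    # make the pool available
    zpool import backup
    # ... run the backup ...
    # detach it cleanly before unplugging the disk
    zpool export backup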
Naja, was "dick" ist, hängt auch vom Beobachter ab.
Man kann Uptime Kuma durchaus als beides bezeichnen; ich lasse das extern laufen, um Verfügbarkeit von außen testen zu können - also so, also ob ein Dritter auf meine Dienste zugreift.
Intern...
Kann sein, probier es aus ;-)
Ich verwende meist REISUB --> https://en.wikipedia.org/wiki/Magic_SysRq_key#Uses
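If no keyboard is attached, the same sequence can be sent through /proc (a sketch; the last step reboots the box, so only use it when the system is already stuck):

    # enable all SysRq functions for this boot
    echo 1 > /proc/sys/kernel/sysrq

    # R-E-I-S-U-B: terminate, kill, sync, remount read-only, reboot
    for key in r e i s u b; do
        echo "$key" > /proc/sysrq-trigger
        sleep 2
    done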
Unfortunately, I can't really help beyond that, sorry.
Warnings are just warnings.
I can't make sense of the "Connection refused", sorry. I would reboot. But if your machine explodes afterwards, I don't want to be the one to blame ;-)
Yes, that was my impression too, after a year of testing (in a homelab) --> https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/
Disclaimer: not using Ceph currently...
You can install PVE + Ceph and just not...
Got it, thanks. But it looks like I need to dig much deeper into it. There is a lot I don't understand/know at this point, and it's important enough to warrant some time to learn it.