You have good hardware for a lab. :-)
For a production cluster, though, three nodes are too few in my view. As soon as a problem occurs anywhere, the whole thing is "degraded" - and stays that way permanently, at least if you...
For the moment I configured it like this:
HW RAID controller set to JBOD to pass through the disks
ZFS RAID 1 on the two disks via Proxmox
Storage on this ZFS RAID
VMs use this storage
Hope this makes sense; at least this way I have snapshots...
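
For reference, a minimal CLI sketch of roughly that setup - the disk IDs and the storage name are hypothetical placeholders, and the Proxmox GUI (Disks -> ZFS) achieves the same:

# Mirror the two passed-through disks (use your real /dev/disk/by-id paths):
zpool create -o ashift=12 tank mirror \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Register the pool as a Proxmox storage for VM disks and container volumes:
pvesm add zfspool tank-vm --pool tank --content images,rootdir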
I don't use PMG...
...but Debian forms the base. I use: https://manpages.debian.org/trixie/systemd-journal-remote/systemd-journal-upload.8.en.html
(rsyslog is unfortunately deprecated and no longer fits well with systemd.)
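
For anyone wanting to try that, a minimal sketch - the log host name is a placeholder, and it assumes a central machine running systemd-journal-remote on its default port 19532:

# /etc/systemd/journal-upload.conf on the sending machine:
[Upload]
URL=http://logserver.example:19532

# Then enable the uploader:
systemctl enable --now systemd-journal-upload.service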
Hi,
Windows has: pagefile, hibernation, updates, disk defragmentation, Copilot and antivirus activity (if you have it), etc.
Add the software plus some DBs (if used) on top of that, and you have a solid amount of changed data inside the VM.
Check all...
Well, your snippet only lists the Debian sources. The Proxmox-specific entries are missing.
Look here: https://pve.proxmox.com/pve-docs/pve-admin-guide.html#sysadmin_no_subscription_repo...
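
For illustration, the classic one-line form of that repository, assuming PVE 8 on Bookworm (PVE 9 on Trixie uses a deb822 .sources file instead; see the linked guide for the exact current format):

# /etc/apt/sources.list.d/pve-no-subscription.list
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription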
We're pleased to announce the release of Proxmox Backup Server 4.1.
This version is based on Debian 13.2 (“Trixie”), uses Linux kernel 6.17.2-1 as the new stable default, and comes with ZFS 2.3.4 for reliable, enterprise-grade storage and...
Based on my earlier testing in several environments, I found that using ZFS is actually faster in most cases, because the hardware RAID controller only exposes x2 lanes per drive (it supports 8 drives, and 8 × x2 = 16 lanes, a full PCI-E slot). I also...
ZFS works hard to ensure the integrity of the data it handles. This usually requires several actual writes per single write command (possible "write amplification", plus metadata handling, depending on the pool architecture).
A stupid RAID...
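
A quick way to observe this effect, assuming a pool named tank (adjust to your pool name): compare the write volume the guest reports with what actually hits the vdevs:

# Per-vdev I/O statistics, refreshed every 5 seconds:
zpool iostat -v tank 5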
What I would do beforehand is to set up a virtual cluster with a similar topology but much smaller devices. Then one can simulate/train such a delicate operation.
The bad news: creating such a thing may be a complex task in itself. But either...
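
As a rough sketch of one such test node - assuming nested virtualization is enabled on the physical host, and with hypothetical VM ID, name and ISO filename:

# Create one of e.g. three small virtual PVE nodes:
qm create 9001 --name pve-test1 --memory 4096 --cores 2 \
  --cpu host --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:32 \
  --cdrom local:iso/proxmox-ve_8.2-1.iso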
A node running HA-relevant resources fences itself if quorum is lost. The reboot is "hard", triggered locally. Not by an ssh command from another node ;-)
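
Before doing anything disruptive on such a cluster, it is worth checking the quorum state first, e.g. with:

# Shows membership, votes and whether the cluster is quorate:
pvecm status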
Yes. That's what it is meant to do :-)
Correct. I do not use that method often, so it is not relevant to me :-)
You can set:
root@pve:~# grep ALWAYS /etc/molly-guard/rc
# ALWAYS_QUERY_HOSTNAME
ALWAYS_QUERY_HOSTNAME=true
Then it will ask for...
Hello,
You can do it, but I recommend not staying in this configuration for long. Limit modifications in your cluster until you have completed your migration.
Avoid migrating from a PVE 9 node to PVE 8 nodes, and always use the web...
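
If you do migrate from the CLI, the same direction rule applies; a sketch with hypothetical names - VM 100 moving to a node that is already on PVE 9:

# Live-migrate VM 100 to node pve9-node1:
qm migrate 100 pve9-node1 --online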
Yes, Ceph is great!
Just be aware that there may be pitfalls you should know before starting that journey: https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/
Adding to onslow: qcow2 is a format for VM disk images on file-based storages like directory, NFS, CIFS, etc. Since this format is restricted to VM images, you can get snapshots of VMs stored on NFS with it, but not of LXCs. You could still...
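
To make that concrete, a sketch with hypothetical server, export and IDs:

# Add an NFS storage for VM disk images:
pvesm add nfs nfs-images --server 192.168.1.50 --export /srv/vmstore --content images

# Allocate a 32 GiB qcow2 disk for VM 100 on it (snapshot-capable):
qm set 100 --scsi0 nfs-images:32,format=qcow2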
I FIXED IT!!! I had to stop the new node from trying to cluster with systemctl stop pve-cluster, then run /usr/bin/pmxcfs -l. After this I was able to create the nodes directory and pscp the files from my Windows machine to the new server. After that...
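
For anyone hitting the same thing, the sequence condensed - the last two steps (leaving local mode again) are the usual follow-up, not from the post above:

systemctl stop pve-cluster           # stop the clustered config filesystem
/usr/bin/pmxcfs -l                   # remount /etc/pve in local mode
mkdir -p /etc/pve/nodes/<nodename>   # recreate the missing node directory
# ...copy the files in (e.g. pscp from a Windows machine)...
killall pmxcfs                       # leave local mode
systemctl start pve-cluster          # start the normal cluster service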
It was a cluster, right? Did you do that on all nodes? If not: start one of the other nodes and use a local "pmxcfs -l" mount to access /etc/pve/nodes/abc/xyz. Remember: cluster configuration is shared between all nodes --> it is/was available on all...