That's a Microsoft "issue"; there's nothing KVM can do about how Windows discovers hardware and installs specific drivers.
It works "out of the box" with any recent Linux distribution.
Yes, the "witness" (only used for quorum) can be a mini PC installed with PM and added to the cluster, or a VM. But for obvious reasons, the VM has to be located outside of the cluster it's supposed to provide quorum to.
And if you need your 4 nodes for load/performance, you can still add a 5th "light" node in order to have an odd number for the quorum.
It can be a simple Raspberry Pi https://pve.proxmox.com/wiki/Raspberry_Pi_as_third_node or a small VM running Proxmox outside of your 4-node cluster (of...
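For what it's worth, joining such a witness node is done the same way as any other node; a minimal sketch, assuming an existing cluster node is reachable at 192.168.1.10 (the address is just an example):

# run on the new witness node, after installing Proxmox VE on it
pvecm add 192.168.1.10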
For your first question, yes, my lab@home is based on a 3-node cluster with one dedicated SSD on each node used as an OSD.
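In case it helps, turning that dedicated SSD into an OSD is a one-liner; a sketch, assuming the SSD is /dev/sdb (device name is an example, and the exact subcommand depends on your PVE version):

# PVE 5.x syntax; newer releases use "pveceph osd create" instead
pveceph createosd /dev/sdb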
And if you can boot from USB in legacy mode (not sure PM uses UEFI if not installed on top of Debian) and can install it with your thumb drive as the target, it should work.
No, you need to update the file mentioned before switching the SSD, so that it matches the MAC addresses of the new NICs. If not, the ethX names will be incremented and won't match their configuration in /etc/network/interfaces.
If the NICs are different (you're not moving the NICs over), don't forget to adapt /etc/udev/rules.d/70-persistent-net.rules so that the ethX names still match the ones your bridges are based on.
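A sketch of what such a rule looks like, with a placeholder MAC address; the idea is to pin the new NIC's MAC to the ethX name that /etc/network/interfaces and your bridge expect:

# /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="aa:bb:cc:dd:ee:ff", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"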
Hi Thomas,
I asked the same thing a few months ago, but it's not implemented yet (hope it's still in the pipeline ...)
https://forum.proxmox.com/threads/add-snapshot-0-option-to-hard-disk.43780/#post-209795
Cheers
I added a corosync ring on a second network on live cluster using this document:
https://pve.proxmox.com/wiki/Separate_Cluster_Network#On_Running_Cluster
Just don't forget to add "rrp_mode: passive" and to increase "config_version: ##" to avoid most issues in the procedure.
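As an illustration only (addresses and the version number are placeholders), the totem section of /etc/pve/corosync.conf ends up looking roughly like this:

totem {
  version: 2
  config_version: 4          # bumped from the previous value
  rrp_mode: passive
  interface {
    ringnumber: 0
    bindnetaddr: 192.168.1.0
  }
  interface {
    ringnumber: 1
    bindnetaddr: 10.10.10.0
  }
}

Plus a ring1_addr entry for each node in the nodelist section, as described in the wiki page.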
Once done, just...
How do you want the host to know how the guest is using its memory? Cached or not, there's data in those pages, so for the host your VM uses 3.3 GB, useful or not ..
For HA storage with two NAS, you can configure it at the guest level.
For instance, on Linux, just provide one LUN from each NAS and configure software mirroring, using mdadm for instance.
I guess something similar is available on Windows too...
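A minimal sketch of that mdadm mirroring inside the guest, assuming the two LUNs show up as /dev/sdb and /dev/sdc (device names are examples):

# create a RAID1 mirror across the two LUNs, one from each NAS
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.ext4 /dev/md0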
In any case, this is not "CPU high availability". If a CPU, core or thread fails, it will end in a kernel panic. At best the server will restart with a blacklisted CPU.
Just take care when you clone a disk using Clonezilla or dd: if LVM is used (which is the case with PM), you'll end up with a duplicate LVM volume group that can be a source of issues ...
I'd rather disconnect/mask the "old" RAID before booting on the cloned one.
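If you do end up with both disks visible at once, one possible way out (a sketch, assuming the clone's PV sits on /dev/sdb3; device and name are examples):

# rename the duplicate volume group on the clone so both can coexist
vgimportclone --basevgname pve-clone /dev/sdb3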
Same advice for OS (not the case here...
Thanks for the detailed answer Thomas.
I don't have to change it often, but I had created my pool as 3/2 and wanted to change it to 3/1 (infra@home) without having to create a new 3/1 pool and migrate all the disk images. Because of the "lack" of an Edit button, I did it smoothly using the CLI (tested on...
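For reference, the CLI route was roughly the following (the pool name is just an example):

ceph osd pool set mypool size 3
ceph osd pool set mypool min_size 1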
Hello,
Is there a reason updating the size or min_size value of an already created pool is not possible through the GUI?
The GUI instantly reflects changes if "ceph osd pool set" commands are run from a Ceph node's CLI, and nothing "critical" is reported in the log.
So why is there no "Edit"...
If you want your VMs to automatically restart on another node when the one hosting them fails, this is HA.
Now if you manually shut down a node hosting VMs, those VMs won't be restarted on another node, as a "clean shutdown" is not a failure.
Because you need to set your VMs as "HA" resources. But to do so you need at least 3 nodes in your cluster, for obvious quorum reasons.
So you first need to add a 3rd node in order to be able to take advantage of high availability.
Note that this 3rd node does not need to be powerful or even...
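Once the 3rd node is in and the cluster has quorum, marking a VM as an HA resource can also be done from the shell; a sketch, using VM ID 100 as an example:

ha-manager add vm:100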
To select a backup destination, you first need to configure a storage (from Datacenter/Storage) and then assign "VZDump backup file" as a valid content type for this specific storage.
You can configure an NFS or CIFS storage coming from an exported filesystem on your computer, for instance. Or if you have...
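The same can be done from the CLI; a hedged sketch, assuming an NFS export at 192.168.1.10:/backup (server, export path and storage name are placeholders):

pvesm add nfs backup-nfs --server 192.168.1.10 --export /backup --content backup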
Got it with purge ...
root@pve1:~# ceph osd purge 1 --yes-i-really-mean-it
purged osd.1
root@pve1:~# ceph osd tree
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 1.36409 root default
-2 0.45470 host pve1
2 hdd...