You are asking a lot from a relatively modest host. It's doable, but you'll need to temper your "performance" expectations. On that subject:
That will multiply the "performance expectations issue" substantially. In addition, doing a passthrough...
To whom it may concern,
I have finally tracked down the source of an issue I have been having with some Windows 10 22H2 VMs that I have been trying to deploy recently. As the subject states, when I attempt to enable the Virtual Machine...
Yes, striped N-way mirrors typically perform better for VMs than RAID-Z1/2/3 because they deliver more IOPS. However, if you already have redundancy by running multiple nodes, redundancy inside each node may matter less. Proxmox itself...
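As a minimal sketch, a pool of striped mirrors (RAID 10-style) can be created like this. The pool name and device paths are placeholders; in practice you would use stable `/dev/disk/by-id` paths instead of `/dev/sdX`:

```shell
# Sketch: create a pool of two striped two-way mirrors (names/devices are examples).
# Each mirror vdev contributes IOPS; usable capacity is half of raw capacity.
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd

# The pool can later be grown by striping in another mirror vdev:
zpool add tank mirror /dev/sde /dev/sdf
```

A RAID-Z vdev of the same disks would give more usable space but roughly the random-IO performance of a single disk per vdev, which is why mirrors are usually preferred for VM workloads.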
You run backups for your clients/users. They are free to do as they please on the platform you supply them, and you expect your platform to support that as well. Valid case.
What is your goal in posting on the forum, other than venting and...
journalctl -f to the rescue
In our case, this helped me find out that we had a VM in the inventory of one of our ESXi hosts that did not actually exist on any of our datastores. On the Proxmox side, journalctl showed it repeatedly failing to find the folder for a VM...
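As a sketch of the kind of invocations meant here (the grep pattern is an assumption about what the error text might contain, not a quote from our logs):

```shell
# Follow the journal live while re-running the import, so errors appear as they happen:
journalctl -f

# Or search recent messages for failed storage/folder lookups (pattern is an example):
journalctl --since "1 hour ago" | grep -i "no such file"
```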
QDevice works for the cluster. You could just have 3 Ceph monitors, which is all Proxmox recommends: https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster#pve_ceph_monitors
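For the cluster-quorum side, the QDevice setup is roughly as follows (a sketch based on the standard Proxmox tooling; the IP address is a placeholder):

```shell
# On the external witness host (NOT a cluster member):
apt install corosync-qnetd

# On every cluster node:
apt install corosync-qdevice

# From any one cluster node, register the QDevice (IP is a placeholder):
pvecm qdevice setup 192.0.2.10

# Confirm the extra vote is present:
pvecm status
```

Note that the QDevice only adds a corosync vote; Ceph monitor quorum is separate, which is why the monitor placement linked above still matters.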
Again: only other community members are participating in this thread. If you don't want to purchase additional subscriptions for the migration (which is understandable) but also don't want to switch repos to non-enterprise (for whatever reason), you need...
I am thinking that might be the way to go, but how would I configure that as a Ceph witness node so that my storage would still avoid split-brain when one data center goes down? Or is it simply a "fifth" vote that is automatically...
Hello Proxmox Community,
I am an Infrastructure guy who deals with global-scale virtualization daily. Managing multiple clusters across different regions often leads to "visibility fatigue" and manual inventory headaches.
To solve this, I’ve...
What I can't quite understand: doesn't your company have monitoring software like Zabbix, Prometheus, or some Nagios variant (Checkmk, Icinga, etc.) in place? They can monitor the same things (and more) and...
So just like my other guide, this is more for my own records so I can come back and refer to it.
I know there are scripts for these things, but I prefer doing them manually so I can learn along the way.
Maybe it can help someone else in the...
If your ISP follows the RIPE recommendations, you get a static /48 or /56 prefix.
If it doesn't, your ISP is garbage.
For forums like Proxmox's, IMHO: name and shame!
Hello, I am trying to mask out one node in my cluster from migrations of VMs from crashed hosts. I have a stretched Ceph cluster with two data centers of two nodes each. I have a fifth node that acts as a quorum member only and does not have any...
You can make this work on the CLI, see man zpool-create:
The use of differently-sized devices within a single raidz or mirror group is also flagged as an error unless -f is specified.
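A throwaway demonstration of that behavior using file-backed vdevs (a sketch; the paths and sizes are examples, and it requires ZFS to be installed):

```shell
# Create two sparse files of different sizes to act as vdevs:
truncate -s 256M /tmp/vdev-small.img
truncate -s 512M /tmp/vdev-large.img

# Without -f, zpool create flags the mismatched mirror as an error:
zpool create demo mirror /tmp/vdev-small.img /tmp/vdev-large.img

# With -f it proceeds; the mirror's usable size is that of the smaller device:
zpool create -f demo mirror /tmp/vdev-small.img /tmp/vdev-large.img
zpool destroy demo
```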
Generally it is recommended to go with mirrors and not RAID-Z1 (if you can swallow the loss of usable capacity and the higher total disk count, since mirrors are basically RAID 1).
This is due both to the complexity RAID-Z brings for users who are new to or less experienced with ZFS...