The documentation mentions “to assign,” but in reality there is no way to explicitly assign a certain number of cores to certain processes, unless you do it manually by changing settings in systemd or in other configuration files.
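For context, a minimal sketch of what "manually" would mean here: pinning with `taskset` (util-linux) for a one-off, or a `CPUAffinity=` drop-in for a systemd-managed service. The PID, unit name, and core numbers below are made up for illustration.

```shell
# One-off: run a command pinned to core 0 (core 0 always exists).
taskset -c 0 sleep 0.1

# Pin an already-running process by PID (12345 is hypothetical):
#   taskset -cp 0-3 12345

# Persistent: a systemd drop-in for a hypothetical ceph-osd instance,
# e.g. /etc/systemd/system/ceph-osd@.service.d/cpuaffinity.conf:
#   [Service]
#   CPUAffinity=0-3
# then: systemctl daemon-reload && systemctl restart ceph-osd@0
```

This is exactly the kind of per-host, per-service bookkeeping the thread is debating, which is why it does not exist as a GUI feature.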
Maybe the...
The question/issue arises because https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster#_recommendations_for_a_healthy_ceph_cluster specifically says "As a simple rule of thumb, you should assign at least one CPU core (or thread) to...
I guess I should have put <sarcasm> tags on that. No, I don't think it makes any sense. It is a lot of effort for basically no return. It is a silly idea in the vast majority of cases. I'm not saying no one should ever do this, but it is...
Does that actually work with every storage backend? I seem to remember that qcow snapshots (e.g. on NFS shares) cannot be loop-mounted.
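For what it's worth, plain loop mounting indeed does not work for qcow2, since it is not a raw image; the usual workaround is to expose the image as a block device with `qemu-nbd` (from qemu-utils). The image path below is illustrative; this needs root and a loaded nbd module.

```shell
# Load the network block device module with partition support.
modprobe nbd max_part=8

# Attach a qcow2 image (path is an example) to /dev/nbd0.
qemu-nbd --connect /dev/nbd0 /var/lib/vz/images/100/vm-100-disk-0.qcow2

# Mount the first partition, inspect, then clean up.
mount /dev/nbd0p1 /mnt
# ... look around ...
umount /mnt
qemu-nbd --disconnect /dev/nbd0
```

Whether this helps with a *snapshot* inside the qcow2 depends on the backend, which is the open question in this post.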
If you configured high availability for your VMs you can save some steps. First: if you enable maintenance mode, the HA-managed VMs will migrate to other nodes, provided you configured HA accordingly. So with that you don't need to do a manual migration...
I'm getting back to this issue because I had a bit of time to investigate it.
It turned out that the problem was that my Windows Server CA was still configured to sign CSRs with the SHA-1 algorithm, which seems not to be supported by Proxmox (I am on v9)...
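If anyone wants to check the same thing on their own certificates: `openssl x509 -text` shows the signature algorithm. The snippet below generates a throwaway self-signed cert purely for demonstration; in practice you would inspect the cert your CA returned.

```shell
# Generate a disposable self-signed certificate (demo only).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
    -keyout /tmp/demo.key -out /tmp/demo.crt -days 1 2>/dev/null

# Show how it was signed. Recent OpenSSL defaults to
# sha256WithRSAEncryption; a SHA-1 CA would show
# sha1WithRSAEncryption here instead.
openssl x509 -in /tmp/demo.crt -noout -text | grep -m1 "Signature Algorithm"
```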
Yes, the Linux kernel scheduler is total garbage and you need to manually tell it what to do. For "performance". Of the hardware, I guess, since this scheme will hinder the performance of the admins who have to maintain it.
Okay, so if I understand correctly, you believe that assigning CPUs to various processes makes sense.
It would be interesting to understand whether this feature is not directly available in the web GUI because the Proxmox developers haven't...
Thanks for the many replies.
@aaron
That's an interesting idea. Does it have to be a different storage, though? I also have the option of selecting the same Ceph storage. Wouldn't that work just as well?
That's interesting, do you have a reference? Up to now I always assumed Dell PERC behaves the same as other HW RAID controllers. I tried to google for confirmation of your hint but couldn't find anything.
Could you try to create a new VM on local-lvm or local and see whether the same interesting behavior happens again? That would rule out the Pure storage as the cause.
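For a quick test, something like the following should do (a sketch; the VMID, name, and sizes are arbitrary, and this must run on the Proxmox node as root):

```shell
# Create a minimal test VM with its disk on local-lvm
# (swap local-lvm for local to test the other storage).
qm create 9999 --name repro-test --memory 2048 \
    --net0 virtio,bridge=vmbr0 \
    --scsi0 local-lvm:8

qm start 9999

# Clean up afterwards:
#   qm stop 9999 && qm destroy 9999
```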
Manually started updates are good because you can stop them if something goes off the rails.
Migration is only lengthy if you are not using shared storage; otherwise it's just a few minutes even for lots of machines.
What exactly did you do there? In 20 years of virtualization I have only ever had to shut down vSphere VMs when the datastore was 100% full, in order to delete snapshots. I have never seen anything like that with Hyper-V or Proxmox. An unlock via...