The host has 64GB of RAM in total.
There are six VMs running with the following memory allocation. Ballooning and KSM are enabled, and the minimum memory is equal to the allocated memory.
VM1 - 4GB
VM2 - 6GB
VM3 - 4GB
VM4 - 16GB
VM5 - 2GB
VM6 - 12GB
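A quick sanity check on the numbers above (a minimal sketch; the 64GB host total and per-VM sizes are taken from the list):

```python
# Sum the allocated memory of the six VMs and compare it to the host total.
# With the ballooning minimum equal to the allocation, the balloon driver
# cannot reclaim anything, so the full allocation should be treated as
# committed (KSM may still share some pages between similar VMs).
host_total_gb = 64
vm_alloc_gb = {"VM1": 4, "VM2": 6, "VM3": 4, "VM4": 16, "VM5": 2, "VM6": 12}

total_allocated = sum(vm_alloc_gb.values())
headroom = host_total_gb - total_allocated
print(f"allocated: {total_allocated} GB, headroom: {headroom} GB")
# allocated: 44 GB, headroom: 20 GB
```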
So...
What would you pay for VMware compared to a Basic subscription?
People who already have at least a Basic subscription get PDM for free. How would you react if PDM required another subscription, like Proxmox Backup Server?
I did not dismiss anything; I am just trying to understand your odd accusations, given that nothing has changed for your existing PVE subscriptions.
You still get exactly the same value from whichever of our subscription tiers you choose to pay for, nothing more...
I'm no professional. Far from it :)
I just find the trend toward blindly executing scripts from the internet worrying. We first teach people not to click on every link or run unknown files, and...
So we have now monitored everything, both PVE and PBS:
PVE:
- RAM: always plenty free
- CPU: at most 40% utilization
- disk util %: 2.4
- r_await / w_await: 1.5
PBS:
- RAM: always plenty free
- CPU: at most 20% utilization
- disk util %...
I want to create an LXC container or VM via the API; currently I'm working on the LXC part. But first of all, I have to say the API docs are plain bad, in my opinion. Anyway.
I need to add a rootfs for the LXC, because of course I do, and I am using Ceph for my cluster. So I want to create, for...
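For what it's worth, here is a minimal sketch of how I understand the create call. The node name, storage name, token, and template path below are all placeholders; as far as I can tell, `rootfs` accepts the `storage:size` syntax, which asks PVE to allocate a new volume of that many GiB on the given storage (a Ceph RBD pool in this case):

```python
# Sketch: form parameters for POST /api2/json/nodes/{node}/lxc to create an
# LXC container with its rootfs on a Ceph storage. All concrete names here
# (pve1, ceph-pool, the template, the token) are placeholder assumptions.
API = "https://pve1.example.com:8006/api2/json"  # placeholder host

def lxc_create_params(vmid: int, storage: str, size_gib: int) -> dict:
    """Build the form parameters for the LXC create call.

    rootfs uses "storage:size", telling PVE to allocate a fresh size_gib GiB
    volume on that storage rather than reusing an existing one.
    """
    return {
        "vmid": vmid,
        "ostemplate": "local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst",
        "rootfs": f"{storage}:{size_gib}",
        "hostname": f"ct{vmid}",
        "memory": 1024,
    }

if __name__ == "__main__":
    params = lxc_create_params(101, "ceph-pool", 8)
    endpoint = f"{API}/nodes/pve1/lxc"  # POST target; send params as form data
    # e.g. with an API token header: Authorization: PVEAPIToken=user@pam!id=SECRET
    print(endpoint, params["rootfs"])
```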
I use ZRAM myself: https://pve.proxmox.com/wiki/Zram#Alternative_Setup_using_zram-tools
You can also leave some space unallocated on the disk and create a swap partition there, or a swap file on an ext4 file system created on it.
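For the swap-file variant, the usual sequence on ext4 looks like the sketch below (the `/swapfile` path and 4G size are arbitrary choices; note that `fallocate` is fine on ext4 but not on ZFS or btrfs):

```python
# Sketch: the shell steps for creating a 4 GiB swap file on ext4, expressed
# as argv lists so they could be run with subprocess (root required).
import subprocess

def swapfile_commands(path: str = "/swapfile", size: str = "4G") -> list[list[str]]:
    return [
        ["fallocate", "-l", size, path],  # reserve the space (ext4 only)
        ["chmod", "600", path],           # swap files must not be world-readable
        ["mkswap", path],                 # write the swap signature
        ["swapon", path],                 # enable it for the running system
    ]

if __name__ == "__main__":
    for cmd in swapfile_commands():
        print(" ".join(cmd))
        # subprocess.run(cmd, check=True)  # uncomment to actually run as root
```

To make it persistent you would also add a `/etc/fstab` entry such as `/swapfile none swap sw 0 0`.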
You have no redundancy with plain ext4: if that disk dies while pages are swapped out, your server crashes. That's why I wrote "redundant disk".
Nowadays ZFS is typically used, and mdraid is discouraged with Proxmox,
so I wonder...
Hey,
I installed Proxmox on the NVMe drive of a Mac Pro 2019 (7,1). The NVMe is rated at 3000 MB/s read/write, but I only get about 250 MB/s, and pveperf reports around 50 fsyncs/second. ZFS RAID0, atime=off, ashift=12, 4096-byte-sector NVMe disk. Why is it so incredibly slow, and what...
thanks for your answer Dominic :-)
I'll open an enhancement request on the bug tracker.
At a minimum, it would be desirable to be able to associate each PDM user with a specific token on each remote.
If we have only one token to connect to a remote...
It wasn't a proposal; it was a dig directed at the Proxmox team. Unless they've been living under a rock... the SAN setup for the RHEL world has been out there for ages, and I'm sure they did, or could, look at it. The fact that it isn't there already in...
As a migrator from vSphere 8.0.3 to PVE 9.1, I ran into the same issue... now I check NVIDIA's site daily to see whether an updated host driver has finally appeared...
Since swap on a zvol is still not in a usable state, what is the recommended Proxmox way to set up swap on servers that require redundant disks (i.e. RAID storage)?
I would not agree that this situation is unique; it is the most common and classic model in enterprise modular (non-HCI) virtualization deployments, where your hypervisor hosts are connected to the shared storage via two dedicated network...
By the way, there is a new BIOS and EC update (2.02) available for the K12 from GMKtec if you want to give it a try:
https://drive.google.com/drive/folders/1y_mgKLJyH9-U1Qrbn-4PJIeIeRZKtCnx?spm=..page_2054333.page_detail_1.1
i have not updated...
I imagine you would create a ZFS RAID1 on top of two NFS drives (one per PVE node), but I strongly recommend against this. If you lose your cluster, you lose your backups too.
Take this scenario: how would you restore your data when you lose the whole cluster...
The message from the log at least suggests that the two hosts cannot see each other.
From your original description, I assume you have tested an SSH connection between the nodes at some point? Just to make sure that...