We are talking about Proxmox with Ceph (as this is a Proxmox forum), not a Ceph-only storage cluster. Sorry if this wasn't clear in the beginning.
So there are two bridges (each 2x25 GbE LACP), one for the VM traffic (to the outside or between the VMs) and one for storage.
So on the storage...
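For completeness, each of those bridges sits on a LACP bond in /etc/network/interfaces, roughly like this for the storage one (interface names and the address are placeholders, yours will differ):

auto bond1
iface bond1 inet manual
    bond-slaves ens2f0 ens2f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr1
iface vmbr1 inet static
    address 192.0.2.11/24
    bridge-ports bond1
    bridge-stp off
    bridge-fd 0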
I know the principle of horizontal scaling ;-)
Actually, I've got 8 bays on every node, using (historically) two of them for a pair of hard disks with their accompanying SSD for WAL/DB, and six bays for SSDs.
We are swapping the SATA SSDs out of the other bays for more and larger SAS ones, ending up with 4 x SAS SSD per node and...
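Just for context, the old HDD OSDs were created with their WAL/DB on the accompanying SSD, i.e. something like this, if I remember the option name right (device paths are placeholders):

$ sudo pveceph osd create /dev/sdc --db_dev /dev/sdb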
Thank you guys, I just wanted to understand and perhaps get a hint towards some documentation or experiences.
But feel free to rate my network. It won't feel hurt, and neither will I.
The servers are HPE, equipped with 512 GB RAM and 32-core EPYC CPUs. Data and storage networks are separated, of course.
This...
Hi there
We have a six-server cluster with existing Ceph pools.
Now we need to add more disks to one pool, and I am unsure which scenario needs more time and/or causes more «turbulence».
The pool consists of 6 x 2 SAS SSD (3.2 TB and 6.4 TB). We would add another 6 x 2 SAS SSD (6.4 TB)...
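What I had in mind, so the data only moves once after all twelve new OSDs are in, is roughly this (device path is a placeholder, and I hope I got the device-class option name right):

$ sudo ceph osd set norebalance
$ sudo ceph osd set nobackfill
$ sudo pveceph osd create /dev/sdX --crush-device-class sas-ssd
(repeated for each new disk on each node)
$ sudo ceph osd unset nobackfill
$ sudo ceph osd unset norebalance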
Could…, should…, perhaps… are not terms I really like in IT.
I have been running several hypervisors and, as you write, the hypervisor usually has the capability to mask CPU features, so a migration will work, and if it can't, it won't allow the migration. For sure I would not try to mix Intel and AMD, not...
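To give an idea of what I mean by masking: pinning the VMs to a baseline model that both CPU generations expose, e.g. kvm64 or, on PVE 8, something like this (VMID is a placeholder):

$ sudo qm set 100 --cpu x86-64-v2-AES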
Hello
We are running PVE 8.1.3 on HPE DL 325/365 Gen 10 servers with different CPUs.
Usually we only use 6 DL 325 with AMD EPYC Rome (7502P and 7502P).
A while ago we added a seventh server which has a Milan 7513 CPU (two sockets).
While doing live migrations for lifecycle (patching) purposes we...
Can confirm.
We shut down all VMs of the first machine, upgraded that PVE node and restarted it. Then we migrated the VMs of the other two nodes to the patched one without any RCU problems.
Our Lab is running: 48 x Intel(R) Xeon(R) CPU E5-2690 v3 @ 2.60GHz (2 Sockets)
(HPE DL 360 Gen 9)
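In CLI terms the migration step for the other nodes is simply the usual online migration (VMID and target node are placeholders):

$ sudo qm migrate 100 pve02 --online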
We had that RCU issue too.
Our production runs on AMD EPYC, so are there no problems to expect? Or is it better to wait a week before upgrading?
Hello All
I did the upgrade on our cluster (the lab, not PROD, for heaven's sake) today and ran into the same problem. I needed to reset all VMs (Debian and Alma).
We run the VMs with the default CPU type, and our cluster has three nodes (HPE DL 360 Gen 9 with Intel(R) Xeon(R) CPU E5-2690 v3).
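In case it helps someone: resetting every running VM on a node in one go can be done with something along these lines:

$ sudo qm list | awk '/running/ {print $1}' | xargs -n1 sudo qm reset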
I know it's an old topic, but it's still valid.
Could we add the knowledge to this article found in the Wiki?
And I still don't get the maths behind it.
Is there a maximum number of virtual DIMMs?
Could we add 32 x 16 GB DIMMs? And would it still be necessary to add another 1024 MB?
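If I read the wiki correctly, the layout is a 1024 MiB static base plus hotplug DIMMs in groups of 32, with the DIMM size doubling per group, so the running totals would be roughly:

1 GiB base + 32 x 512 MiB = 17 GiB
+ 32 x 1 GiB = 49 GiB
+ 32 x 2 GiB = 113 GiB
+ 32 x 4 GiB = 241 GiB
+ 32 x 8 GiB = 497 GiB
+ 32 x 16 GiB = 1009 GiB (about 1 TiB)

So as far as I understand it, the 1024 MiB base always stays on top of whatever DIMMs get added; please correct me if I got the scheme wrong.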
OK, so the «Warning»/«Check» in the PVE GUI is just a safeguard for fat thumbs, so you don't erroneously remove the wrong disk file.
The vm-102-disk-0 must not be deleted, as it is the actual disk of the actually running VM 102 ;)
I was able to remove the wrong/old images with rbd rm… Thank you.
But...
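For anyone finding this later, the removal is just this (pool and image names are placeholders; a snap purge is needed first if the leftover image still has snapshots):

$ sudo rbd snap purge ceph-vm/vm-100-disk-1
$ sudo rbd rm ceph-vm/vm-100-disk-1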
Hello guys
Today I took a look at the storage/disks of the nodes in our cluster and found some leftovers.
Dunno how we ended up with them.
Usually one needs to check the «Delete unreferenced disks» checkbox when deleting a VM. Does this checkbox automatically include snapshots as well?
As we...
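Roughly how one can cross-check which images are actually unreferenced on the CLI (pool name is a placeholder): list what is in the pool and grep the VM configs for each image name; anything without a hit is a leftover:

$ sudo rbd -p ceph-vm ls
$ grep -r 'vm-102-disk' /etc/pve/nodes/*/qemu-server/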
We have had the same problem since the last upgrade of PVE and Ceph. Has anyone filed an incident/bug report? This does not seem like regular behaviour.
Yes, 17.2.5-pve1 (upgraded about 3 weeks ago when it became available).
The usage has evened out.
$ sudo ceph osd df tree class sas-ssd
ID CLASS WEIGHT REWEIGHT SIZE RAW USE DATA OMAP META AVAIL %USE VAR PGS STATUS TYPE NAME
-1 83.17984 - 56...
Yes, we have set the ratio and Ceph is shuffling the data around (and the users are complaining that they have high IO waits). :rolleyes:
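If the IO waits get too painful, recovery can be throttled while it shuffles, something along these lines (note that on Quincy the mClock scheduler may ignore these unless it is told to honour them):

$ sudo ceph config set osd osd_max_backfills 1
$ sudo ceph config set osd osd_recovery_max_active 1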
~$ sudo pveceph pool ls --noborder
Name Size Min Size PG Num min. PG Num Optimal PG Num PG Autoscale Mode PG Autoscale Target Size PG...
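For reference, the ratio mentioned above is set per pool, and the autoscaler's view of it can be checked afterwards, roughly like this (pool name is a placeholder):

$ sudo ceph osd pool set ceph-vm target_size_ratio 1.0
$ sudo ceph osd pool autoscale-status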
$ sudo ceph balancer status
{
"active": true,
"last_optimize_duration": "0:00:00.003595",
"last_optimize_started": "Wed May 3 15:10:50 2023",
"mode": "upmap",
"optimize_result": "Optimization plan created successfully",
"plans": []
}
Presumably this is because it...