Currently we are using SolusVM with 5 nodes each with:
* 64GB of RAM
* 4x 1TB SSD RAID-10
* 2x CPUs
We're evaluating a move to Proxmox with Ceph, since in the long run it is more future-proof, more scalable, and easier to maintain than SolusVM. We're also having problems with...
We are migrating our server to a different cloud provider.
While reading the documentation, I came across this: "Storage communication should never be on the same network as corosync!".
Our servers must have HA and data redundancy/high availability (using Ceph).
The problem is, our...
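A rough sketch of what that separation could look like (the subnets below are placeholders, assuming three dedicated networks): Ceph gets its own public/cluster networks in /etc/pve/ceph.conf while corosync keeps a separate ring network.

# /etc/pve/ceph.conf (placeholder subnets)
[global]
    public_network  = 10.10.10.0/24   # Ceph monitor/client traffic
    cluster_network = 10.10.20.0/24   # OSD replication traffic

# /etc/pve/corosync.conf would then point its ring0_addr entries at a third
# subnet, e.g. 10.10.30.0/24, so cluster heartbeats never compete with storage I/O.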
Have any other Ceph users noticed weirdness with the performance graphs, where the read or write values do not seem to reflect the real situation? Mine currently shows this and I think it's a bit off...
Specifically looking at reads... for roughly 50 VMs this is weird.
One thing to note: it was after...
We have a small cluster based on 10 servers and 2 storage systems.
We are planning to add an 8-node Supermicro system (https://www.supermicro.com/en/products/system/4U/F618/SYS-F618R2-RTN_.cfm);
it will act as a Ceph server with 2 major pools:
a fast pool based on PCIe NVMe for VMs and SQL DBs (based on 4 TB 2.5" SSDs)...
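A rough sketch of how two such pools could be split by device class; the rule and pool names and the PG counts are placeholders, not sizing advice:

# One CRUSH rule per device class (Ceph detects nvme/ssd/hdd classes automatically)
ceph osd crush rule create-replicated fast-rule default host nvme
ceph osd crush rule create-replicated slow-rule default host hdd

# Create the pools and bind them to those rules
ceph osd pool create fast 256 256 replicated fast-rule
ceph osd pool create slow 128 128 replicated slow-rule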
The disks in our PROXMOX cluster are slow. Virtual machines not placed on the SSD pool work slowly with the disk and spend a lot of time on disk I/O; virtual machines placed on the HDD pool work with the disk very slowly.
Please help us fix this.
I built Ceph with 3 nodes.
We use SSD...
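If the slow VMs turn out to be sitting on the wrong pool, a hedged sketch of how to check and correct that (the pool, rule, storage, and VM IDs below are placeholders):

# See which CRUSH rule each pool is actually using
ceph osd pool ls detail

# Bind the VM pool to an SSD-backed rule
ceph osd pool set vm-pool crush_rule ssd-rule

# Or move an individual VM disk onto the SSD-backed Proxmox storage
qm move_disk 101 scsi0 ceph-ssd --delete 1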
I'm very new to Proxmox, but I need to simulate a cluster on my physical [proxmox] server.
The reasons for doing this simulation/test are: testing HA, testing live migration, testing Ceph storage, and 4-node HA, among others.
Here are my objectives (feel free to tell me anything I missed):
I've run into a particular issue. I have a ClearOS VM in Proxmox acting as a domain controller with roaming profiles for some Windows PCs.
I have a 3TB disk in the Proxmox machine that I'd like to share to the ClearOS VM and other VMs in the future.
At the moment I'm exporting the...
Hello, I have been running a 1U Xeon E5-2620 v4 server with 4 SATA SSDs (ADATA 1 TB SSDs, very slow for ZFS :( ) configured as a ZFS mirrored stripe. It has run well for me, but it doesn't have enough I/O for my VM needs.
So we recently purchased an AMD EPYC 7351P with 8 NVMe SSDs (Intel P4510 1 TB) to solve...
I am aware of this: https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0
- We have three (3) identical nodes: 256 GB of RAM, 4 TB of HDD, ... the same on each node
- Each node is running Proxmox 4.4-24 with Ceph enabled
- We do not have any shared storage; all VMs are on the nodes' local hard drives
I am currently trying to convert a VMware appliance into a Proxmox system.
However, the guide at https://pve.proxmox.com/wiki/Migration_of_servers_to_Proxmox_VE#Prepare_the_disk_file does not explain how to do this with a Ceph cluster.
Unfortunately, the raw file is too large to still...
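One way this is commonly done (a sketch; the VM ID, file name, and storage name are placeholders) is to import the converted disk image directly into the RBD-backed storage with qm importdisk:

# Import the appliance disk into the Ceph storage of an existing (empty) VM
qm importdisk 120 appliance-disk.vmdk ceph-rbd

# The disk then appears as an "unused disk" on VM 120 and can be attached
# from the GUI or with qm set.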
I am trying to use a persistent volume claim dynamically after defining a storage class that uses Ceph storage on a Proxmox VE 6.0-4 single-node cluster.
The persistent volume gets created successfully on the Ceph storage, but pods are unable to mount it; they throw the error below. I am not sure...
After adding an OSD to Ceph, it is advisable to create a corresponding entry in the CRUSH map with a weight that depends on the disk size.
ceph osd crush set osd.<id> <weight> root=default host=<hostname>
How is the weight defined depending on disk size?
Which algorithm can be...
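As far as I understand it, the convention is simply that the CRUSH weight equals the disk size in TiB, i.e. weight = size_in_bytes / 2^40, so a 4 TB disk gets roughly 3.64. A hedged example (the OSD ID and hostname are placeholders):

# 4 TB disk: 4 * 10^12 / 2^40 ≈ 3.64 TiB
ceph osd crush set osd.12 3.64 root=default host=node1

# or adjust an existing entry
ceph osd crush reweight osd.12 3.64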
We are currently looking for someone who can take over the administration and monitoring of our CEPH cluster.
It currently comprises five nodes with dual E5 CPUs and between 256 and 512 GB of RAM per node. All nodes are internally connected with 2x 10 Gbit to different...
I have created OSDs on HDDs without putting the DB on a faster drive.
To improve performance, I now have a single 3.8 TB SSD.
How can I add a DB device for every single OSD on this new SSD?
Which parameter in ceph.conf defines the size of the DB?
Can you confirm that...
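A hedged sketch of one way to do this per OSD on Nautilus or later (device names, OSD IDs, and sizes are placeholders; the OSD has to be stopped first):

# ceph.conf: bluestore_block_db_size sets the DB size used when a DB device
# is created/added, e.g. 64 GiB:
#   [osd]
#   bluestore_block_db_size = 68719476736

systemctl stop ceph-osd@7
ceph-bluestore-tool bluefs-bdev-new-db --path /var/lib/ceph/osd/ceph-7 --dev-target /dev/sdx1
systemctl start ceph-osd@7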
I finished the upgrade to Proxmox 6 + Ceph Nautilus on a 4-node cluster.
On 2 nodes I have noticed that all directories /var/lib/ceph/osd/ceph-<id>/ are empty after rebooting.
Typically the content of this directory looks like this:
root@ld5508:~# ls -l /var/lib/ceph/osd/ceph-70/
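In case it helps anyone hitting the same thing: with ceph-volume based OSDs those directories are tmpfs mounts that get repopulated at activation time, so if they stay empty after a reboot, re-running activation usually brings them back (a sketch, not a guaranteed fix):

# Show what ceph-volume knows about the local OSDs
ceph-volume lvm list

# Re-activate all detected OSDs (recreates the tmpfs contents and starts them)
ceph-volume lvm activate --all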
It's great to have a complete open-source, cost-free hyperconvergence alternative, especially when you look at the prices of VMware (vSAN) and Nutanix.
I have already built an experimental setup with Proxmox and Ceph in a test lab and posted a video about my experience...