We have 3 identical servers running Proxmox + Ceph with 2 HDDs per server as OSDs:
- OS: Debian Buster
- Proxmox version 6.4-1
- Ceph version 14.2.22-pve1 (Nautilus)
One OSD went down, so we decided to remove it following the Ceph documentation here.
Now we have 5 OSDs left:
$ sudo ceph osd...
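For reference, the Nautilus-era removal procedure from that documentation is roughly the following; the OSD ID 5 is only an example, substitute the ID of the failed OSD:

# mark the OSD out and wait for rebalancing to finish (watch ceph -s)
ceph osd out 5
# stop the daemon on the node hosting it
systemctl stop ceph-osd@5
# remove it from the CRUSH map, delete its auth key, and free the ID
ceph osd purge 5 --yes-i-really-mean-it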
I have a three-node Proxmox cluster:
120GB SSD for the OS
512GB NVMe for Ceph
1GbE network for "external" access
dual 10GbE network (for the cluster)
The network is connected as described here:
I've been testing our Proxmox Ceph cluster and have noticed something interesting. I've been running fio benchmarks against a CephFS mount and within a VM using VirtIO SCSI.
CephFS on /mnt/pve/cephfs -
root@pve03:/mnt/pve/cephfs# fio --name=random-write --ioengine=posixaio --rw=randwrite...
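The full command line is along these lines; everything after the flags visible above is illustrative, not the exact values from my runs:

fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1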
I have a 4-node Proxmox cluster with Ceph; only three of the nodes are monitors.
Each node has 3 SSDs and 2 (spinning) HDDs, and there are two different pools: one for the SSDs and one for the HDDs. Now I'm adding one OSD per node to the existing HDD pool, but it's taking more time than I expected. This is the...
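For anyone in the same spot, the usual way to watch the rebalance, and to throttle it if client I/O suffers, is something like this (the option values are illustrative):

# watch recovery/backfill progress
ceph -s
ceph osd df tree
# optionally slow backfill down so client traffic isn't starved
ceph tell 'osd.*' injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'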
We have been using XEN and later XCP-NG for many years, and recently I stumbled upon a few articles and recommendations for Proxmox. Looking at the product itself, it appears very comprehensive in terms of features and manageability, especially the hyper-converged options and the fact that...
After I added an SSD pool alongside my existing HDD pool, the HDD pool has been shrinking extremely fast, so a production outage is probably imminent tomorrow.
3-node hyper-converged cluster (PVE vers. 6.3-6) with distributed Ceph (vers...
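A common cause of this symptom is that both pools still map onto all OSDs because the CRUSH rules don't distinguish device classes, so the SSD pool eats into HDD capacity. Restricting each pool to its class looks roughly like this; the rule and pool names below are placeholders:

# create device-class-specific replicated rules
ceph osd crush rule create-replicated replicated_hdd default host hdd
ceph osd crush rule create-replicated replicated_ssd default host ssd
# pin each pool to its rule
ceph osd pool set hdd-pool crush_rule replicated_hdd
ceph osd pool set ssd-pool crush_rule replicated_ssd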
We have 2 datacenters, one located in each building. We are setting up a new Proxmox VE HA
cluster of 4 machines, the idea being that if one building goes down for an extended time, the other
2 servers will be able to keep everything up. In this setup each server has 8 SSDs. One SSD...
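One sketch for making Ceph placement building-aware, assuming the bucket and node names below (they are hypothetical), is to model the buildings as datacenter buckets in the CRUSH hierarchy so replicas land on both sides:

# model the two buildings as CRUSH datacenter buckets
ceph osd crush add-bucket building1 datacenter
ceph osd crush add-bucket building2 datacenter
ceph osd crush move building1 root=default
ceph osd crush move building2 root=default
# move each host under its building
ceph osd crush move node1 datacenter=building1
ceph osd crush move node3 datacenter=building2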
We are looking to add some services in Europe for clients and have been scouting for suitable hosting space. OVH seems to be one of the only options that offer Proxmox hosting. However, from their many options to select from, we can't quite figure out which is the most suitable one. We would...
I have a cluster with 3 nodes (pve, pve1, pve2)
Here the version information:
root@pve:~# pveversion -v
proxmox-ve: 6.4-1 (running kernel: 5.4.157-1-pve)
pve-manager: 6.4-13 (running version: 6.4-13/9f411e79)
I have recently migrated all VMs on a PVE cluster from an HDD pool to an SSD pool. Now that the HDD pool is empty (no VMs), Ceph still reports 31% usage on the pool.
This Ceph cluster has been in use for a while now, upgraded from Ceph 12 to 14 to 15.
This is the second cluster where I noticed large...
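To see what is actually still stored there, per-pool usage and a listing of leftover objects usually tell the story; the pool name below is an example:

# per-pool usage as Ceph sees it
rados df
ceph df detail
# list whatever objects remain in the supposedly empty pool
rados -p hdd-pool ls | head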
Hi there, I was trying to do a routine clearing out of /var/log files, typo'd my rm command, and deleted everything inside /var/log.
Right now I have tried to recreate the directory structure and files, as well as fix permissions. However, my servers are still turning to the grey question...
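In case it helps others hitting the same thing: the grey question marks usually clear once the PVE status daemons can write their logs again. A minimal sketch of the recovery, assuming the pveproxy log tree is the blocker:

# recreate the log path pveproxy expects, with its ownership, then restart the daemons
mkdir -p /var/log/pveproxy
touch /var/log/pveproxy/access.log
chown -R www-data:www-data /var/log/pveproxy
systemctl restart pveproxy pvestatd pvedaemon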
I have just converted a 2-node cluster to use the Ceph file store; the migration seemed to go well, with all the data written onto Ceph.
When it came time to fire up the VMs, I noticed that they just sit there trying to start.
I created some backups on the CephFS (which worked fine), then I tried...
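A couple of sanity checks worth running here (the pool name is illustrative): with only 2 nodes, a replicated pool whose rule expects 3 hosts can leave PGs degraded, and if min_size can't be met, RBD I/O blocks, which makes VMs hang at start exactly like this.

# is the cluster healthy, are all PGs active+clean?
ceph -s
# check the replication settings on the VM pool
ceph osd pool get vm-pool size
ceph osd pool get vm-pool min_size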
During some free time I got to think about how to extend and add new storage resources to our cloud. At the moment I have storage based on Ceph, with SSD-type OSDs only.
I have been reading the Ceph docs, and I can say it's possible; I even have a plan. The problem is I have no idea if the actions I...
To save some space on our SSD pool, I've enabled compression on the pool:
ceph osd pool set VMS compression_algorithm lz4
ceph osd pool set VMS compression_mode aggressive
With ceph df detail I can get some details, but I cannot verify whether compression is actually working.
Any hints? Is "ratio" needed as well...
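For what it's worth: in the POOLS section of ceph df detail, the USED COMPR and UNDER COMPR columns show whether compression is taking effect. The "ratio" in question is presumably compression_required_ratio, which controls how much a chunk must shrink before the compressed copy is kept; the 0.875 below is the default, shown only for illustration:

# store the compressed chunk only if it is <= 87.5% of the original size
ceph osd pool set VMS compression_required_ratio 0.875
# then check USED COMPR / UNDER COMPR in the POOLS section
ceph df detail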
TLDR: Ceph vs ZFS: advantages and disadvantages?
Looking for thoughts on implementing a shared filesystem in a cluster with 3 nodes.
They all serve a mix of websites for clients that should be served with minimal downtime, and some infrastructure nodes such as Ansible and pfSense, as well as...
I have been using PVE at home for over 5 years for quite a few VMs, so far with ZFS. The host is an HP DL380p Gen8 with 56GB of RAM.
That has been working flawlessly so far.
Now to my plan:
I have read up a bit on Ceph and wanted to test whether it would work as a storage pool for...
I'm currently running a 3-node Ceph cluster that I'd now like to upgrade with enterprise SSDs, in order to park the journal (DB/WAL) on them if feasible.
First, the specs of the 3 nodes:
Xeon E5-2630L v3
3x 6TB HDD (another 6TB and a 3TB HDD to follow)
1x M.2 256GB (Ceph pool for VMs)
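In case it's useful: a new OSD gets its RocksDB/WAL onto a separate SSD at creation time; the device paths below are placeholders, not my actual layout:

# create an OSD with data on the HDD and DB/WAL on an SSD device
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1

Existing OSDs have to be recreated, or their DB migrated with ceph-bluestore-tool, to benefit.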
After adding a new node to the cluster - all successful - I ran into a problem installing and managing Ceph on the new node from the cluster. This is a production cluster.
The new node is not a monitor or a manager, and none of its drives are used as OSDs. Because when I try to reach the interface of...
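A frequent missing step in this situation is that the Ceph packages aren't installed on the new node yet, so the GUI's Ceph pages can't talk to it. A minimal sketch, assuming the node can reach the Proxmox repositories:

# run on the new node; installs the Ceph packages (PVE 6 defaults to Nautilus)
pveceph install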
I'm having what seems to be a network bottleneck.
Context: one of my clients wants to revamp its infrastructure and was already happy with PVE servers, despite having only local ZFS-backed images and missing out on the broad possibilities offered by Ceph... I wanted to push him to go...
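When chasing a suspected network bottleneck, it's worth measuring the raw link between two nodes first, independent of Ceph; the hostname below is a placeholder:

# on the receiving node
iperf3 -s
# on the sending node: 4 parallel streams for 30 seconds
iperf3 -c nodeA -P 4 -t 30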