Hello everyone,
we are running a 4-node PVE cluster, with 3 nodes in a hyper-converged setup with Ceph and the 4th node just for virtualization, without its own OSDs. After creating a VM with a TPM state device on a Ceph pool, it fails to start with the error message:
rbd: sysfs write failed
TASK...
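For anyone hitting the same thing, a minimal diagnostic sketch (pool name "vmpool" and image name "vm-100-disk-1" are placeholders): TPM state volumes are typically mapped through the kernel RBD client, so reproducing the mapping by hand and checking the kernel log usually reveals why the sysfs write fails, for example an image feature the kernel client does not support.
rbd ls -p vmpool
rbd info vmpool/vm-100-disk-1     # check which image features are enabled
rbd map vmpool/vm-100-disk-1      # reproduce the kernel mapping by hand
dmesg | tail                      # the kernel usually logs the reason for the failed map here
rbd unmap vmpool/vm-100-disk-1    # remove the test mapping again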
Hi, I have recreated an OSD in my hyper-converged cluster. I have a 10 Gbit link, so rebalancing should be really fast, but it seems to rebalance at only a few kilobytes per second:
I have already set
ceph tell 'osd.*' injectargs '--osd-recovery-max-active 4'
ceph tell 'osd.*' injectargs...
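For reference, a hedged sketch of the knobs usually involved in speeding up recovery (the values are examples, not recommendations; on Quincy and later the mClock scheduler can override injected settings):
ceph config set osd osd_max_backfills 4
ceph config set osd osd_recovery_max_active 4
ceph config set osd osd_recovery_sleep_hdd 0              # removes the artificial per-op sleep on HDD OSDs
ceph config set osd osd_mclock_profile high_recovery_ops  # Quincy and later: prioritise recovery over client IO
ceph status                                               # watch the recovery rate reported under "io:"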
Hello, I made the mistake of installing Ceph before I created my cluster, and now I cannot add, remove, or start my second monitor in Ceph. I am curious whether there is a command to remove it that I have not been able to find.
Edit: I found the commands, but it still will not let me remove it. Can I delete all the ceph...
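For illustration, the commands usually involved in removing a monitor on a Proxmox node (node name "pve2" is hypothetical); if the monitor never joined quorum, the Proxmox tooling may refuse and the Ceph-level command is the fallback:
pveceph mon destroy pve2                 # Proxmox-managed removal (also cleans up ceph.conf)
ceph mon remove pve2                     # Ceph-level fallback if the command above refuses
systemctl disable --now ceph-mon@pve2    # stop and disable the leftover service on that node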
Well, I've messed up pretty big this time.
Following (again) this guide for reinstalling Ceph: https://dannyda.com/2021/04/10/how-to-completely-remove-delete-or-reinstall-ceph-and-its-configuration-from-proxmox-ve-pve/
I stopped at 1.16 `rm -r /etc/pve/ceph.conf` (while following steps only...
As the title suggests, I want to find out how other people's clusters perform.
I'll start:
Node Count: 3
Networking: 1 GbE, shared with Proxmox
Disks:
3 x 3TB 7200RPM
2 x 1TB 7200RPM
1 x 1TB 5400RPM
Using this Benchmarking Tutorial, here's my result:
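For anyone who wants directly comparable numbers, a minimal rados bench run against a throwaway test pool (pool name "bench" is hypothetical; --no-cleanup keeps the written objects so the read tests have something to read):
rados bench -p bench 60 write --no-cleanup
rados bench -p bench 60 seq        # sequential reads of the objects written above
rados bench -p bench 60 rand       # random reads
rados -p bench cleanup             # remove the benchmark objects afterwards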
[ This follows on from my previous comment on a different thread, https://forum.proxmox.com/threads/pverados-segfault.130628/post-574807 ]
I've just figured out the whats and whys of a problem I've been having trying to create a new VM that uses RBD disks hosted by an external Ceph cluster...
Hello,
We are working on a configuration where we will have 6 nodes spread across two (very close) sites, all linked to the same LAN (25G).
I wanted to know how you would design the solution with CEPH so that one site keeps working if the other site fails.
So the idea is to have site A...
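For the CRUSH side of such a layout, a hedged sketch of the first step only, building a two-datacenter hierarchy (bucket and host names are hypothetical); on top of that you still need a rule that places two replicas per site (or Ceph's stretch mode) plus a tie-breaker monitor outside both sites to keep quorum when one site goes down:
ceph osd crush add-bucket siteA datacenter
ceph osd crush add-bucket siteB datacenter
ceph osd crush move siteA root=default
ceph osd crush move siteB root=default
ceph osd crush move pve1 datacenter=siteA    # repeat for every host, into its own site
ceph osd tree                                # verify the two-site hierarchy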
What are some strategies that people use to backup/clone/automate/etc OS disks and configurations?
I've read a number of threads on this topic, ranging from Clonezilla backups to automated config managers, zfs send cron jobs and many more. Many are outdated, and I am curious whether there are more...
I've been using a HCI cluster in my home lab built from really small and low-power devices, mostly because they've become potent enough to host these various 24x7 services I've been accumulating.
I've used Mini-ITX Atoms and NUCs and am currently trying to transition a HCI cluster made from...
Hello guys,
once again I encountered the issue rbd error: rbd: listing images failed: (2) No such file or directory (500)
Except that last time I was able to fix it via: rbd rm -p CEPH-POOL-NAME vm-ID-disk-ID
This, however, results in: Removing image: 0% complete...failed.
rbd: delete error...
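For reference, a hedged checklist for this kind of delete failure; the usual suspects are leftover snapshots or a client still watching the image (pool/image placeholders as in the post):
rbd status CEPH-POOL-NAME/vm-ID-disk-ID      # lists watchers still holding the image open
rbd snap ls CEPH-POOL-NAME/vm-ID-disk-ID     # snapshots block deletion
rbd snap purge CEPH-POOL-NAME/vm-ID-disk-ID  # only if the snapshots are really disposable
rbd rm CEPH-POOL-NAME/vm-ID-disk-ID          # retry the delete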
I run a 3-node PVE with CEPH.
I migrated all VMs away from node 3, upgraded to the latest CEPH (Quincy) and then started the PVE 7 to 8 upgrade on node 3.
After rebooting node 3 (now PVE 8), everything seemed to work well. So I migrated two VMs, one each from node 1 (still on PVE 7) and node 2...
Hello All,
I am currently working on some hosts that have 8x 600GB 10k SAS drives and am planning on using these to install Proxmox and use the rest for CEPH. Is there a best way to split out these drives by filesystem? Should I use the RAID controller on the server or is it better to use the...
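For what it's worth, the common recommendation is to give Ceph raw disks (HBA / IT-mode passthrough) rather than a RAID volume, and then create one OSD per remaining disk; a sketch, with device names hypothetical:
lsblk                          # identify the disks left over after the Proxmox install
pveceph osd create /dev/sdb    # one OSD per raw disk
pveceph osd create /dev/sdc    # repeat for each remaining drive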
Hi there!
I'm trying to learn more about Ceph storage so we can use it in an upcoming installation.
We have a database running on Windows Server that most of the company relies upon. I was looking into getting a 4-blade server and running Proxmox VE on 3 of the blades and PBS on the last blade as...
Hi guys,
I have a single Debian VM with the guest agent, running on a 3-node cluster (each node has 2 OSDs forming the Ceph cluster (rpool1)).
When I tried to online-migrate the VM, I got the following (you'll notice later that the disk does not reside on local storage but on Ceph (pool1), and this...
Good day, I was wondering how to get rid of this error:
Jun 28 14:37:01 pve13 pvestatd[1495]: ignoring custom ceph config for storage 'CephData', 'monhost' is not set (assuming pveceph managed cluster)!
Jun 28 14:37:12 pve13 pvestatd[1495]: ignoring custom ceph config for storage 'CephData'...
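For reference, that warning goes away once the storage definition either drops the custom ceph config or sets monhost explicitly; a sketch of the relevant /etc/pve/storage.cfg entry (monitor addresses are hypothetical):
rbd: CephData
        content images
        krbd 0
        pool CephData
        monhost 192.168.1.11 192.168.1.12 192.168.1.13
        username admin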
After months of planning, I came to a conclusion to assemble 3 Proxmox Nodes and cluster them together.
I'm mostly interested in mini-PCs (NUC style) with dual 2.5 GbE LANs, but after building a 32-core Epyc Proxmox node, I know the performance boost that comes with actual server hardware. Anyway, I will...
Hi,
The dashboard does not seem to work with the latest version of Proxmox (upgrade or clean install).
I can access the dashboard for a few minutes after installing the plugin, but the manager service quickly crashes.
I already tried reinstalling on the host I upgraded and adding a cleanly installed...
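A hedged first round of checks, assuming the mgr dashboard module is what keeps crashing (package and module names as shipped with Ceph):
ceph mgr module ls | grep dashboard          # confirm the module is enabled
ceph crash ls                                # recent daemon crashes and their IDs
ceph crash info <id>                         # backtrace of a specific crash
apt install --reinstall ceph-mgr-dashboard   # reinstall the dashboard module package
ceph mgr module disable dashboard
ceph mgr module enable dashboard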
I had a failed node, which I replaced, but the MDS (for cephfs) that was on that node is still reported in the GUI as slow. How can I remove that? It's not in ceph.conf or storage.conf
MDS_SLOW_METADATA_IO 1 MDSs report slow metadata IOs
mdssm1(mds.0): 6 slow metadata IOs are blocked > 30 secs...
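A cautious way to see which daemon the health warning is actually pinned to before removing anything (read-only commands, names as reported above):
ceph health detail          # shows which MDS reports the slow metadata IOs
ceph fs status              # active and standby MDS daemons per filesystem
ceph fs dump | grep -i mds  # ranks, states and the GIDs behind them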
Hello!
I currently have 4 identical dedicated servers, each with:
2x E5-2680v4
192GB of RAM
6x 1.2TB SSD
2x 10Gbps SFP+
My question is:
What is the recommended setup so that the data is replicated at least once (similar to RAID1 or RAID10, ceph shared storage with 3 nodes) and at the same time...
What is the best approach that will be easy to install and maintain for future upgrades?
Currently on PVE 7.4 and Ceph 16.2.11.
I now have 4 VMs (as testing/PoC) but plan to grow to around 50.
(I prefer to do it once, then clone the node if possible.)
For perspective, I have the...
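For the replication requirement itself, a minimal sketch of the pool-level settings usually behind it (pool name "vmstore" is hypothetical; size 3 / min_size 2 is the common default, while size 2 trades safety for usable capacity):
pveceph pool create vmstore --size 3 --min_size 2
ceph osd pool get vmstore size        # verify the replica count
ceph osd pool get vmstore min_size    # minimum replicas required for IO to continue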