Hi, thank you for accepting me into this community.
I have set up a new cluster for production (my first non-vSphere cluster) and I am running into a little problem.
Given that we have a 4-node cluster, I wanted to add an additional quorum vote following the instructions from the manual...
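A minimal sketch of the usual QDevice approach for giving an even-node cluster an extra vote, assuming an external witness host (the 192.168.1.50 address below is hypothetical) that is not itself a cluster member:

# on the external witness host
apt install corosync-qnetd

# on every existing cluster node
apt install corosync-qdevice

# then, from one cluster node
pvecm qdevice setup 192.168.1.50

# confirm the extra vote is counted
pvecm status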
I have three nodes in my cluster, each running a Ceph mon, a mgr, and two MDS daemons. My problem comes when I try to mount on boot. I think the issue is with my /etc/fstab, as pasted below. I tried using the Proxmox storage integration, but it was very inconsistent, while fstab works much better.
This setup works to...
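For reference, a minimal sketch of a CephFS line in /etc/fstab, assuming a kernel-client mount with ceph-common installed for the mount helper; the monitor addresses and secret file path are examples, and _netdev is the option that usually fixes mount-at-boot ordering:

# /etc/fstab (example values)
192.168.1.11,192.168.1.12,192.168.1.13:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0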
I've been wrestling with this issue for over a month now, and I can't seem to get past it.
I have two PGs that haven't been scrubbed since June:
$ ceph health detail | grep "not scrubbed since 2024-06"
pg 17.3dc not scrubbed since 2024-06-01T20:46:29.042727-0700
pg 17.137 not scrubbed...
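A minimal sketch of how those two PGs can be nudged by hand, using the PG IDs from the output above; whether the scrub actually starts still depends on the osd_scrub_* limits and current cluster load:

# ask the primary OSD to (deep-)scrub the stuck PGs
ceph pg scrub 17.3dc
ceph pg deep-scrub 17.3dc
ceph pg scrub 17.137
ceph pg deep-scrub 17.137

# check where they live and what their last-scrub timestamps are
ceph pg 17.3dc query | grep -i scrub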
Hello,
I've seen this happen a few times now, and it's always right when PBS backups start running on the node. The node keeps logging, but it drops from the cluster and shows as offline. Here are the logs:
And then for the rest of the night it just logs these timeouts:
Not sure what logs to pull to...
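If it helps, a sketch of the logs that are usually worth pulling around the backup window; the time range below is a placeholder to adjust to the incident, and the unit names are the standard Proxmox ones:

# corosync membership and cluster filesystem state around the incident
journalctl -u corosync -u pve-cluster --since "2024-06-01 01:00" --until "2024-06-01 04:00"

# the scheduled backup job itself
journalctl -u pvescheduler --since "2024-06-01 01:00" --until "2024-06-01 04:00"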
Hi,
May I know if anyone has followed this documentation? Does it work? Also, is it recommended, and will it affect Proxmox functionality?
https://pve.proxmox.com/wiki/User:Grin/Ceph_Object_Gateway
Thank you.
I am building out a 3-node cluster for home lab use. Each node has (2) 2.5 GbE NICs which currently plug into 1 GbE ports on my switch, (2) SFP+ ports, and (2) 40 GbE ports. I currently have the 40 GbE ports configured in a mesh network. I would like to separate everything out to avoid network...
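Not a full answer, but a sketch of the split that is commonly suggested, with hypothetical interface names and subnets: corosync on its own small, quiet link, Ceph on the 40 GbE mesh, and management/VM traffic on the remaining ports.

# /etc/network/interfaces (fragment, example values)
auto eno1
iface eno1 inet static
        address 10.10.10.11/24    # dedicated corosync link; low latency matters more than bandwidth

auto ens1f0
iface ens1f0 inet static
        address 10.10.20.11/24    # Ceph public/cluster traffic on the 40 GbE mesh

Ceph's public_network and cluster_network in /etc/pve/ceph.conf would then point at the 10.10.20.0/24 mesh, and a second corosync link on another NIC can be kept as a fallback.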
I have a pretty basic need for a small business I own: basically one VM that I need to run, and in essence I want HA, in the sense that if the node fails, the VM is taken over by another Proxmox server. I'd prefer it to use shared ZFS storage, and I have this running in a lab right now. I've stayed...
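For the single-VM case, a minimal sketch of what this usually looks like with ZFS replication plus the HA manager; VM ID 100, the node name and the schedule are placeholders, and ZFS replication is asynchronous, so a failover can lose whatever changed since the last sync:

# replicate the VM's disks to the second node every 5 minutes
pvesr create-local-job 100-0 nodeB --schedule "*/5"

# put the VM under HA management so it restarts elsewhere if its node fails
ha-manager add vm:100 --state started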
Hello everyone,
During the upgrade of one of my Proxmox nodes from version 7.4-3 to version 8.2-4, when booting with the kernel 6.8.12-1 that ships with this latest version, the following error appears:
libceph: mon1 (1) 192.168.169.20:6789 socket closed ((con state V1_BANNER))
libceph: mon5...
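In case it's useful, a sketch of how to check whether the monitors are announcing the v2 (msgr2) protocol, which newer kernel clients prefer; this is a diagnostic suggestion, not a confirmed fix for this exact error:

# each mon should list both a v2 (port 3300) and a v1 (port 6789) address
ceph mon dump | grep -E 'v1:|v2:'

# enable msgr2 if only v1 addresses are listed
ceph mon enable-msgr2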
Hello!
I wanted to ask a quick question because I'm not totally sure at the moment and don't want to risk any of my VMs. I was forced to replace a node in my cluster urgently, which worked perfectly fine. My Ceph cluster is currently in the remapped+backfilling state because of the newly added OSDs...
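A sketch of the read-only checks worth looking at before touching anything while backfill is running; the main things to confirm are that every PG stays active and that nothing is degraded beyond what the backfill itself explains:

# overall recovery progress and any degraded/misplaced objects
ceph -s

# per-PG state summary and detailed warnings
ceph pg stat
ceph health detail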
With a view to buying a Dell server with a BOSS-N1 card in RAID 1 for the system part (Proxmox; I don't think there is a problem on that side), I would like to know whether the HBA465i backplane that Dell offers me in its quote will let Proxmox see the SSD disks directly, so I can set them up as Ceph OSDs.
I was looking for many other similar questions but could not find any exact answers.
In my case I have three servers with their HA network on 65 subnets, and they are connected to one another using 10 Gb links and ports. One VM is running on each node and is part of a cluster.
I tried to tune my pacemaker...
tl;dr
Changing %i to the corresponding mon name makes the mon service work.
One of my mons kept dying and restarting and then could not start again, so I investigated it.
It could not start due to a misconfiguration in the /etc/systemd/system/ceph-mon.target.wants/ceph-mon@pve2.service file at the "%i" variable, which points to...
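For context, a sketch of how the template instance is normally expected to resolve; the exact ExecStart line can differ between Ceph releases, but for the instance ceph-mon@pve2.service systemd should expand %i to pve2:

# the instance unit is a symlink to the ceph-mon@.service template,
# whose ExecStart looks roughly like this:
ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph

# check what the unit actually resolves to on the affected node
systemctl cat ceph-mon@pve2.service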
Hello,
I am looking for a way to improve random read and write performance on a virtual machine running Windows Server 2022. VM configuration:
agent: 1
boot: order=virtio0;ide2;net0;ide0
cores: 6
cpu: qemu64
machine: pc-i440fx-9.0
memory: 16384
meta: creation-qemu=9.0.0,ctime=1724249118...
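Not a definitive answer, but a sketch of the knobs commonly tried for random I/O on Ceph-backed Windows guests; the VM ID and storage/volume names below are placeholders, and each option has trade-offs:

# expose host CPU features instead of the generic qemu64 model
qm set <vmid> --cpu host

# enable an iothread and writeback cache on the existing VirtIO disk (example volume name)
qm set <vmid> --virtio0 <storage>:vm-<vmid>-disk-0,iothread=1,cache=writeback

With librbd-backed storage, cache=writeback enables the RBD client cache, which tends to help small writes the most; up-to-date VirtIO drivers inside the Windows guest matter as well.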
Hello,
I have a handful of bare-metal servers that I would like to migrate to Proxmox VMs. As I move the data off the servers, I can reuse the hardware to add to the Proxmox cluster. Each server is currently configured in RAID 5 with roughly 20 terabytes available. All storage is using HDDs and there...
The Headline:
I have managed to kick all 3 of my nodes out of the cluster and wipe all configuration for both PVE and Ceph. This is bad. I have configuration backups; I just don't know how to use them.
The longer story:
Prior to this mishap, I had Proxmox installed on mirrored ZFS HDDs. I...
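Not a full recovery procedure, but a sketch of the general approach that is usually suggested when copies of the old /etc/pve contents still exist: start the cluster filesystem in local mode on one node and put the backed-up files back. The paths below are examples, <node> is a placeholder, and Ceph's own configuration and keyrings would need the same treatment:

# stop the cluster stack on one node and start pmxcfs in local mode
systemctl stop pve-cluster corosync
pmxcfs -l

# /etc/pve is now writable locally; restore files from the backup, e.g.
cp /root/pve-backup/corosync.conf /etc/pve/corosync.conf
cp -r /root/pve-backup/qemu-server/. /etc/pve/nodes/<node>/qemu-server/

# stop the local instance and bring the normal services back up
killall pmxcfs
systemctl start pve-cluster corosync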
Hello Community. Does anyone have KINGSTON SFYRD 4000G drives in a Ceph cluster? We have built a cluster on them and are seeing very high latency at low load. There are no network or CPU issues.
Ceph version is 17.2.7; the cluster is built on LACP-bonded Intel 25G network cards, Dell R450 servers, 256 GB RAM...
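For pinning down where the latency comes from, a sketch of the checks worth starting with: Ceph's own per-OSD latency view plus a direct sync-write fio run against one of the drives. Note the fio command below writes to the raw device and is destructive, so it should only be pointed at an empty or spare drive (/dev/sdX is a placeholder):

# commit/apply latency per OSD as Ceph measures it
ceph osd perf

# raw 4k sync-write latency of the drive itself (DESTRUCTIVE to /dev/sdX)
fio --name=synctest --filename=/dev/sdX --rw=write --bs=4k --iodepth=1 \
    --numjobs=1 --fsync=1 --direct=1 --runtime=60 --time_based

Consumer NVMe drives without power-loss protection often show exactly this pattern (high latency on sync-heavy Ceph workloads even at low load), so the raw fio numbers are usually the most telling.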
Hello,
For the past two weeks, I've been encountering an issue where I can no longer clone or move a disk to Ceph storage. Here’s the cloning output:
create full clone of drive scsi0 (Ceph-VM-Pool:vm-120-disk-0)
transferred 0.0 B of 32.0 GiB (0.00%)
qemu-img: Could not open...
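Since the qemu-img error itself is cut off above, a sketch of generic checks that usually narrow this down; the storage ID Ceph-VM-Pool is taken from the output, but the underlying RBD pool name may differ, so adjust -p accordingly:

# can the node reach the pool at all?
rbd -p Ceph-VM-Pool ls | head

# does Proxmox consider the storage active, and can it list volumes on it?
pvesm status
pvesm list Ceph-VM-Pool | head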
Hello,
I need to use RBDs with a custom object size, different from the default of 4 MB (object order 22).
While it is possible to create one from the command line:
rbd -p poolName create vm-297-disk-1 --size 16G --object-size 16K
I don't know how to import it so that it is available inside an LXC container at some mount point.
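A sketch of the approach that is sometimes suggested, assuming the RBD pool is configured as a Proxmox storage (called poolName here, matching the command above) and the image keeps the vm-<CTID>-disk-N naming Proxmox expects; I'd try it on a throwaway container first:

# attach the pre-created image to container 297 as a mount point
pct set 297 -mp0 poolName:vm-297-disk-1,mp=/mnt/data

# verify the container config now references it
pct config 297 | grep mp0

Note that an image created directly with rbd has no filesystem on it yet, so it would still need one (e.g. mkfs.ext4 on the mapped /dev/rbd device) before the container can actually use the mount point.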
In the process of putting together a plea for help on how to get my cluster back together (with copies of /etc/network/interfaces, /etc/hosts, and /etc/corosync/corosync.conf for each of my 3 nodes), I found the mismatches and remembered to increment the config version by one. Now corosync is back...
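For anyone hitting the same thing, this is the field in question: corosync ignores an edited corosync.conf unless config_version is bumped, and every node must end up with an identical file (values below are illustrative):

# /etc/pve/corosync.conf (fragment)
totem {
  cluster_name: mycluster
  config_version: 12   # must be incremented on every edit
}

On a healthy cluster the edit is normally made in /etc/pve/corosync.conf so that pmxcfs propagates it to all nodes automatically.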
Hello everyone!
I have a question regarding Ceph on Proxmox. I have a Ceph cluster in production and would like to rebalance my OSDs, since some of them are reaching 90% usage.
My pool was manually set to 512 PGs with the PG Autoscale option OFF, and now I've changed it to PG Autoscale ON.
I...
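A sketch of the commands usually involved when evening out OSD usage; the upmap balancer generally does a better job than changing PG counts alone, and on a cluster with OSDs near 90% it is worth checking what the autoscaler intends to do before it starts moving data:

# see what the autoscaler would change before it moves anything
ceph osd pool autoscale-status

# how unevenly the OSDs are filled right now
ceph osd df tree

# let the built-in balancer even things out using upmap
ceph balancer mode upmap
ceph balancer on
ceph balancer status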