I am building out a 3-node cluster for home lab use. Each node has (2) 2.5 Gb NICs, which currently plug into 1 Gb ports on my switch, (2) SFP+ ports, and (2) 40 Gb ports. I currently have the 40 Gb ports configured in a mesh network. I would like to separate everything out to avoid network...
I have a pretty basic need for a small business I own. Basically, there is one VM that I need to run, and in essence I want HA, in the sense that if the node fails, the VM is taken over by another Proxmox server. I'd prefer it to use shared ZFS storage, and I have this running in a lab right now. I've stayed...
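For context, the building blocks Proxmox offers for this look roughly like the sketch below. The VM ID (100), target node name (pve2), and schedule are assumptions for illustration, not taken from the post; note that ZFS here means asynchronous replication, so a failover can lose up to one replication interval of data.

pvesr create-local-job 100-0 pve2 --schedule "*/5"   # replicate the VM's ZFS disks to pve2 every 5 minutes
ha-manager add vm:100 --state started                # let the HA manager restart the VM elsewhere on node failure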
Hello everyone,
While migrating one of my Proxmox nodes from version 7.4-3 to version 8.2-4, booting with the 6.8.12-1 kernel that ships with the new version produces the following error:
libceph: mon1 (1)192.168.169.20:6789 socket closed (con state V1_BANNER)
libceph: mon5...
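If the cause is the newer 6.8 kernel client negotiating against mons that only advertise the v1 protocol (an assumption; the truncated log alone doesn't confirm it), two quick checks are:

ceph mon dump           # each mon should list a v2 address (port 3300) alongside v1 (port 6789)
ceph mon enable-msgr2   # enable msgr v2 on all mons if only v1 addresses appear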
Hello!
I wanted to ask a quick question because I'm not totally sure at the moment and don't want to risk any of my VMs. I was forced to replace a node in my cluster urgently, which worked perfectly fine. My Ceph cluster is currently in the remapping+backfilling state because of the newly added OSDs...
With a view to buying a Dell server with a BOSS-N1 card in RAID 1 for the system disks (for Proxmox itself; I don't think there is a problem on that side), I would like to know whether the HBA465i that Dell offers in its quote will allow Proxmox to see the SSD disks so I can set up Ceph OSDs.
I have looked through many similar questions but could not find an exact answer.
In my case I have three servers whose HA network is on the 65 subnet; they are connected to one another over 10 Gb links. One VM runs on each node, and each node is part of a cluster.
I tried to tune my Pacemaker...
tl;dr
Changing %i to the corresponding name makes the mon service work.
One of my mons kept dying and restarting and could not start again, so I investigated.
It could not start due to a misconfiguration in the /etc/systemd/system/ceph-mon.target.wants/ceph-mon@pve2.service file at the "%i" variable, which points to...
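As a sketch of the fix described above, assuming the stock ceph-mon@.service template's ExecStart line (pve2 is the mon name from the unit path):

# Original line, where the %i specifier failed to expand:
#   ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id %i --setuser ceph --setgroup ceph
# Replaced with the literal mon name:
#   ExecStart=/usr/bin/ceph-mon -f --cluster ${CLUSTER} --id pve2 --setuser ceph --setgroup ceph
systemctl daemon-reload
systemctl restart ceph-mon@pve2

The cleaner long-term fix is usually to make the .wants entry a symlink to the ceph-mon@.service template again, so %i expands from the instance name by itself.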
Hello,
I am looking for a way to improve random read and write performance on a virtual machine running Windows Server 2022. VM configuration:
agent: 1
boot: order=virtio0;ide2;net0;ide0
cores: 6
cpu: qemu64
machine: pc-i440fx-9.0
memory: 16384
meta: creation-qemu=9.0.0,ctime=1724249118...
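For comparison, a hedged sketch of settings commonly tried for random-I/O-heavy Windows guests; the storage and disk names below are placeholders, and cache=writeback trades crash safety for speed:

cpu: host                                                                  # expose host CPU features instead of generic qemu64
virtio0: <storage>:vm-<vmid>-disk-0,iothread=1,cache=writeback,discard=on  # dedicated I/O thread per disk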
Hello,
I have a handful of bare-metal servers that I would like to migrate to Proxmox VMs. As I move the data off the servers, I can reuse the hardware to grow the Proxmox cluster. Each server is currently configured as a RAID 5 with roughly 20 terabytes available. All storage is on HDDs, and there...
The Headline:
I have managed to kick all 3 of my nodes from the cluster and wipe all configuration for both PVE and CEPH. This is bad. I have configuration backups, I just don't know how to use them.
The longer story:
Prior to this mishap, I had Proxmox installed on mirrored ZFS HDDs. I...
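For what it's worth, the usual pattern for feeding such backups back in looks roughly like this (a sketch, assuming the backups are plain copies of the files that lived under /etc/pve):

systemctl stop pve-cluster corosync
pmxcfs -l                     # start the cluster filesystem in local, writable mode
# copy the backed-up files (corosync.conf, storage.cfg, qemu-server/*.conf, ...) back into /etc/pve
killall pmxcfs
systemctl start pve-cluster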
Hello Community. Does anyone have KINGSTON SFYRD 4000G drives in a Ceph cluster? We have built a cluster on them and are seeing very high latency at low load. There are no network or CPU issues.
The Ceph version is 17.2.7; the cluster is built on LACP-bonded Intel 25G network cards, Dell R450 servers, 256 GB RAM...
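To narrow this down to the drives themselves (a hedged suggestion, not from the original post), Ceph can report per-OSD latency directly:

ceph osd perf           # commit/apply latency per OSD; a few consistently high entries point at specific drives
ceph tell osd.0 bench   # raw write benchmark against a single OSD, bypassing client-side effects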
Hello,
For the past two weeks, I've been encountering an issue where I can no longer clone or move a disk to Ceph storage. Here’s the cloning output:
create full clone of drive scsi0 (Ceph-VM-Pool:vm-120-disk-0)
transferred 0.0 B of 32.0 GiB (0.00%)
qemu-img: Could not open...
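A few hedged first checks for whether the node can still open images on that storage at all (this assumes the Proxmox storage name Ceph-VM-Pool matches the underlying RADOS pool name):

pvesm status                            # is the Ceph storage online from this node?
rbd -p Ceph-VM-Pool ls                  # does the keyring still permit listing the pool?
rbd status Ceph-VM-Pool/vm-120-disk-0   # are stale watchers holding the image open?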
Hello,
I need to use RBDs with a custom object size different from the default (4 MB).
While it is possible to create one via the command line:
rbd -p poolName create vm-297-disk-1 --size 16G --object-size 16K
But I don't know how to import it so that it is available to an LXC container as a mount point.
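One hedged route, assuming the Ceph pool is configured in Proxmox as a storage named poolName and the container ID is 297; note the hand-made image still needs a filesystem (e.g. mkfs.ext4 on the mapped device) before a container can mount it:

pct rescan --vmid 297                                   # picks up the unreferenced vm-297-disk-1 as an unusedN entry
pct set 297 -mp0 poolName:vm-297-disk-1,mp=/mnt/data    # attach it to the container as a mount point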
In the process of putting together a plea for help on how to get my cluster back together (with copies of /etc/network/interfaces, /etc/hosts, and /etc/corosync/corosync.conf for each of my 3 nodes), I found the mismatches and remembered to increment the config version by one. Now corosync is back...
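For anyone who hits the same thing: the version lives in the totem section of /etc/corosync/corosync.conf and must be incremented on every edit and kept identical across nodes (values below are illustrative):

totem {
  cluster_name: mycluster
  config_version: 4    # bump this by one on every change
  ...
}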
Hello everyone!
I have a question regarding Ceph on Proxmox. I have a Ceph cluster in production and would like to rebalance my OSDs, since some of them are reaching 90% usage.
My pool was manually set to 512 PGs with the PG Autoscale option OFF, and now I've changed it to PG Autoscale ON.
I...
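The command-line counterparts of those steps look roughly like this (hedged; <pool> is a placeholder):

ceph osd pool autoscale-status                  # current vs. suggested PG counts per pool
ceph osd pool set <pool> pg_autoscale_mode on   # same effect as flipping the GUI toggle
ceph osd reweight-by-utilization 110            # optionally nudge data off OSDs more than 10% above mean usage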
Hi
Has anyone taken a Dell VxRail (VMware and vSAN), wiped it, and reprovisioned it as Proxmox with Ceph?
We have a couple of clusters with Dell servers connected via Dell 10Gb/100Gb switches. Can these be reused, or are they restricted in their BIOS etc.?
Just working out if we...
I love how easy Proxmox makes it to set up a three-node HA cluster with Ceph. I like to use it for my VMs and SQL databases that require a lot of IOPS. This way, if one of the nodes goes down, my SQL DB VM can be quickly redeployed to another node.
Good afternoon.
In my homelab I want to make a 3-node Proxmox cluster with Ceph. I also want to add a 4th separate host with PBS for backups. Each node in the Proxmox cluster will have an SSD for a 3/2 replicated Ceph pool for VM/CT disks. I also want to add a spinning HDD to each node for...
Proxmox Lab Setup - need advice
Some old R730s are sitting around in our office, and I would like to build a Proxmox Ceph cluster lab under production-like conditions.
I have the following at my disposal:
4x R730, 8-core Xeon, 96 GB DDR4 each, with 8 SFF disk slots and an HBA330 SAS controller
6x 480GB...