Well, if I try to add a monitor, all I ever get is a timeout.
Otherwise:
# ceph status
  cluster:
    id:     20b519f3-4988-4ac5-ac3c-7cd352431ebb
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum 1
    mgr: 1(active), standbys: 0, 2, 3
    osd: 12 osds: 12 up, 12 in

  data...
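For completeness, this is roughly what I have been checking on the node where the new monitor was created (the mon ID below is a placeholder, not my actual node name):
# ceph mon dump
# systemctl status ceph-mon@<mon-id>
# journalctl -u ceph-mon@<mon-id> -n 50
The journal of the mon unit is where I would expect to see why it cannot join the quorum, but maybe I am looking in the wrong place.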
I saw some threads on something similar but none of them looked like the issue I have. Apologies if I overlooked a thread already covering this.
Anyway:
I cannot create a new monitor on my cluster. Or rather, it gets created but never gets quorum.
I tried doing this via:
- Web UI
- pveceph...
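For reference, the CLI attempt looks roughly like this (assuming the pveceph syntax of PVE 5.x; newer releases use 'pveceph mon create' instead):
# pveceph createmon
# ceph mon stat
# ceph quorum_status --format json-pretty
The daemon gets created, but it never shows up as part of the quorum before the command times out.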
OK, there was still a mismatch with regard to packages. I had used Luminous before, but with packages from the Ceph repo, not the PVE one. After uninstalling with --purge and reinstalling via pveceph install, everything works now.
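In case anyone else runs into this, the cleanup boiled down to something like the following (package list from memory, adjust for your setup; on PVE 5 the --version flag may not even be needed since Luminous is the default):
# apt-get remove --purge ceph ceph-base ceph-common ceph-mon ceph-osd ceph-mgr
# pveceph install --version luminous
After that, creating the monitor worked as expected.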
Hi there,
after upgrading a test cluster to PVE 5.5, everything worked fine, including Ceph.
However, if I try to add another OSD, it does all the preparation but is unable to start the OSD.
What is immediately obvious is this:
The partition layout is different from that of the old disks:
using gdisk...
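For reference, this is roughly how I compared the two layouts (device names are placeholders for one old and one new OSD disk):
# gdisk -l /dev/sdX    (existing OSD, created before the upgrade)
# gdisk -l /dev/sdY    (newly prepared OSD that does not start)
# ceph-disk list
gdisk -l prints the GPT partition table, which is where the difference shows up.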
Thanks for the feedback. I had consulted https://pve.proxmox.com/wiki/Multicast_notes, yes, but to no avail.
When you say I could do without IGMP snooping, would that not mean falling back to unicast? Or do I have my networking facts all mixed up? :-)
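To be specific, the checks from that page I went through are the two omping runs (started on all nodes at the same time; hostnames below are placeholders):
# omping -c 10000 -i 0.001 -F -q node1 node2 node3
# omping -c 600 -i 1 -q node1 node2 node3
The second run takes roughly ten minutes, which should be long enough to catch IGMP snooping problems that only appear after the snooping timeout.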
Hi all,
I have the following cluster setup:
3 nodes
a Netgear XS712T 10G switch
node 1 uses port 1 on the switch, node 2 port 2, etc.
port 12 on the switch is my uplink to the internet.
I use VLANs to separate different kinds of traffic.
VLAN 5 is for internal PVE traffic with ports 1, 2...
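To illustrate, on the PVE side the cluster VLAN looks roughly like this in /etc/network/interfaces (interface name, bridge name and addresses are simplified placeholders, not my actual config):

auto vmbr0
iface vmbr0 inet manual
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
        bridge_vlan_aware yes

# VLAN 5: internal PVE / cluster traffic
auto vmbr0.5
iface vmbr0.5 inet static
        address 192.168.5.11
        netmask 255.255.255.0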
I am not sure I understand the concern?
It is not like you need a subscription in order to use Proxmox VE.
Rather, you need a subscription to get the (more) stable repo.
There is a big difference.
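To make that concrete, these are the two repository lines on a PVE 5 / Debian stretch system (adjust the release name for other versions):

deb https://enterprise.proxmox.com/debian/pve stretch pve-enterprise
(in /etc/apt/sources.list.d/pve-enterprise.list, only accessible with a valid subscription)

deb http://download.proxmox.com/debian/pve stretch pve-no-subscription
(in /etc/apt/sources.list, free to use, but less heavily tested)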
Hi all, I just tried to enable the 'iothread' flag for a disk on a test VM:
The disk is configured like this (from /etc/pve/qemu-server/100.conf):
scsi0: disks:vm-100-disk-1,cache=writeback,discard=on,size=4G
The SCSI backend is VirtIO.
The popup allows me to select the checkbox, but clicking...
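For reference, this is what I expect the config to end up as once the flag is set (my understanding is that iothread on a scsiX disk only takes effect with the virtio-scsi-single controller, so scsihw would have to change too):

scsihw: virtio-scsi-single
scsi0: disks:vm-100-disk-1,cache=writeback,discard=on,iothread=1,size=4G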
Hi all,
does anything within Proxmox depend on systemd? Or could I remove it and revert to sysvinit? I do that on other boxes I administer, but wonder whether it is possible on Proxmox.
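For what it's worth, the way I would normally check this on a plain Debian box is a simulated removal, which lists everything apt would drop together with systemd:
# apt-get -s remove systemd
# apt-cache rdepends --installed systemd
I just don't know whether the PVE packages add dependencies on top of that.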
Thanks,
Martin
Re: Move Ceph journal to SSD?
Here's an additional thought:
The drives are all attached to an Adaptec 8805 SAS Controller capable of using SSDs for caching.
Any idea how that would compare to putting only the journal on the SSD?
Martin
Re: Move Ceph journal to SSD?
Thank you, Udo, for the great and detailed reply.
I have ordered my S3700 SSDs and will implement this as you suggested. I'll give feedback once that's done.
It might take a bit because I will have to wait for the right trays so I can put the SSDs in my Supermicro...
Hi all,
I was thinking about adding a (server-class) SSD to each of my three Ceph nodes.
Currently, each node has two OSDs and the journal for each is on the drive itself.
Now, I have a few questions about concepts:
- Can the two journals reside on the same partition on the SSD?
- Or do I have to...
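In case it matters for the answer, what I had in mind is roughly this (pveceph syntax of PVE 4.x/5.x, device names are placeholders; /dev/sdX is the spinner, /dev/sdY the SSD):
# pveceph createosd /dev/sdX -journal_dev /dev/sdY
As far as I understand, ceph-disk would then create a separate journal partition on the SSD for each OSD, but I am not sure whether that is required or whether two journals could share one partition.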