Thanks a lot for the reply, @aaron.
We fully understand the risk when an OSD goes down/out, and under our "per node" redundancy policy we never remove two OSDs on separate nodes.
I just asked whether it is OK to remove an OSD when Ceph is healthy and no PG is in a degraded state, but the remap/backfill process is still running (no...
Hello,
This is not an issue report, but I would really appreciate your answers.
We have a 12-node PVE cluster with 54 Ceph OSDs (all SSD).
PVE 6.1.5
Ceph 14.2.5
OSD count 54
PG count 2048
Replica 3
OSDs are on RAID0 (we know this is not a supported configuration, but we have to do it this way. And BTW, OSDs in...
Hi,
Sorry for bumping old threads, but I have the same issue, with some exceptions.
Freshly installed Proxmox 6-2.4 (with all the latest updates), a 4-node cluster connected through a 10G switch, with a Synology NAS in the same broadcast domain and IP network (no firewall, no ACLs on the switch side).
Before cluster...
lucentwolf, did you reboot the node after removing the partitions? I tried to mount the disks after removing the partitions and got the same error. Only after rebooting the node were all disks available to Ceph.
Hi lucentwolf, yes, ceph-volume lvm list didn't work either. I found a workaround for my issue (formatting the disk from a LiveCD).
Also I found something looks like our issue https://github.com/ceph/ceph/pull/23532
Hope this is helpful.
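For what it's worth, the LiveCD step can often be avoided; a rough sketch of the usual cleanup, assuming the standard wipefs/sgdisk/partprobe tools (the device path and the helper function are illustrative, not something PVE-specific):

```python
import subprocess

def wipe_disk(disk, dry_run=True):
    """Clear filesystem/LVM/GPT signatures so Ceph will accept the disk again.

    With dry_run=True the commands are only printed, not executed.
    """
    cmds = [
        ["wipefs", "--all", disk],      # remove fs/LVM/RAID signatures
        ["sgdisk", "--zap-all", disk],  # destroy GPT and MBR structures
        ["partprobe", disk],            # ask the kernel to re-read the table
    ]
    for cmd in cmds:
        if dry_run:
            print(" ".join(cmd))
        else:
            subprocess.run(cmd, check=True)
    return cmds

wipe_disk("/dev/sdX")  # dry run: only prints the three commands
```

Running partprobe at the end may also save the reboot mentioned above, though I haven't verified that on every kernel.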
Hello,
Sorry in advance; to avoid opening a new, similar topic I'll write here.
I have an issue with the ceph-volume lvm zap command too.
I use PVE 6.0.1 with Ceph for testing purposes. Now I have to migrate this cluster to production. I made a clean install (PVE + upgrade + Ceph) on each of the 4 nodes, create...
You are using a public IP.
1. Check the FQDN in DNS or in the hosts file on the WHMCS side.
2. Check connectivity from WHMCS to all cluster nodes on all necessary ports, via both IP and DNS name.
Your problem is outside the scope of PVE.
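The connectivity check in step 2 can be scripted; a minimal sketch (the node names are placeholders, and I'm assuming the default PVE API port 8006 plus SSH on 22 -- adjust to what WHMCS actually uses):

```python
import socket

def reachable(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # DNS failure, connection refused, or timeout
        return False

# Placeholder node names -- substitute your real cluster nodes.
for node in ("pve1.example.com", "pve2.example.com"):
    for port in (8006, 22):
        print(node, port, "OK" if reachable(node, port) else "FAILED")
```

Run it from the WHMCS host against every node, by both IP and FQDN, to see where the path breaks.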
@Alwin Thanks a lot for your reply. We'll try to do it.
Thanks. Yes, I know about this product. But as you understand, the backup process is not only creating a VZDump and then backing up the delta. It also involves tape archiving, syncing with a disaster-recovery site, and so on. It's not easy to build the whole B&R process out of scripts...
Thanks a lot for your time.
Step by step.
1. iSCSI gateways? You mean this one: https://docs.ceph.com/docs/mimic/rbd/iscsi-targets/
Does Proxmox support it well? I can only use software iSCSI; some nodes' cards can't handle HW iSCSI.
2. OK, but I can't delete the existing pool :( I'll try from the CLI...
Hello,
Sorry in advance for the dumb questions, but I can't find in the docs how to connect VMware 5.5 hosts to CephFS via NFS. All the docs point to the official Ceph site. As this is a production cluster, I'm afraid of doing something very bad.
I have a 10-node PVE cluster and a Ceph cluster with 40 SSD OSDs and 10...
@kakao73, @ujiam, did you check where Proxmox was trying to find the Cloud-Init hard drive? When we hit this error on older versions, the VM disk was on our Ceph SSD pool, but in the VM hardware options the Cloud-Init disk was on our Dev/Test SAS pool. We remounted the correct Cloud-Init disk and the VM was able to start.
Which qemu version are you using?
We had a lot of problems with various Cloud-Init images on qemu-server versions 6.0.10 through 6.0.17; with 6.0.9 or the latest 6.1.1 everything is OK. Try one of those versions.
Hi, thanks a lot for the reply.
Finally, it works.
The answer was not in that topic, but it pushed me to re-check the physical network. One of the two 1GbE ports was not properly configured.
Thanks.
After a few days of reading, I decided to leave the corosync network on the 10G net.
But I have a problem with VLANs.
As I mentioned above, I have two 10G NICs without VLANs and two 1G NICs with VLANs configured on the switch.
I created 2 bonds: one regular Linux bond (bond1) for the 10G network, then created vmbr0 with...
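In case a concrete example helps anyone with a similar layout, here is a minimal /etc/network/interfaces sketch for two bonds plus a VLAN-aware bridge; the interface names, address, and VLAN range are assumptions, so adjust them to your hardware:

```
# /etc/network/interfaces (sketch)
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2          # 2 x 1G uplinks (VLANs tagged on switch)
        bond-miimon 100
        bond-mode active-backup

auto bond1
iface bond1 inet static
        address 10.10.10.11/24         # 10G network (storage/corosync)
        bond-slaves enp3s0f0 enp3s0f1  # 2 x 10G ports
        bond-miimon 100
        bond-mode active-backup

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes          # let VM NICs carry VLAN tags
        bridge-vids 2-4094
```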
Hi All,
I'm going to migrate to Proxmox and I have a question about the cluster network.
I have servers with 4 NICs:
2 x 1GbE for uplinks, connected to separate switches in active-standby mode, which connect to the core router; all VMs obtain real IPs.
2 x 10Gbe connected to separate switches in...