Here is the main ceph config.
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.10.11.0/24
public network = 10.10.11.0/24
My goal, again, is to have an iptables rule at the host level to block VMs from accessing...
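To make it concrete, here is a minimal sketch of the kind of host-level rule I mean, assuming the Ceph network is 10.10.11.0/24 and VM traffic reaches the host through bridge ports (the physdev match and the interface name are my assumptions, not something already in place):

# drop bridged (VM) traffic headed for the ceph network; the physdev
# match only fires on traffic that crossed a bridge port
iptables -A FORWARD -m physdev --physdev-is-bridged -d 10.10.11.0/24 -j DROP
# belt and suspenders: drop anything arriving via the WAN bridge too
iptables -A FORWARD -i vmbr0 -d 10.10.11.0/24 -j DROP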
Apologies, I pasted the config from the wrong cluster. We have 6 different PVE clusters.
Here is the correct one.
[global]
auth client required = cephx
auth cluster required = cephx
auth service required = cephx
cluster network = 10.10.11.0/24
fsid = 7b2f35a9-cd80-432b-9ef1-7b35259bc707
keyring...
I feel dumb here, but I just can't make this DENY rule work.
I have a 10-node cluster.
vmbr0 is my WAN.
vmbr1 and vmbr2 are my Ceph bond/LACP, 2x 10Gbit SFP per node.
My vmbr0 is an entire /24 public IP range, no NAT.
vmbr0 is the only interface assigned to each VM in the cluster...
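For completeness, a hedged sketch of what one node's /etc/network/interfaces looks like for this layout; the physical NIC names and the host address are assumptions, the rest is as described above:

auto bond0
iface bond0 inet manual
    # the 2x 10Gbit SFP ports per node (assumed NIC names)
    bond-slaves enp65s0f0 enp65s0f1
    # 802.3ad = LACP
    bond-mode 802.3ad

auto vmbr1
iface vmbr1 inet static
    # ceph network address for this node (example value)
    address 10.10.11.11
    netmask 255.255.255.0
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0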
I waited, but there was still no patch in the next update from Proxmox, and I need live migration working again.
The guide you wrote is good, but I have some questions.
1. Do I need to be in the directory of the file I am patching?
2. If so, where is that file?
I created a file called...
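In case it helps anyone else, here is how I ended up understanding it, as a hedged sketch; the module path and patch filename are assumptions for my setup:

# find where the file being patched actually lives
dpkg -S QemuServer.pm

# you do not need to sit in the file's own directory; run patch from
# the directory matching the diff's path prefix and strip one level
cd /usr/share/perl5
# dry run first, writes nothing
patch -p1 --dry-run < ~/cloudinit-fix.patch
patch -p1 < ~/cloudinit-fix.patch

# restart the services so the patched Perl code gets reloaded
systemctl restart pvedaemon pveproxy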
I have a small 3-node PVE cluster, all running 5.4-3. My RAM usage has been growing on my hosts, but I have not added any new data or VMs to the cluster.
Each node has 8x 2TB drives
and 2x 120GB SSDs for WAL/DB and the OS.
64GB RAM per host.
Most of the usage is
ceph-osd at 6% "PER...
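For the math: 6% of 64GB is about 3.8GB per ceph-osd process, and with 8 OSDs per host that is roughly 30GB for the OSDs alone, which is within the normal range for BlueStore rather than necessarily a leak. A hedged way to confirm, plus a cache cap to experiment with (the option name assumes the Ceph Luminous that ships with PVE 5.4, and the value is only an example):

# confirm per-OSD resident memory on a host
ps -eo pid,rss,comm --sort=-rss | grep ceph-osd

# in /etc/pve/ceph.conf, then restart OSDs with: systemctl restart ceph-osd.target
[osd]
# 512 MiB per HDD-backed OSD (example value, below the default)
bluestore cache size hdd = 536870912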
I want to apply this patch, but I am not sure how.
And the code:
use list_images to check for existence of cloudinit disk instead of
'-e'. this should solve the problem with rbd where the path returned by...
Hi, new information on this error.
I found that if I remove the cloud-init drive from the VM in question, I can then live migrate it to any node.
BUT if I re-add the cloud-init drive to the VM after migration, the VM will NOT start and gives the error:
rbd: create error: (17) File...
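Error code (17) is EEXIST, so it looks like a stale cloud-init image got left behind on the rbd pool during migration. A hedged way to check and clean up; the pool name and VMID are placeholders for whatever applies:

# list rbd images in the pool and look for a leftover cloudinit disk
rbd ls -p <pool> | grep -i cloudinit

# if a stale vm-<vmid>-cloudinit image exists that the VM config no
# longer references, removing it should let PVE recreate the drive
# (double-check nothing still uses it first)
rbd rm <pool>/vm-<vmid>-cloudinit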
My PVE cluster has been running great for many months, but I just got around to updating all the PVE nodes to the latest version, PVE 5.4-3.
I started live migrating all my VMs to another node, and that worked great.
I then set node-out on the first node and I...
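For reference, the bulk migration itself was nothing fancy; a hedged sketch of the loop I mean, with the target node name as a placeholder:

# live-migrate every running VM off this node
target=pve02   # placeholder node name
for id in $(qm list | awk 'NR>1 && $3=="running" {print $1}'); do
    qm migrate "$id" "$target" --online
done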
I just set the default INPUT policy to ALLOW in the PVE firewall and then added a BLOCK-ALL IPv4/IPv6 TCP/UDP rule at the end of my allow list. Everything works fine on my cluster now; proper traffic is getting blocked, and allowed traffic is allowed.
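For anyone who lands here later, a hedged sketch of what that looks like in /etc/pve/firewall/cluster.fw; the two allow rules are just examples standing in for my real allow list:

[OPTIONS]
enable: 1
policy_in: ACCEPT

[RULES]
IN ACCEPT -p tcp -dport 22 # example allow rule: SSH
IN ACCEPT -p tcp -dport 8006 # example allow rule: PVE web GUI
IN DROP -p tcp # the BLOCK-ALL catch-all, TCP
IN DROP -p udp # the BLOCK-ALL catch-all, UDP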
Thanks so much everyone...
I don't mean to be bugging the community, but I would like some help on this firewall issue; I am sure it's simple.
I did what Wolfgang suggested and did not add any rules for my corosync and Ceph interfaces. With the firewall on, I can ping on them fine, and that part works.
But I still lose quorum and nodes...
I tried just not adding a rule to the corosync and Ceph interfaces, but my cluster still drops quorum when I enable the firewall. I only have rules on my vmbr0, which is my WAN interface, and did not add rules to the other interfaces, and yet enabling the firewall still breaks Ceph and...
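If explicit rules do turn out to be needed, this is the hedged set of allows I would try for the cluster traffic; the ports are the corosync and Ceph defaults, and the source network is an assumption based on my ceph.conf:

IN ACCEPT -source 10.10.11.0/24 -p udp -dport 5404:5405 # corosync
IN ACCEPT -source 10.10.11.0/24 -p tcp -dport 6789 # ceph mon
IN ACCEPT -source 10.10.11.0/24 -p tcp -dport 6800:7300 # ceph osd/mgr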
Sorry, I have had a lot of questions on here lately, and I really do appreciate the help.
I finally got my PVE firewall to enable and it's working, but once I enable the firewall all nodes lose quorum. From the PVE GUI I can still access all nodes in the cluster, and Ceph and...
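Here is what I run to confirm the quorum loss while the firewall is on (standard PVE/corosync tools; the last line just checks whether any rule mentions corosync's default UDP ports):

# cluster membership and quorum state
pvecm status
# corosync ring status per node
corosync-cfgtool -s
# any rule touching corosync's ports?
iptables-save | grep 5404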