Hi! Our little hyperconverged cluster was born 2 years ago with 3 nodes and 12 OSDs in total. The node count has more than tripled since then, and the cluster now has 10 nodes and 40 OSDs in total: 512 PGs are not enough anymore (for example, we have to rebalance often) and we'd like to increase the PG count...
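For reference, a sketch of the usual sizing rule and the commands involved. The pool name is a placeholder, and the target below assumes 3 replicas (an assumption, adjust for your pool's size):

```shell
# Rule of thumb for total PGs across a pool:
#   (number of OSDs * 100) / replica size, rounded up to a power of two.
osds=40
size=3
target=$(( osds * 100 / size ))   # integer division
echo "$target"                    # 1333 for 40 OSDs, size 3 -> round up to 2048

# On Luminous, pg_num can only be increased, never decreased.
# Raise pg_num first, then pgp_num to the same value so data actually moves:
#   ceph osd pool set <pool> pg_num 2048
#   ceph osd pool set <pool> pgp_num 2048
```

Increasing in smaller steps (e.g. 512 at a time) keeps the rebalance load manageable on a hyperconverged cluster.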
Hello!
A few days ago, a node of our production Proxmox VE 5.4-13 7-node cluster was allocating much more RAM than the sum of the RAM limits configured for the VMs running on it. Using the top command I found out that the KVM process of a VM with a 4 GB limit was allocating 60 GB! I'd...
I post our experience in case it can be useful:
We've run a 7-node Proxmox VE 5 HA cluster with Ceph for a couple of years. Each node has 5 NICs: 2 bonded (active-backup) 10G NICs for LAN, 2 bonded (active-backup) 1G NICs for WAN, and the last NIC runs backups on a separate network.
So Corosync...
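For illustration, a minimal active-backup bond on Debian/Proxmox could look like the fragment below in /etc/network/interfaces. The interface names and address are placeholders, not our actual configuration:

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-mode active-backup
    bond-primary eno1
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

Active-backup needs no switch-side configuration, which is why it is a common choice for redundancy-only bonds like these.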
Thank you so much! We checked, and on 6 out of 7 Proxmox nodes the variables net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables were both set to 0. The only node with those variables set to 1 was the only node on which the firewall worked. It was also the node with the...
Every rule is completely ignored. For example, a machine with no rules and a DROP input policy (so every incoming port should be filtered) is instead reachable on every port.
I confirm that the firewall is enabled on the VMs' NICs (and in their options too). But sysctl returns this instead... what does it mean?
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
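A value of 0 means traffic crossing the Linux bridge bypasses netfilter entirely, so no firewall rule can match it. Normally pve-firewall manages these sysctls itself; to set them by hand, a sketch (run as root, and note that the settings only exist once the br_netfilter bridge module is loaded):

```shell
# Make bridged traffic traverse iptables/ip6tables:
sysctl -w net.bridge.bridge-nf-call-iptables=1
sysctl -w net.bridge.bridge-nf-call-ip6tables=1

# Persist across reboots (filename is an arbitrary choice):
cat > /etc/sysctl.d/99-bridge-nf.conf <<'EOF'
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system
```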
Hi! I have a 7-node production cluster with HA and Ceph storage. Every node runs Proxmox 5.4. I recently found out that the firewall is not working at all (it was working when we last checked, a few months ago): every port of every VM is open even though, per policy, it should be closed!
It seems...
Thank you very much Alwin, you were right: my data distribution was not balanced. I ran
ceph osd reweight-by-utilization
preceded by
ceph osd test-reweight-by-utilization
just to make sure of what was about to happen. Ceph moved some data (not much, just a few gigabytes in a few minutes) and MAX...
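For completeness, both commands also accept an overload threshold as a percentage of the average OSD utilization (the default is 120). The value 110 below is just an example:

```shell
# Dry run: show which OSDs would be reweighted at a 110% threshold
# (lower thresholds touch more OSDs; nothing is changed yet)
ceph osd test-reweight-by-utilization 110

# Apply the same reweighting for real
ceph osd reweight-by-utilization 110
```

Running the test variant first, as above, is the safe way to preview how much data movement to expect.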
Hi! We have a 5-node Proxmox+Ceph cluster (we use the same nodes for compute and distributed storage). We have LZ4 compression enabled, and it works pretty well (we're saving more than 16%). My ceph df detail looks like this:
As you can see, I have 15 TB of physical storage (each of the 5...
As the title suggests... I've taken a snapshot of a running KVM machine (including RAM) on my Ceph cluster. It worked, but now I'd like to know how much space it takes on the Ceph cluster, and whether that space will stay the same forever or grow in the future. Thank you very much!
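One way to inspect this is rbd du, which reports per-snapshot usage. The pool and image names below are placeholders for whatever your storage uses:

```shell
# Provisioned vs. actually used space, broken down per snapshot:
rbd du <pool>/vm-<vmid>-disk-1

# List the snapshots on the image:
rbd snap ls <pool>/vm-<vmid>-disk-1
```

Because RBD snapshots are copy-on-write, the space attributed to a snapshot grows over time as the live image diverges from it, so it will not stay the same forever.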
I had the same issues and solved them using chrony. My 5-node production cluster running Proxmox VE 5.2 and Ceph 12.2 has now run for 9 months with no more "clock skew" problems since I installed chrony. Please see...
I had the same error ("output file is smaller than input file") while importing a VHD disk into a VM using a Ceph storage target (with the "qm importdisk" command). I solved it by first converting from VHD to raw using "qemu-img convert"; after that, "qm importdisk" worked!
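A sketch of the two steps; file names, the VM ID, and the storage name are placeholders:

```shell
# qemu calls the VHD format "vpc"; convert it to raw first:
qemu-img convert -f vpc -O raw disk.vhd disk.raw

# Then import the raw image as a disk of VM <vmid> on the Ceph storage:
qm importdisk <vmid> disk.raw <ceph-storage>
```

Converting to raw first avoids the size mismatch, since RBD volumes are raw block devices anyway.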
I get your point and it does make sense. But I tried to synchronize 3 different nodes against a single NTP source a few hundred kilometers away, and they ended up 10 seconds apart from each other: that should be impossible, so my system probably had some sort of conflict with ntpd I...
@Alwin I didn't configure chrony at all, leaving everything at the defaults. It uses "2.debian.pool.ntp.org", and the pool's selection algorithm works perfectly: it automatically picks Italian NTP sources (my server infrastructure is in Italy).
With ntpd I had weird results, with misalignments of several...
I just built a brand new 3-node Proxmox 5.1 + Ceph cluster and had severe clock skew problems. timesyncd was not precise enough; even ntpd failed. After a lot of research and testing, I installed chrony and everything is finally stable! Here are the steps in Proxmox 5.1 to reliably disable...
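The switch-over can be sketched roughly as follows (run as root; on Debian, installing chrony conflicts with and removes the ntp package automatically):

```shell
# Stop and disable the competing time daemons, whichever are present:
systemctl disable --now ntp 2>/dev/null || true
systemctl disable --now systemd-timesyncd

# Install chrony:
apt-get update && apt-get install -y chrony

# Verify it is tracking and which sources it picked:
chronyc tracking
chronyc sources -v
```

Once chronyc tracking reports a small system-clock offset on every node, the Ceph "clock skew" health warnings should clear.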