I'm using the same configuration as in the Proxmox docs here: https://pve.proxmox.com/wiki/Network_Configuration
"Use VLAN 5 with bond0 for the Proxmox VE management IP with traditional Linux bridge"
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet manual
auto bond0
iface bond0 inet...
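The full example on that page continues more or less like this (my sketch from memory; the 802.3ad bond mode, VLAN tag 5 and the 10.10.10.x addresses are just the placeholders from the docs, adjust them to your network):

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer2+3

iface bond0.5 inet manual

auto vmbr0v5
iface vmbr0v5 inet static
        address 10.10.10.2/24
        gateway 10.10.10.1
        bridge-ports bond0.5
        bridge-stp off
        bridge-fd 0

auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0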
Can you clarify a little more?
I have used this setup for many years under a heavy workload, I mean terabyte databases, webservers with 20k visits per week and so on, without any particular feeling of slowness. I understand that with SSDs I could push 100 times faster, but this doesn't mean that I...
Ok, but let's say you have a 50-node cluster and you want to improve the performance a little: do I have to replace all 50 nodes?
Or, on the contrary, let's say one server dies and I replace it with an older one: will that impact the performance of all the remaining 49?
I have an old 3-node Ceph cluster with HP Gen8 servers, 2x Xeon E5-2680 @ 2.70 GHz (turbo 3.50 GHz), 16 cores and 64 GB DDR3 RAM in each node.
We bought some almost new HP Gen10 servers with 2x Xeon Gold 6138 @ 2.00 GHz (turbo 3.70 GHz) and 128 GB DDR4 RAM in each node.
So there is a huge jump in terms...
I think the misunderstanding simply comes from the fact that, in the Proxmox GUI, the usage graph refers to the raw space used, including the replicas, so 40% (2.61 TiB of 6.55 TiB) means that within my cluster I have 2.61 TiB / 3, around 890 GiB, of real data occupied. So I'm still a long way from...
Sorry again, you were clear, but there is a part that I'm missing, surely due to inexperience: you are talking about a single OSD, but the 40% value refers to the total amount of data present in the entire cluster, so how can it be possible to have 40% used, with size 3, without any warning?
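Just for context, this is what I ran to double-check the per-OSD thresholds and usage on my side (as far as I understand, the nearfull/full warnings are evaluated per OSD, with defaults around 0.85 nearfull and 0.95 full):

ceph osd dump | grep ratio   # shows full_ratio / backfillfull_ratio / nearfull_ratio
ceph osd df tree             # per-OSD raw usage (%USE column)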
I have a 3-node Ceph cluster; each node has 4x 600 GB OSDs, and I have just one pool with size 3/2.
I was expecting that above 33% of used storage (I mean just data, no replicas) I would have received some warning message, but the cluster seems healthy above 40% and everything is green. I'm attaching some...
Probably I'm missing something here... just to clarify:
Nodes 1-3 are the only nodes that will have access to the Ceph storage. The Ceph cluster network and the Ceph public network are on the same 10 Gb connection, so this will be the speed between nodes 1-3. I will migrate VMs only between those 3...
They have local storage; they do not have access to the Ceph storage. They are just utility nodes, for things like a scanner server, NAS, workstation backups and other non-critical services. I will live migrate only between 1-3; 4-7 will never migrate. They are all together in one cluster just for...
Just to understand...
Of course, but what is the bottleneck? I have 10K SAS mechanical drives in the Ceph cluster; is a 10 Gb connection not enough? Of course I can upgrade the public LAN to SFP+, but for the amount of traffic in my company a 1 Gb link per server is enough...
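Just to reason with rough numbers (my assumptions: roughly 150-200 MB/s sequential per 10K SAS disk, 4 OSDs per node):

echo "4 * 200 * 8" | bc   # 4 OSDs x ~200 MB/s x 8 bits/byte = ~6400 Mbit/s of disk bandwidth per node

So in normal operation 10 Gb looks fine, but with the public and cluster networks sharing the same link, replication plus recovery/backfill traffic can come close to saturating it.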
Sure, I can use MLAG...
Sorry, why do I have to use so many interfaces? Is my scheme below wrong? I'm not interested in having failover on the public LAN, because I have many replacements for this switch, and this interface does not affect the health of the cluster; I can replace the public LAN switch without losing anything...
I'm planning a 7-node Proxmox cluster. Of those 7 nodes, 3 will have Ceph shared storage. Each node is equipped with 3x RJ45 and 2x SFP+ network interfaces.
I know it is best to have separate networks for Ceph, the Proxmox cluster and the LAN, but I was wondering whether it is a good idea to use a setup with...
Sorry for the late answer; anyway, it is still working after 7 years. I never lost any data and it has been running 24/7 in a production environment. Proxmox is always updated to the latest possible version, but my Ceph is now 16.2.11. I don't remember this kind of error, so probably it is something that happened in Ceph...
I noticed that in the package versions I have:
ifupdown: not correctly installed
ifupdown2: 3.1.0-1+pmx3
I upgraded to ifupdown2 before upgrading to PVE 7 to avoid those MAC address problems; is this "ifupdown: not correctly installed" correct? I'm thinking of some network issues during...
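For reference, this is where I'm reading those versions from, plus the dpkg status to double-check the package state (just a sketch of what I would run):

pveversion -v | grep ifupdown    # package versions as reported by Proxmox
dpkg -l ifupdown ifupdown2       # dpkg status flags for both packages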
I tried rebuilding all OSDs and now I have the new partition scheme with a single partition for each OSD. The above loop error is not present anymore, but the startup issue is still there.
What I noticed is this message, which, when a restarted node comes up again, is repeated continuously until...
Hi @Mikepop, did recreating all OSDs fix the issue? Was the problem at reboot similar to this: https://forum.proxmox.com/threads/proxmox-ceph-pacific-cluster-becomes-unstable-after-rebooting-a-node.96799/ ? Many thanks
Hmm... searching in the nodo2 syslog I found this symlink loop:
Sep 28 23:38:02 nodo2 systemd-udevd[1854]: sdb2: Failed to update device symlinks: Too many levels of symbolic links
Sep 28 23:38:02 nodo2 systemd-udevd[1852]: sde2: Failed to update device symlinks: Too many levels of symbolic links
Sep...
Any help with this? Yesterday I upgraded to the latest kernel and the new Ceph version, with the same issue... I'm attaching the syslogs of nodo1 and nodo2 around the reboot of nodo2.