Search results

  1.

    unknow interface type in gui when using vlan and bond with bridge

    I'm using the same configuration as in the Proxmox docs here https://pve.proxmox.com/wiki/Network_Configuration ("Use VLAN 5 with bond0 for the Proxmox VE management IP with traditional Linux bridge"): auto lo iface lo inet loopback iface eno1 inet manual iface eno2 inet manual auto bond0 iface bond0 inet...
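    The config quoted above is the wiki's "VLAN 5 with bond0" example flattened onto one line; re-wrapped as an /etc/network/interfaces fragment it reads roughly like this (a sketch after the wiki example; the addresses, bond mode, and NIC names are illustrative and must be adapted to the actual hardware):

    ```
    auto lo
    iface lo inet loopback

    iface eno1 inet manual
    iface eno2 inet manual

    auto bond0
    iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-miimon 100
            bond-mode 802.3ad

    # VLAN 5 carried on the bond, bridged for the management IP
    iface bond0.5 inet manual

    auto vmbr0v5
    iface vmbr0v5 inet static
            address 10.10.10.2/24
            gateway 10.10.10.1
            bridge-ports bond0.5
            bridge-stp off
            bridge-fd 0

    auto vmbr0
    iface vmbr0 inet manual
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0
    ```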
  2.

    3node brand new ceph cluster vs 5node mixed ceph cluster

    Can you clarify a little more? I used this setup for many years under a heavy workload, I mean terabyte databases, web servers with 20k visits per week, and so on, without any particular feeling of slowness. I understand that with SSDs I could push 100 times faster, but this doesn't mean that I...
  3.

    3node brand new ceph cluster vs 5node mixed ceph cluster

    Ok, but let's say you have a 50-node cluster and you want to improve the performance a little: do I have to replace all 50 nodes? Or, on the contrary, let's say one server dies and I replace it with an older one, will the performance impact affect all the remaining 49?
  4.

    3node brand new ceph cluster vs 5node mixed ceph cluster

    I have an old 3-node Ceph cluster with HP Gen8 servers, 2x Xeon E5-2680 @ 2.70 GHz (turbo 3.50 GHz), 16 cores and 64GB DDR3 RAM per node. We bought some almost-new HP Gen10 servers with 2x Xeon Gold 6138 @ 2.0 GHz (turbo 3.70 GHz) and 128GB DDR4 RAM per node. So there is a huge jump in terms...
  5.

    Newbie question about ceph gui in proxmox

    Thank you so much for the clarification
  6.

    Newbie question about ceph gui in proxmox

    I think the misunderstanding simply comes from the fact that, in the Proxmox GUI, the usage graph refers to the raw space used, including the replicas, so 40% (2.61 TiB of 6.55 TiB) means that within my cluster I have 2.61 TiB / 3, around 870 GiB, of real data occupied. So I'm still a long way from...
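    The division quoted above generalizes: with a replicated pool, the GUI's raw usage divided by the pool's size gives the logical data actually stored. A minimal Python sketch of that arithmetic (the function name is ours for illustration, not a Ceph API):

    ```python
    def logical_from_raw(raw_used_tib: float, replica_size: int) -> float:
        """Raw Ceph usage counts every replica; divide by the pool's
        replica size to get the amount of real (logical) data stored."""
        return raw_used_tib / replica_size

    # The figures from the post: 2.61 TiB raw used, pool size 3
    logical = logical_from_raw(2.61, 3)
    print(f"{logical:.2f} TiB logical ≈ {logical * 1024:.0f} GiB")
    ```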
  7.

    Newbie question about ceph gui in proxmox

    Sorry again, you were clear, but there is a part that I'm missing, surely due to inexperience: you are talking about a single OSD, but the 40% value relates to the total amount of data in the entire cluster, so how is it possible to have 40% used, with size 3, without any warning?
  8.

    Newbie question about ceph gui in proxmox

    I have a 3-node Ceph cluster; each node has 4x 600GB OSDs and I have just one pool with size 3/2. I was thinking that over 33% of used storage (I mean just data, no replicas) I would have received some warning message, but the cluster seems healthy over 40% and everything is green. I'm attaching some...
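    The reason no warning appears at 40%: Ceph raises its nearfull/full health warnings from raw fill ratios on the OSDs (defaults 0.85 and 0.95), not from a 1/size share of logical data. A minimal sketch of that check (the ratios are Ceph's defaults; using the cluster-wide average is a simplification, since the real warnings fire per OSD):

    ```python
    NEARFULL_RATIO = 0.85  # Ceph default mon_osd_nearfull_ratio
    FULL_RATIO = 0.95      # Ceph default mon_osd_full_ratio

    def raw_fill(logical_tib: float, replica_size: int, raw_capacity_tib: float) -> float:
        """Fraction of raw capacity consumed once replicas are counted."""
        return logical_tib * replica_size / raw_capacity_tib

    # 3 nodes x 4 x 600 GB OSDs, ~6.55 TiB raw as the GUI reports it
    fill = raw_fill(0.87, 3, 6.55)
    print(f"raw fill {fill:.0%}: below nearfull ({NEARFULL_RATIO:.0%}), so no warning")
    ```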
  9.

    Is it safe to use same switch for CEPH and CLUSTER networks?

    Probably I'm missing something here.. just to clarify, nodes 1-3 are the only nodes that will have access to the Ceph storage. The Ceph cluster network and Ceph public network are on the same 10GbE connection, so that will be the speed between nodes 1-3. I will migrate VMs only between those 3...
  10.

    Is it safe to use same switch for CEPH and CLUSTER networks?

    They have local storage; they do not have access to the Ceph storage. They are nodes just for utilities, like a scanner server, NAS, workstation backups, and other non-critical services. I will live-migrate only between nodes 1-3. Nodes 4-7 will never migrate; they are all together in one cluster just for...
  11.

    Is it safe to use same switch for CEPH and CLUSTER networks?

    Just to understand: of course, but what is the bottleneck? I have 10K SAS mechanical drives in the Ceph cluster; is a 10GbE connection not enough? Of course I can upgrade the public LAN to SFP+, but for the amount of traffic in my company a 1GbE link per server is enough.. Sure, I can use MLAG...
  12.

    Is it safe to use same switch for CEPH and CLUSTER networks?

    Sorry, why do I have to use so many interfaces? Is my scheme below wrong? I'm not interested in failover on the public LAN, because I have many replacements for this switch, and this interface does not affect the health of the cluster; I can replace the public LAN switch without losing anything...
  13.

    Is it safe to use same switch for CEPH and CLUSTER networks?

    I'm planning a 7-node Proxmox cluster. Of those 7 nodes, 3 will have Ceph shared storage. Each node is equipped with 3x RJ45 and 2x SFP+ network interfaces. I know it is best to have separate networks for Ceph, the Proxmox cluster, and the LAN, but I was wondering if it is a good idea to use a setup with...
  14.

    [SOLVED] suggestions p420i raid controller with CEPH

    Sorry for the late answer; anyway, it is still working after 7 years. I never lost any data and it ran 24/7 in a production environment. Proxmox is always updated to the latest possible version, but my Ceph is now 16.2.11. I don't remember this kind of error, so probably it is something that happened in Ceph...
  15.

    Proxmox ceph pacific cluster becomes unstable after rebooting a node

    I noticed that in the package versions I have ifupdown: not correctly installed and ifupdown2: 3.1.0-1+pmx3. I upgraded to ifupdown2 before upgrading to PVE 7 to avoid those MAC address problems; is this "ifupdown: not correctly installed" correct? I'm thinking of some network issues during...
  16.

    Proxmox ceph pacific cluster becomes unstable after rebooting a node

    I tried to rebuild all OSDs and now I have the new partition scheme, with one single partition for each OSD. The above loop error is not present anymore, but the startup issue is still there. What I noticed is this message that, when a restarted node comes up again, is continuously repeated until...
  17.

    Osds down on reboot and Failed to update device symlinks: Too many levels of symbolic links after 6.4 to 7 update

    Hi @Mikepop, did recreating all OSDs fix the issue? Was the problem at reboot similar to this one: https://forum.proxmox.com/threads/proxmox-ceph-pacific-cluster-becomes-unstable-after-rebooting-a-node.96799/ ? Many thanks
  18.

    Proxmox ceph pacific cluster becomes unstable after rebooting a node

    Hmm.. searching the nodo2 syslog I found this symlink loop: Sep 28 23:38:02 nodo2 systemd-udevd[1854]: sdb2: Failed to update device symlinks: Too many levels of symbolic links Sep 28 23:38:02 nodo2 systemd-udevd[1852]: sde2: Failed to update device symlinks: Too many levels of symbolic links Sep...
  19.

    Proxmox ceph pacific cluster becomes unstable after rebooting a node

    Any help with this? Yesterday I upgraded to the latest kernel and the new Ceph version, with the same issue.. I'm attaching the syslogs of nodo1 and nodo2 from around the reboot of nodo2
