Search results

  1. Proxmox Cluster los node

     It's working now. Looks like the fix from @spirit was correct, but it was missing the SSH keys. After SSH'ing into another Proxmox server it started to work again. I still think it's weird what happened.
  2. Proxmox Cluster los node

     Hello, no, I did not remove the node from the cluster. I did what you suggested; the weird thing is that I can ping all the other nodes, but I got the following logging:
     Feb 16 08:59:50 prox-s05 systemd[1]: Stopped Corosync Cluster Engine.
     Feb 16 08:59:50 prox-s05 systemd[1]: Starting Corosync...
  3. Proxmox Cluster los node

     Hello, I had switch trouble this weekend; after swapping out the network switch everything was fine again. But currently one node is out of the cluster. I tried to add it back to the cluster, but it has running VMs, so that's not possible. Hope you guys can point me to the right solution. Cluster...
  4. Ceph high latency

     I added some more 4 TB drives so the load is more balanced. So for other readers: yes, it is possible to have a mix of SSDs, but don't make the 'mix' (in size) too big.
  5. Ceph high latency

     Hello, but in my case all the SSDs are the same type, PM863a. The only difference is that most of them are 1 TB and 5 of them are 4 TB. The 4 TB ones sometimes have high latency, probably because they hold more data. We fixed it manually by changing the weight so they hold less data, but what is the best step...
  6. Ceph high latency

     Could you explain the classes and what you recommend? And we don't have HDDs, only SSDs.
  7. Ceph high latency

     The high latency is on all the 4 TB disks. An SSD mix is possible with Ceph, but with a mix of 20x 1 TB and 4x 4 TB, when you use 17.54 TB of the 34.93, that may be too much IO for the 4 TB disks. Because when we lower the weight so there is less data on the 4 TB disks (so more reads and writes on the other 1 TB disks), the...
  8. Ceph high latency

     root@prox-s01:~# ceph df
     GLOBAL:
         SIZE        AVAIL       RAW USED    %RAW USED
         34.9TiB     17.4TiB     17.5TiB     50.21
     POOLS:
         NAME       ID    USED       %USED    MAX AVAIL    OBJECTS
         ceph01     3     6.01TiB    77.18    1.78TiB      1593307
  9. Ceph high latency

     root@prox-s01:~# ceph df
     GLOBAL:
         SIZE        AVAIL       RAW USED    %RAW USED
         34.9TiB     17.4TiB     17.5TiB     50.21
     POOLS:
         NAME       ID    USED       %USED    MAX AVAIL    OBJECTS
         ceph01     3     6.01TiB    77.18    1.78TiB      1593307
  10. Ceph high latency

     All the servers have 2x 10 Gb: 1x 10 Gb for Ceph, 1x 10 Gb for the Ceph monitor and public data (internet, management and internal).
  11. Ceph high latency

     Hello, we have a cluster of 6 servers. One server is being used as a backup server, so it has no VMs and is not a member of the Ceph cluster. Five servers have 2x E5-2620v4, 256 GB DDR, 4x 1 TB SSD and 1x 4 TB SSD. The latency of the 4 TB is very high, which will cause the cluster in total...
  12. DL380 with aoc-stgn-i2s 10gb

     Swapped the card for an HP Intel one and it's working.
  13. DL380 with aoc-stgn-i2s 10gb

     Hello, thank you for the reply. The NIC was working on the same HP server when Hyper-V was installed. With Proxmox the NIC is not working: it is installed but won't detect a link.
  14. DL380 with aoc-stgn-i2s 10gb

     Hello, we're migrating our Hyper-V cluster to Proxmox. Last week I moved a Supermicro with the stgn-i2s 10 Gb card to Proxmox, and it was working out of the box; on the Supermicro the 10 Gb card shows up as enp5s0f0 and enp5s0f1. Yesterday I installed Proxmox on a DL380 G9 with the same stgn-i2s 10 Gb card, but I did...
  15. New install force MBR not GPT

     Is there no other way to force an MBR partition table?
  16. New install force MBR not GPT

     We prefer MBR for the boot disk.
  17. New install force MBR not GPT

     Hello, I'm setting up a test environment because currently we only host Hyper-V based servers. The installation goes well, but after the installation I found out the servers are installed with a GPT partition table. I tried the installation with a 250 GB, 320 GB and 500 GB disk. All the times...
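The Ceph latency results above (5, 7 and 11) all hinge on one piece of arithmetic: CRUSH weights default to disk capacity, so a 4 TB SSD receives roughly four times the data (and thus the random I/O) of a 1 TB SSD of the same model, while offering the same per-disk IOPS. A minimal sketch of that distribution, using the disk counts and the ~17.5 TiB used figure from the `ceph df` output above (the helper name `data_per_osd` is ours, not a Ceph API):

```python
# Sketch: why same-speed SSDs of different sizes see unequal load in Ceph.
# CRUSH weight defaults to capacity, so data (and therefore I/O) is spread
# proportionally to disk size. Numbers match the cluster in the thread:
# 20x 1 TB and 5x 4 TB SSDs, ~17.5 TiB of data placed.

def data_per_osd(weights, total_used_tib):
    """Expected data on each OSD when placement follows CRUSH weight."""
    total_weight = sum(weights)
    return [total_used_tib * w / total_weight for w in weights]

weights = [1.0] * 20 + [4.0] * 5   # default weight = capacity in TB
per_osd = data_per_osd(weights, 17.5)

# Each 4 TB SSD holds 4x the data of a 1 TB SSD, so at equal per-disk
# IOPS it also absorbs ~4x the random I/O -> higher latency on the 4 TB.
print(per_osd[0])    # data on a 1 TB OSD -> 0.4375
print(per_osd[-1])   # data on a 4 TB OSD -> 1.75
```

Lowering the CRUSH weight of the 4 TB OSDs, as the poster did, shrinks their share of `weights` and shifts data (and I/O) back onto the 1 TB disks, trading usable capacity for evener latency.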