Search results

  1. jsterr

    [SOLVED] Ceph performance degradation

    Hello! Please post your pvereport if you can, or provide more details about the devices used. What is the output of: Also try checking the individual OSDs to see whether one of them performs significantly differently from the others: Source...
  2. jsterr

    Individual node in cluster doesn't see shared storage

    Please specify your questions and share some more information (logs, errors, screenshots). If nodes 0 and 2 see the share, what is different about node 1? Did you check the switch, cables, VLANs, etc.? What kind of share are we talking about? NFS?
  3. jsterr

    [TUTORIAL] P2100G bad network performance in (docker) container

    We had some issues with Docker on P2100G Broadcom NICs recently. Containers only had network throughput of 1.8 MB/s on a 100 Gbit/s NIC. We fixed it by turning off a specific offload parameter via ethtool --offload enp1s0f0np0 generic-receive-offload off Don't forget to put a pre-up in...
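    The pre-up mentioned above can be sketched as an /etc/network/interfaces stanza, so the offload setting survives reboots (the interface name enp1s0f0np0 is taken from the post; adapt it to your NIC):

    ```
    auto enp1s0f0np0
    iface enp1s0f0np0 inet manual
        # Disable generic receive offload before the link comes up
        pre-up /sbin/ethtool --offload enp1s0f0np0 generic-receive-offload off
    ```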
  4. jsterr

    [SOLVED] Automated Load Balancing Features or Recommended Implementations

    Hi, I read about: https://github.com/cvk98/Proxmox-load-balancer but it seems the developer no longer has time to maintain it. Currently there is only a rebalance on HA start -> cluster resource scheduling with the static load option.
  5. jsterr

    [SOLVED] 2 clusters after cluster "cleanup" work

    What is the status of corosync? I am currently looking for the guide on starting the Proxmox cluster filesystem in local mode: pmxcfs -l. There are recovery scenarios with which a split-brain can be repaired. * `pvecm status` * `pvecm members` * `journalctl -r` (general logs) *...
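    The commands above fit into a rough recovery sketch (run on the affected PVE node; this is environment-specific and the exact steps depend on the split-brain scenario, so treat it as an outline, not a recipe):

    ```
    # Check quorum and membership first
    pvecm status
    pvecm members
    journalctl -r          # general logs, newest first

    # On a node without quorum: start the Proxmox cluster
    # filesystem (pmxcfs) in local mode to regain write
    # access to /etc/pve
    systemctl stop pve-cluster
    pmxcfs -l
    ```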
  6. jsterr

    Kvm issue cpu Xeon Gold

    Hi Emanuele, can you post the VM config? This can be done via CLI with root@PMX8:~# qm config 300 . Thanks!
  7. jsterr

    Unable to find resource pool configuration

    Hi! Just in case you did not know, PVE 7 has been EOL since July 2024. I would recommend upgrading, or setting up PVE 8 from the beginning (as you are currently starting your Proxmox journey). Your questions: there is no resource allocation or assignment to users/groups etc. The "Resource Pool" option is...
  8. jsterr

    Cluster reboot after adding new node

    You are using bond0 for too many things. bond0, which is ens1f0 + ens1f1, is used for all of the following: #25G-BOND-FOR DATA NETWORK #VLAN BRIDGE FOR VMS DATA PLANE # PROXMOX - CONTROL NETWORK # PROXMOX INTERNET ACCESS - TEMPORARY # STORAGE INTERNAL # STORAGE EXTERNAL # PROXMOX - OOB MANAGEMENT...
  9. jsterr

    VM Affinity

    Hello, you should use HA groups for that and pin VMs to different hosts. For example: vm1 pinned to host1/host2 and vm2 pinned to host3, or some other arrangement, depending on your cluster node count.
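    A minimal sketch of that pinning with ha-manager (group names and VMIDs are hypothetical):

    ```
    # Restricted groups: resources may only run on the listed nodes
    ha-manager groupadd pin-host12 --nodes host1,host2 --restricted
    ha-manager groupadd pin-host3  --nodes host3 --restricted

    # Add the VMs as HA resources, pinned via their group
    ha-manager add vm:101 --group pin-host12
    ha-manager add vm:102 --group pin-host3
    ```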
  10. jsterr

    Proxmox 8.2.4 Ceph 18.2.2 Error 8 mgr modules have recently crashed

    Hello, pve-ceph is Proxmox's own Ceph deployment and is not compatible with the Ceph dashboard. I would recommend not using the dashboard and doing those tasks without it. Generating a custom CRUSH rule and multiple device classes can easily be done via CLI. What do you want to achieve? I...
  11. jsterr

    Very Poor Ceph Performance

    This is likely related to having only 1 Gbit/s on the Ceph public network and only 10 Gbit/s on the Ceph cluster network. These days it is no longer best practice to separate those services; I would recommend using the 10 Gbit bond for both Ceph public and cluster. Edit: on 10 Gbit, doing public and cluster...
  12. jsterr

    Weird double server entry and a entry of a deleted machine

    Please check: https://www.thomas-krenn.com/de/wiki/Hostname_%C3%A4ndern_in_einem_produktiven_Proxmox_Ceph_HCI_Cluster It is not that easy to change the hostname; you might need to auto-translate the page (as it is German). But the commands should help you, as there are many locations where the new names...
  13. jsterr

    Unable to Remove VM After Interrupted OVF Import on Proxmox

    If the VM is locked (which could be the case), I don't think there is a way to do this via the web UI.
  14. jsterr

    New 3 Node Cluster Ceph Mon issue

    It would be best to post the pvereport of all the nodes if you can. Make sure sensitive information is removed if you don't want to show the report to everyone. Usually, problems in Ceph have these causes: the network is not correctly configured (not working, or not the same MTU on...
  15. jsterr

    Unable to Remove VM After Interrupted OVF Import on Proxmox

    You might need to do it via CLI: qm unlock <VMID> qm destroy <VMID> And then it should work. Is there a lock symbol on the VM icon in the web UI?
  16. jsterr

    Cluster reboot after adding new node

    Hi @PaNggNaP, without any logs it is hard to help. Usually when nodes reboot it is because of problems with corosync, caused by high latency on the link -> which can lead to whole-cluster reboots. Please send: root@PMX8:~# journalctl -u corosync.service Because you are using PVE 6...
  17. jsterr

    Whats the best way to give vms access to ceph network?

    I want to mount CephFS inside VMs so they can use it as shared storage, so the VMs need network access to the Ceph network. That is the current network config. I don't want to put Ceph on a vmbr by default, but thought about creating some additional devices that can use bond1. Seems like...
  18. jsterr

    Random 6.8.4-2-pve kernel crashes

    Thanks! Have you noticed any downsides or side effects after enabling it?
  19. jsterr

    [SOLVED] Rename a Cluster (Not a Node)

    This still works in 2024. Is the cluster name used anywhere else where it should be replaced, and are there any side effects? After changing it in the corosync file I restarted corosync on all nodes -> and yes, config_version should always be incremented by 1.
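    The edit described above can be sketched as a corosync.conf excerpt (names and version numbers are hypothetical; on PVE the file lives at /etc/pve/corosync.conf):

    ```
    totem {
      # old cluster name replaced here
      cluster_name: new-cluster-name
      # previous value was 4 -> always bump by 1
      config_version: 5
      ...
    }
    ```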
  20. jsterr

    Veeam Support for Proxmox VE

    Today there was a demo of the upcoming Veeam support for Proxmox VE. Does anyone have new information on what can be done and whether there are any limitations? I saw a video on LinkedIn and did a quick review of what I could see from that clip: There are also some links there for more...