Search results

  1. S

    Proxmox/Ceph/Cache Tiering/Bug?

    And it is also something that is heading towards EOL within Ceph itself, so do be careful.
  2. S

    Web Interface

    If you're not using ZFS or anything like that then I'd say at least 1GB minimum. By the looks of it you're also heavily using your swap, so it looks like you are very heavily overcommitting on RAM.
  3. S

    Web Interface

    That looks to be your problem then, you're running very low on RAM with only 111MB free. You probably have processes being killed by the OOM killer to keep the whole server from crashing. You need to adjust the RAM you have allocated to the VMs to leave more available for the core system and host OS.
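    A quick way to confirm that theory is to check the kernel log for OOM-killer activity, e.g. on a standard Debian-based PVE host:

      journalctl -k -b | grep -i "out of memory"   # OOM events since the last boot
      dmesg -T | grep -i "killed process"          # which processes got killed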
  4. S

    Web Interface

    What do the OS logs show? Any resource constraints? What do top and free -m show? Are you using ZFS or anything, or just a single disk / mdadm RAID?
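    For reference, a minimal set of checks along those lines (assuming a standard Debian-based PVE host):

      free -m               # RAM and swap usage in MB
      top -o %MEM           # running processes sorted by memory use
      journalctl -p err -b  # errors logged since the last boot
      cat /proc/mdstat      # mdadm RAID state, if mdadm is in use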
  5. S

    Web Interface

    Have you tried just restarting the physical server?
  6. S

    Web Interface

    Given that you also said you're unable to connect to your Windows VM, have you checked network connectivity? Make sure you don't have some form of network issue, or an outgoing or incoming attack, etc.
  7. S

    Detect and throttle 100% CPU on VMs

    Proxmox supports sending data to InfluxDB @ https://pve.proxmox.com/wiki/External_Metric_Server which can then be viewed via Grafana; you can also set alerts and thresholds for any metric it receives. I doubt Proxmox would add this kind of functionality internally, as there are many 3rd party...
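    As a rough sketch, the metric server can also be defined in /etc/pve/status.cfg; the name, server address and port below are placeholders for your own InfluxDB instance:

      influxdb: monitoring
          server 192.168.1.50
          port 8089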
  8. S

    Web Interface

    Did you try the restart command? What is the exact error you're getting in the browser?
  9. S

    Web Interface

    Try "service pveproxy restart" If not check the output of "service pveproxy status"
  10. S

    [SOLVED] Recover vm template from ceph block storage

    I'm not sure what the exact syntax is for a template .conf file. But if you create a new template, and then log in to the node and view the text-based conf file, you can manually update the storage line to point to one of your image files existing in RBD. For example on a VM that uses Ceph I...
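    As an illustration only (the VMID, storage name and image name are placeholders), the relevant lines in /etc/pve/qemu-server/100.conf for a template backed by an existing RBD image might look like:

      scsi0: ceph-vm:base-100-disk-0,size=32G
      template: 1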
  11. S

    Max number of hosts recommended for Ceph and PVE hyperconverged

    It is not advised to use the inbuilt cache functions within Ceph; they are slowly being EOL'd and are yet to be replaced. You need to decide, based on your storage requirements, what you need. You can have multiple pools of storage, so you could have an NVMe pool which would use the NVMe drives and a SATA...
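    For example, pools can be steered onto different device classes with CRUSH rules; the rule and pool names below are just placeholders:

      ceph osd crush rule create-replicated replicated_nvme default host nvme
      ceph osd crush rule create-replicated replicated_hdd default host hdd
      ceph osd pool set fast-pool crush_rule replicated_nvme
      ceph osd pool set bulk-pool crush_rule replicated_hdd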
  12. S

    Max number of hosts recommended for Ceph and PVE hyperconverged

    On the network side people will always push towards 25Gbps+ now due to the lower latency; however, if you already have a big investment in 10Gbps then dual 10Gbps should do, depending on how dense you're looking to go on each node storage-wise. You say NVMe, so is your plan to be fully NVMe based or...
  13. S

    Max number of hosts recommended for Ceph and PVE hyperconverged

    Every piece of software is going to have bugs; a lot has been learnt and changed since 2018. They were hit by a few things in rapid succession and maybe had their RAM set lower than suggested on the OSD nodes, so they were hit by the OOM killer during recovery, which would have delayed things even further. Ceph...
  14. S

    Split temporarily cluster in 2 clusters due to network instability

    You can't join a node that has VMs on it to a cluster, so you'll have issues with the rejoining part. What issues are you having with the current cluster that mean only 14 nodes are connected at once? Errors on corosync etc.?
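    Typical places to look for that, assuming a standard PVE node:

      pvecm status               # cluster membership and quorum as Proxmox sees it
      corosync-cfgtool -s        # per-link status of the local corosync rings
      journalctl -u corosync -b  # corosync messages since the last boot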
  15. S

    Max number of hosts recommended for Ceph and PVE hyperconverged

    Ceph really has no limits, and becomes better performance- and reliability-wise as it grows. With Proxmox and the old Corosync, the number that used to float around was 16; however with 6.x and Corosync 3.x this limit seems to have been removed / increased a lot. Do you have an exact number of total...
  16. S

    [SOLVED] Logging outgoing IP traffic

    Then it's probably something you'd want to set up on your main gateway / firewall, and not within Proxmox itself.
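    If that gateway happens to be a Linux box, a sketch of the kind of rule meant here (the interface name and log prefix are placeholders) would be:

      iptables -A FORWARD -o eth0 -j LOG --log-prefix "OUTBOUND: " --log-level info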
  17. S

    Ceph storage and VM in the same proxmox hosts?

    Also you can't easily move an IP between Hetzner servers unless you pay for a failover IP, and even then each change has to be done via their API or control panel. So if you have an HA VM that moved from one server to another, it would have no network connectivity until you also completed the IP move...
  18. S

    Ceph storage and VM in the same proxmox hosts?

    Okay, well for example a 200GB file will take around 30 minutes to fully read from your Ceph cluster if the whole 1Gbps NIC is used for nothing else. If 50% of it is used for other network traffic, either WAN or LAN, it would take a minimum of 60 minutes to read. If you then go and add ZFS traffic for the...
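    Rough numbers behind that: 1Gbps is at best about 125MB/s of payload, so

      # 200GB ≈ 204800MB; at 125MB/s that is ~1638s, i.e. roughly 27-30 minutes with overhead
      echo $((200 * 1024 / 125))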
  19. S

    Ceph storage and VM in the same proxmox hosts?

    Then it really makes sense to host all the images on the Ceph storage layer; you'll just be limited by the 1Gbps network, but then it seems you don't have huge I/O requirements unless these big files are being changed constantly? Maybe a good idea to list what kind of I/O you expect? Like...
  20. S

    Ceph storage and VM in the same proxmox hosts?

    I thought you wanted to run separate VMs on each host, and not one VM that moves across the 3 hosts?