Search results

  1. Proxmox VE Ceph Benchmark 2023/12 - Fast SSDs and network speeds in a Proxmox VE Ceph Reef cluster

    Does a benchmark exist with more than 3 nodes, for example in the area of 10 nodes? Does the throughput scale accordingly for multi-client usage?
  2. [SOLVED] Ceph hang in Degraded data redundancy

    Update - a full shutdown of all Ceph nodes solved the issue (shutting them down one by one did not help)
  3. [SOLVED] Ceph hang in Degraded data redundancy

    There is another change I noticed today: a PG scrub issue. Till now the systems are running and responsive, but I don't think it is healthy
  4. [SOLVED] Ceph hang in Degraded data redundancy

    ceph osd pool ls detail pool 2 'ceph-lxc' replicated size 3 min_size 2 crush_rule 0 object_hash rjenkins pg_num 128 pgp_num 128 autoscale_mode on last_change 227165 lfor 0/136355/136651 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd removed_snaps_queue...
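
    In the pool line above, size 3 / min_size 2 means three replicas, with I/O continuing as long as at least two are present. A minimal sketch of querying those two values directly, using the pool name from the output above:

      ceph osd pool get ceph-lxc size       # replica count (3 above)
      ceph osd pool get ceph-lxc min_size   # replicas required before I/O pauses (2 above)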
  5. [SOLVED] Ceph hang in Degraded data redundancy

    ceph -s
      cluster:
        id:     8ebca482-f985-4e74-9ff8-35e03a1af15e
        health: HEALTH_WARN
                Degraded data redundancy: 1608/62722158 objects degraded (0.003%), 28 pgs degraded, 22 pgs undersized
      services:
        mon: 3 daemons, quorum pve-srv2,pve-srv3,pve-srv4 (age 2d)
        mgr...
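
    A minimal sketch of the read-only commands commonly used to drill into a HEALTH_WARN like the one above (safe to run on a degraded cluster):

      ceph health detail            # lists the degraded/undersized PGs behind the warning
      ceph pg dump_stuck unclean    # PGs stuck outside active+clean
      ceph osd tree                 # confirms which OSDs are up and in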
  6. [SOLVED] Ceph hang in Degraded data redundancy

    The flow: 1) servers had a reboot due to power maintenance; 2) after the reboot I noticed one server had a bad clock sync (fixing the issue and another reboot solved it); 3) after the time sync was fixed, the cluster started to load and rebalance; 4) it hung at an error state (data looks OK and everything is stable and...
  7. Shutdown of the Hyper-Converged Cluster (CEPH)

    We have a setup of around 30 servers, 4 of them with Ceph storage. Unfortunately we have many power outages in our building and the backup battery does not last for long periods, causing the entire cluster to crash (servers, switches, storage). Most of the time the entire cluster comes up when the...
  8. best approach to set mitigations=off cluster wide?

    It requires a reboot to load the new kernel configuration.
  9. Ansible lxc reboot

    I am trying to reboot an LXC host as part of an Ansible script; however, I cannot make it work using Ansible. The default reboot module did not work: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/reboot_module.html It printed "Socket exception: Connection reset by peer (104)" and...
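
    A common workaround when the reboot module trips over the dropped connection is a fire-and-forget reboot in async mode; a minimal ad-hoc sketch (the pve_lxc inventory group name is hypothetical):

      # Launch the reboot in the background (-B 60) without polling (-P 0),
      # so the connection reset does not fail the task:
      ansible pve_lxc -b -m ansible.builtin.shell -a "sleep 2 && reboot" -B 60 -P 0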
  10. PVE 8 Upgrade: Kernel 6.2.16-*-pve causing consistent instability not present on 5.15.*-pve

    Unfortunately I rolled back to kernel 5.15 on all hosts with VMs (after the rollback, no issues at all). We use the servers in production, so I cannot risk another downtime. The effect is only for VMs, not for LXC.
  11. Proxmox 8.0 / Kernel 6.2.x 100%CPU issue with Windows Server 2019 VMs

    Setting mitigations off did not solve the issue, it just reduced the occurrence due to a more efficient kernel. I have around 10 Ubuntu VMs where the error occurs repeatedly under load (while running all nodes at 70% CPU capacity); in less than an hour I had the error on at least one of the nodes...
  12. PVE 8 Upgrade: Kernel 6.2.16-*-pve causing consistent instability not present on 5.15.*-pve

    Setting mitigations off reduced the number of errors (still, the bigger the load, the more errors), but going back to kernel 5.15 removed the issue entirely
  13. how i can downgrade proxmox 8.0.4 kernel 6.2 to 5

    I want to try it on a new node (freshly installed, not upgraded from 7.4). The node doesn't have kernel 5.15.116-1-pve installed. Is this the flow: proxmox-boot-tool kernel add 5.15.116-1-pve; proxmox-boot-tool kernel pin 5.15.116-1-pve; proxmox-boot-tool refresh. After reboot, does the kernel...
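
    For reference, a minimal sketch of the usual pin flow on such a node, assuming the matching kernel package is still available from the PVE repositories (the package name mirrors the kernel version above):

      apt install pve-kernel-5.15.116-1-pve         # the kernel must be installed before it can be pinned
      proxmox-boot-tool kernel pin 5.15.116-1-pve
      proxmox-boot-tool refresh                     # sync the boot entries
      reboot
      uname -r                                      # should now report 5.15.116-1-pve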
  14. how i can downgrade proxmox 8.0.4 kernel 6.2 to 5

    I have stability issues on nodes with high CPU load and I would like to move back to the kernel that was on 7.4. What is the best approach? I am on PVE 8.0.4, kernel 6.2.16-14.
  15. PVE 8 Upgrade: Kernel 6.2.16-*-pve causing consistent instability not present on 5.15.*-pve

    Finally I have encountered a side effect: the Ceph mount inside the VM crashed (umount and mount -a fixes it 95% of the time)
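
    A minimal sketch of that remount workaround inside the guest (the mount point below is hypothetical):

      umount -f /mnt/cephfs || umount -l /mnt/cephfs   # force unmount, fall back to lazy unmount
      mount -a                                         # re-mount everything still listed in /etc/fstab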
  16. PVE 8 Upgrade: Kernel 6.2.16-*-pve causing consistent instability not present on 5.15.*-pve

    like this before. On Proxmox 7.4 I never had this issue; the VM was exactly the same
  17. PVE 8 Upgrade: Kernel 6.2.16-*-pve causing consistent instability not present on 5.15.*-pve

    Sure, here: both host and VM have mitigations=off in GRUB (the error was more frequent before setting this configuration). My VM hosts a high CPU load when the error occurs. Proxmox host version: proxmox-ve: 8.0.2 (running kernel: 6.2.16-14-pve) pve-manager: 8.0.4 (running version...
  18. best approach to set mitigations=off cluster wide?

    Should I set mitigations=off in the line GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off" on both Proxmox and the VM? Is that enough?
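
    Item 17 above confirms setting the flag on both host and guest; a minimal sketch of applying it on one machine, assuming a default GRUB setup:

      # In /etc/default/grub, set:
      #   GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off"
      update-grub                        # regenerate the GRUB config (or: proxmox-boot-tool refresh on systemd-boot/ZFS installs)
      reboot                             # the flag only takes effect after a reboot
      grep mitigations /proc/cmdline     # verify the running kernel picked it up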
  19. PVE 8 Upgrade: Kernel 6.2.16-*-pve causing consistent instability not present on 5.15.*-pve

    I was investigating another issue and left an open SSH connection to one of the VMs; I got the same error: Message from syslogd@kube-node-11 at Sep 26 15:13:24 ... kernel:[426489.912429] watchdog: BUG: soft lockup - CPU#31 stuck for 22s! [Engine_Simulato:1069183] Message from...
  20. Proxmox in production

    Hey, so we've been using Proxmox in our small business since it was at version 3, and now we're on version 8. I don't know a ton about ESXi, but I've seen folks from other places slowly moving away from it and some hopping onto Proxmox. To be real, Proxmox isn't exactly plug-and-play, and...
