Search results

  1. Bad performance on VM disk with Ceph

    We made some tests; performance is much better with KRBD, but the question is: is it safe to enable in production (how is the cache managed in case of a power outage)? Many thanks
  2. Bad performance on VM disk with Ceph

    I think the default thread count is 16. I ran the test with a single thread, and performance is comparable with the VM: Is it safe to enable writeback mode on the VM cache? What kind of side effects could we have in case of power loss or problems with the node running the VM? Also, is KRBD safe to be...
  3. Bad performance on VM disk with Ceph

    I already tried KRBD, but with no gain in read performance. The VM disk is in Default (no cache) mode.
  4. Bad performance on VM disk with Ceph

    Hi, I ran some benchmarks on our cluster; for some reason we have very slow disk performance inside the VM compared with a rados bench test performed on the same pool (both commands are sketched after these results). This is the result of rados bench 30 write -b 4M -t 16 -p test_pool, and this is a dd write performed inside a VM running on...
  5. Jumbo frames (MTU 9000), bond, vlans and bridges

    I have another question. Should "auto" be set on all interfaces, or only on a particular one? Could this be the cause of the MTU not being applied? Also: should the IP address be set, as done, on the bridge (vmbr), or on the bond (tagged with the VLAN)? I tried the configuration with the two fixes above...
  6. Jumbo frames (MTU 9000), bond, vlans and bridges

    I will try not to set it on the underlying interfaces. I'm checking the current MTU size with the "ip link show" command. The switches are already set to the maximum MTU size.
  7. Ceph manager (active) leaking resources

    Hi, since the last update to Ceph 15.2.14, the Ceph Manager (the node with the active instance) has been leaking resources: memory consumption of that process constantly increases day by day. Here is the graph of available memory on the active node (yesterday I restarted the Ceph manager on that node, and...
  8. Jumbo frames (MTU 9000), bond, vlans and bridges

    Hi, I'm trying to enable jumbo frames on our infrastructure to improve Ceph performance, but unfortunately I cannot get jumbo frames to work in our situation (an example interfaces layout is sketched after these results). When I set mtu 9000 in the network configuration, the interfaces do not come up with that MTU but with the default one...
  9. High I/O wait time on VMs after Proxmox 6 / Ceph Octopus update

    I think not; cluster health is OK and no activity is running in the background. It must be something else.
  10. High I/O wait time on VMs after Proxmox 6 / Ceph Octopus update

    Hi, a few days ago we upgraded our 5-node cluster to Proxmox 6 (from 5.4) and Ceph to Octopus (Luminous to Nautilus and then Nautilus to Octopus). After the upgrade we noticed that all VMs started to raise alerts on our Zabbix monitoring system with the reason "Disk I/O is overloaded". This...
  11. Dedicated ceph servers/cluster i/o delay

    Hi, I'm experiencing the same problem... I can see I/O wait spikes on Zabbix for various VMs on the cluster after the Proxmox 6 and Ceph upgrade. Here is a sample graph; the spikes become higher at the point where we performed the update: Any idea why? Is there a solution? Thanks
  12. Disable cloud-init auto updates inside VM

    Thanks for the suggestions, I will try to study the problem in depth. The strange thing is that when I installed Ubuntu 18.04, during the install process, I disabled automatic upgrades.
  13. Disable cloud-init auto updates inside VM

    Hi, does anyone know if it is possible to disable cloud-init apt auto upgrades inside VMs (a configuration sketch follows these results)? I have many Ubuntu 18.04 VMs configured with cloud-init in Proxmox, and kernel updates are started automatically, so after a reboot I find a new kernel version. I don't want this because every time I have to...
  14. [BUG] Backup NFS umount not working

    The lazy unmount failed; the command was totally unresponsive. In the end, rebooting the nodes resolved the situation. I think this issue must be addressed by the Proxmox team; in a similar situation, the consequences in terms of load and open files could be serious if it goes unnoticed.
  15. [BUG] Backup NFS umount not working

    The storage was originally added through the Proxmox GUI; I don't know what parameters Proxmox uses behind the GUI.
  16. [BUG] Backup NFS umount not working

    Hi, we noticed this strange issue happening on our Proxmox nodes. On 30/08 we added a new node to our Proxmox cluster (3 nodes before, 4 nodes after); this is the load average graph of the cluster since that date. As you can see, after that date, the load has started to increase constantly, day by...
  17. [SOLVED] VM multicast VRRP packets drop

    Problem solved! I disabled STP on the switch side, only on the Proxmox LAN ports. The drop count is now 0. The strange thing is that I have 2 physical machines on the same VLAN, with the same OS (Ubuntu 18.04) and all updates installed, and their drop count was 0 even before I disabled STP. No idea why.
  18. [SOLVED] VM multicast VRRP packets drop

    Update 2: I tried another packet capture inside the VM. I reduced the traffic to a minimum with firewall rules, and the only thing I can't block is Spanning Tree BPDU packets coming in on the network interface. Could this be the cause of the packet drops? I attach a screenshot of the Wireshark view of one of these...
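For reference, here is a minimal sketch of the comparison described in result 4. The rados bench invocation is the one quoted in the post; the dd parameters (file path, count, oflag=direct) are assumptions added here to make a single-stream write test reproducible, not values taken from the thread.

    # Cluster-level baseline as quoted in the post: 30 s of 4 MiB writes with 16 concurrent ops.
    rados bench 30 write -b 4M -t 16 -p test_pool

    # One possible in-VM counterpart: a single-stream direct write that bypasses the guest page cache.
    # Path, count and block size are illustrative only.
    dd if=/dev/zero of=/root/ddtest.bin bs=4M count=1000 oflag=direct

As result 2 notes, the thread count matters: a single-stream dd is not directly comparable with a 16-thread rados bench.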
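For the jumbo-frame results (5, 6 and 8), this is a sketch of one /etc/network/interfaces layout that sets MTU 9000 end to end, with "auto" on every stanza and the IP address on the bridge. The interface names, bond mode, VLAN tag and address are assumptions for illustration; this is not the configuration from the thread.

    # /etc/network/interfaces (sketch; eno1/eno2, 802.3ad and VLAN 100 are assumptions)
    auto eno1
    iface eno1 inet manual
        mtu 9000

    auto eno2
    iface eno2 inet manual
        mtu 9000

    auto bond0
    iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-mode 802.3ad
        mtu 9000

    auto vmbr0
    iface vmbr0 inet static
        address 192.0.2.10/24
        bridge-ports bond0.100
        bridge-stp off
        bridge-fd 0
        mtu 9000

A bond or bridge generally cannot come up with an MTU higher than its member interfaces, which is why the value appears on every stanza here; the effective MTU can be checked with the "ip link show" command mentioned in result 6.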
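For result 13, this is a sketch of one way to turn off automatic apt upgrades from cloud-init user data. The keys are standard cloud-init directives, but how (and whether) custom user data reaches the guest depends on how the cloud-init drive is set up in Proxmox, so treat this as an assumption-laden example rather than the thread's solution.

    #cloud-config
    # Skip the package refresh/upgrade that cloud-init can perform on first boot.
    package_update: false
    package_upgrade: false
    # Turn off the periodic unattended-upgrades run that pulls in new kernels on Ubuntu 18.04.
    write_files:
      - path: /etc/apt/apt.conf.d/20auto-upgrades
        content: |
          APT::Periodic::Update-Package-Lists "0";
          APT::Periodic::Unattended-Upgrade "0";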
