Search results

  1. Using SPDK for NVMe OSDs with Ceph Bluestore

    Yes, but the slides are 7 months old, so I thought "maybe they've worked on it since". From your answer it seems it's still untested by the Proxmox team, so I'll avoid it in production for the moment. Thank you very much for your quick answer!
  2. Using SPDK for NVMe OSDs with Ceph Bluestore

    Hello! I'm setting up an NVMe-only Proxmox/Ceph infrastructure. I've just heard about Intel SPDK, which seems to speed up Ceph transactions considerably on fast NVMe drives compared to the default Linux kernel implementation: https://www.slideshare.net/mobile/DanielleWomboldt/ceph-day-beijing-spdk-for-ceph On...
  3. pveproxy become blocked state and cannot be killed

    Maybe it's not related to your problem, but I want to share my experience so you can double-check your storage health: on my 2-node cluster running Proxmox VE 3.4 with ZFS and DRBD, I just had some I/O trouble that left pveproxy stuck at 100%. After one hour looking for any possible problem, I...
  4. Allflash Ceph

    That's very good advice when planning an SSD-backed Ceph infrastructure. But how do you estimate the Ceph overhead? Let's say my VMs write 100 GB of effective data per day and my pool has size=2 and uses Ceph 12 with Bluestore, with data, WAL and DB on the same device... how many GB will be...
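
    A rough lower bound follows from the replication factor alone: with size=2 every byte of client data is stored twice, so 100 GB/day of effective writes means at least 200 GB/day of raw writes, before Bluestore's WAL/DB metadata traffic is counted. A back-of-the-envelope shell sketch (the 100 GB/day figure comes from the post; WAL/DB overhead is deliberately left out because it depends on the workload):

        # Lower bound on raw daily writes, ignoring WAL/DB overhead
        effective_gb_per_day=100
        pool_size=2
        echo $((effective_gb_per_day * pool_size))   # => 200 GB/day hitting the OSDs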
  5. [SOLVED] KSM in mixed OpenVZ/KVM environment on Proxmox VE 3.4

    For anyone interested, I changed the KSM_THRES_COEF value to 50 in /etc/ksmtuned.conf and then issued a service ksmtuned restart, without having to reboot the node. Within a couple of minutes KSM began to consolidate my RAM in my OpenVZ/KVM hybrid environment based on Proxmox VE 3.4 without any problem...
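
    A minimal sketch of the change described above (the sed assumes an uncommented KSM_THRES_COEF line already exists in the file; adjust it if yours is commented out):

        # Raise the threshold so KSM kicks in when free RAM drops below 50%
        sed -i 's/^KSM_THRES_COEF=.*/KSM_THRES_COEF=50/' /etc/ksmtuned.conf
        service ksmtuned restart   # picks up the new value without a reboot
        # pages_sharing should start climbing within a couple of minutes:
        cat /sys/kernel/mm/ksm/pages_sharing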
  6. cpu steal

    In my experience, with any hypervisor you should never assign all physical cores to a VM. If you have 24 physical cores, you'll get much better performance by assigning 20 cores to the VM than 24 (otherwise I/O stalls, for example, will occur and slow down the whole node). You can...
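
    For a KVM guest on Proxmox this is a one-liner with the standard qm CLI (VMID 100 is a placeholder):

        # Give the VM 20 of the 24 physical cores, leaving headroom for the host
        qm set 100 --cores 20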
  7. New Proxmox VE 5.0 with Ceph production cluster and upgrade from 3 to 5 nodes

    Thank you! I've read a lot about Ceph Luminous, and now I'm asking myself whether the principles driving my PVE+Ceph cluster design are still valid with the new storage backend. I've just opened a dedicated topic about this matter: Is a PCI-E SSD-based journal still worth it with Ceph...
  8. Is a PCI-E SSD-based journal still worth it with Ceph Luminous?

    Hi! Thanks to a discussion on another thread today, I dug into the Ceph Luminous changelogs. If I understood correctly (please correct me if I'm wrong or if this doesn't apply to pveceph), Ceph 12 will have the new storage engine called Bluestore enabled by default instead of XFS, and writes...
  9. New Proxmox VE 5.0 with Ceph production cluster and upgrade from 3 to 5 nodes

    Thank you! I need the cluster to be up and running by the end of July. I saw that the Ceph 12.1.0 Luminous release candidate has just been released, so if I'm lucky the stable version could be out by that date, and I hope Proxmox VE 5.0 too. :)
  10. New Proxmox VE 5.0 with Ceph production cluster and upgrade from 3 to 5 nodes

    Wow, this scares me a bit. Given that I'd like to stay as "Proxmox-standard" as I can, would you recommend going with Proxmox 4.4 and its stable Ceph and upgrading to Proxmox 5 in the future (though upgrading Proxmox + Ceph on a production cluster also scares me a bit), or what else? Thank you!
  11. New Proxmox VE 5.0 with Ceph production cluster and upgrade from 3 to 5 nodes

    Hello! I'm planning to build a new 3-node production cluster based on Proxmox VE 5.0 (as soon as the stable version is released) with Ceph storage running on the same nodes, as described in the tutorial. The 3 nodes will be identical and will have a 10 Gbps internal network (for Ceph and corosync) in...
  12. [SOLVED] KSM in mixed OpenVZ/KVM environment on Proxmox VE 3.4

    Yes, you're absolutely right, an upgrade is scheduled before the end of this year. But unfortunately I need to move some VMs to that host first. So the question is: if this host runs both VMs and OpenVZ containers, will KSM work at least for the processes running in the VMs, or will it not work at...
  13. [SOLVED] KSM in mixed OpenVZ/KVM environment on Proxmox VE 3.4

    Hi! I have Proxmox VE 3.4-3 with kernel 2.6.32-37-pve. My server runs several OpenVZ containers together with KVM virtual machines. At the moment cat /sys/kernel/mm/ksm/pages_sharing returns 0, and that's fine because my RAM usage is under 80% and KSM_THRES_COEF=20 in /etc/ksmtuned.conf. My...
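
    The checks mentioned in the post, for reference (paths are standard on a 2.6.32-pve kernel):

        cat /sys/kernel/mm/ksm/pages_sharing     # 0 here: KSM has nothing merged yet
        grep KSM_THRES_COEF /etc/ksmtuned.conf   # 20 = activate KSM when free RAM falls below 20%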
  14. [SOLVED] LXC Backup randomly hangs at suspend

    Hi! I have Proxmox 4.3-3 with lxcfs version 2.0.4-pve1. I have a server with a single LXC container running as a Samba server on local ZFS storage. It does not mount any external filesystem, and it is set to back up to a local folder storage. If I set the scheduled backup to use suspend mode...
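
    The backup job described above boils down to a vzdump call like the following (CTID 100 and the storage name are placeholders):

        # Suspend-mode backup of one container to a local folder storage
        vzdump 100 --mode suspend --storage local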
  15. Upgrade DRBD to version 8.4.3 or superior on Proxmox VE 3.4 (DRBD 8.3.13)

    I do use OpenVZ containers. As recommended in the PVE wiki, I followed these steps after installation...
  16. Upgrade DRBD to version 8.4.3 or superior on Proxmox VE 3.4 (DRBD 8.3.13)

    Hi! I have 2 Proxmox VE 3.4 nodes with local ZFS storage mirrored through DRBD (Primary/Primary with 2 volumes) to guarantee complete redundancy. The nodes are linked with a dedicated 10 Gbps SFP+ connection, and the DRBD version is 8.3.13 (the one shipped with Proxmox VE 3.4). Since I'm experiencing...
  17. OpenVZ IO limits support

    Hello, I have one of my OpenVZ guests eating my I/O and slowing down all the other VPSes when a particular process runs. I just saw that OpenVZ recently added I/O limits: https://openvz.org/I/O_limits Proxmox VE 3.4 seems to satisfy the kernel requirement (greater than 2.6.32-042), while vzctl...
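
    Assuming a new enough vzctl, the limits from the linked wiki page are set per container like this (CTID 101 and the values are placeholders):

        vzctl set 101 --iolimit 10M --save     # cap I/O bandwidth at 10 MB/s
        vzctl set 101 --iopslimit 300 --save   # cap I/O operations per second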
  18. Memory usage indication for OpenVZ containers

    I understand your point. I'm not saying that the total memory should be removed, only that an indication of the active memory should be added, so every sysadmin has all the information needed to understand the usage of the nodes. So instead of just displaying "Memory: 98%", a useful indication...
  19. Memory usage indication for OpenVZ containers

    Really, no users are interested in this topic? I now have Proxmox VE 3.4 and it's the same as 2 years ago when I opened the topic: I have some containers showing 95% memory usage that in reality use less than 50%, but I need to SSH into them to figure it out... also the graphs in the "Summary" GUI are...
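
    The manual check described in the post can also be run from the node without SSH (CTID 101 is a placeholder):

        # Real memory activity inside the container, straight from the host
        vzctl exec 101 grep -E 'MemTotal|^Active:' /proc/meminfo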
