Yes, but the slides are 7 months old, so I thought "maybe they've worked on it since". But from your answer it seems it's still untested by the Proxmox team, so I'll avoid it in production for the moment. Thank you very much for your quick answer!
Hello!
I'm setting up an NVMe-only Proxmox/Ceph infrastructure. I've just heard about Intel SPDK, which seems to speed up Ceph transactions a lot on fast NVMe drives compared to the default Linux kernel implementation: https://www.slideshare.net/mobile/DanielleWomboldt/ceph-day-beijing-spdk-for-ceph
On...
Maybe it's not related to your problems, but I want to share my experience so you can double-check your storage health: on my 2-node cluster running Proxmox VE 3.4 with ZFS and DRBD I just had some I/O troubles that left pveproxy stuck at 100%. After one hour looking for any possible problem, I...
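For anyone else hitting similar symptoms, these are the quick health checks I would run first (just a sketch; the exact pool and resource names depend on your setup):

zpool status -v     # look for DEGRADED/FAULTED devices or checksum errors
cat /proc/drbd      # should show cs:Connected and ds:UpToDate/UpToDate
iostat -x 2         # spot disks with abnormal latency/utilization (needs the sysstat package)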
That's very good advice when planning an SSD-backed Ceph infrastructure. But how do you estimate the Ceph overhead? Let's say that my VMs write 100 GB of effective data per day and my pool has size=2 and uses Ceph 12 with Bluestore, with data, WAL and DB on the same device ...how many GBs will be...
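Just to make my question concrete, this is my rough back-of-the-envelope calculation, where X is exactly the factor I don't know how to estimate:

100 GB/day    effective client writes
x 2           pool size=2, so every write lands on two OSDs
= 200 GB/day  raw data written to the OSDs
x (1 + X)     extra WAL/DB/metadata writes on the same device (the unknown Bluestore overhead)
= 200 * (1 + X) GB/day actually written to the SSDs across the cluster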
For anyone interested, I changed the KSM_THRES_COEF value to 50 in /etc/ksmtuned.conf and then issued a service ksmtuned restart without having to reboot the node. In a couple of minutes KSM began to consolidate my RAM in my OpenVZ/KVM hybrid environment based on Proxmox VE 3.4 without any problem...
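For reference, this is all I changed (50 is just the value that worked for me, not a general recommendation):

# in /etc/ksmtuned.conf:
KSM_THRES_COEF=50

# then, without rebooting:
service ksmtuned restart
# and after a few minutes verify that KSM has kicked in:
cat /sys/kernel/mm/ksm/pages_sharing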
In my experience, with any hypervisor you should never assign all physical cores to a VM. If you have 24 physical cores you'll get much better performance by assigning 20 cores to the VM than 24, because otherwise I/O stalls (for example) will slow down the whole node. You can...
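On Proxmox that's just a matter of how many cores you give the guest, e.g. for a hypothetical VM with ID 100 on a 24-core host:

# leave 4 physical cores free for the host (I/O threads, storage, etc.)
qm set 100 -sockets 1 -cores 20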
Thank you! I've read a lot about Ceph Luminous and now I'm asking myself whether the principles driving my PVE+Ceph cluster design are still valid with the new storage backend. I've just opened a dedicated topic about this matter: Is a PCI-E SSD-based journal still worth it with Ceph...
Hi! Thanks to a discussion on another thread today I dug into the Ceph Luminous changelogs. If I understood correctly (please correct me if I'm wrong or if this is not applicable to pveceph), Ceph 12 will have the new storage engine called Bluestore enabled by default instead of XFS, and writes...
Thank you! I need the cluster to be up and running by the end of July. I saw that the Ceph 12.1.0 Luminous release candidate has just been released, so if I'm lucky the stable version could be out by that date, and hopefully Proxmox VE 5.0 too. :)
Wow, this scares me a bit. Given that I'd like to remain as "Proxmox-standard" as I can, would you recommend going for Proxmox 4.4 with its stable Ceph and upgrading to Proxmox 5 in the future (although upgrading Proxmox + Ceph on a production cluster also scares me a bit), or what else?
Thank you!
Hello!
I'm planning to build a new 3-node production cluster based on Proxmox VE 5.0 (as soon as the stable version is released) with Ceph storage running on the same nodes, as described in the tutorial. The 3 nodes will be identical and will have a 10 Gb internal network (for Ceph and corosync) in...
Yes, you're absolutely right, an upgrade is scheduled before the end of this year. But unfortunately I need to move some VMs to that host. So the question is: if this host runs both VMs and OpenVZ containers, will KSM work at least for the processes running in the VMs, or will it not work at...
Hi! I have Proxmox VE 3.4-3 with kernel 2.6.32-37-pve. My server runs several OpenVZ containers together with KVM virtual machines.
At the moment cat /sys/kernel/mm/ksm/pages_sharing returns 0 and that's fine because my RAM usage is under 80% and KSM_THRES_COEF=20 in /etc/ksmtuned.conf.
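To be explicit, this is what I currently see on the node:

cat /sys/kernel/mm/ksm/pages_sharing        # returns 0
grep KSM_THRES_COEF /etc/ksmtuned.conf      # KSM_THRES_COEF=20
free -m                                     # RAM usage is under 80%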
My...
Hi! I have Proxmox 4.3-3. lxcfs is version 2.0.4-pve1. I have a server with a single LXC container running as a Samba server on ZFS storage on localhost. It does not mount any external filesystem. It is set to back up to local folder storage. If I set the scheduled backup to use suspend mode...
Hi!
I have 2 Proxmox VE 3.4 nodes with local ZFS storage mirrored through DRBD (Primary/Primary with 2 volumes) to guarantee complete redundancy. The nodes are linked with a dedicated 10 Gbps SFP+ connection and the DRBD version is 8.3.13 (the version shipped with Proxmox VE 3.4). Since I'm experiencing...
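For context, the relevant part of my resource definition looks more or less like this (node names, addresses and zvol paths changed, just a sketch of the dual-primary setup):

resource r0 {
    protocol C;
    startup { become-primary-on both; }
    net     { allow-two-primaries; }
    on nodeA {
        device    /dev/drbd0;
        disk      /dev/zvol/rpool/drbd0;   # ZFS zvol used as backing device
        address   10.10.10.1:7788;         # dedicated 10 Gbps SFP+ link
        meta-disk internal;
    }
    on nodeB {
        device    /dev/drbd0;
        disk      /dev/zvol/rpool/drbd0;
        address   10.10.10.2:7788;
        meta-disk internal;
    }
}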
Hello, I have one of my OpenVZ guests eating my I/O and slowing down all the other VPSes when a particular process runs.
I just saw that OpenVZ recently added I/O limits: https://openvz.org/I/O_limits
Proxmox VE 3.4 seems to satisfy the kernel requirement (greater than 2.6.32-042), while vzctl...
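If I read the wiki page correctly, the limits would be set per container with vzctl, something like this (CTID 101 and the values are just examples, and I'm assuming the vzctl shipped with Proxmox VE 3.4 accepts these options):

# cap the container at ~10 MB/s of disk bandwidth and 500 IOPS
vzctl set 101 --iolimit 10M --iopslimit 500 --save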
I understand your point. I'm not saying that the total memory should be removed, but only that an indication of the active memory should be added, so every sysadmin has all the information needed to understand the usage of the nodes. So instead of just displaying "Memory: 98%", a useful indication...
Really, is nobody interested in this topic? I now have Proxmox VE 3.4 and it's the same as 2 years ago when I wrote the topic: I have some containers showing 95% memory usage which in reality use less than 50%, but I need to SSH into them to figure it out... also the graphs in the "Summary" GUI are...
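To give a concrete example of what I mean, this is what I have to do today to see the real usage (101 is a hypothetical CTID):

vzctl exec 101 free -m                        # actual used vs cached memory inside the container
vzctl exec 101 cat /proc/user_beancounters    # the real limits/usage picture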