Nevermind, I used the `pveceph` tool to upgrade.
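For later readers: a rough sketch of the pveceph-based route, pieced together from the docs rather than from this thread. The version argument and the restart order are my assumptions, so double-check against the upgrade wiki before running anything:

```
# On each Proxmox node: switch to the Luminous repo and upgrade the
# Ceph packages via the pveceph tool.
pveceph install --version luminous

# Restart the daemons one node at a time, monitors before OSDs.
systemctl restart ceph-mon.target
systemctl restart ceph-osd.target

# Wait for HEALTH_OK before moving on to the next node.
ceph -s
```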
You mean update the Ceph packages on the Proxmox nodes? Is there a howto for that? Not that I'm unfamiliar with apt-get or anything, but I'd rather replay somebody else's tested commands to update Ceph in this case.
@fabian That's quite a relief. My reasoning was that even when just using plain librbd/krbd, it could still be the case that PVE 5 expects to be able to enable certain RBD functionality which is not in Jewel.
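As an aside, a quick way to check which features an image actually has enabled, if you want to verify Jewel compatibility yourself (the pool and image names below are placeholders, not from this thread):

```
# Show the image's metadata; the "features" line lists what is enabled
# (e.g. layering, exclusive-lock, object-map, fast-diff, deep-flatten).
rbd info rbd/vm-100-disk-1

# A feature an older client cannot handle can be switched off per image
# (fast-diff shown as an arbitrary example).
rbd feature disable rbd/vm-100-disk-1 fast-diff
```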
So, to get a better picture of the referenced requirement in my opening post, why *is*...
We are planning an upgrade of PVE 4.x to 5.x. Does the requirement[1] on Ceph Luminous also apply when using an external Ceph cluster?
[1] https://pve.proxmox.com/wiki/Upgrade_from_4.x_to_5.0#Upgrade_the_basic_system_to_Debian_Stretch_and_PVE_5.0
The disks I tested with are on the default cache setting, which is "No cache" according to the web GUI.
The client is Windows 2012 R2 with the latest stable virtio drivers, 0.1.126.
I've been running some benchmarks on a Windows VM whose virtio disks are on Ceph. All is well. At one point I enabled the "krbd" option because I wanted to use the rbd pool for containers as well. Out of curiosity I reran the benchmarks on Windows, and to my surprise the write speeds are simply out of...
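For context, the "krbd" switch is set per storage definition; a hypothetical `/etc/pve/storage.cfg` entry could look like the sketch below (the storage ID, pool name, and monitor addresses are placeholders, not from this thread):

```
rbd: ceph-vm
        monhost 10.0.0.1 10.0.0.2 10.0.0.3
        pool rbd
        content images,rootdir
        username admin
        krbd 1
```

With `krbd 1`, guest disks are mapped through the kernel RBD client instead of librbd, which is presumably why the benchmark numbers changed.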
I am using an OVS bridge, which works fine at boot. However, `service networking restart` fails to properly set up the OVS bridge: after the restart, the running VMs' and containers' respective interfaces are not re-added to the bridge.
In order to regain network connectivity to...
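Until the restart behaviour is fixed, a manual workaround sketch (the bridge and tap names below are examples; PVE names a guest's NICs tap<vmid>i<index>):

```
# See which ports survived the networking restart:
ovs-vsctl list-ports vmbr0

# Re-attach a running guest's interface that was dropped from the bridge:
ovs-vsctl add-port vmbr0 tap100i0
```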