After quite a bit of testing, setting
ms_crc_header = true
but leaving
auth_client_required = none
auth_cluster_required = none
auth_service_required = none
cephx_sign_messages = false
cephx_require_signatures = false
ms_crc_data = false
makes my problem go away.
I guess this is libceph in the kernel; I can reproduce it with:
mount.ceph ceph1,ceph2,ceph3:/ /mnt/isos
Looking at the kernel source code, I *should* be able to turn it off (and it should do so by default, since it reads the config files)...
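If I'm reading the mount.ceph man page right, the data CRC at least can also be turned off per mount with the nocrc option; a sketch of what I mean (the name/secretfile values are just placeholders for whatever auth you actually use):
mount -t ceph ceph1,ceph2,ceph3:/ /mnt/isos -o name=admin,secretfile=/etc/ceph/admin.secret,nocrc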
I'm trying to use CephFS for ISOs. Prior to Proxmox v6, we just created a mount point with /etc/fstab:
none /mnt/isos fuse.ceph _netdev,defaults 0 0
That worked (and still does).
Now I noticed PVE 6 has a CephFS storage option, but when trying to use that I receive a timeout when trying to add the...
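For reference, my understanding is that adding it through the GUI just ends up as a cephfs entry in /etc/pve/storage.cfg, roughly like this (the storage ID, monitor names and content type here are placeholders, not what my setup actually uses):
cephfs: isos
    path /mnt/pve/isos
    content iso
    monhost ceph1 ceph2 ceph3
    username admin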
Looks like the corosync crash should be fixed by this PR: https://github.com/kronosnet/kronosnet/pull/257
It is now part of pvetest ... anyone experiencing the corosync crash, please try it! Direct link here if you're not on pvetest...
@ahovda ah interesting, we had major issues with OpenVSwitch and finally switched away from it once PVE started supporting VLAN-aware native Linux bridges (and of course only after we figured out how to use them properly; we had some issues breaking out VLAN interfaces for the host, see the sketch below).
Anyhow, once we...
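In case it helps, a minimal /etc/network/interfaces sketch of the VLAN-aware setup we ended up with (the NIC name, VLAN ID and addresses are made up for illustration):
auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

# the host's own VLAN interface is just a tagged sub-interface on the bridge
auto vmbr0.100
iface vmbr0.100 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1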
@NoahD that is a 7001 series processor; @andy77 is asking about the 7002 series ...
I too am curious; we just had a new HW order approved with 7402P processors, and we're still on 5.4 since we're tracking the corosync crash bug before upgrading to v6.
@astnwt you may want to add it to https://bugzilla.proxmox.com/show_bug.cgi?id=2326
I know we are holding off on upgrading to PVE 6 until we see a resolution on this, as I haven't seen anyone state that a particular combination of hardware or configuration has been determined to cause it.
I would at least recommend compiling and installing the latest official Intel ixgbe driver and seeing if it magically resolves your issue. If so, maybe the Proxmox guys can either bundle the newer version or switch to a newer kernel that already bundles a newer driver...
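Roughly the steps involved, if it helps (the version number below is just an example, grab whatever is current from Intel's download center, and do the module reload from a console rather than over the NIC you're rebuilding):
# build deps and headers for the running PVE kernel
apt-get install build-essential pve-headers-$(uname -r)
# unpack the driver source from Intel (version is an example)
tar xzf ixgbe-5.x.x.tar.gz
cd ixgbe-5.x.x/src
make install
# reload the module, then make sure the new one ends up in the initramfs
rmmod ixgbe; modprobe ixgbe
update-initramfs -u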
I should also mention that I found a thread describing an experience identical to mine on the 4.4 kernel: https://sourceforge.net/p/e1000/mailman/message/35263903/
I'm not saying it's for sure the same issue, but I wouldn't be surprised if it is.
I'm pretty sure we saw exactly this sort of behavior ourselves on the embedded SFP+ Intel 10G ports of a set of 3 Supermicro servers when we upgraded from the 4.4 to the 4.15 kernel during a Proxmox upgrade. The motherboard they were built on was the X10SDV-TP8F. We technically had...
I'm pretty sure as of Luminous a cache tier is no longer a requirement:
https://ceph.com/community/new-luminous-erasure-coding-rbd-cephfs/
However, I think the issue is that the header and metadata must still be stored in a replicated pool, with only the data in the erasure-coded pool. An example rbd...
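A sketch of the kind of thing I mean, with made-up pool names and PG counts (the replicated pool holds the image header/metadata; --data-pool points the actual data objects at the EC pool):
ceph osd pool create rbd_replicated 64 64 replicated
ceph osd pool create rbd_ec 64 64 erasure
# overwrites must be enabled on the EC pool for rbd to use it (bluestore only)
ceph osd pool set rbd_ec allow_ec_overwrites true
rbd create --size 100G --pool rbd_replicated --data-pool rbd_ec myimage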
Hmm, I'm not sure if anyone has really compared the Intel Optane (3D XPoint) series to NAND-based NVMe drives for Ceph. At least according to AnandTech, the 900P does have power-loss protection: https://www.anandtech.com/show/11953/the-intel-optane-ssd-900p-review/3
I'd have to imagine though...
Perhaps something has changed recently.
Browsing http://download.proxmox.com/debian/pve/dists/stretch/pve-no-subscription/binary-amd64/ I don't see any ceph packages. I'm still running Proxmox 4, so I can't truly comment from experience. But the ceph packages appear to be here...