Aha! There's the "permission denied" error!
(with journalctl -f still running in the background:)
Drat, output too big to post here, see http://pastebin.com/AqF1bGSP
Don't know. Where are they stored? If you're referring to the files that I listed way back in the 5th post to this thread, then yes, they are owned by ceph/ceph.
I merely get:
root@pve1:~# ceph auth list
Error initializing cluster client: Error('error calling conf_read_file: error code 22',)
UPDATE: silly me, that was because I was back to a "purged" state. After re-doing the initialization and monitor creation, "ceph auth list" shows me a list...
Missing some details:
first I did
pveceph install -version jewel
then
pveceph init -pg_bits 14 -size 3
then
pveceph createmon
Switching to the GUI, I then created MONs on the other 3 nodes (for a total of 4 MONs in my 4-node cluster).
...all of this worked well up to this point.
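The next step - and the one that fails for me - is OSD creation. For the record, a minimal sketch of the CLI equivalent of what I tried in the GUI, assuming /dev/sdb and /dev/sdc are spare disks (device names are just examples, and the -journal_dev spelling is from memory, so treat it as an assumption):
# create an OSD on an empty disk
pveceph createosd /dev/sdb
# or with the journal on a separate (SSD) device
pveceph createosd /dev/sdb -journal_dev /dev/sdc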
using pveceph...
PVE cluster is happy:
root@pve1:~# pvecm status
Quorum information
------------------
Date: Tue Jan 3 06:38:07 2017
Quorum provider: corosync_votequorum
Nodes: 4
Node ID: 0x00000001
Ring ID: 1/212
Quorate: Yes
Votequorum information...
No, this was a fresh install of CEPH.
I recently completed the PVE cluster, finished migrating all local-storage VMs to NFS, went to create CEPH storage and ran into this problem.
I've installed CEPH Jewel, and am having problems with it. I've purged all the CEPH configuration (via "pveceph purge").
Running "pveceph install -version hammer" does not downgrade Jewel to Hammer, although it does alter the apt.sources.d entry. Logically, I then expected to be able to...
Running pve-manager/4.4-5/c43015a5 (running kernel: 4.4.35-1-pve), with CEPH jewel installed (via "pveceph install -version jewel"), I find that I cannot create any OSDs.
Digging through journalctl output previously, I saw a "permission denied" error, but of course now I can't find it...
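If anyone wants to repeat the hunt, this is roughly how I've been searching (sketch only; the ceph unit glob may need adjusting on your boxes):
# search the current boot's journal
journalctl -b | grep -i 'permission denied'
# or restrict it to the ceph units
journalctl -u 'ceph*' --since today | grep -i 'permission denied'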
I do use ZFS, but I also have the ARC limited to 2GB or 4GB (on 16GB and 28GB servers respectively - I haven't seen the error on any of the 48GB nodes yet).
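For reference, the cap is just the usual ZFS module tunable; a minimal sketch of how I set the 2GB limit (value in bytes):
# persistent: put this line in /etc/modprobe.d/zfs.conf, then run update-initramfs -u
options zfs zfs_arc_max=2147483648
# live: takes effect immediately, no reboot needed
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max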
I have been seriously suspicious of ZFS lately; its performance under heavy write conditions is utterly abysmal no matter what tweaking I...
I have since found one potential fix for my performance issues (although it may cause other problems, don't know yet): setting the zfs_arc_lotsfree_percent parameter to zero (0).
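In case anyone wants to try the same thing, a minimal sketch (runtime plus persistent; the usual caveat applies - it may behave differently under other workloads):
# runtime change
echo 0 > /sys/module/zfs/parameters/zfs_arc_lotsfree_percent
# persistent: add this line to /etc/modprobe.d/zfs.conf and rebuild the initramfs
options zfs zfs_arc_lotsfree_percent=0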
Yes, I've discovered that about backups :-(.
However, I do actually still want dedup for containers; all the OS files should dedupe, producing substantial savings. At least in theory.
Whoops, sorry for late reply. Dedup is only turned on for the ../ctdata subvolume, which currently is unused (no containers on this system any more). Oh. And apparently also for backups... which makes sense. I'll try turning them off, but those are the two legitimate use cases for...
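For reference, this is roughly how I'm checking and switching it - the pool/dataset names below are just examples, not my exact layout:
# see which datasets have dedup enabled
zfs get -r dedup rpool
# turn it off (only affects blocks written from now on)
zfs set dedup=off rpool/ctdata
zfs set dedup=off rpool/backups
# see how large the dedup table has grown
zpool status -D rpool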
I've reconfigured the 10 HDDs + 2 SSDs into a RAID10 setup with a mirrored SLOG.
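For anyone curious, the layout is essentially this - a sketch with placeholder device names (I used /dev/disk/by-id paths for the real thing):
# 10 HDDs as 5 mirrored pairs (RAID10-equivalent), 2 SSDs as a mirrored SLOG
zpool create tank \
  mirror sda sdb  mirror sdc sdd  mirror sde sdf \
  mirror sdg sdh  mirror sdi sdj \
  log mirror sdk sdl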
Everything works great except for sustained write performance. When writing large amounts of data (e.g. during a VM clone or during backups) VMs time out while attempting to write to their virtual disks. (It...
Previously discovered that there's no point in an L2ARC; this server is heavily write-biased, and the L2ARC size never got past 6GB with a 32GB ARC, so ... not much value there.
Might be more valuable as really fast VM storage - haven't made up my mind yet.
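(The L2ARC size I quoted comes from the ARC kstats; a quick way to watch it, sketch only:)
# current L2ARC size and hit/miss counters
grep -E '^l2_(size|hits|misses)' /proc/spl/kstat/zfs/arcstats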
I know that attaching a ZIL to the RAIDZ3...
I've discovered that the hard way!
I went and got two 3TB SATA drives today, set them up as a mirror, and am zfs-send'ing merrily away to them right now (I only had about 2TB of data).
As snapshots finish transferring, I'll have to start up a few key VMs again and run them live off the external...
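Roughly what the transfer looks like, in case it's useful to anyone - pool, dataset, and snapshot names here are placeholders:
# mirror pool on the two external 3TB drives
zpool create backup mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
# recursive snapshot, then send the whole tree with all snapshots
zfs snapshot -r rpool/data@migrate
zfs send -R rpool/data@migrate | zfs recv -F backup/data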