Search results

  1. Cannot create jewel OSDs in 4.4.5

    Aha! There's the "permission denied" error (with journalctl -f still running in the background)! Drat, the output is too big to post here; see http://pastebin.com/AqF1bGSP
  2. Cannot create jewel OSDs in 4.4.5

    That's not working either:
    root@pve1:~# cat /proc/partitions
    major minor  #blocks  name
       1        0      65536 ram0
       1        1      65536 ram1
       1        2      65536 ram2
       1        3      65536 ram3
       1        4      65536 ram4
       1        5      65536 ram5
       1        6      65536 ...
  3. Cannot create jewel OSDs in 4.4.5

    Don't know. Where are they stored? If you're referring to the files that I listed way back in the 5th post to this thread, then yes, they are owned by ceph/ceph.
  4. Cannot create jewel OSDs in 4.4.5

    Journalctl output from "pveceph createosd /dev/sdc -journal_dev /dev/sb":
    Jan 03 07:04:18 pve1 systemd[1]: [/lib/systemd/system/ceph-mon@.service:24] Unknown lvalue 'TasksMax' in section 'Service'
    Jan 03 07:04:18 pve1 systemd[1]: [/lib/systemd/system/ceph-osd@.service:18] Unknown lvalue...
  5. Cannot create jewel OSDs in 4.4.5

    I merely get:
    root@pve1:~# ceph auth list
    Error initializing cluster client: Error('error calling conf_read_file: error code 22',)
    UPDATE: silly me, that was because I was back to a "purged" state. After re-doing the initialization and monitor creation, "ceph auth list" shows me a list...
  6. Cannot create jewel OSDs in 4.4.5

    Missing some details: first I did "pveceph install -version jewel", then "pveceph init -pg_bits 14 -size 3", then "pveceph createmon" (the full command sequence is sketched after these results). Switching to the GUI, I then created MONs on the other 3 nodes (for a total of 4 MONs in my 4-node cluster). All of this worked well up to this point. Using pveceph...
  7. Cannot create jewel OSDs in 4.4.5

    PVE cluster is happy:
    root@pve1:~# pvecm status
    Quorum information
    ------------------
    Date:             Tue Jan 3 06:38:07 2017
    Quorum provider:  corosync_votequorum
    Nodes:            4
    Node ID:          0x00000001
    Ring ID:          1/212
    Quorate:          Yes
    Votequorum information...
  8. Cannot create jewel OSDs in 4.4.5

    Sure...
    root@pve1:~# ls -ahl /var/lib/ceph/osd
    total 8.0K
    drwxr-xr-x 2 ceph ceph 4.0K Dec 9 15:03 .
    drwxr-x--- 8 ceph ceph 4.0K Dec 30 13:07 ..
    root@pve1:~# ls -ahl /etc/ceph/
    total 16K
    drwxr-xr-x  2 root root 4.0K Jan 1 10:57 .
    drwxr-xr-x 98 root root 4.0K Jan 1 09:08 ..
    -rw------- 1 ceph...
  9. Cannot create jewel OSDs in 4.4.5

    No, this was a fresh install of CEPH. I recently completed the PVE cluster, finished migrating all local-storage VMs to NFS, went to create CEPH storage and ran into this problem.
  10. How to downgrade CEPH from Jewel to Hammer?

    I've installed CEPH Jewel, and am having problems with it. I've purged all the CEPH configuration (via "pveceph purge"). Running "pveceph install -version hammer" does not downgrade Jewel to Hammer, although it does alter the apt.sources.d entry. Logically, I then expected to be able to...
  11. Cannot create jewel OSDs in 4.4.5

    Running pve-manager/4.4-5/c43015a5 (running kernel: 4.4.35-1-pve), with CEPH jewel installed (via "pveceph install -version jewel"), I find that I cannot create any OSDs. Digging through journalctl output previously, I saw a "permission denied" error, but of course now I can't find it... it...
  12. Proxmox 4.4.5 kernel: Out of memory: Kill process 8543 (kvm) score or sacrifice child

    I do use ZFS, but I also have the ARC limited to 2GB or 4GB (on 16GB and 28GB servers respectively - I haven't seen the error on any of the 48GB nodes yet); the ARC cap is one of the tunables sketched after these results. I have been seriously suspicious of ZFS lately; its performance under heavy write conditions is utterly abysmal no matter what tweaking I...
  13. Proxmox 4.4.5 kernel: Out of memory: Kill process 8543 (kvm) score or sacrifice child

    +1: I've recently been seeing this on a nightly basis too, but only since 4.4.x.
  14. Insane load avg, disk timeouts w/ZFS

    I have since found one potential fix for my performance issues (although it may cause other problems; I don't know yet): setting the zfs_arc_lotsfree_percent parameter to zero (0) - see the tunables sketch after these results.
  15. Insane load avg, disk timeouts w/ZFS

    Yes, I've discovered that about backups :-(. However, I do actually still want dedup for containers; all the OS files should dedupe, producing substantial savings. At least in theory.
  16. Insane load avg, disk timeouts w/ZFS

    Also, on at least one system that exhibits nearly identical behaviour, dedup isn't enabled anywhere. But I'll turn it off anyway.
  17. Insane load avg, disk timeouts w/ZFS

    Whoops, sorry for the late reply. Dedup is only turned on for the ../ctdata subvolume, which is currently unused (no containers on this system any more). Oh, and apparently also for backups... which makes sense. I'll try turning them off, but those are the two legitimate use cases for...
  18. Insane load avg, disk timeouts w/ZFS

    I've reconfigured the 10 HDDs + 2 SSDs into a RAID10 setup with a mirrored SLOG. Everything works great except for sustained write performance. When writing large amounts of data (e.g. during a VM clone or during backups) VMs time out while attempting to write to their virtual disks. (It...
  19. Insane load avg, disk timeouts w/ZFS

    Previously discovered that there's no point in an L2ARC; this server is heavily write-biased, and the L2ARC size never got past 6GB with a 32GB ARC, so... not much value there. It might be more valuable as really fast VM storage - haven't made up my mind yet. I know that attaching a ZIL to the RAIDZ3...
  20. Insane load avg, disk timeouts w/ZFS

    I've discovered that the hard way! I went and got two 3TB SATA drives today, set them up as a mirror, and am zfs-send'ing merrily away to them right now (I only had about 2TB of data); a rough sketch of that replication follows these results. As snapshots finish transferring, I'll have to start up a few key VMs again and run them live off the external...
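
For reference, a minimal sketch of the Ceph setup sequence described in result 6, ending with the createosd step that fails in results 4 and 11; it assumes the same 4-node PVE 4.4 cluster, and the device names /dev/sdc and /dev/sdb are examples only:

    pveceph install -version jewel       # install the Jewel packages on each node
    pveceph init -pg_bits 14 -size 3     # write the cluster ceph.conf (3 replicas)
    pveceph createmon                    # first monitor; the other three were created via the GUI
    pveceph createosd /dev/sdc -journal_dev /dev/sdb   # the OSD creation step that fails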
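
Results 12 and 14 mention capping the ZFS ARC and zeroing zfs_arc_lotsfree_percent. Here is a minimal sketch of how those ZFS-on-Linux module parameters are typically set, using the 2GB cap quoted in result 12; the /etc/modprobe.d/zfs.conf path and exact values are assumptions, not taken from the thread:

    # at runtime (lost on reboot)
    echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max        # cap the ARC at 2GB
    echo 0 > /sys/module/zfs/parameters/zfs_arc_lotsfree_percent    # the tweak from result 14

    # persistently, via module options (then rebuild the initramfs)
    echo "options zfs zfs_arc_max=2147483648" >> /etc/modprobe.d/zfs.conf
    echo "options zfs zfs_arc_lotsfree_percent=0" >> /etc/modprobe.d/zfs.conf
    update-initramfs -u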
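
Result 20 describes copying the pool contents to a freshly created external mirror with zfs send. A rough sketch of that kind of one-off replication, with hypothetical pool, dataset, and device names (extbackup, rpool/data, /dev/sdx, /dev/sdy):

    zpool create extbackup mirror /dev/sdx /dev/sdy       # the two 3TB SATA drives as a mirror
    zfs snapshot -r rpool/data@migrate                    # recursive snapshot to transfer
    zfs send -R rpool/data@migrate | zfs receive -F extbackup/data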
