Search results

  1. Waschbüsch

    [SOLVED] PVE 5 and Ceph luminous: unable to create monitor

    mon status: { "name": "0", "rank": -1, "state": "synchronizing", "election_epoch": 0, "quorum": [], "features": { "required_con": "144115188077969408", "required_mon": [ "kraken", "luminous" ], "quorum_con": "0"...
  2. Waschbüsch

    [SOLVED] PVE 5 and Ceph luminous: unable to create monitor

    during the attempted creation via pveceph createmon I get: 2017-09-05 06:34:48.283245 7f1a35f4b700 0 mon.1@0(leader) e82 handle_command mon_command({"format":"plain","prefix":"mon getmap"} v 0) v1 2017-09-05 06:34:48.283295 7f1a35f4b700 0 log_channel(audit) log [DBG] : from='client...
  3. Waschbüsch

    [SOLVED] PVE 5 and Ceph luminous: unable to create monitor

    Fwiw, I upgraded the ceph packages to 12.1.4 via test repo but the situation remains unchanged otherwise.
  4. Waschbüsch

    [SOLVED] PVE 5 and Ceph luminous: unable to create monitor

    Well, if I try to add a monitor, all I ever get is a timeout. Otherwise: # ceph status cluster: id: 20b519f3-4988-4ac5-ac3c-7cd352431ebb health: HEALTH_OK services: mon: 1 daemons, quorum 1 mgr: 1(active), standbys: 0, 2, 3 osd: 12 osds: 12 up, 12 in data...
  5. Waschbüsch

    [SOLVED] PVE 5 and Ceph luminous: unable to create monitor

    I saw some threads on something similar but none of them looked like the issue I have. Apologies if I overlooked a thread already covering this. Anyway: I cannot create a new monitor on my cluster. Or rather, it gets created but never gets quorum. I tried doing this via: - Web UI - pveceph...
  6. Waschbüsch

    Proxmox 5 and ceph luminous: can't create monitor

    What is your setting in /etc/default/ceph? I have seen issues on my Opteron systems when using jemalloc. I'd advise trying without it just to be sure.
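
    A rough sketch of what that file can look like; the LD_PRELOAD line is an assumption about how jemalloc is typically enabled there, not the packaged default:

        # /etc/default/ceph (illustrative)
        # jemalloc is commonly enabled by preloading the library; leaving the
        # line commented out lets the daemons use the default tcmalloc instead.
        # LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1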
  7. Waschbüsch

    after upgrade to PVE 5.0: unable to add osd

    OK, there was still some mismatch with regards to packages. I had used luminous before but with packages from the ceph repo, not pve. After uninstalling with --purge and reinstalling using pveceph install, everything works now.
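
    The reinstall described above boils down to something like this (the package list is illustrative; check what is actually installed before purging):

        # remove the ceph.com packages together with their configuration
        apt-get purge ceph ceph-base ceph-common ceph-mon ceph-osd ceph-mgr
        # reinstall the packages from the Proxmox repository
        pveceph install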
  8. Waschbüsch

    after upgrade to PVE 5.0: unable to add osd

    Hi there, after upgrading a test-cluster to PVE 5.0, everything worked fine including ceph. However, if I try to add another osd, it does all the preparation but is unable to start the osd. What is immediately obvious is this: the partition layout is different from that of the old disks: using gdisk...
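
    Comparing the layout of a freshly prepared OSD disk with an older one can be done along these lines (device names are placeholders):

        # print the GPT partition tables of an old and a new OSD disk
        gdisk -l /dev/sdb
        gdisk -l /dev/sdc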
  9. Waschbüsch

    help needed to enable multicast

    But should omping not show that multicast packets got through? It doesn't. It says 100% packet loss and only unicast got through...
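
    The usual multicast check is to run omping on all cluster nodes at the same time; a sketch with placeholder hostnames:

        # run simultaneously on every node; reports both unicast and multicast loss
        omping -c 10000 -i 0.001 -F -q node1 node2 node3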
  10. Waschbüsch

    help needed to enable multicast

    Thanks for the feedback. I had consulted https://pve.proxmox.com/wiki/Multicast_notes, yes, but to no avail. When you say I could do without IGMP snooping, would that not mean falling back to unicast? Or do I have my networking facts all mixed up? :-)
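
    For reference, turning off IGMP snooping on a Linux bridge does not fall back to unicast; it simply floods multicast frames to all bridge ports. A sketch of the two options on the host side, assuming the PVE bridge is named vmbr0 (an assumption):

        # turn off IGMP snooping on the bridge ...
        echo 0 > /sys/class/net/vmbr0/bridge/multicast_snooping
        # ... or keep snooping and enable an IGMP querier instead
        echo 1 > /sys/class/net/vmbr0/bridge/multicast_querier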
  11. Waschbüsch

    help needed to enable multicast

    Hi all, I have the following cluster setup: 3 nodes, a Netgear XS712T 10G switch; node 1 uses port 1 on the switch, node 2 port 2, etc. Port 12 on the switch is my uplink to the internet. I use VLANs to separate different kinds of traffic. VLAN 5 is for internal PVE traffic with ports 1, 2...
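
    A minimal sketch of what the VLAN side of such a setup can look like in /etc/network/interfaces on a node (interface name and addresses are illustrative; assumes Debian vlan/8021q support is installed):

        # 10G link to the switch
        auto eth0
        iface eth0 inet manual

        # VLAN 5: internal PVE / cluster traffic
        auto eth0.5
        iface eth0.5 inet static
            address 10.10.5.1
            netmask 255.255.255.0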
  12. Waschbüsch

    Increasing prices for community edition?

    I am not sure I understand the concern? It is not like you have to have a subscription in order to use Proxmox VE. Rather: you have to have a subscription to get the (more) stable repo. There is a big difference.
  13. Waschbüsch

    iothread option not working?

    Thanks for clearing that up, Dietmar!
  14. Waschbüsch

    iothread option not working?

    Hi all, I just tried to enable the 'iothread' flag for a disk on a test VM: The disk is configured like this (from /etc/pve/qemu-server/100.conf): scsi0: disks:vm-100-disk-1,cache=writeback,discard=on,size=4G The scsi-backend is virtio. The popup allows me to select the checkbox, but clicking...
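
    For context, a conf with the flag set would look roughly like this; iothread on a SCSI disk generally also needs the virtio-scsi-single controller, which is an assumption about the missing piece here:

        # /etc/pve/qemu-server/100.conf (sketch)
        scsihw: virtio-scsi-single
        scsi0: disks:vm-100-disk-1,cache=writeback,discard=on,iothread=1,size=4G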
  15. Waschbüsch

    PVE 4 / Debian Jessie

    I see. I guess I'll just have to live with it, then. :-) Thanks, Dietmar!
  16. Waschbüsch

    PVE 4 / Debian Jessie

    Hi all, does anything within proxmox depend on systemd? Or could I remove it and revert to sysvinit? I do that on boxes I administer, but wonder if it is possible on proxmox. Thanks, Martin
  17. Waschbüsch

    Move Ceph journal to SSD?

    Re: Move Ceph journal to SSD? Hi Udo, This worked just as advertised. ;-) Thanks again for your help! Martin
  18. Waschbüsch

    Move Ceph journal to SSD?

    Re: Move Ceph journal to SSD? Here's an additional thought: The drives are all attached to an Adaptec 8805 SAS Controller capable of using SSDs for caching. Any idea on how that would compare to putting only the journal on the SSD? Martin
  19. Waschbüsch

    Move Ceph journal to SSD?

    Re: Move Ceph journal to SSD? Thank you, Udo, for the great and detailed reply. I have ordered my S3700 SSDs and will implement this as you suggested. I'll give feedback once that's done. It might take a bit because I will have to wait for the right trays so I can put the SSDs in my Supermicro...
  20. Waschbüsch

    Move Ceph journal to SSD?

    Hi all, I was thinking about adding a (server class) SSD each to my three ceph nodes. Currently, each node has two OSDs and the journal for each is on the drive itself. Now, I have a few questions about concepts: - Can the two journals reside on the same partition on the SSD? - Or do I have to...
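
    For what it's worth, with filestore each OSD journal normally gets its own partition on the SSD; a sketch of creating an OSD with its journal on a shared SSD (device names are placeholders, and the exact option name can vary between pveceph versions):

        # create an OSD on /dev/sdd, putting its journal on the SSD /dev/sdb
        pveceph createosd /dev/sdd -journal_dev /dev/sdb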
