Search results

  1. NFS and Snapshots corrupt my VMs disks

    I just ran into this this morning...
  2. Corrupt Filesystem after snapshot

    This is still an issue! Any news? Any way to work around this issue or recover from it? <D>
  3. Proxmox VE Ceph Benchmark 2018/02

    Ah....thank you Alwin. Yeah....this seems worse now...700ms seems extremely high...if that were ns that would be fantastic. I'll need to run the tests in my environment and compare. I suspect I'm seeing better numbers because if I were at 700ms latency I would certainly be "hearing about it"...
  4. Proxmox VE Ceph Benchmark 2018/02

    Sorry, this is what I'm trying to confirm: The numbers in the report with commas are throwing me off....what are these in ms? 700ms? 70ms? Thanks, <D>
  5. Proxmox VE Ceph Benchmark 2018/02

    Just want to confirm that in the Benchmark document the latency numbers are reflected in "Seconds" and that we would need to multiply by 1000 to get ms... Example: 0,704943 = approx 700ms latency. This seems a bit high for SSDs...
  6. PVE / CEPH Performance Tab Question

    Quick Question: Do the graphical stats shown under the CEPH --> Performance area in the PVE Web GUI reflect an "aggregate of all nodes in the CEPH cluster" or just the node you are currently viewing? In other words...If I am looking at the IOPS: READ on two nodes and I see 3192 on one node and 4439...
  7. BUG / MISSING FEATURE: Host key verification fails after adding node to existing cluster

    Final summary: If you add a node to an existing cluster that is configured to use a dedicated, separate cluster network as described in this document: https://pve.proxmox.com/wiki/Separate_Cluster_Network#Adding_nodes_in_the_future Using this command: pvecm add IP-ADDRESS-CLUSTER...
  8. [SOLVED] Proxmox CEPH: Newly added OSD shows DOWN and OUT

    Problem solved: Ran apt-cache show ceph on a working node and the new "stunted" node and verified the version was out of date. Then ran through this to upgrade it: https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel Hope this helps someone else. Cheers, <D>
  9. [SOLVED] Proxmox CEPH: Newly added OSD shows DOWN and OUT

    Well...looks like a version mismatch.....for some reason the newly installed node has a debian-hammer jessie main repository listed in /etc/apt/sources.list.d/ceph.list; not sure why... My existing nodes are on Jewel.... .../sigh.
  10. Failed to create OSD

    I just encountered this problem too....I'm tracking the second (same) problem on this thread: https://forum.proxmox.com/threads/proxmox-ceph-newly-added-osd-shows-down-and-out.39130/
  11. [SOLVED] Proxmox CEPH: Newly added OSD shows DOWN and OUT

    Anyone got any suggestions? I have 100k worth of servers sitting here that I can't leverage right now...any suggestions would be helpful! Thanks!
  12. BUG / MISSING FEATURE: Host key verification fails after adding node to existing cluster

    Well...I'm talking to myself at this point but I can confirm that it does not on any of my new nodes that I'm adding...=)
  13. [SOLVED] Proxmox CEPH: Newly added OSD shows DOWN and OUT

    Need some CEPH help here... I have an existing Proxmox CEPH environment with the following: 5 nodes in a Proxmox environment (PVE v4.4); 4 of the nodes have SSD-based OSDs configured in a single RBD pool (32 total OSDs); 3 of those nodes are MONs. I've recently purchased 4 additional...
  14. BUG / MISSING FEATURE: Host key verification fails after adding node to existing cluster

    Can anyone else confirm that using this command to add a new node to an existing cluster actually adds the node's cluster network address to the /etc/pve/priv/known_hosts file? pvecm add <IP addr of a cluster member> -ring0_addr <new node's ring addr> In my case it does not, which breaks...
  15. BUG / MISSING FEATURE: Host key verification fails after adding node to existing cluster

    So, here's a summary of where I'm at so far: I created the original cluster following the Proxmox instructions with a single IP range. After further research I found that Proxmox recommends a dedicated Cluster Network, so I followed their instructions to configure corosync on a second IP...
  16. BUG / MISSING FEATURE: Host key verification fails after adding node to existing cluster

    This seems to be relevant.... When creating a dedicated cluster network....does Proxmox not add those keys into /etc/pve/priv/known_hosts? https://pve.proxmox.com/pipermail/pve-devel/2016-November/024109.html <D>
  17. BUG / MISSING FEATURE: Host key verification fails after adding node to existing cluster

    More info: I still don't know why this happened....but I now suspect that this is related to the fact that there doesn't appear to be any ssh relationship for the dedicated cluster network? Running against my dedicated cluster network (default behavior) (FAILS): Code: # /usr/bin/ssh -v...
  18. BUG / MISSING FEATURE: Host key verification fails after adding node to existing cluster

    More info: I've noticed that I can ssh from all nodes if I use: root@hostname but it prompts for a key if I use root@FQDN....DNS seems to factor in here...hmmm....
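
The OSD threads above (items 8, 9, and 13) come down to a Ceph package version mismatch caused by a stale repository entry on the newly added node. A minimal sketch of the check the poster describes, assuming placeholder host names pve-good (a working node) and pve-new (the "stunted" node):

Code:
# Compare the installed ceph package version on a known-good node and the new node
ssh root@pve-good 'apt-cache show ceph | grep -m1 Version'
ssh root@pve-new 'apt-cache show ceph | grep -m1 Version'
# Check which Ceph repository the new node is actually pulling from
ssh root@pve-new 'cat /etc/apt/sources.list.d/ceph.list'

If the new node still lists the old hammer repository while the rest of the cluster is on Jewel, the poster's fix was to correct the repository and upgrade the node per https://pve.proxmox.com/wiki/Ceph_Hammer_to_Jewel.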
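
The host-key threads (items 7 and 14-17) hinge on one command and one file: pvecm add run with -ring0_addr for a dedicated cluster network, and whether the new node's cluster-network address ends up in /etc/pve/priv/known_hosts. A rough reproduction of the check, using made-up addresses (10.0.0.11 for an existing cluster member, 10.10.10.15 for the new node's ring0 address):

Code:
# On the new node: join the cluster over the dedicated cluster network
pvecm add 10.0.0.11 -ring0_addr 10.10.10.15
# On any existing member: look for the new node's ring0 address in the shared known_hosts
grep 10.10.10.15 /etc/pve/priv/known_hosts
# If the entry is missing, ssh over the cluster network prompts for or fails host key
# verification, which matches the verbose ssh failure quoted in item 17
ssh -v root@10.10.10.15 hostname

The posters report that the entry is not created in this setup, which is the gap the bug report is about.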