Search results

  1. pvps1

    [SOLVED] pvecm nodes -> 1 node IP, all others hostname (upgrade v4/v5)

    Solved by simply restarting the pveproxy and corosync services on the v4.4 machine where the guest image was [see the restart sketch after this list]. It still shows the IP instead of the hostname, but moving/migration works.
  2. pvps1

    [SOLVED] pvecm nodes -> 1 node IP, all others hostname (upgrade v4/v5)

    I started upgrading from version 4.4 to 5.4 by adding a new host with version 5.4. The plan was to migrate old nodes -> new node, upgrade, migrate back, next node, etc... I added the new host (Debian Stretch + PVE repos) with # pvecm add $ip_old_44_node --use_ssh 1 [see the join/migrate sketch after this list] result: Nodeid Votes...
  3. pvps1

    mix local-lvm and local-zfs in cluster?

    Thanks for the hint. Configured the 8th host to use only NFS, Ceph and LVM-thin, but I still cannot see any storage backend (though the ZFS errors in the logs are gone, of course). Cannot even see the local storage. E.g. pvesm status still hangs (or rather runs forever; I can stop it with Ctrl-C). Did...
  4. pvps1

    mix local-lvm and local-zfs in cluster?

    node8 logs say (as expected): Jul 10 18:24:23 scalpel pvedaemon[1797]: could not activate storage 'local-zfs', zfs error: open3: exec of zpool import -d /dev/disk/by-id/ rpool failed at /usr/share/perl5/PVE/Tools.pm line 411.
  5. pvps1

    mix local-lvm and local-zfs in cluster?

    Hi, I've got a new 4.4.15 node installed. It's the only node in the cluster (of 8) with no ZFS, only LVM-thin. I cannot see any storage on this node, neither in the web GUI nor with pvesm status (it hangs). Is it possible that I cannot configure a storage (ZFS in this case) that is NOT available on all... [see the storage.cfg sketch after this list]
  6. pvps1

    Ceph integration - clock skew

    Realized -> it is always mon.1 that is reported. This is node pn0002, which really has a different time of ~100 ms (see last reply). Did a manual resync with the internal time server, and immediately after that a date +"%T.%3N" again shows between 50 and 100 ms difference... the node has no...
  7. pvps1

    Ceph integration - clock skew

    All nodes sync to node dkcpr0001, which is part of the cluster (therefore 1 hop, switched). pr0001 fetches time from some external server.
  8. pvps1

    Ceph integration - clock skew

    Running cssh: root@dkcpn0001:~# date +"%T.%3N" 14:16:05.967 root@dkcpn0002:~# date +"%T.%3N" 14:16:05.888 root@dkcpn0003:~# date +"%T.%3N" 14:16:05.967 I don't know if the time difference can come from the cssh runtime or the nodes' load. All nodes sync to node dkcpr0001, which is part of the...
  9. pvps1

    Ceph integration - clock skew

    The 3 "pn" nodes: Linux dkcpn0002 4.4.19-1-pve #1 SMP Wed Sep 14 14:33:50 CEST 2016 x86_64 GNU/Linux. The "pr" node (router, no guests, no Ceph -> just doing quorum): Linux dkcpr0001 4.4.6-1-pve #1 SMP Thu Apr 21 11:25:40 CEST 2016 x86_64 GNU/Linux. All: pve-manager Version: 4.3-3
  10. pvps1

    Ceph integration - clock skew

    No. See (run with cssh on all 4 nodes): root@dkcpn0001:~# cat /etc/timezone ; date Europe/Vienna Thu Oct 27 10:11:24 CEST 2016 root@dkcpn0002:~# cat /etc/timezone ; date Europe/Vienna Thu Oct 27 10:11:24 CEST 2016 root@dkcpn0003:~# cat /etc/timezone ; date Europe/Vienna Thu Oct 27 10:11:24...
  11. pvps1

    Ceph integration - clock skew

    It happens regularly, not just after rebooting. E.g. the last report came this morning (the checks run hourly). All nodes have an uptime > 7 days.
  12. pvps1

    quorum even node number

    Hi, is it a problem to have an even node count (4, 6, 8, ...) for quorum? We have a 4-node cluster where nodes reboot erratically (under very high load, I guess...). If yes, is incrementing quorum_votes: 1 to quorum_votes: 2 on one node the solution [see the corosync.conf sketch after this list]? Regards, Peter
  13. pvps1

    Ceph integration - clock skew

    Hi, we have a problem with a 4-node cluster running integrated Ceph (meaning the nodes are PVE and Ceph cluster in one). 3 nodes are Ceph mons and OSDs; 2 of them report: health HEALTH_WARN clock skew detected on mon.1 Monitor clock skew detected. We cannot detect why... [see the clock-skew check sketch after this list]
  14. pvps1

    PVE 4.2 DRBD9: unable to use DRBD-Device...

    Hi, ) for testing I installed a PVE 4.2 cluster with 3 nodes (1 is for quorum only). Both are Debian Jessie with the pve-no-subscription repository. Cluster works, quorum OK. ) Configured DRBD9 according to https://pve.proxmox.com/wiki/DRBD9 on 2 nodes (redundancy 2) [see the DRBD storage sketch after this list]. ) DRBD is up and running...
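
Sketches referenced in the results above follow.

For result 1 (pvecm nodes showing an IP): a minimal sketch, assuming a standard systemd-managed PVE 4.x/5.x node, of restarting the two services named in the post and re-checking cluster membership; run it as root on the affected node.

    # restart the web proxy and the cluster communication service
    systemctl restart pveproxy
    systemctl restart corosync
    # re-check how the cluster sees its members afterwards
    pvecm status
    pvecm nodes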
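
For result 2 (adding a v5.4 node to a v4.4 cluster): a sketch of the join-and-migrate loop described in the post; the IP, VMID and node name are placeholders, and the --use_ssh 1 flag is taken verbatim from the post.

    # on the new 5.4 host: join the cluster via the IP of an existing 4.4 node
    pvecm add 192.0.2.10 --use_ssh 1
    pvecm status                      # verify quorum and membership
    # on an old node: move a guest over before upgrading that node
    qm migrate 100 newnode --online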
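
For results 3-5 (mixing local-lvm and local-zfs): a minimal /etc/pve/storage.cfg sketch, with placeholder pool, volume group and node names, showing the nodes option that restricts a storage definition to the hosts that actually provide it, so the other nodes do not try to activate it.

    # /etc/pve/storage.cfg (excerpt; names are placeholders)
    zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        nodes node1,node2,node3,node4,node5,node6,node7

    lvmthin: local-lvm
        thinpool data
        vgname pve
        content images,rootdir
        nodes node8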
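
For result 12 (even node count and quorum): a sketch of the corosync.conf nodelist section where the quorum_votes value from the question lives; node names and IDs are placeholders, and whether raising one node's votes is actually advisable is left to the thread itself.

    # /etc/pve/corosync.conf (nodelist excerpt; names are placeholders)
    nodelist {
      node {
        nodeid: 1
        quorum_votes: 1
        ring0_addr: node1
      }
      node {
        nodeid: 2
        quorum_votes: 2    # one node given an extra vote, as asked in the post
        ring0_addr: node2
      }
    }
    # when editing, also bump config_version in the totem section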
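
For results 6-11 and 13 (Ceph clock skew): a sketch of how to inspect the skew the monitors complain about and the stock Ceph thresholds behind the warning; the option names and the 0.05 s default come from upstream Ceph, not from the thread.

    # which mon reports the skew, and by how much
    ceph health detail
    # NTP peer state on each node (run via cssh on all of them)
    ntpq -p
    # ceph.conf knobs behind the HEALTH_WARN (defaults shown); raising them
    # only hides the warning, it does not fix the underlying time drift
    #   mon clock drift allowed      = 0.05
    #   mon clock drift warn backoff = 5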
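
For result 14 (PVE 4.2 with DRBD9): a sketch of the storage.cfg entry that the linked wiki setup typically ended up with; the storage ID is a placeholder and the exact option names should be checked against the wiki page cited in the post.

    # /etc/pve/storage.cfg (excerpt; 'drbd1' is a placeholder ID)
    drbd: drbd1
        content images,rootdir
        redundancy 2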