Search results

  1. pvps1

    EOL warnings on "old stable"

    I would like to suggest that on "old stable" (Bullseye, PVE 7) no "YOU ARE EOL" messages are shown in the GUI. I think it's unnecessary stress between IT and customers to force an update where it's not security-relevant. IMO, stable and old-stable should be "warning free". Just had an...
  2. pvps1

    annoying forum spam

    I am sure the team is aware of the problem, but the mass of spam in the forum is annoying. I use the RSS feed and I see it all :)
  3. pvps1

    WARN: systemd-timesyncd

    When running pmg7to8 we see: WARN: systemd-timesyncd is not the best choice for time-keeping on servers, due to only applying updates on boot. It's recommended to use one of: * chrony * ntpsec * openntpd. As this is mentioned in PVE too, we checked it (all our servers running...
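The switch the warning asks for is small; a minimal sketch, assuming a Debian-based PMG/PVE host (the pool server is illustrative, use your own NTP source):

```conf
# Install chrony first (on Debian this replaces systemd-timesyncd automatically):
#   apt install chrony
# Minimal /etc/chrony/chrony.conf:
pool 2.debian.pool.ntp.org iburst
driftfile /var/lib/chrony/chrony.drift
makestep 1.0 3     # step the clock on large offsets during the first 3 updates
rtcsync            # keep the hardware clock in sync
```

Verify afterwards with `chronyc tracking`.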
  4. pvps1

    Dist-Upgrade 5.x -> 6.x -> 7.x

    Hi, I have to dist-upgrade a cluster currently running 5.x (already corosync 3). Your recommendations are welcome: can I dare to dist-upgrade each host 5->6->7 (not 5->7, but two dist-upgrades while the rest of the cluster is still 5.x)? Or should I better go the slow way and dist-upgrade to 6 the...
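Each intermediate hop boils down to repointing the APT repositories and running a full dist-upgrade; a sketch of the 5.x (Stretch) -> 6.x (Buster) step, assuming the no-subscription repository:

```conf
# /etc/apt/sources.list for the 5.x -> 6.x hop
deb http://deb.debian.org/debian buster main contrib
deb http://deb.debian.org/debian buster-updates main contrib
deb http://security.debian.org/debian-security buster/updates main contrib
deb http://download.proxmox.com/debian/pve buster pve-no-subscription
# then: apt update && apt dist-upgrade
```

Proxmox ships a pve5to6 checklist tool for exactly this step; running it before and after each hop catches most cluster-level problems.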
  5. pvps1

    [SOLVED] pvecm nodes -> 1 node IP, all others hostname (upgrade v4/v5)

    I started upgrading from version 4.4 to 5.4 by adding a new host with version 5.4. The plan was to migrate old nodes -> new node: upgrade, migrate back, next node, etc. I added the new host (Debian Stretch + PVE repos) with # pvecm add $ip_old_44_node --use_ssh 1. Result: Nodeid Votes...
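The mixed IP/hostname output of pvecm nodes usually mirrors what is stored in /etc/pve/corosync.conf: nodes joined by IP keep the raw address in their ring0_addr. A hypothetical nodelist entry (node name and address are placeholders):

```conf
node {
  name: node5
  nodeid: 5
  quorum_votes: 1
  ring0_addr: node5.example.com   # shows as a raw IP if 'pvecm add' was given one
}
```

Editing corosync.conf by hand requires bumping config_version and letting the change propagate cluster-wide, so it is worth double-checking before saving.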
  6. pvps1

    mix local-lvm and local-zfs in cluster?

    Hi, I've got a new 4.4.15 node installed. It's the only node in the cluster (of 8) with no ZFS but LVM-thin. I cannot see any storage on this node, neither in the web GUI nor with pvesm status (it hangs). Is it possible that I cannot configure a storage (ZFS in this case) that is NOT available on all...
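PVE storage definitions are cluster-wide, but each storage can be restricted to the nodes that actually provide it via the nodes property in /etc/pve/storage.cfg; a sketch (storage IDs, pool/VG names, and node names are illustrative):

```conf
zfspool: local-zfs
    pool rpool/data
    content images,rootdir
    nodes node1,node2,node3,node4,node5,node6,node7

lvmthin: local-lvm
    thinpool data
    vgname pve
    content images,rootdir
    nodes node8
```

With the restriction in place, each node only activates (and queries) the storages listed for it, which also avoids pvesm hanging on backends the node cannot reach.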
  7. pvps1

    quorum even node number

    Hi, is it a problem having an even node number (4, 6, 8, ...) for quorum? We have a 4-node cluster where nodes are rebooting erratically (under very high load, I guess...). If yes, is incrementing quorum_votes: 1 to quorum_votes: 2 on one node the solution? Regards, Peter
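For reference, the vote count in question lives in the nodelist of /etc/pve/corosync.conf; a hypothetical entry with the bumped vote (node details are placeholders, and an external QDevice is the cleaner alternative to vote-juggling on even-sized clusters):

```conf
node {
  name: node1
  nodeid: 1
  quorum_votes: 2   # bumped from 1 so the 4-node cluster has an odd total of 5 votes
  ring0_addr: 10.0.0.1
}
```

The trade-off: the cluster now survives losing any two of the other nodes, but losing node1 plus one more still breaks quorum, so the extra vote should sit on the most reliable host.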
  8. pvps1

    Ceph integration - clock skew

    Hi, we have a problem with a 4-node cluster running integrated Ceph (meaning the nodes are PVE and Ceph cluster in one). 3 nodes are Ceph MONs and OSDs; 2 of them report: health HEALTH_WARN clock skew detected on mon.1 Monitor clock skew detected. We cannot determine why...
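Ceph raises this warning when a monitor's clock drifts past mon_clock_drift_allowed (0.05 s by default). The real fix is reliable NTP on every MON node, but the threshold itself is tunable in ceph.conf; a sketch (the value is illustrative):

```conf
[mon]
# raise the skew warning threshold from the 0.05 s default
# (this masks drift rather than fixing it; verify NTP health first)
mon_clock_drift_allowed = 0.1
```

A quick way to confirm the cause is to compare `ntpq -p` (or `chronyc tracking`) output across the MON nodes and check they all sync against the same source.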
  9. pvps1

    PVE 4.2 DRBD9: unable to use DRBD-Device...

    Hi, for testing I installed a PVE 4.2 cluster with 3 nodes (1 is for quorum only); both are Debian Jessie with the pve-no-subscription repository. The cluster works, quorum is OK. I configured DRBD9 according to https://pve.proxmox.com/wiki/DRBD9 on 2 nodes (redundancy 2). DRBD is up and running...
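For context, a hand-written DRBD 9 two-node resource roughly equivalent to that "redundancy 2" deployment might look like the sketch below; on PVE 4.2 the wiki setup generated these via drbdmanage, so hostnames, devices, ports, and volume names here are purely illustrative:

```conf
resource r0 {
    device      /dev/drbd100 minor 100;
    disk        /dev/drbdpool/vm-100-disk-1;
    meta-disk   internal;
    on pve1 {
        node-id 0;
        address 10.0.0.1:7000;
    }
    on pve2 {
        node-id 1;
        address 10.0.0.2:7000;
    }
    connection-mesh {
        hosts pve1 pve2;
    }
}
```

`drbdadm status r0` on either node then shows whether both peers are UpToDate before the device is handed to PVE as storage.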