Search results

  1. all nodes red - but quorum - cannot find any error

    I could not find the issue. I think it was related to some NFS share, but I have no idea why it did not go away. I solved it by updating all the nodes to the newest version. Generally NFS shares should always work (especially when they are hard mounted) after having a problem it is very...
  2. after updating to 3.4: problem with client; ssl3_read_bytes: ssl handshake failure

    After upgrading to 3.4 the logs are full of: Mar 4 23:57:37 node7 pveproxy[19593]: problem with client 192.168.11.8; ssl3_read_bytes: ssl handshake failure Mar 4 23:57:37 node7 pveproxy[19593]: Can't call method "timeout_reset" on an undefined value at /usr/share/perl5/PVE/HTTPServer.pm line 225.
  3. all nodes red - but quorum - cannot find any error

    3 servers use LACP; the others are all directly connected. Does this problem mean that all servers have some (network?) problem, or can just one server cause the whole cluster to stop working with pvestatd?
  4. all nodes red - but quorum - cannot find any error

    root@node6:~# cat /etc/pve/.members { "nodename": "node6", "version": 2, "cluster": { "name": "cluster01", "version": 11, "nodes": 11, "quorate": 1 }, "nodelist": { "node1": { "id": 1, "online": 1}, "node7": { "id": 2, "online": 1}, "node2": { "id": 3, "online": 1}, "node5": { "id": 4, "online"...
  5. all nodes red - but quorum - cannot find any error

    Network cabling? You mean that something broke (hardware)? Because ping and everything else works fine, including multicast ping.
  6. all nodes red - but quorum - cannot find any error

    Now I can see this in the log of one server (which is the one I think is causing the problems): Mar 4 12:11:55 node6 pvestatd[3224]: status update time (8.769 seconds) Mar 4 12:12:03 node6 pvestatd[3224]: status update time (5.337 seconds) Mar 4 12:12:13 node6 pvestatd[3224]: status update time...
  7. all nodes red - but quorum - cannot find any error

    Hello, yesterday all nodes went red; each node only shows itself as green. I already tried to restart cman, pvestatd and pvedaemon - all of them restart without error on every node, but nothing changes. I even tried to reboot one node... I can also write to /etc/pve/... all NFS shares (images and backups) are...
  8. if one node loses quorum for seconds all other nodes show red lights

    For some time now it happens during backups that a node loses quorum (searching for the reason is another task). Afterwards the node which lost quorum is the only one that shows all other nodes in green and with data. I checked that all nodes have quorum at this time. Running a...
  9. losing quorum during backup

    It happened again this night. It is just one specific node; all other nodes cause no problems. The load was not very high during the night, and the network used for the cluster communication was idle... I found this in the log: Dec 18 01:29:12 node6 pvestatd[737374]: WARNING: unable to...
  10. !! New Cluster build and crash with ceph !!

    glen, your read performance is inside a VM? 900 MB/s is really cool! How did you measure that? With files big enough to bypass the cache? I only get around 100 MB/s read and 500 MB/s write inside... How much did the speed change for you after setting the blockdev read-ahead to 8192? (A cache-bypassing read test is sketched after this list.)
  11. moving disk from ceph to local lvm fails

    transferred: 279143383040 bytes remaining: 29491200 bytes total: 279172874240 bytes progression: 99.99 % transferred: 279159898112 bytes remaining: 12976128 bytes total: 279172874240 bytes progression: 100.00 % transferred: 279172874240 bytes remaining: 0 bytes total: 279172874240 bytes...
  12. qcow2 file size error

    Isn't the max file size of ext3/4 2 TB?
  13. qcow2 file size error

    Hi, I can run "qemu-img create -f qcow2 test.qcow2 6T", but "qemu-img create -f qcow2 -o preallocation=metadata test.qcow2 6T" fails with the error above. So the question for me is: what happens if the thin-provisioned image grows beyond 2 TB? Will it fail? (A short qemu-img sketch follows after this list.)
  14. losing quorum during backup

    And yes, we have a dedicated network for the backups. The cluster communication is on the other network, and this network was more or less idle during the night.
  15. losing quorum during backup

    This is the default management node (it uses nginx as a reverse proxy to the outside world). pveversion --verbose proxmox-ve-2.6.32: 3.3-138 (running kernel: 3.2.0-4-amd64) pve-manager: 3.3-2 (running version: 3.3-2/995e687e) pve-kernel-2.6.32-33-pve: 2.6.32-138 pve-kernel-2.6.32-26-pve: 2.6.32-114...
  16. losing quorum during backup

    This node is losing the quorum: pveversion --verbose proxmox-ve-2.6.32: 3.2-136 (running kernel: 2.6.32-31-pve) pve-manager: 3.3-1 (running version: 3.3-1/a06c9f73) pve-kernel-2.6.32-32-pve: 2.6.32-136 pve-kernel-2.6.32-31-pve: 2.6.32-132 pve-kernel-2.6.32-26-pve: 2.6.32-114 lvm2...
  17. losing quorum during backup

    We are losing quorum on one node during backup. Another node, which is the default management node, shows no statistics for any of the nodes' machines (and shows all other nodes red). I have to restart the pvemanager on that node to get statistics and green icons again. 1) Question one is why is the node...
  18. [RESOLVED] IPSET: restore failed - firewall cannot anymore

    Re: IPSET: restore failed - firewall cannot anymore. Thanks! The bug is fixed now; the bugfix works as expected :-)
  19. [RESOLVED] IPSET: restore failed - firewall cannot anymore

    Re: IPSET: restore failed - firewall cannot anymore. OK, when I make really long ipset names, other names are generated for them and it works. BUT with ipset names of exactly 19 characters (I did not check 18 or 20) I always get an empty ipset, even when I enter some IPs...
  20. [RESOLVED] IPSET: restore failed - firewall cannot anymore

    Re: IPSET: restore failed - firewall cannot anymore. OK, but the question then is WHY was the ipset empty? In the GUI I had about 12 IPs there... Maybe another bug?
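
For the read-performance question in result 10, one minimal way to measure sequential read throughput inside a VM without help from the page cache is to read a test file opened with O_DIRECT. This is only a sketch: the path /root/readtest.bin and the 32 GiB size are placeholders, and the file should be larger than the VM's RAM so it cannot be served from cache.

    # Write a test file first (placeholder path and size; pick something larger than the VM's RAM).
    dd if=/dev/zero of=/root/readtest.bin bs=1M count=32768 oflag=direct

    # Read it back with O_DIRECT so the page cache is bypassed; dd prints the throughput at the end.
    dd if=/root/readtest.bin of=/dev/null bs=1M iflag=direct

For tools that cannot use O_DIRECT, dropping the guest's caches first (echo 3 > /proc/sys/vm/drop_caches) has a similar effect for a single run.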
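For the qcow2 questions in results 12 and 13, the difference between the two quoted commands can be made visible by comparing apparent size with actual allocation. This is a sketch assuming the image sits on a filesystem with a 2 TB per-file limit (for example ext3 with 4 KiB blocks): without preallocation only the qcow2 header and empty tables are written, while preallocation=metadata extends the file to roughly its full virtual size up front, which is where such a limit fails immediately. A thin image would presumably hit the same limit later, once its file actually grows past 2 TB.

    # Thin-provisioned image: only the qcow2 header and empty tables are written, so this stays small.
    qemu-img create -f qcow2 test.qcow2 6T

    # Metadata preallocation extends the file to (roughly) its full virtual size up front,
    # so a 2 TB per-file limit on the underlying filesystem is hit right away.
    qemu-img create -f qcow2 -o preallocation=metadata test-prealloc.qcow2 6T

    # Compare virtual size, apparent size and real allocation of the thin image as it fills up.
    qemu-img info test.qcow2
    du -h --apparent-size test.qcow2
    du -h test.qcow2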