Search results

  1. B

    DYNU DDNS ACME anyone ?

    Thanks! Was tearing my hair out over that :)
  2. B

    Backup stalled/frozen

    Ran into this exact problem myself - privileged CT hangs on backup snapshot because of the fuse mount. Works fine if the CT is stopped, or fuse support is disabled. I'd really rather not mount on the server - apart from the uid/gid hassles of bind mounts, I use containers so that I don't have...
  3. B

    Proxmox Backup Server on VPS - Security

    Thanks. Interesting VPS host - flexible, and not as insanely difficult to cost as Azure is.
  4. B

    [SOLVED] Incremental backup & VM restore?

    Going by the web UI, I don't think it really works that way - when you back up a VM, it's an incremental based on the last backup.
  5. B

    [SOLVED] Incremental backup & VM restore?

    I think only the first backup is full, and all others after that are incremental.
  6. B

    [SOLVED] Incremental backup & VM restore?

    Add the PBS to the server's storage as normal, go to the backups list for that storage, select a backup, and restore.
  7. B

    Proxmox Backup Server on VPS - Security

    What VPS host are you considering? I was looking at testing Azure Cloud
  8. B

    Adding node kills cluster

    Oh man, I feel your pain :( Sorry that I don't have more to add. Probably too late to debug now anyway. Good luck!
  9. B

    Adding node kills cluster

    Hmmm, I had the same problem recently (a couple of weeks ago) - adding two new nodes, both killed the cluster and rebooted everything. The nodes were joined OK afterwards, though I had to run update certs on them to do anything. At the time I put it down to the cluster being under load and corosync failing.
  10. B

    Should I enable "SSD Emulation" for ceph images?

    For Windows guests, would it inform Windows that it should issue TRIM requests?
  11. B

    PPPoE Over VirtIO 802.1Q VLAN - Multiqueues? (pfSense)

    Bit premature on the "all ok" front :) Having pfSense as the gateway seemed to do something weird with multicast - cluster quorum just vanished :( Reverting to our old gateway hardware resolved the issue. Looked into IGMP snooping & queriers, just couldn't get my head round it. Switched to...
  12. B

    PPPoE Over VirtIO 802.1Q VLAN - Multiqueues? (pfSense)

    I have successfully configured 3 ADSL2+ modems to work with a pfSense VM. Each modem is plugged into our D-Link DGS-1210 switch (ports 1, 2 & 3). Ports 1, 2 & 3 are on VLANs 101, 102 & 103 respectively. pfSense is driving them via PPPoE VLAN interfaces. vs 2.4.3.1 KVM 4 Cores One virtio nic...
  13. B

    pveproxy hanging

    Proxmox 4.4, no-subscription repo. I dist-upgraded two nodes on 11-Dec. Now both those nodes have multiple unkillable pveproxy processes. dmesg has many entries of: [50996.416909] INFO: task pveproxy:6798 blocked for more than 120 seconds. [50996.416914] Tainted: P O 4.4.95-1-pve #1...
  14. B

    [SOLVED] VM's losing disk access, file systems getting corrupted

    Well, I was interested :) Was it due to corosync sharing a network interface with your storage layer? corosync seems to be very sensitive to latency under high-bandwidth situations.
  15. B

    [SOLVED] Siimultaneous Spice Client Connections

    "But when connecting from different clients, they share the connection and it is the same one...." That's what it's meant to do - share the same screen across multiple SPICE displays. We used it in support so one person could easily observe what the other was doing.
  16. B

    LizardFS anyone?

    I think the write is not confirmed until all chunks are written - not 100% sure on that, though. Also, with erasure coding the client writes to all chunkservers simultaneously, rather than the chained writes used with standard replicas.
  17. B

    LizardFS anyone?

    Sourceforge unfortunately: https://sourceforge.net/p/lizardfs/mailman/lizardfs-users/?viewmonth=201612 A lot of discussion is actually via their github issues page: https://github.com/lizardfs/lizardfs/issues They have since started their own forums as well: https://lizardfs.org/forum...
  18. B

    Sheepdog 1.0

    Thanks mir, I interpret that to mean individual VMs could lose data, but the overall cluster will remain intact (I managed to destroy a LizardFS cluster in testing - those master servers are fragile). One thing I only just thought to check is memory usage - with just two 32GB VM's the...
  19. B

    Sheepdog 1.0

    That's my environment, except for the --nosync :) Any idea of the implications of --nosync? Does it mean some VMs could be missing a few writes (after a server crash), or could the actual sheepdog cluster be toast? My main reservations re sheepdog are: - documentation - the user mailing list is...