Search results

  1.

    New SSD Drive - PE size?

    Hi, We're moving all our VEs off a Ceph cluster while we rebuild. The idea was to do a backup/restore to a new PVE node which has a second 1TB drive for storage. This is a brand-new install, but when we try to restore a snapshot we get - Virtual Environment 6.2-4 TASK ERROR: unable to...
  2.

    Replace Node v6.2?

    Hi, We have a Ceph cluster with five nodes. We have had a hardware failure and I'm about to install a new node. Our existing cluster is running Proxmox v5. I'm wondering if I can install v6 on the new node and add it to the existing cluster? I don't really want to risk upgrading the four nodes as...
  3.

    Ceph Cluster and Standalone Nodes?

    Out of interest, could we also run replication from a CT that is stored on Ceph to one of these new nodes that don't run Ceph? Occasionally we have issues with Ceph, so I'm thinking of using replication to another node?
  4.

    Ceph Cluster and Standalone Nodes?

    Hi Tim, No, they won't have Ceph installed. Basically they will be independent installs of Proxmox. I wanted to add them to the Proxmox cluster so I could manage all the servers in the same GUI. Regards, James
  5.

    Ceph Cluster and Standalone Nodes?

    Hi, Is it safe to add a couple of stand-alone nodes to the Proxmox cluster? For example, a Ceph cluster of 5 nodes and 2 stand-alone nodes? Regards, James
  6.

    LXC CT - xl2tpd

    Hi, Has anyone installed xl2tpd to act as a VPN client in an LXC container? I'm wondering if there is anything I need to do on the node for it to work? Regards, J [a node config sketch follows these results]
  7.

    Ceph Nodes & PGs

    But does having six OSDs per node affect the PGs, as one node down will cause six OSDs to go? Is my calculator correct?
  8.

    Ceph Nodes & PGs

    Having used Proxmox for some time, I'm rebuilding one of my clusters and I would like to make sure I get the PGs correct, as it still confuses me. Question 1: In my situation I have five nodes. Each node has 5-6 500GB OSDs installed. I used the PG calculator, which gives me a PG value of 1024... [the numbers are worked through after these results]
  9.

    Backup of VM failed - error: rbd: failed to create snapshot:

    Can anyone throw any light on this please, as I'm actually considering moving away from Proxmox since we can't back up CTs :(
  10.

    Backup of VM failed - error: rbd: failed to create snapshot:

    Hi, rbd snap rm ceph-vm/vm-102-disk-1@vzdump actually solves the issue, but the backups are scheduled overnight and I end up with exactly the same error after a couple of nights. [a cleanup sketch follows these results]
  11.

    Backup of VM failed - error: rbd: failed to create snapshot:

    Hi, I am still having this issue. Any help would be appreciated. I have found a few people discussing this error on the web. Is it a bug with Proxmox? It seems the old snapshot can't be removed?
  12.

    Backup of VM failed - error: rbd: failed to create snapshot:

    Hi, I wonder if anyone can help. I have backups scheduled every night and most of them seem to fail with the following error: INFO: starting new backup job: vzdump 102 --compress lzo --node cloud3 --mode snapshot --remove 0 --storage VispaNAS INFO: Starting Backup of VM 102 (lxc) INFO...
  13.

    Container NAT Prerouting?

    Hi All, We're building a proxy server using squid and need an iptables rule which redirects port 80 to 3128. Is this even possible on a CT, as it doesn't appear to work when we add the iptables rule via the command line? Any suggestions would be appreciated. James [a rule sketch follows these results]
  14.

    Ceph Cluster Reinstallation - OSDs down?

    Hi Alwin, thanks. I did manage to overcome the problem by removing the OSDs, zapping/dd'ing, then re-adding. Strange that it didn't work the first time around, though. ceph osd out 0 service ceph stop osd.0 ceph osd crush remove osd.0 ceph auth del osd.0 ceph osd rm 0 ceph-disk zap /dev/cciss/c0d0 dd... [the sequence is spelled out after these results]
  15.

    Ceph Cluster Reinstallation - OSDs down?

    Hi All, I've re-installed a 5-node cluster with 5.1. Each of the 5 nodes has 8 drives: /dev/sda (OS), /dev/sdb (journal SSD), then six SSD disks for OSDs: /dev/cciss/c0d0 /dev/cciss/c0d1 /dev/cciss/c0d2 /dev/cciss/c0d3 /dev/cciss/c0d4 /dev/cciss/c0d5 I've installed Ceph along with the...
  16.

    Ceph Pool Size?

    Hi Alwin, OK, that explains it. I was thinking that I had the pool configured incorrectly, which was causing the cluster to fail. So the answer is, I need a minimum of five nodes to sustain a two-node failure. Thanks for your help. James
  17.

    Ceph Pool Size?

    Hi, I'm after a little help with Ceph pools as I can't fully understand the calculator. I have four nodes; each node has six 500GB drives, which are my OSDs. I'm looking to be able to sustain two nodes failing. What would be the recommended pool size & pg_num? Regards, James [pool settings are sketched after these results]
  18.

    Dead Node - Removed, GUI Broken

    Hi, Any update on this? I still have the issue and don't know how to resolve it.
  19.

    Dead Node - Removed, GUI Broken

    Hi, root@cloud1:~# cat /etc/pve/.members { "nodename": "cloud1", "version": 22, "cluster": { "name": "vispa", "version": 15, "nodes": 6, "quorate": 1 }, "nodelist": { "storage2": { "id": 7, "online": 1, "ip": "x.x.x.49"}, "cloud1": { "id": 1, "online": 1, "ip": "x.x.x.51"}, "cloud3": {...
  20.

    Dead Node - Removed, GUI Broken

    Hi, I've had a node from my cluster crash today. I have removed the dead node; however, there seems to have been some issue with the GUI. My CTs are running and I have quorum, but no nodes are shown in the web GUI. I can write to /etc/pve fine. I have noticed that the output from ha-manager...
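
Notes on selected threads

For the xl2tpd thread (result 6): a PPP-based L2TP client inside an LXC container generally needs access to /dev/ppp, which the node has to grant. A minimal sketch, assuming a privileged container and using 101 as a placeholder VMID (not from the thread), added to /etc/pve/lxc/101.conf on the node:

    # allow the ppp character device (major 108, minor 0) inside the CT
    lxc.cgroup.devices.allow: c 108:0 rwm
    # bind-mount the node's /dev/ppp into the container
    lxc.mount.entry: /dev/ppp dev/ppp none bind,create=file

On a node running pure cgroup v2 the first key becomes lxc.cgroup2.devices.allow; restart the container after editing. This is an assumption about what the node needs, not something confirmed in the thread.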
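
For the "Ceph Nodes & PGs" thread (results 7-8): the number the PG calculator produces follows a rule of thumb of roughly 100 PGs per OSD, divided by the replica count and rounded to a power of two. Worked through with the figures from the thread (five nodes, six OSDs each) and an assumed replica count of 3, which the excerpt does not state:

    total OSDs  = 5 nodes x 6 OSDs per node  = 30
    target PGs  = (30 x 100) / 3 replicas   ~= 1000
    rounded to the nearest power of two      = 1024

Losing one node does take six OSDs out at once, but that does not change pg_num; it only means a larger share of the PGs run degraded until their data is re-replicated onto the surviving nodes.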
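
For the "Backup of VM failed - error: rbd: failed to create snapshot" thread (results 9-12): the error usually means a previous vzdump run left its snapshot behind, so the next run cannot create another one with the same name. A minimal cleanup sketch using the pool and image names quoted in the thread:

    # list whatever snapshots are still attached to the image
    rbd snap ls ceph-vm/vm-102-disk-1
    # drop the stale vzdump snapshot so the next backup can recreate it
    rbd snap rm ceph-vm/vm-102-disk-1@vzdump

As the thread notes, the snapshot reappears after a few nights, so the removal is only a workaround; whatever keeps interrupting vzdump's own cleanup is the underlying problem, and the excerpt does not show it.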
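
For the "Container NAT Prerouting" thread (result 13): the usual transparent-proxy rule for squid is a REDIRECT in the nat table's PREROUTING chain. A sketch of the rule as it would be entered inside the CT (the interface name eth0 is an assumption):

    # send incoming HTTP to squid's local port
    iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-ports 3128

Inside a container the nat table comes from the host kernel, so the relevant modules (iptable_nat, xt_REDIRECT) must be available on the node, and an unprivileged CT may not be permitted to add the rule at all; in that case the redirect would have to be applied on the node instead.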
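
For the "Ceph Cluster Reinstallation - OSDs down" thread (results 14-15): the command run quoted in result 14 is easier to follow one step at a time. This is the same sequence from the post, with comments added (the trailing dd wipe is truncated in the excerpt and left out here):

    ceph osd out 0                   # mark the OSD out so data is moved off it
    service ceph stop osd.0          # stop the daemon (pre-systemd style, as in the post)
    ceph osd crush remove osd.0      # remove it from the CRUSH map
    ceph auth del osd.0              # delete its auth key
    ceph osd rm 0                    # remove the OSD entry itself
    ceph-disk zap /dev/cciss/c0d0    # wipe the partition table before re-adding the disk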
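
For the "Ceph Pool Size" thread (results 16-17): the conclusion in result 16 follows from the replica settings. With the usual 3/2 pool (three copies, I/O allowed while at least two exist) and the failure domain at host level, surviving two failed nodes and still being able to rebuild all three copies needs at least five hosts: 5 - 2 = 3 survivors, one per copy. If the Ceph monitors run on the same nodes, five is also the minimum for monitor quorum to survive two failures (3 of 5 is still a majority, 2 of 4 is not). A sketch of the pool settings this assumes, with <pool> as a placeholder name:

    ceph osd pool set <pool> size 3       # keep three copies of every object
    ceph osd pool set <pool> min_size 2   # keep serving I/O while at least two copies remain

With only four hosts, losing two leaves no way to restore three copies per object, and monitor quorum is lost as well if each node runs a monitor, which is consistent with the failure described in the thread.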
