Search results

  1. pty issue with ve 4 containers (converted from 3)

    This worked for me. Thanks for posting the fix!
  2. Ceph Authentication Error

    My fault. It was a stupid mistake. I put the shared keys in /etc/pve/priv instead of /etc/pve/priv/ceph.
  3. Ceph Authentication Error

    Some more info:
    pveversion -v
    proxmox-ve: 4.1-26 (running kernel: 4.2.6-1-pve)
    pve-manager: 4.1-1 (running version: 4.1-1/2f9650d4)
    pve-kernel-4.2.6-1-pve: 4.2.6-26
    lvm2: 2.02.116-pve2
    corosync-pve: 2.3.5-2
    libqb0: 0.17.2-1
    pve-cluster: 4.0-29
    qemu-server: 4.0-41
    pve-firmware: 1.1-7...
  4. Ceph Authentication Error

    Hey everyone, I'm hoping you can help me out. I can't for the life of me get Proxmox to connect to a remote Ceph cluster. I've followed all of the steps several times, but I keep getting this error: rbd error: rbd: couldn't connect to the cluster! (500). I've read several posts where this was...
  5. Suggestions for SAN Config

    Hi Nils. Open-E looks interesting. Do you connect to it over iSCSI?
  6. Suggestions for SAN Config

    Here's some I/O info from our busiest production server:
    root@proxmox:~# iostat -d -x 5 3
    Linux 2.6.32-39-pve (proxmox)  02/26/2016  _x86_64_  (24 CPU)
    Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util
    sda...
  7. Suggestions for SAN Config

    Checked it out. Not a lot of documentation. And no one on the IRC channel. Has anyone ever been successful getting OpenVZ/LXC working on Gluster?
  8. Suggestions for SAN Config

    That seems to be the consensus. Obviously we'll build the NAS to be as reliable as possible, but it seems like the one component that would be a single point of failure for the entire cluster should be redundant.
  9. Suggestions for SAN Config

    Thanks again, Mir. Do you use redundant NASes?
  10. Suggestions for SAN Config

    Thanks, Mir. Napp-it looks like another good option. We've had it running in our labs, but never put it into production. Do you run it in production with OpenVZ or LXC?
  11. Suggestions for SAN Config

    We've used both FreeNAS and NAS4Free. Both have been solid. I also helped a buddy deploy an iXsystems solution in a DC, and those guys were great to work with. SSDs are purely a reliability play. We're based in California, but our data centers are spread throughout the country. While we can...
  12. Suggestions for SAN Config

    We use the DCS3510's in some of our servers. We did have one go bad after two weeks, but besides that they've been great.
  13. Suggestions for SAN Config

    There are so many things to consider... At this point we're looking at the Arista 7148S and a few Cisco models. We need to do some testing to determine whether we're going to use iSCSI. If we go iSCSI, then we don't need multi-chassis LAGs. Wolfgang (awesome guy, by the way) shared with me that...
  14. Suggestions for SAN Config

    Also, regarding the self-made node: that's the direction I believe we'll go in. We're planning on using Dell R720xd's with NAS4Free. We're going to use all SSDs. We're still researching SSDs.
  15. Suggestions for SAN Config

    Thank you. We are currently on a gigabit network, but are planning on deploying a 10-gigabit network specifically for the SAN. I'll pull some I/O stats from our production environment tomorrow and will post them here ASAP.
  16. Suggestions for SAN Config

    Thanks, Hec. Another wrinkle: we don't need large I/O, and we don't need a lot of space. We only need 6-10TB tops. We talked to NetApp and Dell, and their solutions are awesome. They have the features and reliability that we need, but they aren't really priced well for small storage/redundant...
  17. Suggestions for SAN Config

    I'm with you 100%. That's one of the problems we're facing. We really need minimal performance, but we want to deploy redundant SANs because we'll get over 80 support calls over a 10-minute outage.
  18. Suggestions for SAN Config

    Ha ha, yes, each CT is very basic: 512MB of RAM and 8 gigs of hard drive. Nearly zero CPU usage. We need a lot of containers, and they need to be very reliable, but the performance requirements are minimal. The number of CTs, though, will continue to grow, so we want to use separate resources for...
  19. Suggestions for SAN Config

    Thanks again. Do you know if LXC supports Gluster or ZFS storage? I haven't read of anyone doing it with Proxmox yet.
  20. Suggestions for SAN Config

    Thank you so much for your detailed response. Here are the answers to your questions. Q1: Are these all in the same Proxmox cluster, as in 3 nodes in Datacenter A and 3 nodes in Datacenter B? There are 3 nodes in each data center. We don't need any synchronizing/failover between data centers...
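The Ceph authentication thread above resolves to a keyring-path fix. A minimal sketch of that fix, assuming an external RBD storage; the storage ID `my-ceph` and the monitor address are placeholders for illustration, not values from the thread:

```shell
# Proxmox VE reads the keyring for an external RBD storage from
# /etc/pve/priv/ceph/<storage-id>.keyring -- placing it directly in
# /etc/pve/priv (as in the thread) is not enough.
# "my-ceph" is a hypothetical storage ID used for illustration.
mkdir -p /etc/pve/priv/ceph
cp /etc/ceph/ceph.client.admin.keyring /etc/pve/priv/ceph/my-ceph.keyring

# Optional sanity check from the node, bypassing the GUI
# (192.0.2.10 is a placeholder monitor address):
rbd -m 192.0.2.10 --id admin \
    --keyring /etc/pve/priv/ceph/my-ceph.keyring ls
```

If the `rbd ls` call succeeds here but the GUI still reports "couldn't connect to the cluster! (500)", the keyring filename likely does not match the storage ID configured in /etc/pve/storage.cfg.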
