I've got myself into a bit of a pickle.
I had a three-node Ceph cluster running for a good couple of years with no problems. I'm now upgrading to 6. I was going to switch to ZFS, but I've changed my mind, though I could be open to persuasion. I was also going to reduce my node count in the DC to reduce power...
Is anyone experiencing problems with older VMs?
I've only just installed 6.2 and started migrating VMs, but I have a Debian Jessie VM constantly crashing with a kernel panic relating to some SCSI thing or other, and I now have an Ubuntu 8.04 VM that almost immediately after bootup loses...
I have a three-node cluster with three dual-port 10Gb NICs in each node, plus 4 x 1Gb Ethernet.
There is no 10Gb switch; we just use DAC cables.
Each node has 2 x 10Gb connected to the other two for Ceph.
Each node has 2 x 10Gb connected to the other two for cluster replication.
Each node has 1 x 10Gb...
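To make the switchless topology above concrete, here is an illustrative sketch (node names are my own, not from the post) that enumerates the point-to-point DAC links in a three-node full mesh. Each network role (Ceph, cluster replication) needs its own set of links, which is why every node burns two 10Gb ports per role.

```shell
# Illustrative sketch: list the DAC links in a switchless 3-node full mesh.
# Node names are assumptions for the example.
nodes="node1 node2 node3"
links=0
set -- $nodes
while [ $# -gt 1 ]; do
  a=$1; shift
  for b in "$@"; do
    echo "$a <-> $b"        # one direct cable per node pair
    links=$((links + 1))
  done
done
echo "links per network role: $links"   # 3 cables per role
echo "ports per node per role: 2"       # one port toward each of the other two
```

With three nodes the mesh stays cheap (three cables per role); the port count per node grows linearly with cluster size, which is why this approach is usually only seen on small clusters.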
I have just upgraded my three-node cluster and now Ceph is reporting a health warning.
I did the upgrade by migrating all VMs off each machine, running apt-get dist-upgrade, and then rebooting.
After each node came back up, Ceph showed as degraded and eventually went back to clean.
I now have a health...
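The rolling-upgrade procedure described in that post can be sketched as below. This is a dry run only: node names are illustrative, the VM-migration line is a placeholder, and every command is echoed rather than executed.

```shell
# Dry-run sketch of a per-node rolling upgrade; nothing here is executed.
steps=0
for node in pve1 pve2 pve3; do
  echo "## on $node"
  echo "qm migrate <vmid> <other-node> --online   # repeat for each VM on $node"
  echo "apt-get update && apt-get dist-upgrade"
  echo "reboot"
  echo "ceph -s   # wait for HEALTH_OK before starting the next node"
  steps=$((steps + 1))
done
echo "nodes upgraded in sequence: $steps"
```

The key point the post illustrates is the last echoed step: Ceph will show degraded while each node is down, so waiting for HEALTH_OK between nodes avoids taking two copies of the data offline at once.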
I have just been doing some debugging with ngrep inside a VM and suddenly received traffic destined for a number of other VMs. This is Proxmox 5; should I be worried? I've certainly never seen this before, but I haven't been using Proxmox for more than a few months.
I'm trying to understand Proxmox networking internally.
I've currently set up a three-node cluster with Ceph using the guide that configures 10Gb networking without a switch, i.e. all nodes directly attached using a bond.
I'm now realising that what this probably means is that the regular cluster...
On an eight-disk backplane with two SAS 8087 connectors, I currently have one SAS cable plugged into the onboard hardware RAID and the other port plugged into a 9211 HBA. Does anyone see any problem with this? I did it without thinking, but wondered if it may cause an issue in the future.
I'm just about to set up a three-node Proxmox cluster with Ceph.
The three machines will all be:
IBM System x3550 M4
CPU: 2 x Xeon E5-2650 v2 (eight-core, 2.60 GHz base, 3.40 GHz Turbo Boost, 8 GT/s, 20 MB cache)
Memory: 192GB (24 x 8GB) DDR3 ECC Registered
RAID: Standard M5110 Onboard RAID...
I have three servers with dual E5-2650 CPUs and 128GB RAM; there are no PCIe expansion slots, just a single 10Gb SFP+ port and 6 onboard SATA ports.
Will I be able to use these for Ceph? I see it says a separate 10Gb network is required, but would it work with one VLAN on the 10Gb link for Ceph and another VLAN for LAN/WAN...
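For the single-port question above, splitting the one 10Gb link by VLAN is usually expressed in Proxmox's Debian-style /etc/network/interfaces. This is a hypothetical fragment, not a recommendation: the interface name, VLAN IDs, and addresses are all assumptions for illustration.

```
# Hypothetical fragment: Ceph and LAN/WAN as tagged VLANs on one 10Gb SFP+ port.
# Interface name, VLAN IDs (10, 20), and addresses are placeholders.
auto enp1s0
iface enp1s0 inet manual

# Ceph traffic on VLAN 10, on a tagged sub-interface
auto enp1s0.10
iface enp1s0.10 inet static
    address 10.10.10.1/24

# LAN/WAN bridge for guests on VLAN 20
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports enp1s0.20
    bridge-stp off
    bridge-fd 0
```

Note that VLANs separate traffic logically but not physically: Ceph recovery traffic and client traffic still contend for the same 10Gb of bandwidth, which is the usual argument for the separate physical network.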
I'm currently using VMware with shared storage but am looking to move to Proxmox with Ceph. I want to utilize my existing hardware, so can someone tell me whether this is feasible, please?
Initially I would be using the following:
2 x Dell R210 II with E3-1245v2, 32GB RAM and three Intel SSDs, 1 x 40Gb...