Search results

  1. [SOLVED] Hardware Raid or ZFS

    Tomorrow I have 4 x DC600M 3.84TB disks arriving. The whole reason I am getting these is the very slow restores I get when using HDDs. The question now is do I use hardware RAID or ZFS? I have a hardware RAID card with BBU and also LSI HBAs, so I can go with either. Also, in terms of RAID level, I...
  2. Can I run a two node ceph temporarily?

    I've got myself into a bit of a pickle. I had a 3 node Ceph cluster running fine for a good couple of years. I'm now upgrading to 6. I was going to switch to ZFS but have changed my mind, though I could be open to persuasion. I was going to reduce my node count in the DC to reduce power...
  3. Proxmox 6.2 with older VMs

    Is anyone experiencing problems with older VMs? I've only just installed 6.2 and started migrating VMs, but I'm getting a Debian Jessie VM constantly crashing with a kernel panic relating to some SCSI thing or other, and I now have an Ubuntu 8.04 VM that almost immediately after bootup loses...
  4. 3 Node Proxmox 10GB interfaces without 10GB switch

    I have a three node cluster with three dual-port 10Gb NICs in each plus 4 x 1Gb Ethernet. There is no 10Gb switch; we just use DAC cables. Each node has 2 x 10Gb connected to the other two for Ceph. Each node has 2 x 10Gb connected to the other two for cluster replication. Each node has 1 x 10Gb...
  5. Problem with CEPH after upgrade

    I have just upgraded my three node cluster and now Ceph is reporting a health warning. I did the upgrade by moving all VMs off each machine, doing an apt-get dist-upgrade and then rebooting. After each came back up, Ceph showed degraded and eventually went back to clean. I now have a health...
  6. Receiving Traffic Destined For Other VMs

    I have just been doing some debugging with ngrep inside a VM and suddenly received traffic destined for a number of other VMs. This is Proxmox 5; should I be worried? I've certainly never seen this before, but I haven't been using Proxmox for more than a few months.
  7. Understanding Proxmox Networking

    I'm trying to understand Proxmox networking internally. I've currently set up a three node cluster with Ceph using the guide that configures 10Gb networking without a switch, i.e. all directly attached using a bond. I'm now realising that what this probably means is that the regular cluster...
  8. HW Raid and HBA on same backplane

    On an eight disk backplane with two SAS 8087 connectors, I've currently got one SAS cable plugged into the onboard hardware RAID and the other plugged into a 9211 HBA. Does anyone see any problem with this? I did it without thinking, but wondered if it may cause an issue in the future.
  9. Small Proxmox Cluster

    Hi all, I'm just about to set up a three node Proxmox cluster with Ceph. The three machines will all be: IBM System x3550 M4; dual eight-core Xeon E5-2650 v2, 2.60 GHz (3.40 GHz Turbo Boost, 8GT/20MB); memory: 192GB (24 x 8GB) DDR3 ECC Reg; RAID: standard M5110 onboard RAID...
  10. Ceph VLAN 10GB

    Hi, I have three servers with dual E5-2650 and 128GB RAM; there are no PCIe expansion slots, just a single 10Gb SFP+ port and 6 onboard SATA. Will I be able to use these for Ceph? I see it says a separate 10Gb network is required, but would it work with a VLAN on the 10Gb port and another VLAN for LAN/WAN...
  11. Proxmox with ceph question

    I'm currently using VMware with shared storage but am looking at moving to Proxmox with Ceph. I want to utilize my existing hardware, so can someone tell me if this is feasible please. Initially I would be using the following: 2 x Dell R210 II with E3-1245v2, 32GB RAM and three Intel SSDs; 1 x 40Gb...