Recent content by bradmc

  1. ProxMox VE 3.2 Installation Failure

    I have been having the same issue... I have a new Dell R620 with internal, mirrored, 32GB SD cards, and it also has a PERC 710 RAID controller with four 600GB drives in one RAID5 volume. I cannot install to either. Installing to the SD card, it gives me the "cannot install bootloader" error...
  2. SD card size

    Hello, I just bought a new Dell R620 with redundant, internal SD cards. I plan to install PVE on these (no VMs). The internal disks will be used for VMs. What size is needed to install PVE? I have installed 4GB cards, but PVE will not install because of insufficient space. What is the minimum size...
  3. Problems with cluster...

    Symptoms were the same as in another thread, although it was in regard to running a mixed cluster of 3.10 and 2.6.32 kernels, so I thought I would give the fix a try, and it worked. However, I'm not running a mixed-kernel cluster; everything is 3.10-4. Anyway, this worked and fixed the issue...
  4. Problems with cluster...

    I"m still having the issue, although I have not made any changes, either. If it worked under 3.2, why isn't the same config working under 3.3? Also, two of the three nodes work fine. Why aren't they having the same issue?
  5. Problems with cluster...

    How do you change the cluster heartbeat from the vmbr bridge to a physical interface? (A config sketch follows after this list.)
  6. Problems with cluster...

    My cluster was running just fine using the latest 3.2 pve-no-subscription. I upgraded node 3 first, which went just fine. It joined the cluster after the upgrade and remained in the cluster. Next, I upgraded node 1, which also went just fine. I waited several days, then I upgraded node 2. I...
  7. Problems with cluster...

    I rebuilt the problem server with a fresh install of 3.1, then an upgrade to pve-no-subscription, which is what the other two nodes are running. I joined the node to the cluster, and, still, after about five minutes it drops out of the cluster with the above "Retransmit List" issue described... (A multicast check follows after this list.)
  8. Problems with cluster...

    Thanks for the reply. I did try what you suggested, but it didn't work. I have rebooted the node, and when it comes up it does join the cluster, and it is shown as green on the PVE console. However, it eventually drops out, goes red, and I find myself in the same situation. I'm...
  9. Problems with cluster...

    Hello, I've been running PVE for a while now. Yesterday I updated to the latest pve-no-subscription. It's a three-node cluster, and the first two nodes upgraded fine with no issues. The third node's upgrade seemed to go fine, but after the reboot it initially joins the cluster, but after...
  10. VLAN Tagging

    When I define the tag number in the VM network config, the VM boots and two more interfaces appear on the host: bond0.618 and vmbr0v618. Using tcpdump, I can see the VM requesting the MAC address of the default router for VLAN 618. I must be missing something basic that isn't documented...
  11. VLAN Tagging

    Here's my default network config. The native VLAN for the PVE server is VLAN 437. (A fuller sketch of this kind of config follows after this list.)

    auto lo
    iface lo inet loopback

    iface eth0 inet manual
    iface eth1 inet manual
    iface eth2 inet manual
    iface eth3 inet manual

    auto bond0
    iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon...
  12. IP Change of cluster

    Yes, I fixed that, and the cluster came up fine after a reboot. The SSH host keys for the new IP address needed to be added to the known_hosts file during the initial SSH login. (An example follows after this list.)
  13. VLAN Tagging

    Hello, I have a healthy three-node PVE 3.2 cluster running. I want to implement VLAN tagging so that I can virtualize hosts that are on a different VLAN where the IPs cannot change. The hosts are Windows 2008 R2. In the VM config, I define a VLAN tag of 618 when creating the network config...
  14. IP Change of cluster

    I just changed the IP addresses of the cluster. The web interface works, the cluster seems healthy, and VMs start, but the console does not work. The error in the task menu says: TASK ERROR: command '/bin/nc -l -p 5900 -w 10 -c '/usr/bin/ssh -T -o BatchMode=yes 156.74.237.70 /usr/sbin/qm...
  15. is CEPH stable for environment production critical?

    Is there a reason for staying with 0.67.x instead of going with 0.72.2 right now? I've been running a 0.72.2 cluster and it seems quite stable.
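
Configuration sketches

For the heartbeat question in item 5: a minimal sketch, assuming the goal is to move cluster traffic onto a dedicated NIC instead of running it over the vmbr bridge. The interface name (eth2) and the 10.10.10.0/24 range are placeholders, not taken from the posts; on PVE 3.x the cluster stack binds to the address the node's hostname resolves to, so /etc/hosts would typically also need to map the node name to the new address.

    # Hypothetical /etc/network/interfaces fragment: dedicated cluster/heartbeat NIC.
    # eth2 and the 10.10.10.0/24 addresses are assumptions.
    auto eth2
    iface eth2 inet static
        address  10.10.10.1
        netmask  255.255.255.0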
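
For the "Retransmit List" posts in items 6 through 9: corosync retransmit floods on a PVE 3.x cluster are very often a multicast problem between the nodes, so a multicast test is a reasonable first check. This is a diagnostic sketch only, not the fix applied in the thread, and node1/node2/node3 are placeholder hostnames.

    # Run on all nodes at the same time and compare the loss figures.
    omping -c 600 -i 1 -q node1 node2 node3

    # Cluster membership and quorum as seen from the local node.
    pvecm status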
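
For the VLAN tagging posts in items 10, 11, and 13: a minimal sketch of the bonded-bridge layout that the truncated config in item 11 appears to describe. The addresses, bonding mode, and miimon value are assumptions; the posts only confirm a bond0 made of eth0 and eth1. With this layout, giving a guest NIC a tag of 618 makes PVE 3.x create bond0.618 and vmbr0v618 on the host (as item 10 observes), and the switch ports behind eth0 and eth1 must carry VLAN 618 tagged for the guest to reach its gateway.

    # Hypothetical /etc/network/interfaces sketch (PVE 3.x style). Addresses,
    # bond_mode, and bond_miimon are assumptions; the rest mirrors item 11.
    auto lo
    iface lo inet loopback

    iface eth0 inet manual
    iface eth1 inet manual

    auto bond0
    iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup

    auto vmbr0
    iface vmbr0 inet static
        address  192.0.2.10        # placeholder management address on native VLAN 437
        netmask  255.255.255.0
        gateway  192.0.2.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0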
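
For the IP change posts in items 12 and 14: the console error in item 14 comes from the node connecting to the new address with BatchMode=yes before that address's host key is in known_hosts, which matches the fix described in item 12. A sketch of refreshing the entry by hand (the address is the one quoted in the task error; the paths are the stock OpenSSH defaults and are not confirmed by the posts):

    # Drop any stale key recorded for the address, then log in once interactively
    # so the new host key is accepted and written to known_hosts.
    ssh-keygen -R 156.74.237.70
    ssh root@156.74.237.70 /bin/true

On a PVE cluster the shared known_hosts file normally lives under /etc/pve/priv/, but that detail is an assumption here, not something stated in the posts.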
