Search results

  1. LXC 6.0.0 update

    I accidentally updated from 5.0 to 6.0 on 8.1.10 and wonder if it has any downsides. I have some legacy CentOS 6.10 pcts, and from the description of LXC 6.0, upstart support is removed. I am not sure how to interpret this change. I need to stick with 6.0 for some historical reasons on this box, so...
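
    A quick way to gauge whether the upstart removal matters is to check what init the legacy containers actually run; a minimal sketch, where container ID 100 is just a placeholder:

      pct exec 100 -- readlink /proc/1/exe   # CentOS 6 guests normally report /sbin/init (upstart); systemd guests report /lib/systemd/systemd
      pct exec 100 -- rpm -q upstart         # CentOS 6 ships its init as the upstart package
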
  2. LXCFS and load average

    Hello everyone, I am using Proxmox 6.1 with containers and I noticed that all containers are picking up the load average from the host. I checked the lxcfs site and found this news from last year: https://discuss.linuxcontainers.org/t/lxcfs-3-1-2-has-been-released/5321 I installed the new lxcfs...
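
    For reference, newer lxcfs releases only virtualize /proc/loadavg when started with the loadavg option; a sketch of a host-side systemd drop-in, assuming the -l flag and the paths from the stock lxcfs.service (both assumptions, check your own unit file):

      # /etc/systemd/system/lxcfs.service.d/loadavg.conf
      [Service]
      ExecStart=
      ExecStart=/usr/bin/lxcfs -l /var/lib/lxcfs

      # then: systemctl daemon-reload && systemctl restart lxcfs, and restart the containers
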
  3. PVE 6.0 no mac address in GUI

    When I create a new container I leave the network interface set to auto MAC. Recently containers stopped displaying the autogenerated MAC in the GUI; it shows an empty space. The problem is that with the firewall enabled on the interface and no MAC address displayed, I lose connectivity. I can either disable the...
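
    As a possible workaround until the GUI shows the MAC again, the address can be pinned explicitly on the interface; a sketch, assuming container 101 on vmbr0 with the firewall enabled (the MAC and the rest of the net0 string are placeholders):

      # pct set replaces the whole net0 definition, so carry over your existing bridge/IP settings
      pct set 101 -net0 name=eth0,bridge=vmbr0,ip=dhcp,firewall=1,hwaddr=DE:AD:BE:EF:00:01
      pct config 101 | grep ^net0    # confirm the hwaddr is now part of the config
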
  4. Network Card negotiates down from 1 Gig to 100 Mb

    I have a Dell R720xd with a BCM 5720 NIC. I upgraded the firmware, replaced boards, and changed the switch ports, the switch, and the cable. The card still goes down and drops from 1 Gbit to 100 Mbit. I can get it back up to 1000 with ethtool, but it comes back down to 100 and interrupts the connection multiple times...
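
    For anyone landing here for the ethtool step, this is roughly what it looks like; a sketch, assuming the port shows up as eth0 (adjust the interface name):

      ethtool eth0                                        # check the negotiated speed and advertised modes
      ethtool -s eth0 speed 1000 duplex full autoneg off  # force gigabit; note some drivers insist on autoneg for 1000BASE-T
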
  5. How to prevent network config in lxc from being overwritten

    Dear all, I am trying to add multiple IP addresses inside an LXC container. I know that there are multiple methods, but the one I am looking for is to add an extra IP address from within the container using the standard network settings of the guest OS. pct seems to overwrite the ifcfg-eth0 static IP...
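
    For a CentOS-style guest, the usual way to add a second address without touching the Proxmox-managed ifcfg-eth0 is an alias file; a sketch, with 192.0.2.10/24 as a placeholder address:

      # /etc/sysconfig/network-scripts/ifcfg-eth0:0  (inside the container)
      DEVICE=eth0:0
      ONBOOT=yes
      BOOTPROTO=static
      IPADDR=192.0.2.10
      NETMASK=255.255.255.0
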
  6. Intermittent cluster node failure

    Hello everyone, on my production pve-3.4-11 cluster (qdisk + 2 nodes) I have a node being evicted in the middle of the night once in a while. The only clues in the other node's logs are: corosync.log: Dec 19 01:02:28 corosync [TOTEM ] A processor failed, forming new configuration. Dec 19...
  7. PVE 4 HA and redundant ring protocol (RRP)

    I am testing on a real cluster, so I decided to open a new thread to avoid confusion. I have RRP (two different networks) configured in corosync. After testing HA in the case of a network failure, I wonder now if it makes sense at all. When I stop one of the interfaces on a node, corosync declares...
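
    For context, the RRP side of this looks roughly as follows in corosync 2.x; a sketch of the totem section only, with placeholder networks rather than the real config:

      # totem section of /etc/pve/corosync.conf (addresses are placeholders)
      totem {
        version: 2
        rrp_mode: passive
        interface {
          ringnumber: 0
          bindnetaddr: 10.0.0.0
        }
        interface {
          ringnumber: 1
          bindnetaddr: 10.0.1.0
        }
      }
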
  8. PVE 4 HA Simulation and testing

    For a better understanding of the new HA mechanism, I decided to try the pve-ha-simulator. I started all nodes and enabled one VM, vm:101. Then I migrated vm:101 to node2; so far so good. Finally I disabled the network on node2. The simulator fenced node2 and started vm:101 on node1, however it took 3...
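
    If anyone wants to reproduce this, the simulator is a separate package that takes a working directory; a sketch, assuming a PVE/Debian host with X available for its GUI:

      apt-get install pve-ha-simulator
      mkdir ha-sim
      pve-ha-simulator ha-sim    # opens the simulator GUI with virtual nodes to power on/off
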
  9. PVE 4 Another migration problem

    Sometimes after running the migrate command I get this: Executing HA migrate for VM 100 to node virt2n3-la unable to open file '/etc/pve/ha/crm_commands.tmp.19096' - No such file or directory TASK ERROR: command 'ha-manager migrate vm:100 virt2n3-la' failed: exit code 2 In syslog: Oct 10...
  10. PVE 4 KVM live migration problem

    Testing live migration on a 4-node quorate cluster. It does not happen in 100% of cases, but it is reproducible. I migrate a VM from one node to another and I get this: task started by HA resource agent Oct 09 22:04:22 starting migration of VM 100 to node 'virt2n2-la' (38.102.250.229) Oct 09 22:04:22 copying...
  11. New Ceph KRBD setting on PVE 4

    Coming from 3.4, I noticed a new KRBD check box on the RBD storage form. Considering a mix of KVM and LXC on my new cluster nodes, what is the recommended setting for the RBD storage, i.e. should KRBD be checked or not?
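
    For reference, the check box maps to the krbd flag on the storage definition; a sketch of an RBD entry in /etc/pve/storage.cfg, with placeholder names and monitor addresses:

      rbd: ceph-rbd
          monhost 10.0.0.1 10.0.0.2 10.0.0.3
          pool rbd
          username admin
          content images,rootdir
          krbd 1    # 1 = map images via the kernel rbd driver (what containers need), 0 = qemu uses librbd
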
  12. Ceph crush map retrieval on PVE 4

    I have a Ceph cluster built on a separate set of hardware, so I use a Ceph client configuration on Proxmox to access RBD storage. In the web interface, entering the Ceph tab on each node and selecting Crush returns: Error command 'crushtool -d /var/tmp/ceph-crush.map.1930 -o...
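
    As a cross-check outside the GUI, the same map can be pulled and decompiled by hand with the standard Ceph tools; a sketch:

      ceph osd getcrushmap -o crush.map     # fetch the compiled crush map from the cluster
      crushtool -d crush.map -o crush.txt   # decompile it into readable text
      less crush.txt
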
  13. Watchdog issues

    I enabled ipmi_watchdog per the PVE 4 HA article and now my server cannot boot. I get to the network stage (no limit) and then the server reboots. Disabling the watchdog in the BIOS doesn't work. I also noticed that there is no recovery kernel in PVE 4 (similar to Ubuntu's); booting with the single option doesn't...
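
    For anyone hitting the same boot loop: the watchdog selection lives in plain config files, so it can be reverted from a rescue shell; a sketch, assuming the settings were made the way the PVE 4 HA article describes:

      # /etc/default/pve-ha-manager -- comment this out to fall back to the softdog default
      # WATCHDOG_MODULE=ipmi_watchdog

      # and drop any module options that were added alongside it, e.g.
      # /etc/modprobe.d/ipmi_watchdog.conf
      # options ipmi_watchdog action=power_cycle
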
  14. Cluster questions on PVE 4

    Since there is corosync version 2, can you add the ability to add a redundant ring from the command line during cluster creation and when adding a node? Also, I think it is a good idea to document a manual restart of the cluster if needed. I know it is something to avoid, but I am sure people may well need...
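
    On the manual-restart point, for the record this is roughly the sequence on a PVE 4 node; a sketch, to be done carefully and one node at a time:

      systemctl restart corosync      # membership/quorum layer
      systemctl restart pve-cluster   # pmxcfs, which backs /etc/pve
      pvecm status                    # verify the node rejoined and the cluster is quorate
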
  15. Drbd9

    Can a DRBD 9 volume limit redundancy to 2 specific nodes on a 4-node cluster? I see that you can specify the redundancy and that it cannot be more than the number of nodes in the cluster, but can it be less?
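
    For reference, the redundancy count in question is set per storage definition; a sketch of a DRBD9 entry in /etc/pve/storage.cfg with a placeholder name. Note this only sets how many copies exist, not which nodes hold them, which is the open part of the question:

      drbd: drbd-store
          redundancy 2
          content images,rootdir
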
  16. CLVMD over DRBD hangs on remote volume access

    I really love Proxmox and have plans for a production cluster after successfully implementing a test one a few months ago. I built a new one, v3.4.11, with 2 nodes and a qdisk for quorum, and everything seems to fall into place except CLVM on DRBD. I lost my notes from the test cluster so I don't remember...
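
    Since the notes are gone, for the record the cluster-LVM side on a 3.x cluster boils down to the locking type plus the daemon; a sketch with placeholder volume group and device names:

      lvmconf --enable-cluster           # sets locking_type = 3 in /etc/lvm/lvm.conf
      service clvm start                 # needs cman/corosync quorum on both nodes first
      vgcreate -c y vg_drbd /dev/drbd0   # clustered VG on top of the DRBD device
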