Recent content by Eduardo Taboada

  1.

    Cluster HA not working correctly. Files in /etc/pve/qemu-server/ are not being replicated between nodes. Errors on corosync start

    If you are using VPC, I recommend using Open vSwitch and doing LACP with the following parameters: bond_mode=balance-tcp lacp=active. I put a piece of a sample config: apt install openvswitch-switch auto lo iface lo inet loopback auto eno1 iface eno1 inet manual iface idrac inet manual...
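The truncated sample above cuts off before the OVS bond and bridge stanzas. A fuller sketch of such an /etc/network/interfaces is below; the bond_mode=balance-tcp lacp=active options and eno1 come from the post, while the second NIC name and the addresses are assumptions:

```
# Install first: apt install openvswitch-switch
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

# Assumed second bond member
auto eno2
iface eno2 inet manual

auto bond0
iface bond0 inet manual
    ovs_bonds eno1 eno2
    ovs_type OVSBond
    ovs_bridge vmbr0
    ovs_options bond_mode=balance-tcp lacp=active

auto vmbr0
iface vmbr0 inet static
    address 192.0.2.10/24    # placeholder address
    gateway 192.0.2.1        # placeholder gateway
    ovs_type OVSBridge
    ovs_ports bond0
```

The switch side must have a matching LACP (802.3ad) port channel configured on those ports, or the bond will not come up.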
  2.

    Cluster HA not working correctly. Files in /etc/pve/qemu-server/ are not being replicated between nodes. Errors on corosync start

    I see you have a bond for interfaces nic0 and nic4 with an xmit-hash-policy of layer2+3; this uses a combination of MAC and IP addresses to balance traffic across the bond. Maybe your switch does not support this feature? Can you ping between the cluster interfaces (10.1.30.X/24)?
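For reference, a Linux bond with that hash policy looks roughly like this in /etc/network/interfaces; the NIC names and the 10.1.30.x network follow the thread, everything else is an assumption:

```
auto bond0
iface bond0 inet static
    address 10.1.30.1/24
    bond-slaves nic0 nic4
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer2+3
```

With bond-mode 802.3ad the switch ports must be in an LACP channel; layer2+3 hashing only changes how flows are distributed across the members.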
  3.

    Cluster HA not working correctly. Files in /etc/pve/qemu-server/ are not being replicated between nodes. Errors on corosync start

    Check the connection via the cluster interface (you need to separate cluster traffic via a VLAN or a dedicated NIC). If you didn't do this, you can have trouble with the cluster.
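A minimal sketch of the separation being suggested, assuming VLAN 30 carries the corosync traffic on bridge vmbr0 (the VLAN ID and address are placeholders, not from the thread):

```
# Dedicated VLAN interface for corosync traffic
auto vmbr0.30
iface vmbr0.30 inet static
    address 10.1.30.1/24
```

Alternatively, give corosync its own physical NIC with a static address and select that network as the cluster link when creating the cluster.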
  4.

    Windows 2000 Server -> inaccessible boot device

    Here I found some info: https://www.biermann.org/philipp/STOP_0x0000007B/ If the disk is SCSI, you can also try VMware PVSCSI.
  5.

    Install Proxmox on Dell PowerEdge R6515 with RAID1

    It's better to put it in IT mode if you want to later use ZFS or Ceph for VM storage. You can do that in the iDRAC/BIOS.
  6.

    Install Proxmox on Dell PowerEdge R6515 with RAID1

    Can you provide the controller model (H330, H730...)?
  7.

    Windows 2000 Server -> inaccessible boot device

    Try setting these registry keys (mergeide.zip), then uninstall the VMware Tools (back up first). Download MergeIDE, then try to boot with IDE first; if you have no issues, move to SATA and go up to SCSI for better performance. On the first successful boot, mount an IDE CD with the VirtIO drivers and try to...
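The MergeIDE trick boils down to making the generic IDE drivers load at boot so the guest can find its boot disk on a different controller. A hedged fragment of the kind of .reg file involved (the full MergeIDE.zip also adds CriticalDeviceDatabase entries; apply only after a backup):

```
Windows Registry Editor Version 5.00

; Start=0 means "load at boot" for these storage drivers
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\atapi]
"Start"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\intelide]
"Start"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\pciide]
"Start"=dword:00000000
```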
  8.

    backup connect failed: command error: client error (Connect)

    What version of PBS are you running? Uncheck "Freeze/thaw guest filesystem on backup for consistency". Check the network connection and the MTU on both ends.
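One quick way to test the MTU end to end is a non-fragmenting ping sized to the link. A sketch, where the PBS address 10.1.2.3 is a placeholder and the 28 bytes are the IP + ICMP headers:

```shell
# Compute the largest ICMP payload that fits a given MTU; the echo prints
# the ping invocation to run (shown here rather than executed). If that
# ping fails while smaller sizes work, the path MTU is lower than configured.
MTU=1500
PAYLOAD=$((MTU - 28))   # 20-byte IP header + 8-byte ICMP header
echo "ping -M do -s $PAYLOAD 10.1.2.3"
```

For a 9000-byte jumbo-frame link the payload would be 8972.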
  9.

    Recommendation for percentage RAM free

    IMO there are a lot of factors in play with RAM: in certain conditions you need to raise the memory of some virtual machines, and if you are running Ceph you can use an empty hypervisor to provision VMs (leaving room to move VMs off a hypervisor for maintenance and updates), so you...
  10.

    Modifying ISO for Automated Installation

    We made a video of doing that on our YouTube channel: https://youtube.com/live/08jGpjTjh18?feature=share It uses an answer file served by Apache for the automated installation; after the install, you can use Ansible to install Open vSwitch, test whether it is installed and running, and test...
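The post-install step described there can be sketched as a small Ansible play; the host group name and the module choices are assumptions, not taken from the video:

```yaml
# Install Open vSwitch on the freshly installed nodes and verify the service
- hosts: pve_nodes
  become: true
  tasks:
    - name: Install the Open vSwitch package
      apt:
        name: openvswitch-switch
        state: present
        update_cache: true

    - name: Ensure the Open vSwitch service is running and enabled
      service:
        name: openvswitch-switch
        state: started
        enabled: true
```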
  11.

    [SOLVED] got "WARN: systemd-boot package installed on legacy-boot system is not necessary, consider removing it"

    As you can see in the upgrade instructions: https://pve.proxmox.com/wiki/Upgrade_from_8_to_9 in the section "Systemd-boot meta-package changes": the bootloader configuration is handled automatically and the package should be uninstalled, as Proxmox systems usually use systemd-boot for booting only in some configurations...
  12.

    no stable connection between pve and pbs error 500 can not get datastore

    Use journalctl; if you are using ZFS, check that the pool is online.
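A small sketch of that check, with the `zpool status -x` output simulated so it runs anywhere; on a real node pipe the actual command output instead, and note the PBS unit name proxmox-backup is an assumption:

```shell
# `zpool status -x` prints "all pools are healthy" when every pool is online;
# this helper just greps for that line on stdin.
pool_healthy() {
  grep -q 'all pools are healthy'
}

# Simulated output; on the PBS host use: zpool status -x | pool_healthy
if echo 'all pools are healthy' | pool_healthy; then
  echo 'pool OK'
else
  echo 'pool DEGRADED - inspect journalctl -u proxmox-backup'
fi
```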
  13.

    /run/pve ??

    In my opinion it makes no sense to have a configured cluster with most of the nodes down.
  14.

    proxmox configuration

    If you use Open vSwitch with MLAG then you can use these settings: auto bond0 iface bond0 inet manual ovs_bonds eno1 eno2 ovs_type OVSBond ovs_bridge vmbr0 ovs_options lacp=active bond_mode=balance-tcp auto vmbr0 iface vmbr0 inet manual ovs_type OVSBridge...