Search results

  1. Node reboot causes other node reboot when in cluster

    I'm afraid I did not find a "solution"; it just didn't happen again. But in the meantime I added a third node, and I think this had some implications.
  2. ERROR: Backup failed - start failed Unit already exists

    I can confirm that I got no more backup errors with the new update. Thank you.
  3. ERROR: Backup failed - start failed Unit already exists

    I noticed that the backup only fails if executed in "stop" mode. Snapshot mode always works correctly.
  4. ERROR: Backup failed - start failed Unit already exists

    Same issue here; it only happens with a VM running RHEL 6 (virtualized from a running physical server). I noticed that it is really slow when I shut it down (it takes 5 minutes to power off). Could it be somehow related to the backup issue?
  5. Unrecognized network adapter Intel i350

    Even after the firmware update, one of the network adapters is recognized as "rename2". Manually editing the /etc/network/interfaces file to set up the bonding works (see the sketch after the results list), so it's only a matter of naming. Could it be something related to this issue...
  6. Unrecognized network adapter Intel i350

    It is a Lenovo server; I think it's a Lenovo derivative. I am trying to update the system firmware now; I will let you know the results. Thanks.
  7. Unrecognized network adapter Intel i350

    Hello, this weekend I set up a new Proxmox instance on an IBM SystemX server with an Intel i350 network adapter. Unfortunately, I can only get one port of the network card to work as expected (it is named "rename2" in the GUI). The other port is not working and is recognized as "eno1". I think I am...
  8. [SOLVED] Migration ok from PVE1 to PVE2 but fails from PVE2 to PVE1

    The known_hosts file was missing the entry for the pve1 hostname and the 192.168.0.4 IP address. Running ssh-keyscan and adding the resulting key to the file solved the issue (see the sketch after the results list). I still don't know what could have caused the known_hosts file to "lose" one of the keys. I bet I did something wrong when I...
  9. [SOLVED] Migration ok from PVE1 to PVE2 but fails from PVE2 to PVE1

    Hello, I have a cluster with two nodes (no HA): pve1 (192.168.0.4) and pve2 (192.168.0.6). I can successfully migrate VMs and containers from pve1 to pve2. I now need to migrate a container from pve2 to pve1, but the operation fails with this error: 2019-02-06 17:49:26 # /usr/bin/ssh -e none -o...
  10. Backup on external USB device (IBM RDX)

    Hello everyone, I recently migrated a physical IBM server to a Proxmox VM. I would like to re-use the IBM RDX device with removable disks as a daily backup target. First of all, I mounted the /dev/sdd1 device on a /backup folder and configured it as a storage entry (type Directory, VZDump selected; see the sketch after the results list)...
  11. Local ZFS pool after node installation (member of cluster)

    Hello everyone, I am experimenting with Proxmox, willing to switch from ESXi in a SOHO environment. I have a cluster set up with two nodes (pve1, pve2); pve1 is the master of the cluster. Everything works as expected, but I would like to set up replication and, if I understood correctly, I need...
  12. Node reboot causes other node reboot when in cluster

    I am not sure it was already in "disabled" status when I restarted the node (a lot of trial and error on my side). So I just restarted pve2, and pve1 did not automatically reboot. How can I totally disable HA services? I don't need them, and I think it will be dangerous for my scenario. Thank...
  13. Node reboot causes other node reboot when in cluster

    I tried to set up HA (just to understand how it works), but I don't need this functionality at all. How can I completely remove it? (a possible approach is sketched after the results list) This is the output of ha-manager status: root@pve1:~# ha-manager status quorum OK master pve2 (idle, Thu Jul 26 12:30:41 2018) lrm pve1 (idle, Fri Jul 27 15:15:06...
  14. CloudInit drive has no option for "move disk" in Web UI

    Fair enough, I can deal with it. Thank you for your time.
  15. Node reboot causes other node reboot when in cluster

    Hello, I have a cluster set up for experimenting with Proxmox. The cluster is composed of two nodes, pve1 and pve2. I have created a cluster and the nodes are joined in this cluster; pve1 is the master. I have also set up shared NFS storage, and today I was experimenting with live migrations. I...
  16. CloudInit drive has no option for "move disk" in Web UI

    Dear Wolfgang, thanks for the quick reply. As far as I understand, the issue you linked is a logical issue, I mean a bug in the migration process. I was just asking for a change in the web UI to enable the "move disk" button for cloud-init drives. At the moment the button is grayed out and cannot be...
  17. CloudInit does not set gateway for Ubuntu 18.04 LTS (netplan)

    I am using cloud-init to set up some Ubuntu 18.04 VMs (which use netplan for network config). They are all configured with a static IPv4 address like 192.168.0.x/24, and the gateway is 192.168.0.254. The generated /etc/netplan/50-cloud-init.yaml file does not contain the gateway4 attribute (see the sketch after the results list).
  18. CloudInit drive has no option for "move disk" in Web UI

    Maybe not an issue, but I couldn't easily move a cloud-init drive from local to shared storage (for live migration). The workaround was simply to remove and recreate the drive (see the sketch after the results list), but the "move disk" button would be just a bit easier.
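
For the bonding workaround mentioned in result 5, here is a minimal sketch of what the manual /etc/network/interfaces entry could look like on a Proxmox node. The port names eno1 and rename2, the bridge vmbr0, and the addresses are assumptions for illustration, not values taken from the thread.

    auto bond0
    iface bond0 inet manual
            # enslave the two i350 ports under a single bond (interface names assumed)
            bond-slaves eno1 rename2
            bond-miimon 100
            bond-mode active-backup

    auto vmbr0
    iface vmbr0 inet static
            # bridge on top of the bond, as Proxmox typically expects for guest networking
            address 192.168.0.10
            netmask 255.255.255.0
            gateway 192.168.0.254
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0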
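
For the known_hosts fix in result 8, a rough sketch of the commands involved, assuming the repair is done as root on pve2 and that the missing entries are for pve1 and 192.168.0.4 (Proxmox clusters also keep a shared known_hosts under /etc/pve/priv/, so the exact file being read may differ):

    # fetch the host keys for pve1 by hostname and by IP, then append them
    ssh-keyscan pve1 192.168.0.4 >> /root/.ssh/known_hosts

    # verify that passwordless SSH now works in the pve2 -> pve1 direction
    ssh root@pve1 true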
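
For the RDX backup target in result 10, a minimal sketch of the mount plus the Directory storage definition, assuming the cartridge partition is /dev/sdd1, the mountpoint is /backup, and the storage ID is chosen here as rdx-backup (the same entry can be created from Datacenter > Storage in the GUI):

    # mount the RDX cartridge (an fstab entry or a systemd mount unit would make this persistent)
    mkdir -p /backup
    mount /dev/sdd1 /backup

    # register the folder as a Directory storage that only holds VZDump backups
    pvesm add dir rdx-backup --path /backup --content backup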
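
Regarding the question in results 12 and 13 about removing HA completely: with no HA resources configured, one commonly used approach (an assumption on my part, not something stated in the threads) is to stop and disable the two HA services on every node, so neither the CRM nor the LRM is running:

    # run on each node: pve-ha-lrm is the local resource manager, pve-ha-crm the cluster one
    systemctl stop pve-ha-lrm pve-ha-crm
    systemctl disable pve-ha-lrm pve-ha-crm

    # check the cluster-wide view afterwards
    ha-manager status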
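
For the netplan issue in result 17, this is roughly what /etc/netplan/50-cloud-init.yaml would be expected to contain once the gateway is present; the interface name eth0 and the 192.168.0.10/24 address are placeholders, and gateway4 is the attribute netplan used on Ubuntu 18.04 for a default IPv4 route:

    network:
        version: 2
        ethernets:
            eth0:
                addresses:
                    - 192.168.0.10/24
                gateway4: 192.168.0.254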
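
For the workaround in result 18, a short sketch of removing and recreating a cloud-init drive with qm, assuming VM ID 100, the drive on ide2, and a shared storage called shared-nfs (all three values are illustrative):

    # drop the existing cloud-init drive that lives on local storage
    qm set 100 --delete ide2

    # recreate it on the shared storage so live migration becomes possible
    qm set 100 --ide2 shared-nfs:cloudinit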
