Search results

  1. ProxMox 5.0 Add Cluster Failure

    Hello, it is perfectly valid to have underscores in a hostname, as stated in RFC 2181, section 11, "Name syntax". Examples of servers with '_' in their names: _jabber._tcp.gmail.com and _sip._udp.apnic.net. Route 53 and other DNS servers accept hostnames with '_' characters. The UI does... (see the dig example after these results)
  2. btrfs as a guest file system

    I have all the guest hosts using btrfs and I have lost nothing. I use it in JBOD and RAID 1 configurations. The dangerous configurations are btrfs's software RAID 5 and RAID 6: they are not strictly RAID 5 or 6 and, in any case, they are not stable. btrfs is safe and uses FAR... (a RAID 1 creation sketch follows these results)
  3. proxmox 5.0 works great with btrfs! :D

    I am not sure. If it does not, I hope it integrates BTRFS one day. I have been using BTRFS with Proxmox since 2017 and it works fine, no issues at all. You can use it with the Directory storage (type dir), see https://pve.proxmox.com/wiki/Storage and https://pve.proxmox.com/wiki/Storage:_Directory btrfs... (a sample storage.cfg entry follows these results)
  4. Balloon memory behaviour in version 5.3 changed compared to 5.1

    I was digging deeper into the problem. It happens because pvestatd claims memory from the guest VMs using a criterion that, at least, does not apply to my scenario. It decides, first, that a Proxmox node must keep 20% of the RAM for itself. That alone is too generic. 20% of what? It is not...
  5. Balloon memory behaviour in version 5.3 changed compared to 5.1

    I pasted below the top output of the host machine with Proxmox VE 5.3 installed. As you can observe, it has plenty of memory available, but it refused to provide 8 extra GB to one of the Linux guest VMs. How could I reconfigure the host so it behaves like Proxmox VE 5.1? top - 15:31:16 up...
  6. Balloon memory behaviour in version 5.3 changed compared to 5.1

    Hello, the balloon memory behavior in version 5.3 changed compared to 5.1, and not for the better. I have a VM with a memory configuration like this: min memory 448 MB, max memory 16 GB. In Proxmox VE 5.1, the host provided memory to the guest machine as it was needed. Usually it had 3 GB, and when a... (a qm configuration example follows these results)
  7. Could a cluster have nodes with Proxmox 5.1 and 5.3 versions

    Thanks for the answer. I mixed them in the same cluster anyway (just NFS, no Ceph volumes) and it works. I am doing a backup to migrate old 5.1 nodes to 5.3. But right now everything is working fine.
  8. Could a cluster have nodes with Proxmox 5.1 and 5.3 versions

    Hello, I am about to add a node with Proxmox 5.3 to a cluster with Proxmox 5.1 nodes. Is it possible to have a mix of 5.1 and 5.3 nodes in the same cluster for a couple of weeks? The cluster does not have Ceph volumes; it has NFS and btrfs volumes.
  9. Migration failed because ssh connection

    No. After I recreated the cluster, all my problems were gone. I am finishing a 3rd host's configuration and I will add it to the cluster next week. I will try using the same procedure I used the first time with the old cluster, because I still suspect the ssh agent is interfering with the pvecm commands. I'll...
  10. Migration failed because ssh connection

    root@pve3:~# cat /etc/hosts
    127.0.0.1 localhost
    # 127.0.1.1 esx3.ikuni.com esx3
    172.17.255.4 pve3.ikuni.com pve3 pvelocalhost
    # The following lines are desirable for IPv6 capable hosts
    ::1 localhost ip6-localhost ip6-loopback
    ff02::1 ip6-allnodes
    ff02::2 ip6-allrouters...
  11. Migration failed because ssh connection

    I did not understand the request. Could you give me an example?
  12. Is it possible to delete a cluster and reuse its nodes?

    I got the message "unable to copy ssh ID" every time, so if the -i option is present, it is something else. But I could not get past that point until I deleted the cluster and recreated it.
  13. Migration failed because ssh connection

    I did not imply, in a derogatory way, that PVE is leaking public information. But the fact is that I could not add a node because of those public keys. I reinstalled pve5 and renamed it to pve6, and after I was unable to add it as pve6, I reinstalled PVE on that node and renamed it to pve7 with a new IP. In pve3...
  14. Migration failed because ssh connection

    I found the problem. I think there is a bug in the pvecm command. I deleted the cluster, but the cluster was not the problem. The problem happens when you use pvecm add or pvecm create in Proxmox 5.1. For example, if you execute pvecm add IP-ADDRESS-CLUSTER, this command invokes ssh-copy-id... (a recovery sketch follows these results)
  15. Is it possible to delete a cluster and reuse its nodes?

    I found the problem. I think there is a bug in the pvecm command. I deleted the cluster, but the cluster was not the problem. The problem happens when you use pvecm add or pvecm create in Proxmox 5.1. For example, if you execute pvecm add IP-ADDRESS-CLUSTER, this command invokes ssh-copy-id...
  16. Migration failed because ssh connection

    Here are the versions of both machines (I checked that everything is the same):
    proxmox-ve: 5.1-25 (running kernel: 4.13.4-1-pve)
    pve-manager: 5.1-36 (running version: 5.1-36/131401db)
    pve-kernel-4.13.4-1-pve: 4.13.4-25
    libpve-http-server-perl: 2.0-6
    lvm2: 2.02.168-pve6
    corosync: 2.4.2-pve3
    libqb0...
  17. Is it possible to delete a cluster and reuse its nodes?

    I have a cluster of 1 node and I cannot add a 2nd node. I described my case here: https://forum.proxmox.com/threads/migration-failed-because-ssh-connection.37808/ Is it possible to delete the whole cluster without reinstalling Proxmox VE? I want to do that and recreate the cluster from the...
  18. Migration failed because ssh connection

    The host is not working now. I go to https://pve6:8006 and it does not work. Here are the outputs of some services; several are broken.
    root@pve6:~# systemctl status pve pvebanner.service pve-firewall.service pve-ha-crm.service pvenetcommit.service pvesr.timer...
  19. Migration failed because ssh connection

    I reinstalled the host with a new name and a new address. Now I get another error on the host. First I ran the add and it got stuck at "waiting for quorum...Connection to pve6 closed by remote host.", so I rebooted the host and retried:
    root@pve6:~# pvecm add 172.17.255.4 -f
    can't create shared...
  20. Migration failed because ssh connection

    I think I will do that. Still, reinstalling a node is kind of normal if you have a lot of hosts in a Proxmox cluster. It is a punishment to have to rename a host and change its IP just because the cluster refuses to re-accept the host with a different SSH host key. Is there a way to solve it...
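
A quick way to see the underscore names mentioned in result 1 is to query the SRV records that post cites; a minimal check with dig (the +short flag only trims the output):

    # Look up the SRV records cited in result 1; both owner names contain '_'.
    dig _jabber._tcp.gmail.com SRV +short
    dig _sip._udp.apnic.net SRV +short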
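
For result 2, a minimal sketch of the btrfs RAID 1 layout described there, assuming two spare disks at /dev/sdb and /dev/sdc (the device names and mount point are placeholders):

    # Mirror both data (-d) and metadata (-m) across two devices.
    mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
    mount /dev/sdb /mnt/btrfs-pool
    # Show how data and metadata chunks are spread over the devices.
    btrfs filesystem usage /mnt/btrfs-pool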
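
Result 3 points to the Storage wiki for exposing a btrfs mount through the dir storage type; a hypothetical /etc/pve/storage.cfg entry along those lines (the storage name, path and content list are assumptions) could look like:

    dir: btrfs-store
        path /mnt/btrfs-pool
        content images,rootdir,backup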
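
Results 4-6 discuss ballooning with a 448 MB minimum and a 16 GB maximum; that corresponds to the memory and balloon options of a VM. A sketch with qm, assuming VM ID 101 (the ID is hypothetical, values are in MiB):

    # memory = maximum RAM the VM may receive, balloon = minimum it always keeps.
    qm set 101 --memory 16384 --balloon 448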
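
Results 14, 15, 19 and 20 revolve around stale SSH keys after reinstalling a node. A recovery sketch, run on an existing cluster member before retrying pvecm add, assuming the reinstalled node sits at 172.17.255.10 (the address is a placeholder; the files are the per-node and cluster-wide known_hosts that Proxmox VE maintains):

    # Forget the reinstalled node's old host key, locally and in the cluster-wide file.
    ssh-keygen -f /root/.ssh/known_hosts -R 172.17.255.10
    ssh-keygen -f /etc/pve/priv/known_hosts -R 172.17.255.10
    # Re-establish passwordless root login to the fresh installation.
    ssh-copy-id root@172.17.255.10
    # Refresh the keys and certificates the cluster distributes to its nodes.
    pvecm updatecerts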
