Search results

  1.

    Proxmox 2.x hung on server Hetzner EX 4S

    Which datacenter are the "new" ones in, and which datacenters are the "old" ones in? I got my two servers very recently, they are both EX 4S models, and I have no issues.
  2.

    Proxmox 2.3 Backup & Restore with Sheepdog

    Is 3.0 going to be the next release?
  3.

    Proxmox 2.x hung on server Hetzner EX 4S

    I also have two servers, and they have never hung for me. I'm running the latest Proxmox version and kernel.
  4.

    PVE 2.2. upgrade results in authorized_keys content lost...

    Solved with: pvecm updatecerts, and then /etc/init.d/apache2 restart (the commands are sketched below the results).
  5.

    PVE 2.2. upgrade results in authorized_keys content lost...

    I got it working by cleaning out the /etc/pve folder in rescue mode. Now both servers are up and authenticated, but in the manager interface the other node is red, and when I click it I am asked to log in but can never do so successfully. Any advice as to why?
  6.

    PVE 2.2. upgrade results in authorized_keys content lost...

    I have this problem right now, and I updated to 2.3 before creating the cluster. Are the keys saved only on the master node? On the node I can't write to either authorized_keys or /etc/pve/priv/authorized_keys. When quorum doesn't start, is that caused by there being no cluster connection?
  7.

    Error pve-cluster[main] crit: Unable to get local IP address

    I tried moving the whole IPv6 section above the IPv4 section, but I still get the same error: "Unable to get local IP address". Uncommenting seems to be the only thing that works. Will this be fixed in the next version? (A minimal /etc/hosts sketch is included below the results.)
  8.

    dedicated network for live migration

    Aw shit, and here I am having just connected my two servers so I could use Proxmox HA and live migration, since multicast isn't supported at Hetzner. I tried the guide for changing from multicast to unicast, but I can't access the config file as root even though the guide says I should be able to (a sketch of that kind of edit is below the results). What...
  9.

    Fair IO share between VZ containers?

    Thank you for the quick reply. So if any of the other containers were to use more than 30%, the IO share would be lowered for the one consuming 70%? If so, then all is well; we just don't want to end up in a situation where one virtual machine blocks all the others.
  10.

    Fair IO share between VZ containers?

    Hello, how can I fairly split the IO resources between my four VZ containers? One of them is using 70% of the whole server's IO resources, and I want to correct that. (A sketch of one way to do this is below the results.)
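
For result 4, here is a minimal sketch of the fix described in that post, assuming a PVE 2.x node where the web interface is still served by Apache (the init script path is taken from the post itself):

```
# Regenerate and redistribute the cluster certificates and SSH keys
pvecm updatecerts

# Restart the web interface so it picks up the refreshed certificates
/etc/init.d/apache2 restart
```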
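
For result 7, the "Unable to get local IP address" error from pve-cluster generally means pmxcfs could not resolve the node's own hostname to a usable address via /etc/hosts. A minimal sketch, using a hypothetical node name and address:

```
# /etc/hosts (sketch; node1 and 203.0.113.10 are placeholders)
127.0.0.1       localhost

# The node's own hostname should resolve to its real IPv4 address,
# not to a loopback address and not only through an IPv6 entry.
203.0.113.10    node1.example.com node1

# IPv6 entries can follow (or stay commented out, as in the post)
# ::1           localhost ip6-localhost ip6-loopback
```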
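
For result 8, a hedged sketch of the kind of change the multicast-to-unicast guide describes for PVE 2.x. /etc/pve is a fuse mount managed by pmxcfs, which is why the file cannot simply be edited in place as root; the usual approach is to edit a copy and bump the config version. The file name and the transport attribute below are assumptions based on the PVE 2.x cman setup, not a verified recipe:

```
# Work on a copy, since /etc/pve/cluster.conf is managed by pmxcfs
cp /etc/pve/cluster.conf /etc/pve/cluster.conf.new

# In cluster.conf.new: increase config_version and switch cman to
# UDP unicast instead of multicast, for example:
#   <cman keyfile="/var/lib/pve-cluster/corosync.authkey" transport="udpu"/>
# Then activate the new configuration from the web interface.
```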
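
For results 9 and 10, one common way to balance disk IO between OpenVZ containers is the per-container IO priority exposed by vzctl (0 is lowest, 7 is highest, 4 is the default). A minimal sketch with placeholder container IDs:

```
# Lower the IO priority of the container that is consuming most of the IO
vzctl set 101 --ioprio 2 --save

# Keep (or reset) the other containers at the default priority
vzctl set 102 --ioprio 4 --save
vzctl set 103 --ioprio 4 --save
vzctl set 104 --ioprio 4 --save
```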