Search results

  1.

    Cron permission denied?

    Hi pkcl, I installed the new OVH image and can confirm that the chmod command you reported should do the trick to fix the problem.
  2.

    Cron permission denied?

    I will install the new image in the coming days on a node of mine and compare permissions. About images, there is a problem with the Proxmox Ubuntu 16.04 template, the permissions on /var/log/syslog are broken and thus nothing is being written there... Maybe I should report this :P
  3.

    Cron permission denied?

    I removed --chuid man from /etc/cron.daily/man-db, as on my system the user man has /usr/sbin/nologin as its shell. Works now.
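The workaround in entry 3 can be sketched as a one-off shell edit. The flag and path are as reported in the thread; the start-stop-daemon line below is only a stand-in so the sketch runs on a temp copy — on a real host you would point CRON_JOB at /etc/cron.daily/man-db (after backing it up):

```shell
#!/bin/sh
# Sketch of entry 3's workaround: strip the " --chuid man" flag so the
# man-db daily job no longer switches to the "man" user (whose shell is
# /usr/sbin/nologin on the reporter's system).
# Demonstrated on a temp copy; on a real host set CRON_JOB to
# /etc/cron.daily/man-db and keep a backup (cp "$CRON_JOB" "$CRON_JOB.bak").
CRON_JOB="$(mktemp)"
printf '%s\n' 'start-stop-daemon --start --chuid man -- mandb' > "$CRON_JOB"  # stand-in line

sed -i 's/ --chuid man//g' "$CRON_JOB"   # remove the flag in place
cat "$CRON_JOB"                          # the --chuid flag is gone
rm -f "$CRON_JOB"
```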
  4.

    Cron permission denied?

    Same problem here with a fresh install on OVH.
  5.

    Proxmox Firewall default management rules

    Yeah so the problem was that clustering broke down when I set the input policy to DROP. Problem was, my system was missing the default local_network / cluster_network aliases. I added those and now it seems to work fine.
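Entry 5's fix can be sketched as an [ALIASES] section in the cluster firewall config. This is a hedged example, not the poster's exact file: the alias names local_network and cluster_network come from the thread, but the 172.16.0.0/24 subnet is a hypothetical management network you would replace with your own:

```
# /etc/pve/firewall/cluster.fw -- sketch; 172.16.0.0/24 is a
# hypothetical management/cluster subnet, adjust to yours
[ALIASES]

local_network 172.16.0.0/24
cluster_network 172.16.0.0/24
```

With these aliases defined, management rules that reference them keep matching even with the input policy set to DROP, which is what the poster reports fixed clustering.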
  6.

    Proxmox Firewall default management rules

    Hi there, I am running Proxmox 3.4, but for some reason the default management rules for the firewall are not there, so on the hosts I am running the input policy on ACCEPT for now, since I am not sure what I need to set up aside from SSH and port 8006 (for VNC and clustering, for example). I could not find...
  7.

    [Solved] NFS-Storage mount error 500 after Update

    Re: NFS-Storage mount error 500 after Update I had the same problem, except I did not have an NFS share before, I configured it freshly. But adding nolock to storage.cfg worked. Thanks for sharing your solution!
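Entry 7's fix can be sketched as an NFS stanza in /etc/pve/storage.cfg. The storage name, server address, and export path below are hypothetical; the relevant part from the thread is the nolock option:

```
# /etc/pve/storage.cfg -- sketch; name, server, and export are
# placeholders, "nolock" is the fix reported in the thread
nfs: nfs-images
        path /mnt/pve/nfs-images
        server 192.168.0.10
        export /srv/proxmox
        options vers=3,nolock
        content images,iso
```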
  8.

    Adding node to cluster causes rsa errors in syslog, nodes see each other as offline

    Re: Adding node to cluster causes rsa errors in syslog, nodes see each other as offline Just a small update: I tried using another, previously unused IP for the new node, changed the hostname, regenerated SSH keys on both nodes, and clean-installed Proxmox on the new node. Cleared cache/cookies etc...
  9.

    Adding node to cluster causes rsa errors in syslog, nodes see each other as offline

    Hi there! I have a little problem adding a node to the cluster. Everything apparently went fine in the adding process:
    copy corosync auth key
    stopping pve-cluster service
    Stopping pve cluster filesystem: pve-cluster.
    backup old database
    Starting pve cluster filesystem : pve-cluster.
    Starting...
  10.

    Created cluster - Node IP is 127.0.0.1

    No matter what I try, it just doesn't work. Would this be something you could fix if I had a basic subscription plan?
  11.

    Created cluster - Node IP is 127.0.0.1

    Yes, I restarted all I could find: pvebanner, pve-cluster, pvedaemon, pve-manager, pvenetcommit, pveproxy, pvestatd.
  12.

    Created cluster - Node IP is 127.0.0.1

    Yeah, I just "re-clustered" the clavius node after adding pvelocalhost to /etc/hosts. Weirdly, though, it still assigned 127.0.0.1 as the node IP. In /etc/pve/cluster.conf the node name is "clavius", and if I ping clavius I get the correct 172.16.0.2 IP.
  13.

    Created cluster - Node IP is 127.0.0.1

    Well I have production containers running on one box, I cannot just reinstall it. If the cluster was working I could migrate the containers, so I'm stuck here. It sure must be possible to change the node IP somehow or reset something without affecting uptime of the running containers.
  14.

    Created cluster - Node IP is 127.0.0.1

    OK, I did so and restarted pve-cluster, but pvecm status still says 127.0.0.1 for the node IP.
  15.

    Created cluster - Node IP is 127.0.0.1

    Hi Udo, here is my /etc/hosts:
    127.0.0.1 localhost.localdomain localhost
    172.16.0.1 nebula.fulldomain.tld nsXXXXXX.ip-XX-XX-XX.eu nebula nsXXXXXX
    172.16.0.2 clavius.fulldomain.tld nsYYYYYY.ovh.net clavius nsYYYYYY
    hostname -a shows "nsXXXXXX.ip-XX-XX-XX.eu nebula nsXXXXXX" for...
  16.

    Created cluster - Node IP is 127.0.0.1

    Hi there! I have two machines connected over a private VLAN. I properly set the IP in the /etc/hosts file to 172.16.0.2, but after I created the cluster with pvecm create clustername and then ran pvecm status, it says "Node addresses: 127.0.0.1", which I guess is wrong. When I...
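For the node-IP thread above, a hedged sketch of the /etc/hosts layout that pvecm expects. Hostnames and addresses mirror the thread; as I understand the pvelocalhost convention on Proxmox 3.x, the alias must sit on the line with the node's real IP, not on 127.0.0.1, or pvecm keeps resolving the node to 127.0.0.1:

```
# /etc/hosts on node "clavius" -- sketch based on the thread's addressing;
# note pvelocalhost on the node's own 172.16.0.2 line, NOT on 127.0.0.1
127.0.0.1    localhost.localdomain localhost
172.16.0.1   nebula.fulldomain.tld nebula
172.16.0.2   clavius.fulldomain.tld clavius pvelocalhost
```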

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
