Hello,
I have a cluster and one of the nodes is a ZimaBoard. Everything runs fine, but when I try to back up my CT, the backup job gets stuck and nothing happens. It doesn't matter whether the backup storage is the local SSD, an NFS share, or PBS - the behaviour is the same.
Example:
Node name: "darwin"...
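In case it helps with debugging, what I'd try next is running the backup by hand on the node and watching where it stalls (a sketch - CT ID 100 and the storage name are just placeholders, adjust to your setup):

# vzdump 100 --storage local --mode snapshot
# vzdump 100 --storage local --mode stop

If snapshot mode hangs but stop mode doesn't, the problem is probably in the snapshot/freeze step rather than in the target storage.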
I have the exact same issue: "zpool clear" clears everything, but after a few days it pops up again. What's also interesting is what I see when looking at the SSD itself:
# fdisk -l /dev/sda
Disk /dev/sda: 931.51 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: CT1000MX500SSD1
Units: sectors...
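For anyone comparing notes, these are the quick checks I run when the errors come back (a sketch - assuming smartmontools is installed and the pool is called rpool, adjust the device and pool name to yours):

# smartctl -a /dev/sda
# zpool status -v rpool

smartctl shows whether the SSD itself reports reallocated sectors or CRC errors, and zpool status -v lists the files hit by the checksum errors before I run zpool clear.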
Just a short question: does the migration setting only affect manual migration tasks, or is it also used for the replication tasks done by the cluster node itself?
I have the same problem; it only occurs sporadically and only on certain VMs/containers. I tried MaxStartups 100 in the SSH config, but unfortunately without success.
2023-06-02 09:52:01 505-0: using secure transmission, rate limit: none
2023-06-02 09:52:01 505-0: full sync...
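For reference, this is roughly what I changed when testing the MaxStartups idea (a sketch of my attempt, nothing official): in /etc/ssh/sshd_config on the replication target I set

MaxStartups 100

and reloaded sshd with

# systemctl reload ssh

It didn't change anything for me, but maybe it helps someone rule this out faster.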
Got it, many thanks! The third cluster node "planck" became master:
May 27 17:12:07 planck pve-ha-crm[1424]: successfully acquired lock 'ha_manager_lock'
May 27 17:12:07 planck pve-ha-crm[1424]: watchdog active
May 27 17:12:07 planck pve-ha-crm[1424]: status change slave => master
May 27...
Thanks for the info! The aim is to get notified when VMs/containers get started on other nodes due to a failover, for monitoring reasons. No issues so far, I'm just trying to get insight into the PVE cluster.
What I see on the former active node "einstein":
May 27 17:59:25 einstein pve-ha-crm[3437]: starting...
What is the best way to detect a cluster failover, meaning that my replicated VMs get started on another node? In /var/log/syslog I found the following possibly relevant messages, but I don't know which of them to look for:
May 27 17:10:22 bohr corosync[2084]: [MAIN ] Completed service...
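What I've been experimenting with as a rough detection (a sketch - pve-ha-crm and pve-ha-lrm are the real unit names, but the grep pattern is just my guess at the interesting lines):

# journalctl -u pve-ha-crm -u pve-ha-lrm --since "1 hour ago" | grep -Ei "status change|fence|start"

The "status change slave => master" line from pve-ha-crm and the service start messages from pve-ha-lrm seem to be the ones that show up when a failover actually happens.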
I currently have a 2-node cluster with one quorum device set up. In terms of high availability of the VMs, everything works fine when powering off one node. The only thing I see during that failure event is that the WebUI of the second (available) node no longer shows any content. Is...
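What I check in that situation (a sketch - both commands are standard PVE/systemd tools, the interpretation is just my guess):

# pvecm status
# systemctl status pveproxy

pvecm status should show whether the remaining node plus the qdevice still have quorum; if quorum is fine, the empty WebUI is more likely a pveproxy or browser issue than an HA problem.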
Hello,
is there an easy way - besides downloading the backup file to the local HDD - to open backups, search for specific files, export single files from inside the backup, etc.?
best regards
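A crude workaround I've used for container backups in the meantime (a sketch - assuming a .tar.zst vzdump archive and GNU tar with zstd support; the file name and path are made up):

# tar --zstd -tvf vzdump-lxc-100-2023_06_02-09_00_00.tar.zst | less
# tar --zstd -xvf vzdump-lxc-100-2023_06_02-09_00_00.tar.zst ./etc/hosts

The first command lists the archive contents, the second pulls a single file out into the current directory without doing a full restore.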
So, I think I've found the issue - at least the faulty part. I had one single VM (Windows Server 2019) running on this host (Intel NUC, Proxmox v7.0-11); as soon as I stopped that VM or migrated it away from the host, everything went smoothly - incl. adding new VMs/containers and running them for...
I recreated it now from scratch with the exact same config mentioned in the article above, and now it works. No clue what the problem was, but it's solved. Thanks oguz for your patience.
Hello,
I hope this is the correct sub-forum. I want to get the PVE console (NoVNC) working in combination with an NGINX reverse proxy. My current NGINX conf is:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log debug;
events {
worker_connections 1024;
}
http {...
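For context, the relevant part of my http block looks roughly like this (a sketch of what I'm testing - the hostname, certificate paths and the 127.0.0.1:8006 upstream are assumptions for my single-node setup):

server {
    listen 443 ssl;
    server_name pve.example.com;               # placeholder hostname
    ssl_certificate     /etc/nginx/ssl/pve.crt;
    ssl_certificate_key /etc/nginx/ssl/pve.key;

    location / {
        proxy_pass https://127.0.0.1:8006;
        proxy_http_version 1.1;
        # the NoVNC console uses websockets, so the upgrade headers must be passed through
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 3600s;
        proxy_buffering off;
    }
}

My understanding is that without the Upgrade/Connection headers the console connection gets dropped, which is the behaviour I'm trying to fix.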
That's exactly the thing: I would also love more cloud-init capabilities, in terms of user-data being directly settable in the WebUI, as well as feature parity with the cloud-init implementation in OpenStack.
Reason: many KVM images with cloud-init support only work well using OpenStack, with...
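What I do in the meantime on the CLI (a sketch - VMID 9000 and the snippet path are placeholders, and the storage needs the "snippets" content type enabled):

# qm set 9000 --cicustom "user=local:snippets/user-data.yaml"
# qm cloudinit dump 9000 user

The first command points the VM at a hand-written user-data file, the second shows the user-data that PVE will actually serve to the guest.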