Same here. After a Windows reboot triggered by updates, the VMs have a boot issue, regardless of version: w10/w2k16/w2k19.
They are stuck at the boot screen.
OK, I found the root cause.
proxmox-backup-client does not use the FQDN from DNS if only a short hostname is given in the --repository parameter:
strace proxmox-backup-client snapshots --output-format json --repository USERNAME@HOSTNAME@HOSTNAME:DATASTORENAME
It has to be changed to
strace...
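The corrected call is truncated above; as a sketch only, with the server spelled out as an FQDN instead of the short hostname (placeholders kept from the original, the domain part is hypothetical), it would look like:
strace proxmox-backup-client snapshots --output-format json --repository USERNAME@HOSTNAME@HOSTNAME.DOMAIN.TLD:DATASTORENAME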
Maybe this was described inaccurately.
PVE cluster with five or seven nodes. One is the iSCSI/iSER target, three/five are the nodes running the VMs, and one is the backup storage/NFS server.
The PBS is a VM which has its datastore mounted from the NFS server. Second ZFS storage server for zpool...
I have already checked the certificates against the system CAs with openssl, successfully.
It is unclear to me why curl and openssl trust the certificate but the backup client does not, if it uses the same CA certs.
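For reference, the kind of check I mean is this (hostname hypothetical, 8007 is the default PBS port):
openssl s_client -connect pbs.example.com:8007 -servername pbs.example.com </dev/null
# the output should contain "Verify return code: 0 (ok)" when the system CAs trust the chain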
My workaround for this issue is to extend the ACME action script on the FreeBSD/pfSense side so that, after a Let's Encrypt renewal in our multi-domain environment, it deploys the certificates plus fingerprint with the following lines:
# Proxmox Backup Server Cluster Nodes...
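The script body is truncated above; purely as a rough sketch of the idea (node list, cert paths, and key handling are my assumptions, not the original script):
#!/bin/sh
# push the renewed cert/key to each PBS node and restart the proxy
for node in pbs1.example.com pbs2.example.com; do
    scp /etc/ssl/acme/fullchain.pem root@${node}:/etc/proxmox-backup/proxy.pem
    scp /etc/ssl/acme/privkey.pem root@${node}:/etc/proxmox-backup/proxy.key
    ssh root@${node} systemctl restart proxmox-backup-proxy
done
# print the new SHA-256 fingerprint for distribution to the backup clients
openssl x509 -in /etc/ssl/acme/fullchain.pem -noout -fingerprint -sha256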
OK, I got it.
I ran
strace proxmox-backup-client snapshots --output-format json --repository USERNAME@HOSTNAME@HOSTNAME:DATASTORENAME
and saw that it reads ~/.config/proxmox-backup/fingerprints if that file exists and, if so, uses it.
With PBS_FINGERPRINT you will just overwrite it...
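For illustration, overriding the pinned value for a single call looks like this (fingerprint value hypothetical):
PBS_FINGERPRINT="aa:bb:cc:dd:ee:ff:...:00" proxmox-backup-client snapshots --output-format json --repository USERNAME@HOSTNAME@HOSTNAME:DATASTORENAME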
Yes, it is a Let's Encrypt R3 wildcard certificate.
curl works with https://IP_OR_HOSTNAME_OF_PBS:8007;
I got an "SSL certificate verify ok."
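A sketch of a curl call that produces that message (placeholder as above):
curl -v https://IP_OR_HOSTNAME_OF_PBS:8007 -o /dev/null
# with the OpenSSL backend, the verbose output contains "SSL certificate verify ok."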
The ugly thing with the PBS_FINGERPRINT var is that I have to run a script on the PBS after the automated Let's Encrypt deployment to catch the new...
Yes, this is what I read in the documentation and the reason why I added the variable.
Before the upgrade to the latest version, the server certificate was trusted.
So the question is: what is the root cause of this?
The key file on PBS is still the same. I assume that the PVE upgrade from 6.4.4 to...
Hello all,
after setting up a new PBS, integrating it into the PVE cluster, and running automated VM backups, I also created a separate user for host backups of the PVE cluster nodes.
I wrote a backup script for the cluster nodes which worked without any problem yesterday.
After that I updated PBS to the...
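For context, the core of such a host backup script is a single client call; a minimal sketch (archive name and secret handling are assumptions):
#!/bin/sh
# back up the node's root filesystem as a pxar archive to the PBS datastore
export PBS_PASSWORD="..."   # hypothetical; read from a protected file in practice
proxmox-backup-client backup root.pxar:/ --repository USERNAME@HOSTNAME@HOSTNAME:DATASTORENAME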
What if I have only one IP for SMTP, a load balancer like pfSense in front, and several Proxmox Mail Gateways that all have to use the same reverse DNS entry for HELO?
Is there a way to override myhostname?
This is only a theoretical question; there is no need for such a scenario yet.
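For the record, if my understanding of the PMG template system is right, it could be done roughly like this (treat the paths and command as an assumption to verify against the docs):
# copy the shipped Postfix template to the override location
cp /var/lib/pmg/templates/main.cf.in /etc/pmg/templates/main.cf.in
# edit /etc/pmg/templates/main.cf.in and set e.g.: myhostname = mail.example.com
# regenerate the config and restart the affected services
pmgconfig sync --restart 1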
Hello all,
today I installed pmg for the first time and there are some questions left.
I have a UCS backend and configured the LDAP settings in pmg.
pmg found all users with all configured email addresses, and it found all groups.
What are the groups for?
I searched for a possibility to...
That is exactly what we did not want. The performance of our setup is astonishing; NFS or Ceph cannot keep up with it.
Currently, three VMs spread across three cluster nodes reach write-test speeds in the GB/s range with 4M blocks, and still over 500 MB/s with 4K blocks, with currently 30 VMs as "neighbours"...
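For reproducibility, write tests of that shape can be run with fio along these lines (file path and sizes are assumptions):
# 4M sequential write test
fio --name=write4M --rw=write --bs=4M --size=8G --direct=1 --ioengine=libaio --filename=/mnt/test/fio.dat
# the same with 4K blocks
fio --name=write4K --rw=write --bs=4K --size=2G --direct=1 --ioengine=libaio --filename=/mnt/test/fio.dat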
We run a similar system, but with an LSI HBA SAS 9305-24i. The first cluster node is likewise usually only storage and iSCSI target, unless a VM has to be "parked" there temporarily.
There is an hddpool with eight mirrors and an ssdpool with four mirrors. Each pool...
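The rest of the layout is truncated above; purely as an illustration, pools of that shape are created along these lines (device names hypothetical; real setups should use /dev/disk/by-id paths):
# hddpool: eight two-way mirror vdevs (16 disks)
zpool create hddpool mirror sda sdb mirror sdc sdd mirror sde sdf mirror sdg sdh mirror sdi sdj mirror sdk sdl mirror sdm sdn mirror sdo sdp
# ssdpool: four two-way mirror vdevs (8 SSDs)
zpool create ssdpool mirror sdq sdr mirror sds sdt mirror sdu sdv mirror sdw sdx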
Hello all,
I think I have found an answer to the address change problem on the Linux clients:
I found a comment on embeddedlinux.org:
A Linux host replies to any ARP solicitation requests that specify a target IP address configured on any of its interfaces, even if the request was received on...
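If that behaviour is the cause, the standard mitigation (my addition, not part of the quoted comment) is the arp_ignore/arp_announce sysctl pair:
# reply only to ARP requests for addresses configured on the receiving interface
sysctl -w net.ipv4.conf.all.arp_ignore=1
# use the best local address for the target when announcing, avoiding "foreign" replies
sysctl -w net.ipv4.conf.all.arp_announce=2
# persist the settings via /etc/sysctl.conf or a drop-in under /etc/sysctl.d/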
Hello to all,
we have a running Proxmox VE 5.4-13 three-node cluster with a separate 40Gbit Infiniband dual-port card in each server for connecting to a FreeNAS iSCSI server (release FreeNAS-11.2-U5, the latest version).
For the cluster communication we have set up a bond0 with two Gigabit cards...
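For completeness, a bond of that kind in /etc/network/interfaces typically looks like this sketch (interface names, mode, and address are assumptions):
auto bond0
iface bond0 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode active-backup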