Hello, we have tried to add a transport via the API, and the mtime of both transport and transport.db changed and is the same.
cat /etc/pmg/transport | grep addeddomain.tld finds the domain, and after running cat on the transport.db file I get binary noise, but I can clearly see all the domains from...
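The "random data" is expected: the .db file is a compiled (binary) Postfix map, so `cat` shows garbage while the key strings remain visible. A small stand-alone sketch of the two checks from the post, using temp files in place of the real `/etc/pmg/transport` and `/etc/pmg/transport.db` (the binary content below only stands in for the compiled Berkeley DB map):

```shell
# Demo stand-ins for the source map and the compiled .db:
src=$(mktemp) && db=$(mktemp)
printf 'addeddomain.tld smtp:[192.0.2.1]:25\n' > "$src"
printf '\000\001addeddomain.tld\000smtp:[192.0.2.1]:25\000' > "$db"

# Compare mtimes the way the post describes (both should match here):
stat -c '%Y %n' "$src" "$db"

# cat on the .db prints binary noise; grep -a treats binary as text, which
# is why the domains are still readable inside the "random data":
grep -a 'addeddomain.tld' "$db" && echo "key present in compiled map"
rm -f "$src" "$db"
```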
Well, when the bug first occurred it lasted for 6 days and was fixed immediately after any manual change in the GUI. I will check the mtimes tomorrow with my colleague. Still, I can't see any way that the timing coincided with the reload.
We tried to check what the "Create" button does when...
Well, the problem is not that it does not deliver the emails locally; the default behaviour is set that way:
But the problem is that the config visible in Transports does not get applied in the backend.
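If the GUI and the backend ever disagree again, a few standard Postfix commands can show what the backend actually uses. This is a sketch assuming PMG's transport entries are compiled into a hash map at /etc/pmg/transport (as the paths in the posts above suggest); run on the PMG host:

```shell
# Which transport map(s) Postfix is actually configured to read:
postconf transport_maps

# What the compiled map returns for the domain added via the API:
postmap -q addeddomain.tld hash:/etc/pmg/transport

# Make a freshly compiled map take effect:
postfix reload
```

If `postmap -q` returns nothing while the domain is present in the source file, the map was not recompiled or Postfix is reading a different map.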
hostname -f: gw1.mx.foo.bar (it is an FQDN)
resolv.conf:
search mx.foo.bar
nameserver...
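The hostname/resolver details above can be sanity-checked with plain Linux tooling (nothing PMG-specific; the fallback messages are illustrative):

```shell
# FQDN as the host sees it; failure usually means /etc/hosts or
# /etc/hostname is inconsistent:
hostname -f || echo "hostname -f failed: check /etc/hosts and /etc/hostname"

# Does the FQDN resolve through the configured nsswitch/resolver order?
getent hosts "$(hostname -f 2>/dev/null || hostname)" \
    || echo "FQDN does not resolve: check nameserver/search in resolv.conf"
```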
Hello,
I would like to ask for help with a possible bug we encountered on PMG. We have a two-node cluster of gateways and we are using it to filter incoming mail. There is a list of approx. 100 domains we accept in "Mail Proxy - Relay Domains" and destinations in "Mail Proxy - Transports"...
Hello to all of you who are checking this thread,
I was able to solve this issue after opening a ticket (buy those standard subscriptions, guys :) ) and the problem was found in corosync. The procedure we did was in the end quite simple: we stopped the pve-cluster service on all nodes, unmounted...
As written above, I found out that the source of the problem is faulty HW: the onboard 10GbE on the motherboard.
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet manual
iface enp216s0f0 inet manual
iface enp216s0f1 inet manual
auto bond0
iface bond0 inet static
address...
My suspicion is somewhere around a corrupted cluster DB, since quorum is OK but /etc/pve is still read-only.
Also, on all nodes I can see this error:
Jun 1 23:00:32 node3 pmxcfs[23003]: [dcdb] crit: ignore sync request from wrong member 3/23003
Jun 1 23:00:32 node3 pmxcfs[23003]: [status] crit...
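For this symptom (quorum reportedly OK, pmxcfs still refusing writes) the usual first checks compare the corosync view with the pmxcfs service state; a minimal sketch using standard PVE tooling, run on each node:

```shell
# Corosync membership and quorum as PVE sees it:
pvecm status

# State of the pmxcfs service backing /etc/pve:
systemctl status pve-cluster

# The [dcdb]/[status] crit messages quoted above, from the current boot:
journalctl -u pve-cluster -b | tail -n 20
```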
Seems the same as thread here:
https://forum.proxmox.com/threads/login-failure.71684/
so it gave me the courage to try it. I did it on the cluster: stopped pve-cluster on all nodes, cleared the lock, and started it again on all of them, but I still cannot write to /etc/pve :(
Hello,
so one of my PVE clusters got ugly :) and the common error on all nodes is that nothing can write to /etc/pve. I can read it, but root and the PVE services cannot write:
Jun 1 18:25:53 node3 pve-ha-lrm[3185]: unable to write lrm status file - unable to open file...
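The failure pve-ha-lrm logs above is an ordinary write refusal, so it can be reproduced with a tiny probe. A hypothetical helper (not from the thread) that checks whether a directory accepts writes:

```shell
# Probe whether a directory accepts file creation: the same failure
# pve-ha-lrm hits when pmxcfs has forced /etc/pve read-only.
check_writable() {
    probe="$1/.rw-probe-$$"
    if touch "$probe" 2>/dev/null; then
        rm -f "$probe"
        echo "$1: writable"
        return 0
    fi
    echo "$1: read-only"
    return 1
}

check_writable /tmp           # prints "/tmp: writable"
# On an affected node you would call: check_writable /etc/pve
check_writable /proc || true  # prints "/proc: read-only" (procfs rejects file creation)
```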
Hello,
I would like to ask if there is any problem with the following configuration.
2 separate PVE clusters
1 Ceph Pacific cluster, unmanaged by Proxmox
The plan is to use one RBD pool for both PVE clusters. My concern is that the naming conventions for the disk images of VMs would cause...
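The concern is well founded: PVE names RBD disk images `vm-<vmid>-disk-<n>`, and VMIDs are only unique within one cluster, so two clusters sharing a pool can try to use the same image name. A sketch with illustrative image lists (on a real pool you would compare `rbd ls <pool>` output taken from each cluster's side):

```shell
# Image names each cluster would create in the shared pool (illustrative):
sort > /tmp/cluster_a.txt <<'EOF'
vm-100-disk-0
vm-101-disk-0
EOF
sort > /tmp/cluster_b.txt <<'EOF'
vm-100-disk-0
vm-200-disk-0
EOF

# Names present on both sides, i.e. collisions waiting to happen:
comm -12 /tmp/cluster_a.txt /tmp/cluster_b.txt   # prints vm-100-disk-0
rm -f /tmp/cluster_a.txt /tmp/cluster_b.txt
```

Disjoint VMID ranges per cluster (or separate pools) avoid the overlap entirely.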