Many thanks for your kind response Stoiko.
Log extract from 1 mailbox only for past 2 days;
Mar 06 00:05:08 pmg.nnnnn.net postfix/smtpd[1258267]: BA85B40A20: client=localhost.localdomain[127.0.0.1]
Mar 06 00:05:08 pmg.nnnnn.net postfix/cleanup[1258269]: BA85B40A20...
I am directing all MX traffic to the Proxmox Mail Gateway server, and have configured the inbound connection to only accept emails from the PMG server's IP address.
I have tried toggling MX both on and off within the transport section for the domain, and the report is still not delivered.
Do I need to modify relayhost on PMG server...
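Before changing anything, it may help to inspect what relayhost is currently set to; a hedged sketch (not from the original thread). Note that PMG generates the Postfix configuration from its own templates, so manual edits to main.cf can be overwritten:

```shell
# Sketch only: inspect the effective Postfix relayhost on the PMG host.
postconf relayhost              # current relayhost value (empty if unset)
postconf -n | grep -i relay     # any other explicitly set relay parameters
```
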
SSH works perfectly without a password to each node, and vice versa.
I was able to migrate VMs via the CLI back to node 1.
I removed nodes 2 & 3 from the cluster, and now nothing is working on nodes 1, 2 & 3. I can still SSH to all nodes as root, but the web GUI simply doesn't work anymore...
Hi,
Many thanks. That sort of worked for a few minutes.
I was able to migrate VMs, but now nodes 2 & 3 are unusable.
Now I have 2 live VMs on node 3 that I cannot console into nor migrate to node 1, and only node 1 is now usable.
Hi,
Many thanks for the advice.
I followed your recommendation and still have no luck; the reported error is now;
2022-07-22 21:16:53 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o 'HostKeyAlias=pve3' root@172.16.60.103 /bin/true
2022-07-22 21:16:53 Host key verification failed.
2022-07-22 21:16:53...
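One common remediation for this class of "Host key verification failed" error (a hedged sketch, not confirmed as the fix in this thread) is to clear the stale host key on the node that reports the failure and let the cluster tooling regenerate its entries; the hostname and IP below are taken from the log above:

```shell
# Hedged sketch: clear stale host keys for pve3 and re-sync cluster keys.
ssh-keygen -R pve3                 # remove stale known_hosts entry for the alias
ssh-keygen -R 172.16.60.103        # remove stale entry for the IP
pvecm updatecerts                  # regenerate cluster certificates/known_hosts
# Re-run the exact probe that migration uses:
ssh -e none -o BatchMode=yes -o HostKeyAlias=pve3 root@172.16.60.103 /bin/true
```
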
Hi,
I originally had 5 nodes and only 2 were having the issue reported, so I removed those 2 nodes, and now node 3 is having an issue which it wasn't previously.
The above issue is being reported on node 3, so which node do I run the script on, the master or node 3?
Cheers
Hi,
I have built a new 3-node PVE cluster (v7.2.7) with iSCSI shared storage.
I can migrate from node 1 to node 2 with no issues, although I am unable to migrate from node 1 or node 2 to node 3; I am receiving the following error;
2022-07-22 11:20:22 # /usr/bin/ssh -e none -o 'BatchMode=yes' -o...
Hi,
I simply wish to send an email to an external email address after a scheduled backup.
I have followed instructions from several posts on this forum with no success, although I am able to successfully send emails manually with the following;
echo "this is a test email." | mailx -r admin@xyz.net -s hello...
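Since the manual mailx test works, the missing piece is usually telling the backup job itself where to send its report; a hedged sketch using vzdump's own notification options (the VM ID is a placeholder, the address is the one from the post):

```shell
# Sketch, not the poster's actual job: a vzdump run that emails its report.
vzdump 100 --mailto admin@xyz.net --mailnotification always
# For scheduled jobs, the same "Send email to" field exists under
# Datacenter -> Backup in the web GUI.
```
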
Hi Moayad,
Yes, multipath is enabled, and I am able to migrate existing VMs, just not newly cloned ones from templates anymore.
No, I have not rebooted since unmounting the NFS shares. Should I? If so, do I need to reboot all 5 nodes?
Attached are snapshots of the task log
Cheers
Hi Moayad,
Unfortunately, solving the issue has now created a new major issue with my cluster.
I can only create and/or clone new VMs on the node I applied your suggested fix to, and I can now only live-migrate existing VMs created prior to applying the fix.
Even if I create VM...
Hi Moayad,
Many thanks, your "mount" suggestion identified the issue and "umount" resolved the issue.
That then enabled me to resolve another issue, as the live syslogs were more stable.
Cheers
Hi,
In the syslogs I am seeing continuous entries, on the main node of the cluster only, attempting to connect to NFS servers that do not exist anymore; I removed these NFS shares months ago. Below is a sample of the syslog.
Can anyone advise where to look to remove all old traces of NFS client...
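A hedged checklist for hunting down stale NFS references on a PVE node (storage IDs and paths below are placeholders, not from the post):

```shell
# Sketch: find leftover NFS storage references on a PVE node.
grep -A3 '^nfs:' /etc/pve/storage.cfg   # cluster-wide storage definitions
pvesm status                            # storages PVE still tries to activate
mount -t nfs,nfs4                       # anything still mounted
# If a stale storage entry remains, remove it by its storage ID:
#   pvesm remove <storage-id>
# A mount stuck on a dead server can be lazily detached:
#   umount -f -l /mnt/pve/<storage-id>
```
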
Hi Che,
Many thanks for your kind assistance.
After a few attempts with the different options you suggested, plus results from other searches on the forum, here is what worked;
Removed PBS storage from Proxmox datacenter console
Snapshot of storage A
Create new clean dataset on storage B
ZFS...
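The truncated steps above look like a standard ZFS snapshot-and-replicate sequence; a hypothetical sketch of that pattern (pool and dataset names are invented, not from the post):

```shell
# Hypothetical names: poolA/data on storage A, poolB/data on storage B.
zfs snapshot poolA/data@move                 # snapshot of storage A
zfs create poolB/data                        # new clean dataset on storage B
zfs send poolA/data@move | zfs receive -F poolB/data   # replicate the snapshot
```
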