Hi,
Through the Proxmox GUI, is there a way to allow RELATED,ESTABLISHED connections for a VM, like the following command does?
iptables -A INPUT -i eth0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
Or is the only way via CLI commands?
Thanks
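For what it's worth, the PVE firewall is stateful once enabled: the generated ruleset tracks connections, so replies to allowed traffic come back automatically and you normally only declare the inbound services you want. A minimal sketch of a per-VM config file (the VMID 100 and the SSH rule are just examples, not from this thread):

```
# /etc/pve/firewall/100.fw  -- per-VM firewall config (100 = example VMID)
[OPTIONS]
enable: 1

[RULES]
# Allow inbound SSH; return traffic for it (and for outbound connections
# the VM opens) is accepted via connection tracking, no explicit
# RELATED,ESTABLISHED rule needed.
IN ACCEPT -p tcp -dport 22
```

The same rules can be entered in the GUI under the VM's Firewall tab; the file above is just what those GUI entries are stored as.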
Hi,
I've got a 3-node cluster and I'm trying to enable the firewall on VMs. Only on one node can I see logs of captured traffic, so the firewall is working only on that one.
At datacenter level I have:
At node level:
At VM level:
So why can't I get the firewall to filter?
Thanks.
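One thing worth checking (an assumption on my part: the firewall service may simply not be running on the other two nodes) is to compare the firewall state on each node:

```shell
# Run on every node and compare the output:
pve-firewall status        # should report the firewall as enabled/running
pve-firewall compile       # prints the ruleset PVE generates from your config
# Count the PVEFW chains actually loaded into iptables; 0 means
# no generated rules are active on this node:
iptables-save | grep -c PVEFW
```

If one node reports the firewall stopped or shows no PVEFW chains, that would explain why only one node logs traffic.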
Ok. So only if I use PMG for outbound email do I have to add another reverse PTR record for PMG_hostname <-> IP.
In all other cases the reverse PTR record mailserver_hostname <-> IP stays the same.
Ok, but do I have to change the reverse PTR so that it points to PMG's public IP only if PMG sends outbound mail? Because if PMG doesn't send email, the sender is the mail server, and its public IP address is different from PMG's public IP, so the reverse PTR lookup could fail, or am I wrong?
Another question: since I'm changing the MX record to point to the PMG server, I have to contact my ISP and also change the reverse PTR, which currently points to my mail server, is that right? That way the reverse PTR would point to PMG.
Thanks
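An easy way to verify this before and after the change is to check that the forward and reverse lookups agree for whichever host actually delivers mail to the internet. The names and IP below are placeholders, not your real records:

```shell
# Reverse lookup of the sending host's public IP (placeholder IP):
dig +short -x 203.0.113.10       # should return the sending host's name, e.g. pmg.example.com.
# Forward lookup of that name should give back the same IP:
dig +short A pmg.example.com
```

If PMG becomes the outbound relay, those two lookups should resolve to PMG's hostname and public IP; if the mail server keeps sending directly, its own PTR stays as it is.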
Hi,
I am trying to insert PMG in front of my mail server. I have read the Proxmox MG admin guide, but I have these questions:
Must the IMAP port for the client (e.g. Thunderbird) always point to the mail server and not to PMG? From the guide, I read that PMG doesn't handle that job.
So DNS MX records must point...
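To see where inbound mail for a domain is currently routed, you can query the MX records directly (the domain below is a placeholder):

```shell
# List the MX records for the domain; once PMG is the inbound
# gateway, its hostname should appear here:
dig +short MX example.com
```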
Sorry @t.lamprecht, but I've only just read your answer. I solved the problem by reinstalling the node, and that fixed it.
Maybe your solution would have solved my problem too; I would certainly have spent less time.
Tnx
Hi, I keep running into this problem and can't fix it:
root@vs1:~# ha-manager status --verbose
quorum OK
master vs3 (active, Thu Mar 28 11:05:13 2019)
lrm vs1 (idle, Thu Mar 28 11:05:17 2019)
lrm vs2 (active, Thu Mar 28 11:05:17 2019)
lrm vs3 (active, Thu Mar 28 11:05:13 2019)
full cluster state:
{...
I only see logs about the watchdog in the journal. From these logs the watchdog seems to be active on every node:
root@vs1:~# journalctl |grep watchdog
lug 12 09:35:36 vs1 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
lug 12 09:35:38 vs1 systemd[1]: Started Proxmox VE watchdog...
Yes, even if I reboot vs1, nothing changes. Which specific logs do you mean? Because in /var/log/messages or /var/log/syslog I can't see anything about the problem.
On all machines I can't migrate, start or stop VMs if they belong to HA. Only on vs1 do I always get the status "starting" when a VM belongs to HA.
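In case it helps, the HA stack logs to the journal under its own service units rather than to a dedicated file, so this is where I'd expect the relevant messages to show up:

```shell
# HA manager logs (local resource manager and cluster resource manager):
journalctl -u pve-ha-lrm -u pve-ha-crm --since "-1h"
# The watchdog multiplexer used by HA fencing has its own unit too:
journalctl -u watchdog-mux --since "-1h"
```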
Yes they are.
root@vs1:~# pveversion -v
proxmox-ve: 5.2-2 (running kernel: 4.15.17-3-pve)
pve-manager: 5.2-5 (running version: 5.2-5/eb24855a)
pve-kernel-4.15...
Yes. Furthermore, if virtual machines don't belong to HA I can start, stop and migrate them; if they belong to HA I can't start, stop or migrate them.
Yes, it's running:
# systemctl status pve-ha-lrm
● pve-ha-lrm.service - PVE Local HA Ressource Manager Daemon
Loaded: loaded (/lib/systemd/system/pve-ha-lrm.service; enabled; vendor preset: enabled)
Active: active (running) since Sat 2018-06-09 10:09:47 CEST; 2 days ago
Process: 4036...
Hi,
If I try to enable HA for a VM (128 or 123 in my case, see attachment) from a node (vs1, which belongs to a 3-node cluster), the VM always ends up in the status "starting". I read at https://pve.proxmox.com/wiki/High_Availability that this state occurs when:
but I don't know where to start fixing this problem.
I...