Hi,
we are having serious issues with live migration during an upgrade to PVE 7.3.
Our cluster is a 15-node cluster with external Ceph storage (no HA configured).
For four years we have been updating our cluster with an Ansible script: (upgrade packages on the host, migrate VMs to a spare host, reboot and...
Hello everyone,
according to the wiki, the Suricata integration takes place under /etc/pve/firewall/&lt;VMID&gt;.fw, and the rule is added to iptables automatically. That is exactly my case, yet I am not receiving any alerts in Suricata. This is what the rule looks like:
2 NFQUEUE all --...
I am actually trying to link the PVE-IPS output to Suricata. I am running Suricata in NFQ mode and I'm sending traffic to it in the gateway scenario using the following command: # iptables -I FORWARD -j PVEFW-IPS
The problem is that every time I restart the host, the added rule is gone (-A...
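A rule inserted by hand into FORWARD is expected to disappear, since pve-firewall rebuilds its iptables chains on start. One common workaround (a sketch only, not the official mechanism) is to re-insert the jump from a network hook; "vmbr0" below is an assumption, adjust to your bridge, and note pve-firewall may still flush the chain when it recompiles its ruleset:

```
# /etc/network/interfaces -- sketch: re-add the jump whenever the bridge
# comes up; "vmbr0" and the "manual" method are assumptions for this example
auto vmbr0
iface vmbr0 inet manual
    post-up iptables -I FORWARD -j PVEFW-IPS
```

A systemd unit ordered after pve-firewall.service would survive firewall restarts more reliably than an ifupdown hook.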
After some minutes I got:
2021-08-12 14:48:06 ssh: connect to host 10.39.0.6 port 22: Connection timed out
2021-08-12 14:48:06 ERROR: migration aborted (duration 00:02:09): Can't connect to destination address using public key
So I added an SSH rule and migration is working..., BUT shouldn't...
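For anyone hitting the same timeout: a datacenter-level rule along these lines allows the SSH connection migration needs. This is a sketch; the source CIDR 10.39.0.0/24 is inferred from the log above, adjust it to your migration network:

```
# /etc/pve/firewall/cluster.fw -- sketch; CIDR inferred from the log above
[RULES]
IN SSH(ACCEPT) -source 10.39.0.0/24 -log nolog
```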
Hi,
as soon as I enable the firewall, live migration stops working.
I have inserted one rule for Ceph (a macro) at the datacenter level, plus the following:
live migration (VM memory and local-disk data): 60000-60050 (TCP)
Migration uses a dedicated network (the same one as the corosync traffic)...
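The port range quoted above can be opened with a cluster-level rule like the following. A sketch only: the source network 10.10.10.0/24 is an assumption standing in for your dedicated migration/corosync subnet:

```
# /etc/pve/firewall/cluster.fw -- sketch; opens the migration range quoted
# above for an assumed migration network 10.10.10.0/24
[RULES]
IN ACCEPT -source 10.10.10.0/24 -p tcp -dport 60000:60050
```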
ALL backups are failing with: unable to create temporary directory '/mnt/pve/backup/dump/vzdump-qemu-146-2020_02_12-02_00_02.tmp' at /usr/share/perl5/PVE/VZDump.pm line 703.
We export ZFS via the NFS kernel server; it has been working fine all along. The NFS shares are read-only now.
Latest Update was...
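That vzdump error means it could not create its temporary directory under the dump path, which fits a mount that has gone read-only. A quick probe that mimics the same step (creating a directory in the target path) separates "read-only" from other failures; the path /mnt/pve/backup/dump comes from the error above, and the demo below runs against /tmp:

```shell
#!/bin/sh
# Probe whether a storage path is writable the way vzdump needs it:
# try to create (and remove) a temporary directory inside it.
check_writable() {
    dir="$1"
    if tmp=$(mktemp -d "$dir/vzdump-probe.XXXXXX" 2>/dev/null); then
        rmdir "$tmp"
        echo "$dir: writable"
    else
        echo "$dir: read-only or inaccessible"
    fi
}

# On the affected node, point this at /mnt/pve/backup/dump
# (the path from the error); here we demo against /tmp.
check_writable /tmp
```

If the probe reports read-only, check `mount | grep backup` and the NFS export options on the ZFS server before blaming vzdump.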
Hi,
it depends - where is your data stored?
We have similar problems:
We have a 15-node cluster using Ceph, so it is important to stop all VMs on all hosts before the storage can be stopped.
Proxmox will shut down all VMs on a single host and then shuts down that host.
So it is possible that...
Thanks for your response,
Yes, the agent was missing in the VM, but it was activated in the options.
If I disable the agent in the options, the VM shuts down properly, so I guess it only uses the ACPI power signal when the agent is disabled.
Didn't know that.
For the record: the KVM processes became unresponsive and had to be killed with -9.
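The kill step above can be sketched like this; VMID 146 is just a placeholder (borrowed from the backup log earlier in the thread), and it relies on the fact that a PVE guest's KVM process has "-id &lt;vmid&gt;" on its command line:

```shell
#!/bin/sh
# Force-kill the KVM process of a stuck VM. VMID 146 is a placeholder;
# on PVE the guest's process command line contains "kvm -id <vmid>".
VMID=146
pid=$(pgrep -f "kvm -id $VMID" | head -n1)
if [ -n "$pid" ]; then
    kill -9 "$pid"
    echo "killed kvm pid $pid for VMID $VMID"
else
    echo "no kvm process found for VMID $VMID"
fi
```

After a -9, `qm stop` bookkeeping may be needed so PVE notices the guest is gone.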