If you haven't solved this yet, you might want to search the forum for "nf_conntrack_allow_invalid: 1" (it goes in the firewall configuration on each node).
It's supposed to fix such issues (even if I can't get it to work, everyone else here seems to manage).
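In the node firewall file that would look something like this (path as in the PVE docs; <nodename> is a placeholder):

# /etc/pve/nodes/<nodename>/host.fw
[OPTIONS]
nf_conntrack_allow_invalid: 1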
Hello.
I use PMG to filter several domains hosted on the same Zimbra server (it could be anything other than Zimbra, as long as the users are reachable through LDAP).
Each time I add a new domain, I have to create the full profile for that domain under "User Management/LDAP".
Next to...
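Before wiring up each new profile, a quick sanity check that the users of a new domain are actually visible over LDAP can help (host, bind DN, base DN and domain below are all made up):

ldapsearch -x -H ldap://zimbra.example.com \
    -D 'uid=zimbra,cn=admins,cn=zimbra' -W \
    -b 'dc=example,dc=com' '(mail=*@newdomain.tld)' mail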
Hello.
The latest version of Wietse's post on this also recommends "smtpd_discard_ehlo_keywords = chunking" (to disable BDAT, which inherently allows pipelining).
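For reference, setting it from the shell and reloading (plain Postfix, nothing PMG-specific assumed):

postconf -e 'smtpd_discard_ehlo_keywords = chunking'
postfix reload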
Hi all.
I currently have a 5-node cluster.
On each node, there is:
. 2x10 Gbps NIC in LACP for "data" (VM traffic using VLANs)
. 2x10 Gbps NIC in LACP for Ceph (both public and cluster networks on the same NIC)
. 2x10 Gbps NIC currently unused (unused for lack of cables when setting up...
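For what it's worth, one LACP bond plus a VLAN-aware bridge in /etc/network/interfaces looks roughly like this (interface names are examples):

auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094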
Hi.
We're currently running a PVE cluster of a few nodes and a PBS in the same datacenter.
The PBS is used to back up the VMs, and those backups are encrypted (using the integrated encryption feature).
We're going to add another PBS in another datacenter (far from the first one) so we have...
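If it helps, the usual setup for that is a remote entry plus a pull sync job on the new PBS; since encryption is client-side, the encrypted chunks sync over as-is. Roughly (hostnames, store names and schedule below are made up):

proxmox-backup-manager remote create pbs1 \
    --host pbs1.example.com --auth-id sync@pbs \
    --password 'xxx' --fingerprint '<fingerprint>'
proxmox-backup-manager sync-job create pull-pbs1 \
    --remote pbs1 --remote-store datastore1 \
    --store datastore1 --schedule daily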
Well, if it's there, apt can't find it.
root@pbs:~# date
Mon 17 May 2021 09:33:11 AM CEST
root@pbs:~# uname -a
Linux pbs.domain.tld 5.4.106-1-pve #1 SMP PVE 5.4.106-1 (Fri, 19 Mar 2021 11:08:47 +0100) x86_64 GNU/Linux
root@pbs:~# apt update
Hit:1...
Hello.
I can't seem to find pve-headers in the pbs-no-subscription repo.
Is it possible to have it in this repo?
Or should I add the pve-no-subscription repo and install it like this:
apt install pve-headers-$(uname -r)
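For the record, the second option would look something like this (assuming Buster, to match the 5.4 kernel above):

echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" \
    >> /etc/apt/sources.list
apt update
apt install pve-headers-$(uname -r)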
I eventually did it on a server yesterday.
I created a RAID1 of two SSDs in the PBS installer, then under "Advanced options" used only 30 GB for PBS.
Once PBS was installed, I used fdisk to create a new partition on each SSD.
Then I added those partitions as a special mirror to the existing pool...
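In short, it came down to this (partition numbers are illustrative; rpool is the pool the installer created here):

zpool add rpool special mirror /dev/sda4 /dev/sdb4
zpool list -v rpool   # check that the special vdev shows up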
Hello.
I'm sure I read it, but I can't find the thread anymore.
I'll be building a new PBS server soon.
The main storage will be 12 spinning disks in RAIDZ3, and I've learnt the hard way (>48 hours of garbage collection) that I need a special device for metadata.
I'd like to set up a mirror of two...
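Something like this is what I have in mind (device names hypothetical; -f may be needed because the raidz3 and mirror replication levels differ, and special_small_blocks is optional, it sends small data blocks to the special vdev too):

zpool create -f tank \
    raidz3 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl \
    special mirror nvme0n1 nvme1n1
zfs set special_small_blocks=4K tank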
Another one, CentOS7 with qemu-guest-agent.
root@pve04:~# VM_PID=7154
root@pve04:~# gdb attach $VM_PID -ex='bt' -ex='quit'
GNU gdb (Debian 8.2.1-2+b3) 8.2.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is...
Here we go, a frozen OPNsense VM.
root@pve04:~# VM_PID=216327
root@pve04:~# gdb attach $VM_PID -ex='bt' -ex='quit'
GNU gdb (Debian 8.2.1-2+b3) 8.2.1
Copyright (C) 2018 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
This is free...
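Side note: to grab the backtraces of all threads at once, without the pager, this variant should work too:

gdb -p $VM_PID -batch -ex 'thread apply all bt'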
<AOL mode>
me too
</AOL mode>
We experienced the issue last night (E5-2697 v3 CPUs on the nodes, no AMD), having upgraded to the latest PVE over the weekend.
All VMs on one node became totally unresponsive, qm commands timed out, etc.
Had to SSH into the node, kill all the kvm processes, and restart the VM...
A ratio of about 0.02 between storage and special device.
https://forum.proxmox.com/threads/size-on-special-device-for-metadata-cache.78720/
So for 100 TB of storage, about 2 TB of special device.