I just got burned by this. I recently migrated away from ESXi and never had to do this there. IMO, bringing down a VM just to move the trivially small EFI disk is less than optimal...
I found the following, which seems to cover the rest of my requirements, e.g.:
1. Disable quarantine.
2. Have PMG flag spam emails as such.
3. Have dovecot on the MTA file these into the inbox.
4. Have the user move such messages to the JUNK folder for sa-learn via ssh (rough sketch below).
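For step 4, one way of doing it (just a sketch of one interpretation: the bayes DB lives on the PMG box and the Junk maildir gets pulled over ssh; the paths are made up for a standard Maildir layout, and exactly where your bayes DB lives depends on your setup):

rsync -a mta:/var/vmail/example.com/user/Maildir/.Junk/cur/ /tmp/junk-train/
sa-learn --spam /tmp/junk-train/
rm -rf /tmp/junk-train/

That could run from cron on the PMG host, or by hand whenever the user has filed enough junk.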
I run Roundcube as the webmail GUI, and...
What I've done so far:
Removed the "Modify Spam Subject" action from the "Quarantine/Mark SPAM" action. I added a X-Spam-Flag with YES, as documentation. I decided not to apply a dovecot filter to put those messages in the Junk folder, sinde they would already have been in Quarantine, and the...
Thanks for the reply. Looking at my 6.4 config, the above rule is priority 90. Is yours modified? So, what do I need to do?
Add a new Add Spam Tag action (or something), and have it add this "X-Spam-Status: Yes" header. Also, disable/remove the invocation of "Modify Spam Subject" in the case of detected spam?
I have a related question. I don't like PMG, or whichever proxy I'm using, modifying the message. I'd rather have it add something like 'X-Spam: yes' or some such, and then I can have dovecot on the MTA file the email appropriately. Is it possible to tweak PMG to do this?
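If PMG just adds a header like that, the dovecot side is a one-line sieve rule. A minimal sketch, assuming the sieve plugin is enabled and a per-user ~/.dovecot.sieve is honoured (the header name below is whatever you end up having PMG add):

cat > ~/.dovecot.sieve <<'EOF'
require ["fileinto"];
# file anything the proxy tagged as spam into the Junk folder
if header :is "X-Spam" "yes" {
    fileinto "Junk";
}
EOF

Roundcube/IMAP then just sees the message land in Junk instead of the inbox.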
Thinking more on this...
We're not communicating clearly, sorry :( I don't want two different storage nodes; I want one storage node accessed by two different IP addresses, so that I can migrate guests from one node to the other and back, even though one node has a slower connection to the SAN appliance.
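Something like this is what I mean (a sketch only; names and IPs are made up): point the PVE storage definition at a hostname rather than an IP, and let each node resolve that hostname to a different interface of the same appliance via /etc/hosts.

/etc/pve/storage.cfg (shared across the cluster):

nfs: zfs-san
    server san-storage
    export /tank/vmstore
    path /mnt/pve/zfs-san
    content images,rootdir

/etc/hosts on the fast node:

10.10.10.5   san-storage

/etc/hosts on the slow node:

192.168.1.5  san-storage

Both nodes see the same "shared" storage, they just reach it over different links.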
I currently have shared storage on an OmniOS ZFS appliance via NFS. This uses a 10GbE interface. I have a second host in the cluster which is normally off (it is a smaller, older host). It is literally only used to migrate guests to while doing maintenance on the primary host. Currently I need...
Create a dummy qcow2 disk in the GUI attached to the VM. Then remove it from the VM. In the CLI, 'mv' the qcow2 image you want to use on top of the dummy one. In the GUI, re-attach the 'unused' qcow2 disk to the VM. I've done something like this...
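Roughly, in CLI terms (VMID, bus, file names and the 'local' directory storage are all just assumptions here):

qm set 100 --scsi1 local:1,format=qcow2      # dummy 1G qcow2, same as the GUI step
qm set 100 --delete scsi1                    # detach it again
mv /var/lib/vz/images/100/my-old-image.qcow2 /var/lib/vz/images/100/vm-100-disk-1.qcow2
qm rescan --vmid 100                         # so PVE lists the image as an 'unused' disk
qm set 100 --scsi1 local:100/vm-100-disk-1.qcow2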
I think the recommended way of doing a 2-node cluster is by adding the 'two_node: 1' setting to corosync.conf. Like this:
quorum {
  provider: corosync_votequorum
  two_node: 1
  wait_for_all: 0
}
There's a gotcha there though: while that allows the cluster to be quorate with only one...
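You can check how corosync is counting votes after the change with either of these (stock commands, nothing extra to install):

pvecm status
corosync-quorumtool -s

With the options above active, the flags line should show something like "2Node Quorate" (plus WaitForAll if you leave that enabled).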
I have my one big server with NVMe for the guests. Works great. Sometimes an update is pushed by Proxmox that requires a reboot. So I have an (older) Sandy Bridge server which I added to the cluster. I migrate all the storage to NFS, then all the guests to the other node. Upgrade & reboot...
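The shuffle is scriptable if you get tired of clicking through it; a rough sketch (VMID, storage and node names are made up, and each qm migrate has to run on the node currently holding the guest):

qm move_disk 100 scsi0 nfs-shared --delete    # push the disk from local NVMe to the NFS storage
qm migrate 100 small-node --online            # live-migrate the guest to the standby node
# upgrade & reboot the big node, then run the same two steps in reverse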
Maybe I am misunderstanding, but there is a fundamental difference here. ESXi does not require the USB drive to be functional. E.g. it can completely die, and ESXi is still fine, since it is in 'run from RAM' mode. I don't think Proxmox would work that way, would it?
It isn't a big deal, but when I did the initial 4.4 install, I used two SSDs on an old LSI RAID controller. I'd like to migrate the install to a ZFS mirror. If there is no good way to do that without a reinstall, I'm fine with leaving it the way it is. Thanks!
I didn't want to disable the cache import unit, so I deleted the cache file, exported, imported and rebooted (to test), and nvme was imported just fine. So I guess it got corrupted. Thanks for the fix :)
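For anyone hitting the same thing, the sequence was basically this (pool name 'nvme' as above; adjust to your pool before running anything):

rm /etc/zfs/zpool.cache
zpool export nvme
zpool import nvme
zpool set cachefile=/etc/zfs/zpool.cache nvme   # optional, just to regenerate the cache file explicitly
reboot                                          # confirm the pool still comes up via zfs-import-cache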
root@pve:~# fdisk -l | grep nvme
Partition 3 does not start on physical sector boundary.
Disk /dev/nvme1n1: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors
Disk /dev/nvme0n1: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors
Partition 3 does not start on physical sector boundary.
I have...