>> one action with remove attachment (and put the mail into attachment quarantine)
Since the gateway removes the attachment, where should I find the files? In quarantine? That's not what I want.
I want to assign a path where the attachment is saved.
I would like to set up the following conditions:
Emails are from aaa@aaa.com.
Emails are sent to bbb@bbb.com.
Emails contain an attachment of type PDF.
When these conditions are met, I want to perform the following actions:
Save the attachment to a specified path on the server (e.g., /tmp...
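As far as I know the gateway has no built-in "save attachment to path" action, so my idea is a small helper script run against the raw mail file. This is only a sketch; the munpack tool (from the mpack package), the script name, and the /tmp/pdf-attachments path are my own assumptions:

#!/bin/bash
# hypothetical helper: extract PDF attachments from a raw mail file
# usage: extract-pdf.sh /path/to/mail.eml
MAIL_FILE="$1"
OUT_DIR="/tmp/pdf-attachments"   # the "specified path" on the server
mkdir -p "$OUT_DIR"
# munpack writes each MIME part of the mail into a separate file
munpack -q -C "$OUT_DIR" < "$MAIL_FILE"
# keep only the PDFs, drop the other parts munpack produced
find "$OUT_DIR" -type f ! -iname '*.pdf' -delete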
I use two Proxmox servers with 8 GB of ECC RAM each, and one of them has been having some problems lately; dmesg shows that the system auto-corrected the error:
[15327239.518589] {4}[Hardware Error]: Hardware error from APEI Generic Hardware Error Source: 1
[15327239.518590] {4}[Hardware Error]: It has been...
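If anyone wants to check the same thing: corrected errors should also show up in the EDAC counters, so a quick sanity check (assuming the edac modules are loaded, and edac-utils if it is installed) would be:

# per-memory-controller corrected (ce) and uncorrected (ue) error counts
grep . /sys/devices/system/edac/mc/mc*/ce_count /sys/devices/system/edac/mc/mc*/ue_count
# or, with the edac-utils package:
edac-util -v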
I have 2 Synology DS415+ units and built Synology HA with their app. The NAS units are connected to each other via eth2 (direct connect), and each NAS is connected via eth1 to the switch shared with the Proxmox nodes.
After the SHA configuration completed, I created an NFS shared folder in the NAS admin console.
Then I mounted the NFS shared folder in the Proxmox admin GUI ...
Thanks to @wolfgang, I removed the NFS storage and mounted it again with NFS version 4.1 explicitly specified, then ran the same procedure again, and now I can read/write in the VM without rebooting!
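For anyone finding this later, the same fix can be done from the CLI; the storage ID, server, and export below are placeholders, not my exact values:

# remove the old entry and re-add it with NFS 4.1 pinned
pvesm remove nfs-ha
pvesm add nfs nfs-ha --server 192.168.1.10 --export /volume1/pve --content images --options vers=4.1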
I built a lab with 2 servers and 2 Synology NAS units (DS415+ with Synology HA).
Synology HA is already configured and working.
I created an NFS share from the NAS to Proxmox, added the storage in Proxmox, and then created a VM on the NFS storage.
When the NAS cluster runs with one NAS crashed, the NFS sharing will...
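What I do to watch the share during a failover test is roughly this (the storage ID nfs-ha is a placeholder for mine):

# watch the Proxmox storage state and the NFS mounts while the failover runs
watch -n 2 'pvesm status; echo; mount -t nfs4'
# afterwards, check that the mountpoint still answers
ls /mnt/pve/nfs-ha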
I also tried to create a new VM on GlusterFS; it shows the same "all subvolumes are down" messages, but the create job completes.
[2019-06-04 08:28:45.414580] E [MSGID: 108006] [afr-common.c:4404:afr_notify] 0-datavol-replicate-0: All subvolumes are down. Going offline until atleast one of...
I added a GlusterFS storage in my Proxmox cluster (4 nodes).
The GlusterFS volume mounts fine, but when I try to clone a template onto that storage, there is always an error like this:
root@pve:~# qm clone 200 189 --name test.glusterfs --full --storage testglusterfs
create full clone of...
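If it helps anyone debug the same thing, the volume can be checked from the client side like this (datavol is the volume name from the log above, and the exact client log filename depends on the mountpoint, so that part is a guess):

# brick and peer state for the volume
gluster volume status datavol
gluster peer status
# the client-side mount log usually shows why the subvolumes are marked down
tail -n 50 /var/log/glusterfs/mnt-pve-testglusterfs.log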
Sorry, I don't know how to describe the problem; please see the picture.
I added a cron job to create a snapshot on one of my nodes at 20:00 every day (No. 2 in the picture),
and I can confirm that the date on the node is the correct time (No. 2/No. 3 in the picture).
But in the task list, for some reason, that cron job is delayed by 8...
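My guess is a timezone mismatch (an 8-hour offset smells like UTC vs UTC+8), so I would compare the clocks like this:

# compare local time, UTC, and the configured timezone on the node
timedatectl
date; date -u
# cron logs its runs with the system clock
grep CRON /var/log/syslog | tail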
I'm not trying to test or restore the backup; I'm more concerned about the backup "process" and how much resource (time/backup space) the backup job used, so that I can provide a report to the customers/departments who want to know that their VMs were really backed up.
Like I said, backing up 10 VMs on 5 nodes ...
Yes, but let's say I have a backup job that backs up 10 VMs on 5 nodes; then I will get 5 backup notification mails at different times, and I have to combine those emails into one to review what really happened.
I think there should be a more convenient way.
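In the meantime I can pull the per-node task lists over the API myself; a rough sketch (the node names are examples from my lab):

# list the latest vzdump tasks on every node of the cluster
for node in pve1 pve2 pve3 pve4 pve5; do
    pvesh get /nodes/$node/tasks --typefilter vzdump --limit 3
done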
Is it possible to add a "view last job log/status" option under Datacenter --> Backup?
When I want to review a backup job's status, I have to click through all the nodes, find the backup job task on each one, and then combine all the logs into one piece to see the real backup job status (succeeded/failed/speed/capacity),
since...
Proxmox splits a backup job across the nodes with different task IDs, and that makes it harder for me to get the complete log of a backup job.
If a backup job contains multiple VMs on multiple nodes, then I will receive multiple backup notification mails and have to determine what is the...
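The only workaround I see right now is to fetch each per-node task log by its UPID over the API; the UPID below is just a placeholder shape, not a real one:

# full log of one vzdump task, identified by its UPID
pvesh get /nodes/pve1/tasks/UPID:pve1:0000ABCD:00123456:5CF5A9A1:vzdump::root@pam:/log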
I'm sorry for giving the wrong information; Easy2Boot will not boot Proxmox as-is. You have to follow the steps here:
http://rmprepusb.blogspot.com/2014/03/add-proxmox-isos-to-easy2boot.html
I'd like to further analyze the current backup status of Proxmox VE,
so I need the log of the existing backup job.
I double-clicked the task in the web UI, then used Ctrl+A / Ctrl+C / Ctrl+V to select, copy, and paste.
But the pasted log is incomplete.
You can see the action video here...
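A workaround that avoids the UI would be to read the task log straight from the node's filesystem; I'm not 100% sure of the exact layout, but the vzdump task logs should be findable like this:

# task logs are stored under /var/log/pve/tasks/ on each node,
# with the UPID (which contains the task type) as the filename
find /var/log/pve/tasks -name '*vzdump*' | tail
# then print the whole log file, e.g.:
# cat /var/log/pve/tasks/<subdir>/<UPID>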
Let's say I have a VM with several snapshots created by pve-autosnap (https://github.com/kvaps/pve-autosnap).
For some reason I need to move the disk to another storage, so I tried using the shell to delete the snapshots first.
And then I tried:
still the same error: 400 too many arguments
so ...
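My guess is that the 400 error appears when the snapshot name is not passed as one single argument, so listing first and quoting the exact name should work; a sketch of what I would try (the VMID, snapshot name, disk, and target storage are placeholders):

# list the snapshots created by pve-autosnap, then delete one by its exact name
qm listsnapshot 100
qm delsnapshot 100 'autohourly170801120000'
# once the snapshots are gone, the disk can be moved
qm move_disk 100 scsi0 other-storage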