Search results

  1. Syncing IP's from fail2ban

    I got this to work using a daily cron job.

        root@vm2401:~# cat /root/bin/banned2proxmox.sh
        #!/bin/bash
        #
        # Sync fail2ban log files from client servers
        rsync -a root@vm1.ic4.eu:/var/log/fail2ban.log /root/bin/fail2ban-vm1.log
        rsync -a root@vm2.ic4.eu:/var/log/fail2ban.log...
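    The truncated script above only shows the rsync step; presumably a later step parses the banned IPs out of the synced logs before feeding them to the Proxmox firewall. A minimal sketch of that parsing step, assuming fail2ban's default log format (lines ending in "Ban <ip>"); the function name and sample data are illustrative, not from the original script:

    ```shell
    #!/bin/bash
    # Sketch (not the poster's full script): extract the unique banned IPs
    # from a synced fail2ban log. Assumes fail2ban's default log format,
    # i.e. lines ending in "Ban <ip>".
    banned_ips() {
      grep -oE 'Ban ([0-9]{1,3}\.){3}[0-9]{1,3}' "$1" | awk '{print $2}' | sort -u
    }

    # Demo on a sample log; the real script would run this on each rsync'd copy.
    printf '%s\n' \
      '2024-03-01 12:00:01 fail2ban.actions [sshd] Ban 192.0.2.10' \
      '2024-03-01 12:05:02 fail2ban.actions [sshd] Ban 198.51.100.7' \
      '2024-03-01 12:09:03 fail2ban.actions [sshd] Ban 192.0.2.10' \
      > /tmp/fail2ban-sample.log
    banned_ips /tmp/fail2ban-sample.log
    ```

    The resulting list would then need to be written into a Proxmox IPSET (the thread doesn't show which method the poster used for that step).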
  2. Syncing IP's from fail2ban

    It's possible to keep adding XXX1, XXX2, etc. at the bottom of the file, but as soon as the cluster firewall rules are modified, the Proxmox GUI re-sorts the new IPSET rules.
  3. Syncing IP's from fail2ban

    I have been testing my script to copy fail2ban log files to the Proxmox firewall and have managed to make it work... one time :)

        cat /root/bin/banned2proxmox.sh
        #!/bin/bash
        #
        # Sync fail2ban log files from client servers
        rsync -a root@vm1.ic4.eu:/var/log/fail2ban.log /root/bin/fail2ban-vm1.log...
  4. IPset or Security Group

    I'm trying to decide which is better for our SPAM firewall rules. What is your take on this? Which do you use?
  5. Cluster join problem

    I tried with both hostname and IP. I finally managed by first using ssh-copy-id manually before trying again with pvecm add hostname so I'm thinking the problem lies in that direction.
  6. Cluster join problem

    Both. Both failed (timed out).
  7. [SOLVED] Slow Disk IO inside VM but not Proxmox

    For myself... I never use compression on KVMs, even when they are on ZFS. It's easy to disable when you build your KVM Proxmox host. Disk space is cheap. Currently our bottleneck is RAM (because ZFS eats like it's Christmas). Lack of RAM often causes IO slowdown on VMs.
  8. Cluster join problem

    I think there is a problem with pvecm, since it can't seem to join a cluster that uses the secondary NIC and IPv6 only. All the nodes are listed in the /etc/hosts file, and all the nodes can ping each other and echo using IPv6 (using both hostname and IP). But still, every time I try to add a node to...
  9. [SOLVED] Remove disk from ZFS pool

    From the host shell, type:

        zfs unmount /sdb
        zpool destroy sdb

    After that you can use the Proxmox GUI to wipe the disk.
  10. Moving from Amavis to new and improved rspamd

    Seems that lots of people are getting tired of the constant struggle with Amavis and are jumping on the Rspamd bandwagon. I'm noticing a clear drop in resource use on every mail server that has switched away from Amavis. Any thoughts?
  11. Some Windows Server generates way too much dirty backups

    Stop picking on the good people at NSA. They are just trying to make our life simpler :) I mean safer.
  12. Quota on LXC

    Does anyone know how to get quota to work inside a Debian 11 LXC container?
  13. LXC secondary NIC

    That is a good rule of thumb. Always blame IPv6.
  14. Simple reset script - ping

    Legacy containers (with multiple vulnerabilities) that can't easily be upgraded. We usually just dump them on one Proxmox host and leave them to die a slow death.
  15. Simple reset script - ping

    Most of our VMs are LXC containers.
  16. Simple reset script - ping

    The Datacenter menu should include Scripts. Something simple that could be saved as a .sh script under the /etc/pve/scripts folder.
  17. no storage ID

    My guess is that when you move a drive from storage to storage it gets registered somewhere (via storage), and that info is used when migrating the container. When the old drive is missing, the migration is aborted.
  18. no storage ID

    I don't know where this 'subvol-174-disk-0' (via storage) came from. The container only had disk-1 active (or visible anywhere). So I just backed up the container and restored it on the new node. But now the container has this drive: subvol-174-disk-2
  19. no storage ID

    I just created a new qemu "node" with a vdd drive (ZFS) that I created using the Proxmox GUI. I had the "Add to Storage" option checked when I created the ZFS drive. After that I joined the new node to the cluster and tried to move an LXC container to it.

        2022-03-04 17:22:18 starting migration of CT 174 to node...
  20. no storage ID

    But this is not the reason the first container migration failed, because it never had more than one drive.