Search results

  1. fail2ban on PBS 3

    Yep, that was it. Thanks for the help.
  2. fail2ban on PBS 3

    Interesting. /var/log/auth.log didn't exist on my PBS 3 installation until I created it, and SSHD isn't writing to it. /etc/ssh/sshd_config has the default config for logging (SyslogFacility AUTH). I have no idea where SSHD is writing its logs!
  3. fail2ban on PBS 3

    Thanks Dominik. There are 3 nodes each accessing several datastores on PBS so that would explain it. I'm not familiar with "using api tokens instead of the root user directly" but I'll have a look into it. Anyway I'm not particularly concerned about it and log rotation seems to be working...
  4. fail2ban on PBS 3

    Actually the failure is recorded in /var/log/proxmox-backup/api/auth.log: "authentication failure; rhost=[::ffff:10.100.0.10]:60267 user=fake@pam msg=user account disabled or expired." I must have missed it the first time - the log file is huge. When I follow the log with tail I see it's...
  5. fail2ban on PBS 3

    Hello all, I've followed the guide at https://github.com/inettgmbh/fail2ban-proxmox-backup-server to set up fail2ban for PBS. Couple of issues with this, but the one I'm currently trying to sort out is why failed attempts via the PBS gui aren't recorded in...
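
    For reference, a minimal sketch of what a fail2ban filter and jail for PBS might look like, built around the auth.log line quoted earlier in the thread. The file paths, jail name, and regex are assumptions, not taken from the linked guide, and the failregex should be verified against actual log lines before relying on it:

    ```ini
    # /etc/fail2ban/filter.d/proxmox-backup.conf  (assumed path/name)
    # Matches lines like:
    #   authentication failure; rhost=[::ffff:10.100.0.10]:60267 user=... msg=...
    [Definition]
    failregex = authentication failure; rhost=\[?(?:::ffff:)?<HOST>\]?
    ignoreregex =

    # /etc/fail2ban/jail.d/proxmox-backup.conf  (assumed path/name)
    [proxmox-backup]
    enabled  = true
    port     = 8007
    filter   = proxmox-backup
    logpath  = /var/log/proxmox-backup/api/auth.log
    maxretry = 3
    bantime  = 3600
    ```

    After editing, `fail2ban-regex /var/log/proxmox-backup/api/auth.log /etc/fail2ban/filter.d/proxmox-backup.conf` is a quick way to confirm the filter actually matches.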
  6. 5 Node Proxmox Cluster, 4 Node CEPH

    Thanks for the tip, but I won't be running any VMs on node 5. All the VMs will be (are) on CEPH SSDs on nodes 1 - 4. The HDD storage is primarily for Proxmox Backup Server datasets, but will also provide shared HDD storage over NFS for VM bulk data disks. Using zfs send to sync the pools...
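
    The "zfs send to sync the pools" workflow mentioned above can be sketched as snapshot-based incremental replication. The pool, dataset, and host names here are purely illustrative:

    ```shell
    # Initial full replication of a snapshot to the bulk-storage node
    # (dataset "tank/backup" and host "node5" are hypothetical names):
    zfs snapshot tank/backup@sync-1
    zfs send tank/backup@sync-1 | ssh node5 zfs receive -F hddpool/backup

    # Later runs send only the delta between the last two snapshots:
    zfs snapshot tank/backup@sync-2
    zfs send -i @sync-1 tank/backup@sync-2 | ssh node5 zfs receive hddpool/backup
    ```

    Incremental sends transfer only changed blocks, which keeps repeated syncs of large pools cheap compared with a full copy each time.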
  7. 5 Node Proxmox Cluster, 4 Node CEPH

    Ah yes, will run zfs on these.
  8. 5 Node Proxmox Cluster, 4 Node CEPH

    I've got half a dozen Seagate IronWolf HDDs in an R730xd. The IronWolfs aren't particularly quick, but they have been pretty reliable so far. The server takes 12 x 3.5" drives and I'll likely fill the remaining bays with some Dell 3TB SAS drives that are lying around.
  9. 5 Node Proxmox Cluster, 4 Node CEPH

    That makes sense. Thanks for the feedback gurubert.
  10. 5 Node Proxmox Cluster, 4 Node CEPH

    Hi All, I've read a few posts about running an even number of nodes in a Proxmox cluster and issues with split brain. I'm building a 5 node Proxmox cluster where 4 of the nodes have SSDs and the 5th node has 3.5" HDDs for bulk storage. So, the 4 x SSD nodes will run CEPH and all 5 nodes will...
  11. [TUTORIAL] How to create Windows cloudinit templates on proxmox 7.3 (PATCH INCLUDED)

    Answering my own question here, it looks like the only way to go back to 7.3.3 is to reinstall all the nodes. I'm up for it, but I can't find the installer anywhere. Does anyone know where I can get an iso to install 7.3.3? Thanks
  12. [TUTORIAL] How to create Windows cloudinit templates on proxmox 7.3 (PATCH INCLUDED)

    Me too! I really need a version for 7.4.3. Alternatively, does anyone know how to downgrade Proxmox to 7.3.3?
  13. HP NC523SFP - driver bug UBSAN: shift-out-of-bounds

    Just wondering if this was ever solved? I have a few Dell QL8272s which I believe are the same card and have the same issue; i.e. they work but produce the out-of-bounds error. I found a thread at https://github.com/liamalxd/kmod-qlcnic which has instructions for compiling a new driver for...
  14. Chelsio/pfSense SR-IOV Passthrough Issue

    Actually I think pfSense is built on 12.2-STABLE.
  15. Chelsio/pfSense SR-IOV Passthrough Issue

    Just to update this. No solution with the Chelsio cards, but SR-IOV passthrough works fine with an Intel X710-DA2. So, noting again that SR-IOV passthrough works fine with native FreeBSD, OPNsense and Linux distros, this appears to be a pfSense issue with the Chelsio cards. I've given up for the...
  16. Chelsio/pfSense SR-IOV Passthrough Issue

    Hi All, Hoping someone can tell me the obvious thing that I'm missing! I'm running dual Chelsio T520-CRs in a Dell R620 with PVE 7.1. I have a pfSense VM on the host and am passing through the T520's. All works fine if I pass through the whole card, but if I pass through an SR-IOV VF...
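
    For anyone trying to reproduce the setup above, a rough sketch of the VF-passthrough steps on the PVE host follows. The interface name, PCI address, VF count, and VM ID are all assumptions for illustration:

    ```shell
    # Create 4 virtual functions on the Chelsio port
    # (interface name "enp4s0f4" is hypothetical; check `ip link`):
    echo 4 > /sys/class/net/enp4s0f4/device/sriov_numvfs

    # Find the VF PCI addresses:
    lspci | grep -i chelsio

    # Pass one VF through to the pfSense VM
    # (VM ID 100 and address 0000:04:01.0 are hypothetical):
    qm set 100 -hostpci0 0000:04:01.0
    ```

    Note that `sriov_numvfs` resets on reboot unless it is re-applied, e.g. via a systemd unit or udev rule.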
  17. Backup thin provisioned VM to NFS

    Thanks oguz. The last post in this thread suggests that NFS above 4.1 will allow trim, at least for VMs running on the share. I might give it a try but my own experience moving thin provisioned disks around is that they get expanded. I'm pretty sure I could use zfs over iscsi but believe I...
  18. Backup thin provisioned VM to NFS

    Hi All, I've been toying with Proxmox for a few years but this is my first post to this forum. So greetings. I have a question in relation to backing up a thin provisioned disk to an NFS share, more specifically a VM in a local ZFS pool to an ext4 NFS share. Will the disk be expanded to its...