Search results

  1. Please help with my current zpool

    hmmm... I wonder if you had a once-off event that got "fixed"?? Check/report on the SMART values for those two HDDs you say are failing...
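
A hedged sketch of that SMART check with smartmontools (the attribute filter and the example device name are assumptions, not from the post):

```shell
# check_smart filters a `smartctl -A` dump down to the attributes that
# would betray a once-off media event: reallocated / pending /
# uncorrectable sectors and interface CRC errors.
check_smart() {
    printf '%s\n' "$1" | grep -Ei 'realloc|pending|uncorrect|crc'
}

# On the host you would feed it live data per suspect drive, e.g.:
#   check_smart "$(smartctl -A /dev/sda)"   # device name is an example
```

Non-zero raw values on the filtered attributes (plus `smartctl -H` for the overall verdict) are the usual sign of such an event.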
  2. Please help with my current zpool

    nope, the setup has been like a "stripe" from the moment you attached the 2nd vdev (that being raidz1-1), and that means that if a vdev fails, ALL the data in the zpool is basically... gone (save for your backups). In short, the zpool will balance the data, to an extent, so that (on average) both...
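
The striping behaviour described above can be illustrated with a hypothetical pool (pool and disk names are made up; a sketch only, not to be run against disks holding data):

```shell
# Two raidz1 vdevs in one pool: ZFS stripes writes across BOTH vdevs.
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd    # vdev raidz1-0
zpool add    tank raidz1 /dev/sde /dev/sdf /dev/sdg    # vdev raidz1-1
# Each raidz1 vdev tolerates ONE failed disk. But if a whole vdev is
# lost (two disks failing in the same vdev), the entire pool is gone,
# because roughly half of every dataset's blocks live on each vdev.
zpool status tank    # shows both vdevs side by side under "tank"
```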
  3. PBS Location &amp; off-site strategy

    depending on your requirements, you could still sync, but for example only leave the last 2 or so on those with limited storage
  4. PVE to PBS point to point 10G Fiber high I/O delay

    replace with CMR drives? Hendrik's rules of computing: 1) Make a backup 2) Make *ANOTHER* backup 2b) At least one backup *off* provider (ie. a totally different DC/owner/etc.) 3) *CHECK* those backups Sorry, but it looks like those drives aren't in a good state w.r.t. ZFS.
  5. Backing up of MinIO (S3 compatible)

    Anybody around here doing actual backups of the MinIO storage? I've contemplated using PVE->PBS backups of the nodes (currently I deploy using LXCs) but I'm concerned w.r.t. the sequential nature of the node backups. Which then brings me to the rclone mount type backups, ie. mount the bucket...
  6. PBS Location &amp; off-site strategy

    I've found the namespaces... not that "great" (though I am using them in some cases), and have opted rather to do separate datastores for my "DCs"/clusters, even when on the same ZFS pool; at least I get better indications of the storage usage for each DC (which you won't easily see in PBS...
  7. PVE to PBS point to point 10G Fiber high I/O delay

    seems like that might be the clue that you are using SMR and not CMR drives.
  8. PVE to PBS point to point 10G Fiber high I/O delay

    1) check the disk models whether they are CMR or shingled(sp?)/SMR - the shingled drives (typically those with like 128MB-256MB or more RAM/cache) are known to break horribly with ZFS 2) personally, I'd rather set up a DRAID2 or a RaidZ3 given your number of disks *AND* have SSD/NVMe SLOG and...
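
Both steps can be sketched roughly as follows (device names, the 12-disk count, and the dRAID/SLOG layout are assumptions for illustration; a sketch, not to be run against disks holding data):

```shell
# 1) List the drive models so they can be checked against the vendor's
#    CMR/SMR datasheets; some drive-managed SMR disks also report a
#    "Zoned" capability in their identify data.
lsblk -d -o NAME,MODEL,SIZE
smartctl -i /dev/sda | grep -i 'model\|zoned'    # example device

# 2) With (say) 12 disks: a dRAID2 with one distributed spare, instead
#    of striping several raidz1 vdevs, plus a mirrored SLOG for
#    sync-write latency.
zpool create tank draid2:1s /dev/sd[a-l]
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1
```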
  9. Custom IPAM plugins - NIPAP

    That is what I am aiming for, ie. Layer3 evpn/vxlan and remove the switching network, but the FortiGateVM being the gateway, not the PVE hypervisors
  10. Custom IPAM plugins - NIPAP

    Thank you! Having those entries added in is a good starting point to check/do things. What I am getting/doing just... might not work in the SDN setup, as I don't "want" the PVEs to be the routers/gateways, but a dedicated VM (FortiGate/etc.); but still, thank you for the pointers to consider
  11. Proxmox Backup Server 2.4 available

    the emphatic word there that *I* want from S3: like tape does now
  12. Custom IPAM plugins - NIPAP

    Good day, is there perhaps a template/documentation/examples for implementing a custom IPAM for the ProxMox SDN side? I'm looking at https://spritelink.github.io/NIPAP/ for IPAM as we roll out the next steps of our network, so obviously the question comes in w.r.t. PVE not (yet) supporting...
  13. linux bridge vs ovs Bridge

    The last time *I* used pfSense/OPNsense, the reason you disabled the hardware acceleration and did not want to use the virtIO nics, was that DHCP/UDP had troubles with the lack of checksums added to the packets, and then certain parts would drop the packets, so the need was to stick with E1000...
  14. [SOLVED] Is verify task needed with ZFS-backed datastore ?

    The case where I used that verification was when I needed to bootstrap a remote ('cross-Atlantic) PBS, and the single-connection synchronization was just... plain molasses on a freezingly cold winter's day... The easiest was to spin up multiple rsync sessions, one for each of the directories...
  15. [SOLVED] Devuan 4.0 template and tmux - UNprivileged container failure with non-root user

    Okay, seems I've found the "culprit". These are the values set inside the Devuan 4.0 container: /proc/1/task/1/mounts:devpts /dev/pts devpts rw,nosuid,noexec,relatime,mode=600,ptmxmode=000 0 0 And for a Debian container: devpts /dev/ptmx devpts...
  16. [SOLVED] Devuan 4.0 template and tmux - UNprivileged container failure with non-root user

    Good day, I've been deploying Devuan 4.0 images (PVE 7.4) for the past 4 months and had noticed `byobu` strangeness, but only this week did it cause problems serious enough that I had to get to the bottom of it. I've tried all the settings in the GUI panel for UNprivileged containers, and eventually...
  17. cloud-init unable to run on Debian 11 cloud image

    that, when you look at the replied posts: using the PVE 7.4 GUI to configure cloud-init for Debian 12 cloud-init images, it doesn't configure what's needed and the VM doesn't boot right
  18. cloud-init unable to run on Debian 11 cloud image

    similar troubles using the ProxMox 7.4 GUI interface with the debian-12-genericcloud-amd64.qcow2
  19. custom pre/post-scripts/hooks for ACME renewals (not plugins, but firewall etc. related)

    I'm in need of executing a script that allows traffic through the firewall and opens port 80 inbound to the PVE (and next the PBS), and then, once done, closes the ports etc. Is there a current way to do this in PVE 7.x?
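
Absent a built-in pre/post hook, one workaround is to wrap the renewal in a script run from cron on the PVE host (a sketch only; it assumes iptables and the standard `pvenode acme cert order` command, and must run as root):

```shell
#!/bin/sh
# Open port 80 only for the duration of the ACME order, then remove
# the rule again, whatever the outcome of the renewal.
open_http()  { iptables -I INPUT -p tcp --dport 80 -j ACCEPT; }
close_http() { iptables -D INPUT -p tcp --dport 80 -j ACCEPT; }

open_http
trap close_http EXIT        # guarantee the port closes again on exit
pvenode acme cert order     # the PVE-side renewal (PBS would differ)
```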