Recent content by einsibjani

  1. 25-30 TB, HDD too slow

    Yes, that would solve most of our problems :) I did re-create our datastores, this time adding the special vdevs at datastore creation, and first impressions are good. Listing backups is faster than before, and the random sync errors are gone. The first verify & garbage collect jobs...
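
    As a sketch of that approach (building the pool with the special vdev in place so all metadata lands on flash from day one), with placeholder device names and paths:

    ```
    # Device names and paths below are placeholders.
    # Three striped HDD mirrors plus a mirrored NVMe special vdev:
    zpool create backup \
        mirror /dev/sda /dev/sdb \
        mirror /dev/sdc /dev/sdd \
        mirror /dev/sde /dev/sdf \
        special mirror /dev/nvme0n1 /dev/nvme1n1

    # Dataset and PBS datastore on top of the new pool
    zfs create backup/store1
    proxmox-backup-manager datastore create store1 /backup/store1
    ```
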
  2. 25-30 TB, HDD too slow

    Yes, that would be very interesting. We could do something today with two datastores, one on SSDs and one on HDDs, and a sync job. One downside is that to restore from the tier-2 datastore, you would have to add both datastores to PVE. That's not a big problem; we already do something similar where we...
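
    A hedged sketch of that two-tier layout with a pull sync job; remote name, credentials and datastore names are invented, and flag names can differ between PBS versions:

    ```
    # All names, credentials and the fingerprint below are placeholders.
    # On the tier-2 (HDD) server: register the primary PBS as a remote
    proxmox-backup-manager remote create pbs-primary \
        --host pbs1.example.com \
        --auth-id sync@pbs \
        --password 'xxx' \
        --fingerprint '64:d3:...'

    # Pull its SSD datastore into the local tier-2 datastore daily
    proxmox-backup-manager sync-job create tier2-pull \
        --store tier2 \
        --remote pbs-primary \
        --remote-store store1 \
        --schedule daily
    ```
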
  3. Special Device on Existing PBS

    I recently added a special vdev to an existing pool, and after monitoring it for a couple of days and seeing ~10 GB of data on the special dev for a 20 TB datastore, I thought "that's fine, all the data in the datastore will get re-written eventually and all the metadata will be on the special...
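
    For anyone watching the same thing: the per-vdev listing shows how much has actually landed on the special mirror, and small records can be steered there too (pool name assumed):

    ```
    # Pool name 'backup' is a placeholder.
    # The special mirror reports its own ALLOC/FREE columns:
    zpool list -v backup

    # Optionally route small records (e.g. <= 4K) to the special vdev;
    # like metadata, this only applies to newly written data
    zfs set special_small_blocks=4K backup
    ```
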
  4. 25-30 TB, HDD too slow

    I was thinking I could get away with smaller SSD storage for the latest backups, and tape for longer-term retention. With de-duplication and compression, the storage needed on SSD might not be that much less than having incremental snapshots all on SSD. You're right, I added the special dev to an already...
  5. 25-30 TB, HDD too slow

    I have two servers running PBS. Three clusters back up to the first PBS server, and the other PBS server syncs backups from the first one. PVE does hourly backups of 10-20 VMs, staggered between clusters so they're not backing up at the same time. Both servers have 6x10 TB HDDs in ZFS "RAID10"...
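
    On PVE 6.x those jobs live as cron lines in /etc/pve/vzdump.cron, so staggering clusters is just an offset minute field. A sketch with assumed storage names:

    ```
    # Storage name 'pbs1' is a placeholder.
    # Cluster A: hourly backups on the hour
    0 * * * * root vzdump --all 1 --mode snapshot --quiet 1 --storage pbs1

    # Cluster B's file uses the half hour so the jobs never overlap
    30 * * * * root vzdump --all 1 --mode snapshot --quiet 1 --storage pbs1
    ```
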
  6. Backup Slow Listings

    I have two servers, each with 6x10 TB HDDs in RAID10 ZFS (three mirrors striped). 30 TB total usable, 15 TB currently used. It works, but as you can imagine, verification and garbage collection take a long time. I was considering buying PCIe M.2 cards and adding a mirror of NVMe drives to use...
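
    Adding that NVMe mirror as a special vdev would look roughly like this (device names assumed; metadata already on the HDDs stays there until it is rewritten):

    ```
    # Device names are placeholders.
    # Attach a mirrored special vdev to the existing pool:
    zpool add backup special mirror /dev/nvme0n1 /dev/nvme1n1

    # If zpool refuses due to mismatched ashift, set it explicitly:
    # zpool add -o ashift=12 backup special mirror /dev/nvme0n1 /dev/nvme1n1
    ```
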
  7. RBD mirroring for disaster recovery

    Hi, I've posted before about this (https://forum.proxmox.com/threads/ceph-mirroring-between-datacenters.66890/#post-300712). We're still working out our DR plans. I'm a little spooked by reports of greatly reduced I/O on Ceph when using journal-based mirroring. We have two identical 3-node...
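
    Snapshot-based mirroring sidesteps the per-write journal overhead, though it requires Ceph Octopus or newer. A rough sketch with assumed pool and image names:

    ```
    # Pool and image names are placeholders.
    # Per-image mirroring on the pool, snapshot mode for one image:
    rbd mirror pool enable rbd image
    rbd mirror image enable rbd/vm-100-disk-0 snapshot

    # Take mirror snapshots automatically, e.g. every 30 minutes
    rbd mirror snapshot schedule add --pool rbd 30m
    ```
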
  8. [SOLVED] Multicast not working between guests on different members

    Never mind. A misconfigured switch with an L3 interface filtered the packets.
  9. [SOLVED] Multicast not working between guests on different members

    We're running a three-node Proxmox cluster on 6.2. In the last months we've been replacing a lot of our networking gear, swapping out older Cisco switches and routers for Juniper EX3400, EX4600 and MX204. Shortly after moving our Proxmox cluster from a Cisco Nexus 5010 to a virtual chassis with...
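
    (For anyone hitting something similar: cluster multicast can be verified with omping, as in the Proxmox docs, run simultaneously on all nodes; hostnames here are placeholders.)

    ```
    # node1..node3 are placeholder hostnames.
    # Sustained loss here points at IGMP snooping/querier trouble:
    omping -c 600 -i 1 -q node1 node2 node3
    ```
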
  10. Ceph mirroring between datacenters

    Kind of what I was thinking. Have you ever had a shutdown of the primary site, either for real or as a drill?
  11. Ceph mirroring between datacenters

    Interesting. What is the plan if the primary site goes down (fire, flood, nuclear meltdown etc.)? Could you start a fourth monitor on the secondary site and get it up and running? Regarding write overhead with journaling, all the drives in the Ceph cluster(s) are NVMe SSDs.
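
    If the primary site were gone for good, the images on the secondary cluster would have to be force-promoted before VMs could start there. A hedged sketch, pool and image names assumed:

    ```
    # Pool and image names are placeholders.
    # On the surviving secondary cluster:
    rbd mirror pool promote --force rbd

    # Once the old primary returns: demote it there, then resync images
    rbd mirror pool demote rbd
    rbd mirror image resync rbd/vm-100-disk-0
    ```
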
  12. Ceph mirroring between datacenters

    I'm setting up a new Proxmox environment in two datacenters, with three nodes in each. The new setup will use Ceph as shared storage. Our old cluster is a mixed bag of servers spread across both datacenters with no shared storage. We have redundant 10G connections between the...
  13. net.bridge.bridge-nf-call-iptables and friends

    I disabled all firewalls in Proxmox and set up nftables on the hosts.
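
    A minimal sketch of such a host ruleset (ports and policy are assumptions, not the exact config):

    ```
    #!/usr/sbin/nft -f
    # Ports and policy below are assumptions, not the actual ruleset.
    flush ruleset

    table inet filter {
        chain input {
            type filter hook input priority 0; policy drop;
            ct state established,related accept
            iif "lo" accept
            tcp dport { 22, 8006 } accept  # SSH and the PVE web UI
            icmp type echo-request accept
        }
    }
    ```

    Note the input hook only filters traffic addressed to the host itself; bridged guest traffic is untouched unless the bridge-nf-call sysctls push it through the host's tables.
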
  14. net.bridge.bridge-nf-call-iptables and friends

    I have a 3-node cluster set up in production. Recently we discovered a problem where fragmented UDP packets were being dropped somewhere along the way from our VMs. Finally we tracked the culprit down: Proxmox had set net.bridge.bridge-nf-call-ip6tables = 1...
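
    For reference, the sysctls in question can be switched off persistently; that's only sensible once the built-in PVE firewall, which relies on them, is disabled as described above:

    ```
    # /etc/sysctl.d/99-bridge-nf.conf
    # Don't push bridged frames through ip(6)tables/arptables;
    # apply with `sysctl --system`
    net.bridge.bridge-nf-call-iptables = 0
    net.bridge.bridge-nf-call-ip6tables = 0
    net.bridge.bridge-nf-call-arptables = 0
    ```
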
