Recent content by jfenning

  1. ZFS Special Device

    Thanks for the info. I decided the only way to know is to change my setup from hw raid to ZFS. I've got the controller, so it's an easy changeover. Having some enterprise NVMe modules sitting on the shelf helps. I'll leave my secondary on hw raid so I can compare.
  2. ZFS Special Device

    Has anyone tried using a ZFS special device to speed up metadata on a ZFS HDD array? I run (24) 2.5" 10k SAS drives on hw raid. Performance is good, though my verify jobs take a very long time. I'm working on my next hw refresh. To test I'd have to blast my second PBS server and I was hoping... (a command sketch follows this list)
  3. Moving from Xeon to EPYC

    I've got a production Proxmox cluster with subscriptions running on Intel E5-2640 v3 CPUs. I'm looking at EPYC for my next refresh. My VMs are running the kvm64 CPU type. Am I correct that if I were to add new EPYC-based servers, I could move the VMs over to the new hosts without issues? (see the sketch after this list)
  4. Random zfs replication errors

    Have you resolved your problem? I get random replication errors, mostly during backups. Moving to PBS has helped. Today I was doing some upgrading, creating some disk I/O, and started getting them. Under normal load everything is quiet. I checked my logs as well and see nothing. Running all...
  5. Datastore Replication

    I should have added: I'm assuming that ZFS would use send/recv and a hardware-RAID-backed datastore would use rsync.
  6. Datastore Replication

    Tom, thanks for the reply. I'm aware of that, but I wanted to know what that feature uses to sync data between backup servers so I can make an informed decision. ZFS has worked the best for me, but in this case I need to consider blind swap in order to get a drive replaced in a timely manner.
  7. Datastore Replication

    I'm curious what PBS uses to sync its datastore to another unit. ZFS send, rsync, or does it depend on the file system used? (a sync-job sketch follows this list)
  8. Three node Ceph Hyperconverged

    Thanks for the input. Would going with more devices, like smaller 960G enterprise SATA SSDs at say 8-14 drives per node, work? Like the same effect as more spindles in a RAID set? I would think this would help with rebuild times as well. Is there a number of OSDs per node where scaling... (a worked capacity example follows this list)
  9. Three node Ceph Hyperconverged

    What advice would you give me on running a three-node setup? We are a small campus network of 145 employees. I'm currently running two three-node "inverted pyramids of doom". One cluster is for disaster recovery at my other data center. I would like to migrate to hyperconverged Ceph. I...
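
A minimal sketch of what item 2 asks about, assuming a pool named tank and two spare NVMe modules at /dev/nvme0n1 and /dev/nvme1n1 (hypothetical names). A special vdev holds pool metadata, so it should match the pool's redundancy, and note that it cannot be removed again from a raidz pool:

    # Add a mirrored special vdev; new metadata lands on the NVMe mirror,
    # while existing metadata stays on the HDDs until it is rewritten.
    zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

    # Confirm where the special vdev sits in the pool layout:
    zpool status tank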
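
For item 3, a sketch assuming VM ID 100 (hypothetical). kvm64 is a lowest-common-denominator virtual CPU, which is why migration between Intel and AMD hosts generally works with it; still, test with a non-production guest first:

    # Show the CPU type the VM currently uses:
    qm config 100 | grep ^cpu

    # Pin the VM to the kvm64 baseline if it isn't already:
    qm set 100 --cpu kvm64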
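
For items 5 and 7: PBS syncs datastores at the chunk level over its own HTTPS API, independent of the underlying file system, so neither ZFS send nor rsync is involved. A sketch of a pull-style sync job, assuming a remote already registered as pbs2 and a datastore named store1 on both sides (hypothetical names; check proxmox-backup-manager help for the exact options on your version):

    proxmox-backup-manager sync-job create pull-from-pbs2 \
        --store store1 \
        --remote pbs2 \
        --remote-store store1 \
        --schedule hourly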
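
For item 8, a rough worked example with assumed numbers: 3 nodes x 8 x 960 GB OSDs is about 23 TB raw, or roughly 7.7 TB usable with 3x replication, and losing a single 960 GB OSD means re-replicating at most ~1 TB spread across the 23 surviving OSDs, whereas one large OSD concentrates far more data behind a single rebuild.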
