Search results

  1. Backup storage option if solely used CEPH with Proxmox

    Benji is a fork of backy2 with a lot of enhanced features. https://github.com/elemental-lf/benji We have had it in place for almost a year now and so far it has been really solid. No issues running multiple backups at the same time and the developer has put a lot of work into exporting the...
  2. ZFS 0.8.0 Released

    Bummer, sounds like you need more vdevs to get the required IOPS to run zvols for your VMs.
  3. ZFS 0.8.0 Released

    Doesn't surprise me, IMO a 128K block size would help here and not make the disks work so hard (as you really only have the IOPS of a single disk). However, it's never going to provide that much performance. Maybe a couple hundred IOPS.
  4. ZFS 0.8.0 Released

    That is your issue then, they really can't be compared performance-wise. I doubt the CPU aspect is that big unless it's saturated. I had a feeling, as we too have seen considerably better performance with a non-zvol setup vs. zvols. The only time we have found success with zvols is having a...
  5. ZFS 0.8.0 Released

    Is that setup the same? Are you running VMs on zvols just like Proxmox? Or are you simply using the ZFS filesystem with no zvols involved on the Debian setup? We use ZFS within Proxmox in many, many setups with zero issues, but there are a lot of options configuration-wise to pin down for...
  6. ZFS 0.8.0 Released

    How does the output of "zpool iostat 1 10000000" look? Is it mostly reads or writes (post a snippet here if possible)? A 2-drive mechanical setup isn't going to provide much in regards to IOPS.
  7. ZFS 0.8.0 Released

    What does your ZFS config look like? RAIDZ? Mirror? Disk counts? Are you using the SSD for read cache or ZIL?
  8. New All flash Ceph Cluster

    From what I am reading it should be 4% of the block device size. So if I am using 7.68TB drives that would be roughly 307G. I'll probably shoot for 2x 350G partitions for the DB of each OSD and the rest can be for WAL. The 800G drives definitely make a bit more sense. Appreciate the input!
  9. New All flash Ceph Cluster

    They do have 800G versions as well. What is typically the size ratio for the WAL/DB device?
  10. ZFS Boot mirror M2

    Fantastic, I was really hoping that was addressed. Appreciate the input.
  11. ZFS Boot mirror M2

    Can Proxmox be installed onto 2x M.2 SSDs with a ZFS mirror? I know this was an issue in the past, but I haven't seen much more on it.
  12. Proxmox Supporting Ceph

    Appreciate the input. I was more talking about Proxmox support itself. Will they address anything related to Ceph when we log a ticket? Like explanations of some tuning options, or maybe assistance with a CRUSH map. Just in case something happens to me, that way my guys have some route to get...
  13. New All flash Ceph Cluster

    I am seeing quite a few examples out there where they are using cheaper SATA SSDs for OSD and high-end NVMe for journal/WAL/DB. Micron especially has a great example of this setup and it looks really solid (link is in my OP). I have changed my config a bit more. I am now going to use 1x Micron...
  14. Proxmox Supporting Ceph

    I know Proxmox supports Ceph pretty extensively. We are looking to move our production over to Ceph and wouldn't mind having some solid 3rd-party support. Proxmox support has been good to me in the past and I was wondering if that is enough or if we should consider a 3rd-party option? I am...
  15. New All flash Ceph Cluster

    I think my biggest concern is the Micron 5210's endurance. Open to other brands; also looking at the Micron 5300 Pro, which has a much better DWPD. Our writes will be primarily 4K-8K and our read/write workload should be in the 50/50 range. Doesn't really look like many others have anything...
  16. New All flash Ceph Cluster

    We are looking to build out an all-flash Ceph cluster and I wanted to get some opinions. We would use Proxmox with enterprise subscriptions. This cluster would be dedicated to Ceph with no VMs. Leaning towards an 8-node setup using the Supermicro 2113S-WTRT as the chassis with an AMD EPYC CPU...
  17. iGPU issues with proxmox 6.1 and i5 4460

    Are you just seeing a black screen? Have you installed the required driver inside Windows for the card?
  18. iGPU issues with proxmox 6.1 and i5 4460

    I also have that magic vBIOS ROM file you can try with UEFI. Unfortunately the forums don't allow me to attach this file as the extension isn't allowed. I'd have to get it to you some other way.
  19. iGPU issues with proxmox 6.1 and i5 4460

    Sounds like your install is specific to UEFI boot. Can you reinstall Windows with SeaBIOS?
  20. iGPU issues with proxmox 6.1 and i5 4460

    The only way I ever got this to work with OVMF was to provide the ROM file. Going with SeaBIOS may prove to be much easier.
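On the 128K block-size suggestion in the ZFS posts above: in Proxmox the zvol block size for a ZFS-backed storage can be set via the blocksize option in /etc/pve/storage.cfg (it only applies to newly created zvols; existing ones keep their volblocksize). A minimal sketch, assuming a hypothetical storage named tank-vm on pool tank/vmdata:

```
zfspool: tank-vm
        pool tank/vmdata
        blocksize 128k
        content images,rootdir
        sparse 1
```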
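For the ROM-file route in the iGPU thread: Proxmox looks for passthrough ROM files under /usr/share/kvm/ and references them via the romfile option on the hostpci line of the VM config. A sketch with placeholder VM ID, PCI address, and filename:

```
# /etc/pve/qemu-server/100.conf (excerpt)
hostpci0: 00:02.0,romfile=igpu-vbios.rom
```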
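The 4% rule of thumb quoted in the Ceph posts above works out as simple arithmetic; a minimal sketch (the function name is my own, and 4% is just the guideline the post cites, not a hard requirement):

```python
def bluestore_db_gb(device_size_tb: float, ratio: float = 0.04) -> float:
    """Rule-of-thumb BlueStore DB partition size: ~4% of the OSD block device.

    device_size_tb is the drive size in TB (decimal, as marketed);
    the result is in GB.
    """
    return device_size_tb * 1000 * ratio

# A 7.68 TB drive -> roughly 307 GB of DB space per OSD,
# matching the figure in the post above.
print(round(bluestore_db_gb(7.68)))  # -> 307
```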
