Recent content by Whatever

  1. After enabling CEPH pool one-way mirroring, pool usage is growing constantly and the pool could overfill soon

    Unfortunately not. I've tried to switch from journaling to snapshot replication mode but was unable to set it up with the current wiki manual: https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring (a snapshot-mode sketch follows this list)
  2. zfs 2.1 roadmap

    Thomas, thanks for the clarification. Are they (user-space 2.1.1 and kernel 2.0.6) 100% compatible? (a quick version check is sketched after this list)
  3. Very Slow Performance after Upgrade to Proxmox 7 and Ceph Pacific

    Enabling rbd_cache should help a lot. E.g., in my setup:
      [client]
      keyring = /etc/pve/priv/$cluster.$name.keyring
      rbd_cache_size = 134217728
    (a slightly fuller variant is sketched after this list)
  4. zfs 2.1 roadmap

    Are you sure that pve-kernel 5.11 (test) ships with ZFS 2.0.6? On my test node I see:
  5. Two-way mirroring CEPH cluster how-to?

    Dominik, thanks for the hint. I've already tried to do so but was facing errors and warnings in syslog related to rbd-mirror. From my perspective, the RBD mirroring solution from the PVE Wiki is only suitable for journal mirroring and not for the image (snapshot-based) one; see the two-way sketch after this list. It would be extremely useful if the Proxmox team would extend...
  6. Ceph replication setup via GUI?

    Any plans to integrate Ceph replication (RBD mirroring) functionality into the GUI, with both snapshot and journaling modes? The current wiki tutorial (https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring) covers only the journaling mode and is not fully suitable for the recent Pacific Ceph release.
  7. Ceph rbd mirroring Snapshot-based not working :'(

    Did you manage to get snapshot mirroring working with the PVE wiki howto? I'm facing the same issue and result: 1 starting_replay (see the status-check line in the sketch after this list)
  8. Two-way mirroring CEPH cluster how-to?

    In the PVE Wiki (https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring) it is written: ... Could anyone advise how to extend one-way mirroring to two-way with respect to the original PVE Wiki howto (see the sketch after this list)? Is it enough to install rbd-mirror on the master (source)? If so, is it enough to install it on one node in the source CEPH...
  9. After enabling CEPH pool one-way mirroring, pool usage is growing constantly and the pool could overfill soon

    Do I understand you correctly that two-way mirroring requires installing the rbd-mirror daemon on both sides (master and backup cluster)? However, the PVE Wiki clearly states: rbd-mirror installed on the backup cluster ONLY (apt install rbd-mirror). With PVE 6.4 I still get health: WARNING...
  10. [SOLVED] "One of the devices is part of an active md or lvm device" error on ZFS pool creation (dm-multipath)

    Yep, I made some progress indeed. Unfortunately, I didn't manage to find out what caused "Device busy"; my assumption is that it is somehow related to the ZFS import (scan?) procedure that occurs on PVE (OS) start-up (all the disks were part of another ZFS pool from a different storage without...
  11. [SOLVED] "One of the devices is part of an active md or lvm device" error on ZFS pool creation (dm-multipath)

    I'm facing an issue with creating a ZFS pool on dm-mapper devices (clean PVE 6.3). I have an HP Gen8 server with a dual-port HBA connected by two SAS cables to an HP D3700, and dual-port SAS SSD disks (Samsung 1649a). I've installed multipath-tools and changed multipath.conf accordingly (an illustrative skeleton follows this list) ...
  12. [SOLVED] PVE 6.3-4 and ZFS 2.0 ignores zfs_arc_max

    It's actually not correct! If you set zfs_arc_min to zfs_arc_max, it does not use zfs_arc_min as zfs_arc_max! It sets zfs_arc_min to the desired value and ignores the value for zfs_arc_max (so it's kept at the default: half of RAM). A modprobe sketch for setting both limits follows this list.
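
For the snapshot-replication questions (items 1 and 7), a minimal sketch of enabling snapshot-based mirroring for one image, assuming a Ceph release that supports it (Octopus or later); the pool name "rbd" and image name "vm-100-disk-0" are placeholders:

      rbd mirror pool enable rbd image                   # snapshot mode requires per-image mirroring
      rbd mirror image enable rbd/vm-100-disk-0 snapshot
      rbd mirror image snapshot rbd/vm-100-disk-0        # trigger a mirror snapshot manually
      rbd mirror image status rbd/vm-100-disk-0          # replay state, e.g. the "starting_replay" seen in item 7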
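
For the ZFS version questions (items 2 and 4), "zfs version" prints both the user-space tools and the loaded kernel module version, which is the quickest way to see whether they diverge; the version strings in the comment are illustrative only:

      zfs version                    # prints e.g. "zfs-2.1.1-pve1" and "zfs-kmod-2.0.6-pve1"
      cat /sys/module/zfs/version    # kernel module version alone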
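
A slightly fuller variant of the [client] snippet from item 3, assuming it sits in the client-side ceph.conf; rbd_cache defaults to true in recent Ceph releases and is only spelled out here for clarity:

      [client]
      keyring = /etc/pve/priv/$cluster.$name.keyring
      rbd_cache = true
      rbd_cache_size = 134217728    # 128 MiB write-back cache per image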
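
For the two-way mirroring questions (items 5, 8 and 9): two-way replication does require an rbd-mirror daemon on both clusters, since each side has to pull from the other. A sketch of the peer-bootstrap workflow, with the site names and the pool name "rbd" as placeholders:

      # on both clusters (one daemon per cluster is enough to start with):
      apt install rbd-mirror
      # on the primary cluster (site-a), create a bootstrap token:
      rbd mirror pool peer bootstrap create --site-name site-a rbd > token
      # on the backup cluster (site-b), import it; --direction rx-tx makes the peering two-way:
      rbd mirror pool peer bootstrap import --site-name site-b --direction rx-tx rbd token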
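
For the dm-multipath setup in item 11, an illustrative multipath.conf skeleton; the wwid and alias are placeholders (real WWIDs come from "multipath -ll" or "/lib/udev/scsi_id"):

      defaults {
          user_friendly_names yes
          find_multipaths     yes
      }
      multipaths {
          multipath {
              wwid  3600508b4000156d700012000000b0000
              alias mpath0
          }
      }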
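
For the ARC tuning in item 12, the usual way to pin both limits on PVE is via ZFS module options; the 4/8 GiB values are examples only. Given the behaviour described in item 12, zfs_arc_max should be set explicitly and kept larger than zfs_arc_min:

      # /etc/modprobe.d/zfs.conf (example values)
      options zfs zfs_arc_min=4294967296    # 4 GiB
      options zfs zfs_arc_max=8589934592    # 8 GiB
      # refresh the initramfs so the options apply at boot, then reboot:
      update-initramfs -u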
