Search results

  1. Storage, checking data integrity, bit rot protection

    OK, so the free space is shared among the others. I am trying to handle a case where it will be necessary to limit the space in a certain place. The more control you have over the system, the better ;) I'm trying to find any information regarding space limiting in BTRFS, but I am not sure... (see the qgroup sketch after this list)
  2. Storage, checking data integrity, bit rot protection

    I'm considering something like LVM volumes, where I could manage a volume pool using BTRFS. At first I wanted to use LVM as a layer below, but it's not recommended. My goal is to manage it as easily as possible without having to move free space around (normal partitions are just not enough for my...
  3. Storage, checking data integrity, bit rot protection

    Today I read the following: source: https://wiki.debian.org/Btrfs#Other_Warnings It will probably change my current plan into [remove the LVM layer]: disk -> mdadm (raid-1) -> BTRFS filesystem (of course, boot and EFI will be handled separately, outside of RAID and LVM)... (a minimal sketch of this layout follows the list)
  4. Storage, checking data integrity, bit rot protection

    I would like to split zpools, because: I would like to have a separate pool for the OS, VM images, and VM data; the disks have different sizes, which potentially complicates the RAIDZ configuration; flexibility with moving from smaller to bigger pools and with backup procedures; less impact on performance (greater...
  5. Storage, checking data integrity, bit rot protection

    I am in the middle of a migration to a ZFS pool. I have copied as much data as possible to a second location (unfortunately, the second data location is smaller). I have read a lot of documentation and forum threads and I am doing the following steps (of course, it is risky for non-backed-up data). I hope it will be useful...
  6. Storage, checking data integrity, bit rot protection

    Thank you very much for the help. I performed tests in my environment and it seems to be quite efficient compared to the previous solution (mdadm + LVM) - I think the following optimizations and the ARC do the job in my case (see the zfs.conf sketch after this list): /etc/modprobe.d/zfs.conf # Description: Minimum ARC size limit. When the ARC...
  7. Storage, checking data integrity, bit rot protection

    Thank you @Dunuin for the answers and explanations. I really appreciate that! Regarding this, I am trying to avoid using another solution, because I would need to set up separate hardware and separate configuration layers. I am going to achieve something like this: if I need data integration...
  8. Storage, checking data integrity, bit rot protection

    I found the answer to my questions. I needed to understand the ZFS architecture first and compare it to the solutions I already know: block devices -> VDEV -> ZPOOL -> DATASET. A VDEV is similar to mdadm, because this is where I will set up a potential raidz solution; a ZPOOL is similar to an LVM volume group; a DATASET is similar to... (a short command sketch of these layers follows the list)
  9. Storage, checking data integrity, bit rot protection

    Thank you @LnxBil and @Dunuin for your answers. @Dunuin, could you write more about typical ZFS administration in the case of RAID1 and RAID6? Currently I have the following architecture: block devices -> mdadm (raid1/raid6) -> LVM -> VM side [LUKS -> EXT4] and block devices... (a sketch of this stack follows the list)
  10. Storage, checking data integrity, bit rot protection

    Hi all, for two weeks I have been searching the market for a good solution to the problem of data integrity, but I still can't decide due to insufficient experience. I think it will be best to write a new thread to share it. I believe someone has had a similar problem and I hope it...
  11. [LAB] Ceph architecture, performance and encryption

    Hi all, I am still looking for the best solution, so let me refresh the topic. I have updated my first question: Is my idea worth implementing? Will it have a significant impact on performance or cause additional problems (which I don't know about yet :) )? I have tried to compare performance...
  12. [LAB] Ceph architecture, performance and encryption

    Hi all, I am trying to improve my home lab and move from local LVM to NFS/iSCSI or one of the distributed file systems. I have compared GlusterFS, Ceph, and other object storage solutions like MinIO/SeaweedFS with NFS/iSCSI, and the most appropriate for me seems to be Ceph due to compatibility with...
  13. Proxmox Offline Mirror released!

    Done. Thank you! Best Regards
  14. Proxmox Offline Mirror released!

    Hi all, I have performed thorough tests of Proxmox Offline Mirror for all the Debian-based repositories I use. I have tried to replace my current tool, Aptly. Unfortunately, I have experienced the following similar issue in almost all cases. Example error message: Verifying 'Release(.gpg)'...
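
Item 1 asks about limiting space in BTRFS; the usual mechanism for this is quota groups (qgroups) on subvolumes. A minimal sketch, assuming a hypothetical filesystem mounted at /mnt/data with a subvolume named subvol1 and a 100G cap (the paths and the size are illustrative, not taken from the thread):

    # Enable quota accounting on the mounted BTRFS filesystem
    btrfs quota enable /mnt/data
    # Cap the referenced data of the subvolume at 100 GiB
    btrfs qgroup limit 100G /mnt/data/subvol1
    # Show usage and limits for the qgroups affecting that subvolume
    btrfs qgroup show -reF /mnt/data/subvol1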
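
Item 3 describes the planned layout disk -> mdadm (raid-1) -> BTRFS. A minimal sketch of that stack, assuming two hypothetical data partitions /dev/sda2 and /dev/sdb2 (boot and EFI kept outside the array, as the post says):

    # Assemble a two-disk RAID-1 array from the data partitions
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    # Put a single BTRFS filesystem on top of the md device and mount it
    mkfs.btrfs -L data /dev/md0
    mount /dev/md0 /mnt/data

One trade-off of this layering: BTRFS checksums can detect corruption, but with only one copy of the data visible on a single md device it generally cannot self-repair the way a native BTRFS RAID-1 profile can.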
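
Item 6 quotes /etc/modprobe.d/zfs.conf with ARC size limits. A sketch of what such a file typically looks like; the 4 GiB / 8 GiB values below are illustrative placeholders, not the values from the thread:

    # /etc/modprobe.d/zfs.conf
    # Minimum ARC size limit (bytes) - here 4 GiB
    options zfs zfs_arc_min=4294967296
    # Maximum ARC size limit (bytes) - here 8 GiB
    options zfs zfs_arc_max=8589934592

After editing the file, the new limits apply on the next module load; on Debian-based systems refreshing the initramfs (update-initramfs -u) and rebooting is the usual way to make them stick.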
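
Item 8 maps the ZFS layers (block devices -> VDEV -> ZPOOL -> DATASET) onto mdadm and LVM. A short command sketch of those layers, with hypothetical disk, pool, and dataset names chosen only for illustration:

    # VDEV + ZPOOL: three disks grouped into one raidz1 vdev inside pool "tank"
    #   (the vdev plays roughly the role of an mdadm array)
    zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd
    # DATASET: carved out of the pool much like a logical volume in an LVM group,
    #   with its own properties such as compression and quota
    zfs create -o compression=lz4 -o quota=200G tank/vmdata
    # Show the pool and its datasets
    zfs list -r tank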
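
Item 9 lists the current stack block devices -> mdadm (raid1/raid6) -> LVM -> VM side [LUKS -> EXT4]. A sketch of that stack in commands, with hypothetical device names, volume group, and sizes; the last steps run inside the guest:

    # Host: RAID-6 array over four disks, then an LVM volume group on top
    mdadm --create /dev/md1 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    pvcreate /dev/md1
    vgcreate vg_vm /dev/md1
    lvcreate -L 50G -n vm-disk-0 vg_vm
    # Guest: the attached volume is encrypted with LUKS and formatted with EXT4
    cryptsetup luksFormat /dev/vda
    cryptsetup open /dev/vda cryptdata
    mkfs.ext4 /dev/mapper/cryptdata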
