OK, so the free space is shared. I am trying to handle a case where it will be necessary to limit the space in a certain place. The more control you have over the system, the better ;)
I'm trying to find information regarding space limiting in BTRFS, but I am not sure...
I'm considering something like LVM volumes, where I could manage a volume pool using BTRFS. At first I wanted to use LVM as a layer below, but it's not recommended. My goal is to manage it as easily as possible without having to move free space around (normal partitions are just not enough for my...
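For reference, this is roughly how per-subvolume space limits can be set with btrfs qgroups; the mount point, subvolume name and size below are only placeholders:
btrfs quota enable /mnt/pool
btrfs subvolume create /mnt/pool/projects
# limit the subvolume's qgroup to 50 GiB of referenced data
btrfs qgroup limit 50G /mnt/pool/projects
btrfs qgroup show /mnt/pool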
Today I have read the following:
source: https://wiki.debian.org/Btrfs#Other_Warnings
It will probably change my current plan to:
[remove the LVM layer] disk -> mdadm (RAID-1) -> BTRFS filesystem (of course, boot and EFI will be handled separately, outside of RAID and LVM)...
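Roughly what that layout would look like on the command line (device names and the mount point are only examples, not my real partitioning):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mkfs.btrfs -L rootfs /dev/md0
mount /dev/md0 /mnt/target
# boot and EFI partitions stay outside the array, as noted above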
I would like to split the zpools (a rough layout sketch follows this list), because:
I would like to have separate pools for the OS, VM images and VM data
The disks have different sizes, which potentially complicates RAIDZ configuration
More flexibility when moving from smaller to bigger pools and for backup procedures
Less impact on performance (greater...
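Purely as an illustration of this split (pool names, devices and vdev layouts are assumptions on my side):
zpool create -o ashift=12 rpool mirror /dev/sda2 /dev/sdb2            # OS
zpool create -o ashift=12 vmpool mirror /dev/nvme0n1 /dev/nvme1n1     # VM images
zpool create -o ashift=12 datapool raidz /dev/sdc /dev/sdd /dev/sde   # VM data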
I am in the middle of migrating to a ZFS pool. I have copied as much data as possible to a second location (unfortunately, the second location is smaller). I have read a lot of documentation and forum threads and I am doing the following steps (of course, it is risky for the non-backed-up data). I hope it will be useful...
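Not my literal step list, just a minimal sketch of the kind of sequence I mean, with made-up pool and path names:
zpool create -o ashift=12 tank mirror /dev/sdc /dev/sdd
zfs create tank/data
# copy the data back from the (smaller) second location
rsync -aHAX --info=progress2 /mnt/second-location/ /tank/data/
# let ZFS verify the checksums of everything that was written
zpool scrub tank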
Thank you very much for the help. I performed tests in my environment and it seems to be quite efficient compared to the previous solution (mdadm + LVM) - I think the following optimizations and the ARC do the job in my case:
/etc/modprobe.d/zfs.conf
# Description: Minimum ARC size limit. When the ARC...
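For completeness, a typical /etc/modprobe.d/zfs.conf looks roughly like this; the byte values below are only placeholders and have to be sized for the available RAM:
# Description: Minimum ARC size limit (here 2 GiB).
options zfs zfs_arc_min=2147483648
# Description: Maximum ARC size limit (here 8 GiB).
options zfs zfs_arc_max=8589934592
The values only take effect after the zfs module is reloaded (or, when ZFS is loaded from the initramfs, after update-initramfs -u and a reboot).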
Thank you @Dunuin for the answers and explanations. I really appreciate it!
Regarding this, I am trying to avoid using another solution, because I would need to set up separate hardware and separate configuration layers. What I am trying to achieve is something like this: if I need data integration...
I found the answer to my questions. I needed to understand the ZFS architecture first and compare it to the solutions I already know (a short command sketch follows the list):
block devices -> VDEV -> ZPOOL -> DATASET
A VDEV is similar to mdadm, because this is where I would set up a potential raidz layout
A ZPOOL is similar to an LVM volume group
A DATASET is similar to...
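A short sketch of how those layers show up in the commands (pool name, vdev layout and dataset names are just examples):
# one pool built from a single raidz2 vdev (the vdev plays the mdadm role)
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd
# datasets are carved out of the pool, a bit like LVs in an LVM volume group
zfs create tank/vmdata
zfs set quota=200G tank/vmdata
zfs list -r tank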
Thank you @LnxBil and @Dunuin for your answers.
@Dunuin, could you write more about typical ZFS administration in the case of RAID1 and RAID6? Currently I have the following architecture (a rough sketch of a ZFS equivalent follows the list):
block devices -> mdadm (raid1/raid6) -> LVM -> VM side [ LUKS -> EXT4] and
block devices...
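My rough understanding of the ZFS counterpart, just as a sketch (pool names, disk counts and the zvol name are assumptions):
# RAID1 counterpart: a mirrored vdev
zpool create vmpool mirror /dev/sda /dev/sdb
# RAID6 counterpart: a raidz2 vdev (two disks of parity)
zpool create datapool raidz2 /dev/sdc /dev/sdd /dev/sde /dev/sdf
# a zvol the VM could put its own LUKS + EXT4 on, similar to the current LVM LV
zfs create -V 100G vmpool/vm-101-disk-0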
Hi All
For two weeks I have been searching the market for a good solution to the problem of data integrity, but I still can't decide due to insufficient experience. I think it is best to write a new thread to share it. I believe someone has had a similar problem and I hope it...
Hi All
I am still looking for the best solution, so let me refresh the topic.
I am updating my first question: Is my idea worth implementing? Will it have a significant impact on performance or cause additional problems (that I don't know about yet :) )?
I have tried to compare performance...
Hi All
I am trying to improve my home lab and move from local LVM to NFS/iSCSI or one of the distributed file systems. I have compared GlusterFS, CEPH and other object storage solutions like MinIO/SeaweedFS with NFS/iSCSI, and the most appropriate for me seems to be CEPH due to its compatibility with...
Hi All
I have performed thorough tests of Proxmox Offline Mirror for all the Debian-based repositories I use, trying to replace my current tool, Aptly. Unfortunately, I have experienced the following similar issue in almost all cases.
Example error message:
Verifying 'Release(.gpg)'...