Search results

  1. PVE replication and ZFS Snapshot

    Hello, sorry for my late answer and thank you for your clarification. I tested several solutions, including saving the snapshot (full, incremental and "differential") to a file and sending it to software like borg or restic. Unfortunately, these tools deduplicate only at file level and not...
  2. PVE replication and ZFS Snapshot

    Hello everyone, I'm trying to figure out how to replicate a VM between two nodes and take a snapshot as a backup (see the zfs send/receive sketch after these results). Example: Node-1 and Node-2, VM-100 running on Node-1, VM-100 replicated on Node-2. If I take a snapshot on Node-1, will it be replicated on Node-2? NODE-1 NAME...
  3. Help me to understand the used space on ZFS

    Hello, thank you for the explanation. I'm starting to figure out how ZFS works. Today I ran several tests on space efficiency; maybe they could help someone else. I created 3 pools, 2x single vdev + 1x RAIDZ1. 1st pool Name: hc0 RAID: single vdev (1x 1TB Hitachi) Disk Block size: 512n...
  4. Help me to understand the used space on ZFS

    Yesterday I did some tests with containers (Debian 9.4). I cloned the VMs with different recordsizes and got similar results: a higher recordsize results in better compression (see the recordsize comparison sketch after these results): NAME USED LUSED REFER LREFER VOLSIZE RECSIZE VOLBLOCK COMPRESS REFRATIO...
  5. [SOLVED] Advise for zfs over iscsi

    Ok, thanks for the suggestion! The pool will be used mainly for the NAS VM and the other containers/VMs. In the NAS VM the files are mainly read/streamed. What do you mean by "copy the same file in the same time"? Can I have an example? Thanks
  6. Help me to understand the used space on ZFS

    Ok, got it! In my case the files don't change much, except for the small files from my personal cloud, but those are really only a few gigabytes. I noticed that starting from 32k the result is almost the same. What pros/cons are there to using 32k/64k instead of 1M? If I understood correctly, using a...
  7. Help me to understand the used space on ZFS

    So if my disks are 4k, should I use a recordsize of 4k, with compression disabled? As you can see, using a recordsize=1M I got a better result: NAME USED LUSED REFER LREFER RECSIZE VOLBLOCK COMPRESS REFRATIO lxpool/xpestore/vm-100-disk-1 1.49T...
  8. [SOLVED] Advise for zfs over iscsi

    With the new disks I got good performance, so with the correct recordsize I get a good result.
  9. Help me to understand the used space on ZFS

    Hello all, I'm using Proxmox with ZFS and I created a RAIDZ1 with 3x 4TB disks and ashift=12 (see the pool-creation sketch after these results): # cat /sys/block/sdb/queue/logical_block_size 512 # cat /sys/block/sdb/queue/physical_block_size 4096 These are the options enabled on the pool: NAME SYNC DEDUP...
  10. [SOLVED] Advise for zfs over iscsi

    I'll try it and report back. I still have to figure out how to decide the record/block size. I understand that it depends on the workload type, but it isn't straightforward.
  11. [SOLVED] Advise for zfs over iscsi

    Thanks, I got an improvement, but I still have some issues (I guess). I did the same test using a FreeNAS VM, and then using a zvol with ext4 and NTFS. Copy on the FreeNAS VM (1 CPU, 4 cores, 8GB RAM, disks in raw): I created an 8k dataset shared via SMB. Then I added the pool back in...
  12. [SOLVED] Advise for zfs over iscsi

    Ok, I have done other tests; using ZFS on Proxmox I'm getting high IO delay. Just to recap, here is my hardware config: CPU: Intel i5-7400, RAM: 32GB, Network: 2x Intel Gigabit. The config I have been using for the test is not the final one. In the final solution, I'll have 2 or 3 WD RED disks...
  13. [SOLVED] Advise for zfs over iscsi

    This is the behavior I got and have been trying to fix: during the write, the process drops to zero (or almost) and then continues writing. It seems like a timeout during the write.. could it be the old disks I'm using for testing? But with ext4 as the fs they work well.. I used an...
  14. [SOLVED] Advise for zfs over iscsi

    Ok, so I can use the default settings. Ok, got it :) Which details should I give? What do you mean by zfs on zfs? I got the same feeling about the cache, but I don't know where I'm wrong.. I set up the datastore on Proxmox with write cache=enabled and I set up the vdisk with cache=writeback (see the cache-mode sketch after these results).
  15. [SOLVED] Advise for zfs over iscsi

    Ok, but what if I just create a vmbr without a dev? If I set the MTU on the physical dev, will it also change on the vmbr without a dev? (See the bridge/MTU sketch after these results.) Ok, I had that feeling, but I wasn't sure. I have been using iperf3 for all tests. No, I just attached the disks as raw to the OmniVM (and created the...
  16. [SOLVED] Advise for zfs over iscsi

    Hello all, I would like some advice on my setup with Proxmox and ZFS in my homelab. My hardware: CPU: i5-7400, RAM: 32GB, NIC: 2x 1Gb Intel. Disks in use for testing: 2x 160GB in a mirror with 1 Intel SSD for SLOG, attached as raw disks. Proxmox ver 5.2-9. Trying to switch to ZFS, I've been...
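
A few sketches for the topics that come up repeatedly in these results. First, incremental snapshot replication between two nodes, as discussed in the "PVE replication and ZFS Snapshot" thread. This is a minimal zfs send/receive sketch, not the built-in PVE replication job; the pool, dataset and host names (rpool/data/vm-100-disk-0, node-2) are hypothetical.

    # Take an initial snapshot and send it in full to the second node
    zfs snapshot rpool/data/vm-100-disk-0@backup-1
    zfs send rpool/data/vm-100-disk-0@backup-1 | ssh node-2 zfs receive rpool/data/vm-100-disk-0

    # Later snapshots only need the delta between the previous and the new snapshot
    zfs snapshot rpool/data/vm-100-disk-0@backup-2
    zfs send -i rpool/data/vm-100-disk-0@backup-1 rpool/data/vm-100-disk-0@backup-2 \
        | ssh node-2 zfs receive rpool/data/vm-100-disk-0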
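
The recordsize/compression observations in the "Help me to understand the used space on ZFS" thread can be reproduced with a small test. A sketch under assumed names (tank/test-32k, tank/test-1m); recordsize only applies to newly written blocks, so the data has to be copied in after the datasets are created.

    # Create two datasets that differ only in recordsize
    zfs create -o recordsize=32k -o compression=lz4 tank/test-32k
    zfs create -o recordsize=1M  -o compression=lz4 tank/test-1m

    # ... copy the same data into both datasets ...

    # Compare physical vs. logical space and the achieved compression ratio
    zfs get -o name,property,value used,logicalused,compressratio tank/test-32k tank/test-1m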
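
For the RAIDZ1 pool with ashift=12 in result 9, the sector-size check and pool creation might look like the sketch below; the pool name (tank) and device names are placeholders, and ashift should normally match the physical sector size (4096 bytes, i.e. ashift=12).

    # 512e disks report a 512-byte logical and 4096-byte physical sector size
    cat /sys/block/sdb/queue/logical_block_size
    cat /sys/block/sdb/queue/physical_block_size

    # Create a 3-disk RAIDZ1 with 4K-aligned allocations
    zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd

    # Verify the property actually set on the pool
    zpool get ashift tank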
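
Changing the vdisk cache mode mentioned in result 14 can also be done from the CLI; the VMID, storage name and disk name below are hypothetical. Note that with ZFS-backed storage, writeback adds a second layer of caching on top of the ARC, which is often why cache=none is suggested instead.

    # Re-attach the existing volume with a different cache setting (hypothetical IDs)
    qm set 100 --scsi0 local-zfs:vm-100-disk-0,cache=writeback

    # Confirm the change in the VM configuration
    qm config 100 | grep scsi0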
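
On the vmbr/MTU question in result 15: setting the MTU on the physical dev does not propagate to a bridge that has no ports, so each interface needs its own mtu line. A sketch of /etc/network/interfaces with assumed interface names and addresses:

    auto eno1
    iface eno1 inet manual
            mtu 9000

    # Bridge with a physical port: set the MTU here as well
    auto vmbr0
    iface vmbr0 inet static
            address 192.168.1.10/24
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0
            mtu 9000

    # Bridge without any physical dev (host-internal only): its MTU is independent
    auto vmbr1
    iface vmbr1 inet static
            address 10.10.10.1/24
            bridge-ports none
            bridge-stp off
            bridge-fd 0
            mtu 9000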
