Hello,
sorry for my late answer and thank you for your clarification.
I tested several solutions, including saving the snapshot (full, incremental, and "differential") to a file and then feeding it to software like borg or restic.
Unfortunately, these tools deduplicate only at the file level and not...
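In case it helps anyone, this is roughly how I saved the streams to files (a sketch only; the dataset and snapshot names are just examples from my tests):

```
# Full send stream written to a file (hypothetical names):
zfs snapshot rpool/data/vm-100-disk-0@base
zfs send rpool/data/vm-100-disk-0@base > /backup/vm-100-full.zfs

# Incremental stream containing only the changes since @base:
zfs snapshot rpool/data/vm-100-disk-0@daily1
zfs send -i @base rpool/data/vm-100-disk-0@daily1 > /backup/vm-100-incr.zfs
```

The resulting files are what I then handed to the backup software.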
Hello everyone,
I'm trying to figure out how to replicate a VM between two nodes and take a snapshot as a backup.
Example:
Node-1 and Node-2
VM-100 running on Node-1
VM-100 replicated on Node-2
If I take a snapshot on Node-1, it will be replicated to Node-2.
NODE-1
NAME...
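What I'd like is essentially the manual equivalent of the following (a sketch only; the dataset names are assumptions, and Proxmox's built-in storage replication does something similar under the hood):

```
# On Node-1: snapshot and full replication to Node-2 (hypothetical names):
zfs snapshot rpool/data/vm-100-disk-0@rep1
zfs send rpool/data/vm-100-disk-0@rep1 | ssh node-2 zfs recv -F rpool/data/vm-100-disk-0

# Afterwards, only the delta since @rep1 needs to travel:
zfs snapshot rpool/data/vm-100-disk-0@rep2
zfs send -i @rep1 rpool/data/vm-100-disk-0@rep2 | ssh node-2 zfs recv rpool/data/vm-100-disk-0
```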
Hello,
thank you for the explanation. I'm starting to figure out how ZFS works.
Today I ran several space-efficiency tests; maybe they can help someone else:
I created 3 pools, 2x single vdev + 1x RAIDz1
1st pool
Name: hc0
RAID: single vdev (1x 1TB Hitachi)
Disk Block size: 512n...
Yesterday I did some tests with containers (Debian 9.4).
I cloned the VMs with different recordsizes and got consistent results: a higher recordsize gives better compression:
NAME USED LUSED REFER LREFER VOLSIZE RECSIZE VOLBLOCK COMPRESS REFRATIO...
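For anyone who wants to repeat this, here is a sketch of how such a comparison can be set up (pool/dataset names are examples, not my exact commands):

```
# One dataset per recordsize under test (hypothetical names):
for rs in 8k 32k 128k 1M; do
    zfs create -o recordsize=$rs -o compression=lz4 lxpool/test-$rs
done
# After copying the same data into each, compare logical vs. physical usage:
zfs get -r used,logicalused,recordsize,compressratio lxpool
```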
Ok thanks for the suggestion!
The pool will be used mainly for the NAS VM and the other containers/VMs.
In the NAS VM the files are mostly read/streamed.
What do you mean by "copy the same file in the same time"?
Can I have an example?
Thanks
Ok, got it! In my case the files don't change much, except for the small files in my personal cloud, but those are only a few gigabytes.
I noticed that from 32k upward the result is almost the same.
What pros/cons are there to using 32k/64k instead of 1M?
If I understood correctly, using a...
So if my disks are 4k, should I use a recordsize of 4k, with compression disabled?
As you can see, using recordsize=1M I got a better result:
NAME USED LUSED REFER LREFER RECSIZE VOLBLOCK COMPRESS REFRATIO
lxpool/xpestore/vm-100-disk-1 1.49T...
Hello All,
I'm using Proxmox with ZFS, and I created a RAIDZ1 with 3x 4TB disks and ashift=12
# cat /sys/block/sdb/queue/logical_block_size
512
# cat /sys/block/sdb/queue/physical_block_size
4096
These are the options enabled on the pool:
NAME SYNC DEDUP...
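On the volblocksize question for this pool: with ashift=12, a 3-disk RAIDZ1 stores one parity sector per two data sectors and pads each allocation up to a multiple of (parity + 1) sectors, so small block sizes are expensive. A back-of-the-envelope sketch (my own estimate of the allocation rules, not numbers read off the pool):

```shell
#!/bin/sh
# Estimate sectors allocated per block on a 3-disk RAIDZ1 with ashift=12.
sector=4096   # 2^ashift
ndisks=3
nparity=1

alloc_sectors() {
    bs=$1
    # data sectors, rounded up to whole 4K sectors
    data=$(( (bs + sector - 1) / sector ))
    # one parity sector per (ndisks - nparity) data sectors, rounded up
    parity=$(( (data + ndisks - nparity - 1) / (ndisks - nparity) ))
    total=$(( data + parity ))
    # RAIDZ pads each allocation to a multiple of (nparity + 1) sectors
    echo $(( (total + nparity) / (nparity + 1) * (nparity + 1) ))
}

for bs in 4096 8192 16384 131072; do
    echo "blocksize=$bs allocated_sectors=$(alloc_sectors $bs)"
done
```

By this estimate an 8k block allocates 4 sectors (2x its data, no better than a mirror), while 16k and larger settle at 1.5x, the nominal RAIDZ1 ratio for 3 disks.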
I'll try it and report back. I still have to understand how to choose the record/block size. I've figured out that it depends on the workload type, but it isn't straightforward.
Thanks, I got an improvement, but I still have some issues (I guess).
I have done the same test using a FreeNAS VM, and then using a zvol with ext4 and NTFS.
Copy on the FreeNAS VM (1 CPU, 4 cores, 8GB RAM, disks in raw)
I created an 8k dataset shared over SMB:
Then I added back the pool in...
Ok, I have done other tests; using ZFS on Proxmox I'm getting a high IO delay. Just to recap, here is my hardware config:
CPU: Intel I5-7400
RAM: 32GB
Network: 2x Intel Gigabit
The config I have been using for the tests is not the final one. In the final setup, I'll have 2 or 3 WD RED disks...
This is the behavior I got, and I've been trying to fix it:
During a write, throughput drops to zero (or almost) and then the write continues.
It looks like a timeout during writing... could it be the old disks I'm using for testing? But with ext4 as the filesystem they work well...
I used an...
Ok, so I can use the default settings.
Ok got it :)
Which details should I give?
What do you mean by ZFS on ZFS?
I have the same feeling about the cache, but I don't know where I'm wrong. I set up the datastore on Proxmox with write cache=enabled, and I set up the vdisk with cache=writeback.
Ok, but what if I just create a vmbr without a physical dev? If I set the MTU on the physical dev, will it also change on the vmbr without a dev?
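For context, what I have in mind is something like this in /etc/network/interfaces (a sketch; the names and address are made up). My understanding is that a bridge with bridge-ports none takes nothing from the physical NIC, so the MTU would have to be set on the bridge itself:

```
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        mtu 9000
```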
Ok, I had that feeling, but I wasn't sure.
I have been using iperf3 for all tests.
No, I just attached the disks as raw to the OmniVM (and created the...
Hello All,
I would like some advice on my Proxmox and ZFS setup for my homelab.
My hardware:
CPU: i5-7400
RAM: 32GB
NIC: 2x1Gb Intel
Disks in use for testing: 2x 160GB in a mirror, with 1 Intel SSD as SLOG. I attached them as raw disks.
proxmox ver 5.2-9
Trying to switch to ZFS, I've been...