@tuonoazzurro Are you sure this is something that is happening in a standard environment? I've never noticed anything like that.
@DerDanilo Well, that's what I thought too, until LnxBil "convinced" me... or maybe not convinced... :confused:
This is exactly what I wanted, a discussion about the...
Well, it's difficult to say because I've read a lot of stuff over the last few days :-). But it was suggested somewhere not to boot from ZFS and to use an mdadm RAID instead.
So you think that we should not have separate system disks?
(using all disks in one ZFS pool that holds both the system and the VM storage)
@guletz
THX for the thoughts. So later on, during test use, how could I determine whether an L2ARC or SLOG would improve performance?
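My rough plan for checking this later (assuming the stock ZFS-on-Linux tools are installed; "rpool" is just an example pool name):

# Overall ARC statistics: a low hit ratio under real load hints that more
# cache (RAM or an L2ARC device) could help
arc_summary

# Raw counters, if you prefer to compute the hit ratio yourself
grep -E '^(hits|misses)' /proc/spl/kstat/zfs/arcstats

# Watch the pool under load: many small sync writes going to the data
# disks is the classic sign that a separate SLOG device would help
zpool iostat -v rpool 5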
@LnxBil
Hmm... I read somewhere that for booting Linux it is better not to use a ZFS RAID1, which is why I ended up using an mdadm RAID1 for the system.
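For the record, the mdadm part is straightforward; a minimal sketch (device names are examples only, check with lsblk first):

# Create a RAID1 array for the system from two SSD partitions
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Persist the array config and rebuild the initramfs
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u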
Hi @ all,
we are currently planning our new Proxmox servers and are therefore thinking about the best-performing and most cost-efficient setup.
Right now we are a bit stuck on whether to put the ZIL and L2ARC on separate devices (see the sketch after the spec list).
Here is our currently planned setup:
Xeon 6134 Gold
254GB ECC DDR4 RAM
2 x 60GB SSD SATA - mdadm Raid...
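To make the question concrete, adding such devices later would look like this (a sketch only; the NVMe partition names are placeholders):

# Attach a separate log device (SLOG) to the pool
zpool add rpool log /dev/nvme0n1p1

# Attach a cache device (L2ARC)
zpool add rpool cache /dev/nvme0n1p2

# Verify the new layout
zpool status rpool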
Hi @ all,
we are facing a similar error on 5.2-3:
The mouse in noVNC also lags, but it is caught up by the system cursor, so we only see one cursor, which lags.
Any idea why this problem occurs on Proxmox 5.2-3, when on 5.2-1 everything is fine?
So that means that the volume "rpool/quaterhourly/vm-122-disk-1" corresponds to the last snapshot "rpool/quaterhourly/vm-122-disk-1@rep_vm1-QUATERHOURLY_2018-04-10_11:30:01", and that the volume is always the most recent copy?
And what does the last snapshot on the working node, which is growing slowly...
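For reference, this is how I look at the snapshot sizes (dataset name as above; "written" shows how much new data each snapshot added):

# List the snapshots with their space usage, newest last
zfs list -t snapshot -o name,used,written,creation -r rpool/quaterhourly/vm-122-disk-1

# 'written' on the volume itself shows how much changed since the last snapshot
zfs get written rpool/quaterhourly/vm-122-disk-1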
Hi @ all,
I am currently trying to understand pve-zsync but don't get it...
I have a working node and a backup node, and I am trying to sync snapshots to the backup node every 15 minutes.
Here is the behaviour of pve-zsync or ZFS that I don't understand:
On the working node:
Every 2.0s: zfs list -t snapshot -r...
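The job itself was created roughly like this (a sketch from memory; the hostname is a placeholder, and the 15-minute interval comes from the cron entry that pve-zsync writes to /etc/cron.d/pve-zsync):

# On the backup node: pull VM 122 from the working node, keep the
# last 4 snapshots
pve-zsync create --source workingnode:122 --dest rpool/quaterhourly \
  --name vm1-QUATERHOURLY --maxsnap 4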
Well, unfortunately the new "Replication" feature still has some bugs that make replication fail!
See my post:
https://forum.proxmox.com/threads/replication-failed-to-some-node.35684/#post-188445
This is why I am now using pve-zsync instead of replication!
So is it not planned to implement a "more intelligent" backup feature for linked clones in PVE?
The way I could go without having full-size backups would be:
zfs send -R rpool/data/base-555-disk-1@__base__ | ssh "node-to" zfs recv rpool/data/base-555-disk-1
zfs snapshot...
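The incremental follow-up on top of the shared base would then be something like this (snapshot and VM names are made up for illustration):

# Snapshot the linked clone and send only the delta relative to the
# shared base image (the origin snapshot can serve as incremental source)
zfs snapshot rpool/data/vm-123-disk-1@backup1
zfs send -i rpool/data/base-555-disk-1@__base__ rpool/data/vm-123-disk-1@backup1 \
  | ssh node-to zfs recv rpool/data/vm-123-disk-1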
That sounds "great". I think with ZFS and NVMe SSDs, performance should not be a problem, right?
BTW: Still no info about backups of linked clones ;-)