Search results

  1. J

[SOLVED] Migration of LXC on ZFS loses underlying ZFS snapshots

Dietmar... it does not reject. It loses snapshots silently. This is silent data loss. If this is by design, the design needs to be reconsidered.
  2. J

[SOLVED] Migration of LXC on ZFS loses underlying ZFS snapshots

There is no need to fix an issue in general when the issue is only relevant to a specific case. According to you it is not possible to fix it in general because some backends do not support snapshots. And not all backends can do this "easily", even if they do support it. This is why there are...
  3. J

    Proxmox Setup with FaultTolerance for Zero Downtime

Your best bet would be to implement it yourself, using two or more KVM virtual machines and treating them as physical hosts. Then use Proxmox to migrate those KVM guests between physical nodes and abstract yourself from the underlying hardware. This is how I'm doing it, and it works a treat.
  4. J

    Custom zfs arguments during installation?

You should use your warranty with Supermicro. There's nothing ZFS could do to break the disks. Why do you think it was ashift in the first place? Why do you think that NTFS, ext4, or other filesystems will write in anything but 4k sectors? And BTW, SSDs have physical block sizes ranging in...
  5. J

[SOLVED] Migration of LXC on ZFS loses underlying ZFS snapshots

Fabian, are you talking about the snapshots seen in Proxmox or the filesystem snapshots? And how do you create such a situation, with multiple mount points on the same LXC? Currently the GUI will not let you do this. Also: why would one want to support all X * Y combinations? I see no...
  6. J

    Migration LXC CT with bind mount point

The point is: mp0 and other options are not handled correctly. Bind mount is just an example. And there's a reason for using a single NFS client for a single NFS export. Resource-wise it makes a hell of a lot of sense.
  7. J

    Custom zfs arguments during installation?

ashift=12 is standard on zfsonlinux. It's there for a good reason, and it was a design decision that was not taken lightly. It has nothing to do with SSDs failing. Perhaps you can tell us more about what SSDs you're using and what type of workload you have on them. When you say failed, what do you mean ...
  8. J

[SOLVED] Migration of LXC on ZFS loses underlying ZFS snapshots

Ok, I get you. We're comparing apples and pears. In the case of ZFS to LVM, we're converting storage backends. It would be expected that not all features can be "converted" or supported on all backends. This is normal and can be documented. In this specific case Storage.pm would not use "zfs...
  9. J

[SOLVED] Migration of LXC on ZFS loses underlying ZFS snapshots

    I don't follow you. If you move a container from ZFS to ZFS, you use "zfs send". This is hard-coded in Storage.pm. What I'm asking is to use "zfs send -R" instead. Please give an example of a situation that will not work. Thank you. PS: Just to clarify, we're not talking about the snapshots...
  10. J

[SOLVED] Migration of LXC on ZFS loses underlying ZFS snapshots

You're saying it's a design decision, but the evidence shows this cannot be correct. In Storage.pm there's an explicit check for the ZFS backend and a branch to take advantage of ZFS features in a way that is incompatible with other storage backends. Specifically, Proxmox uses the command "zfs send" to...
  11. J

[SOLVED] Migration of LXC on ZFS loses underlying ZFS snapshots

There's a bug (bad feature) when migrating LXC containers hosted on ZFS: it loses the snapshots. Longer explanation: I routinely snapshot all LXC containers for backup and replication. This is "a good thing" and has saved my ass a few times over the years. I discovered that proxmox will migrate...
  12. J

    Migration LXC CT with bind mount point

Bump. I just hit this issue. Any idea how to fix it? I cannot edit the config files each time I need to migrate an LXC container.
  13. J

    Optimizing ZFS backend performance

I have an update, using the Intel DC S3500 SSDs. They're both capable of 5'000 IOPS (fio sync write, 4k, 4 jobs). With 16 jobs they go up to 10'000 IOPS. Now... pveperf still sucks: CPU BOGOMIPS: 57529.56 REGEX/SECOND: 2205679 HD SIZE: 1528.62 GB (rpool/t) FSYNCS/SECOND...
  14. J

    Optimizing ZFS backend performance

The IO wait is not bad on average. However, the server is "empty", with just two VMs right now. I'm concerned about the disk IO once the full load is applied. Here's the first server: 6x 2.5" SATA 7.2K 1TB "enterprise SATA". Here's a second server with only 4x 3.5" SATA 7.2K (WD Black 1TB). And...
  15. J

    Optimizing ZFS backend performance

I'm using writeback and writethrough. There are Windows and Linux guests. I also have Samba shares serving home directories to Windows domain users. I know the server is doing a lot of things, but I'm still trying to size it properly and understand why the disks are underperforming. I'm waiting...
  16. J

    Optimizing ZFS backend performance

What would be your suggestion for a (consumer) or cheap enterprise drive? I'm not sure about your numbers though. The MX200 has about 5000 IOPS with 256 jobs and 950 with 8 jobs (fio test): --direct=1 --sync=1 --rw=write --bs=4k --numjobs=8 --iodepth=1 --runtime=60 --time_based...
  17. J

    Optimizing ZFS backend performance

Well, I use(d) a Samsung 850 Pro (128GB) + a Crucial MX200. Now I have a Crucial MX200 (256GB) + an OCZ Vertex 150 (Indilinx). Both SSDs can handle 400 MB/s+ writes and 60'000 IOPS during writes (benchmarked using ATTO). The MX200 was suggested to me on IRC (I don't remember whether the #zfsonlinux or...
  18. J

    Optimizing ZFS backend performance

Min ARC size is 1 GB. Max ARC size is 4 GB.
  19. J

    Optimizing ZFS backend performance

Hello, I think I have a problem with ZFS performance: it is much below what I see reported on the forum, especially considering the hardware I'm using. Unfortunately I cannot see the issue myself, so I hope that someone will be smarter than me. The problem is the IOPS I can get from a ZFS pool with 6...
  20. J

Problems booting Proxmox 3.4 + ZFS RAID 1

Sorry for the late reply. No, I used a live CD to run the commands. The initrd has no packaging tools in it.
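Several of the migration threads above hinge on the difference between a plain `zfs send` and a replicated `zfs send -R`. A minimal sketch of that difference, assuming a Proxmox-style dataset name and a hypothetical `target` host (illustrative only: it requires root and an existing pool, and it is not the actual Storage.pm code path):

```shell
# Snapshots accumulated over time on the source container dataset
# (dataset name follows the usual Proxmox subvol convention; adjust as needed).
zfs snapshot rpool/data/subvol-100-disk-0@backup1
zfs snapshot rpool/data/subvol-100-disk-0@migrate

# Plain send: the stream carries only the state at @migrate,
# so @backup1 is silently absent on the receiving side.
zfs send rpool/data/subvol-100-disk-0@migrate \
  | ssh target zfs receive tank/data/subvol-100-disk-0

# Replicated send (-R): the stream includes all snapshots up to
# @migrate, so @backup1 survives the transfer.
zfs send -R rpool/data/subvol-100-disk-0@migrate \
  | ssh target zfs receive tank/data/subvol-100-disk-0
```

This is the behavior the thread argues about: the hard-coded plain send produces a valid dataset on the target, but the snapshot history is gone.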

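One of the results above quotes a truncated fio command line used to benchmark 4k sync writes. For reference, a complete invocation using those same flags might look like this (the job name, target file, and size are placeholders, not the original poster's values):

```shell
# 4k synchronous sequential writes, 8 jobs, queue depth 1, 60 s timed run.
fio --name=syncwrite \
    --filename=/tank/fio.test --size=1G \
    --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=8 --iodepth=1 \
    --runtime=60 --time_based \
    --group_reporting
```

With `--numjobs=8` and `--group_reporting`, fio aggregates the per-job results into a single IOPS figure comparable to the numbers quoted in the thread.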