Search results

  1.

    Shared disk between VM's

    Share storage using 9p https://forum.proxmox.com/threads/virtfs-virtio-9p-plans-to-incorporate.35315/#post-184993
  2.

    Nfs with Zfs usage.

    I don't use NFS, but I found an old line in /etc/exports: /media/zfs_pool/cloud/mk 10.10.10.16(rw,sync,no_subtree_check)
  3.

    Nfs with Zfs usage.

    Set sharenfs=off on the ZFS dataset and add the export to /etc/exports instead.
  4.

    Nfs with Zfs usage.

    Edit the /etc/exports file and re-export the shares: # exportfs -a
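Taken together, posts 2-4 describe one workflow: turn off ZFS's own NFS sharing and manage the export through /etc/exports. A minimal sketch; the path and client address come from the quoted exports line, while the dataset name zfs_pool/cloud/mk is an assumption inferred from the mountpoint:

```shell
# Disable ZFS's built-in NFS sharing so /etc/exports is the single source of truth.
# Dataset name is an assumption inferred from the /media/zfs_pool/cloud/mk mountpoint.
zfs set sharenfs=off zfs_pool/cloud/mk

# Export line quoted in the post above
echo '/media/zfs_pool/cloud/mk 10.10.10.16(rw,sync,no_subtree_check)' >> /etc/exports

# Re-export all shares so the new entry takes effect
exportfs -a
```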
  5.

    How to sharing data between containers?

    If you want to configure Samba, use Google. If your two LXC containers are on the same server, you can try sharing a folder via bind mount points: https://pve.proxmox.com/wiki/Linux_Container#_bind_mount_points
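The bind-mount approach linked above can be set up with pct. A minimal sketch; the container IDs 101 and 102 and the host path /mnt/shared are hypothetical:

```shell
# Bind-mount the same host directory into two containers (mount-point slot mp0 on each).
# Both containers then see the same files under /mnt/shared.
pct set 101 -mp0 /mnt/shared,mp=/mnt/shared
pct set 102 -mp0 /mnt/shared,mp=/mnt/shared
```

Note that file ownership must line up between the containers (or the containers must be unprivileged with matching ID maps) for writes from both sides to work.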
  6.

    Nfs with Zfs usage.

    I guess the ZFS NFS and Samba export integration is broken. Try editing the exports manually and exporting yourself.
  7.

    Zfs Pool Resilvering

    Looking at your ZFS status, I suggest you run a scrub once the resilvering is done.
  8.

    Zfs Pool Resilvering

    Resilvering is the process that runs when one or more disks are missing recent data. ZFS checks and syncs the pool data so the 'delayed' disk catches up.
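The scrub-after-resilver advice from the two posts above as commands; a minimal sketch, where the pool name tank is hypothetical:

```shell
# Watch resilver progress; wait until the status reports the resilver completed
zpool status tank

# Once the resilver has finished, re-read and verify checksums across the whole pool
zpool scrub tank

# Check scrub progress and list any files with errors it finds
zpool status -v tank
```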
  9.

    Increasing prices for community edition?

    I'm a very old Proxmox user. I can't afford to buy a subscription, but when I want a stable version I always look at the '# pveversion -v' section of https://pve.proxmox.com/wiki/Downloads to know the stable package version numbers before I update.
  10.

    zfs pool

    ZFS is a local file system, not a network one. It is not Ceph or Gluster.
  11.

    ZFS zfs_send_corrupt_data parameter not working

    Not long ago my cousin and I had a problem with his ZFS pool, an 8-disk raidz2 configuration. At first one HDD started to show checksum and r/w errors in the ZFS pool status. A scrub showed no errors, but the disk error count kept growing. Suddenly another HDD showed SMART errors and we...
  12.

    ZFS zfs_send_corrupt_data parameter not working

    It may be related to hardware. Check the cables and so on.
  13.

    Too slow windows startup on zfs?

    Everything looks normal. Not sure about compression; I set it to lz4 manually. ZFS pool speed depends on the slowest disk's performance. Speeding it up requires a bigger ARC. While the ARC is cold, the disks stay very busy. Some say it takes two days of warm-up to make the ARC 'hot'. ZFS's advantages have their own cost.
  14.

    Too slow windows startup on zfs?

    You are getting distracted. Can you reply to the post above?
  15.

    Too slow windows startup on zfs?

    for i in /dev/sd{a,b}; do smartctl -i $i | grep Sector; done
    # zpool get ashift
    # zfs get volblocksize,compression
    As you can see, only 37% of the data comes from the ARC cache. My server with a 12G ARC has an 87% hit rate. You can try disabling prefetch: # echo 1 >...
  16.

    Too slow windows startup on zfs?

    Making ZFS work better with the same resources is tricky. 1. What is your disks' sector size? 2. What is your ZFS pool's ashift setting? 3. What are your ZFS volume's volblocksize and compression? 4. ARC size? # arc_summary
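The four checks in the post above, spelled out as commands. A sketch only: the disk names sda/sdb, the pool name rpool, and the volume name rpool/data/vm-100-disk-0 are hypothetical placeholders for your own setup:

```shell
# 1. Physical sector size of each disk (512 bytes vs 4K matters for ashift)
for i in /dev/sd{a,b}; do smartctl -i "$i" | grep Sector; done

# 2. ashift of the pool; it should match the sector size (ashift=12 for 4K sectors)
zpool get ashift rpool

# 3. volblocksize and compression of the VM volume backing the slow guest
zfs get volblocksize,compression rpool/data/vm-100-disk-0

# 4. ARC size and hit-rate summary
arc_summary | head -40
```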
  17.

    Too slow windows startup on zfs?

    ZFS has a tool, '# zpool iostat -v 1', to see disk activity. ZFS speed depends on the slowest disk in the pool. If you see a big difference in activity between disks, it may be that: 1. You are using different models of disks. 2. If your ZFS pool is made from identical disks -> one of the disks has a problem...
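How the outlier-disk check described above looks in practice; the pool name tank is hypothetical:

```shell
# Per-device bandwidth and IOPS for the pool, refreshed every second.
# A healthy pool shows roughly even numbers across member disks;
# a disk that is consistently slower or busier than its siblings is suspect.
zpool iostat -v tank 1
```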
  18.

    Too slow windows startup on zfs?

    It is normal. On my ZFS pool with a 2-HDD mirror, or another pool with a 3-HDD raidz, VM startup takes some time. But on a ZFS pool with a 3-SSD raidz, VMs boot instantly.
  19.

    Too slow windows startup on zfs?

    Build a ZFS pool from 8 or more disks if you want fast VM startup, or use SSDs only.
  20.

    Too slow windows startup on zfs?

    You have a pool of 2 disks. Don't expect fast reads unless your pool is made of SSDs.