Search results

  1. ZFS Cache and Log disk Failure

    The log device is needed to check for missing data when the pool is imported. If your log device has died, you can still import the pool manually by force.
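
    For example (the pool name tank is just a placeholder), the forced import with a missing log device would look like:

    # zpool import -m tank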
  2. PVE 5.2: ZFS "No space left on device" Avail 4,55T

    The error you think you hit is more related to Linux kernel 3.x and maybe older versions. Do this test:

    # rm -rf SRC; mkdir SRC; for i in $(seq 1 10000); do echo $i > SRC/$i; done; find SRC | wc -l
    # for i in $(seq 1 10); do cp -r SRC DST$i; find DST$i | wc -l; done
  3. [SOLVED] ZFS Raid

    # zpool attach

    Example:

    1. Let's create a striped pool:

    # zpool create test /zfs_test/file1 /zfs_test/file2
    # zpool status test
      pool: test
     state: ONLINE
      scan: none requested
    config:

        NAME    STATE     READ WRITE CKSUM
        test    ONLINE       0     0     0...
  4. RAID-Z 1 ZFS - CEPH

    Try to create Ceph manually on a zvol.
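
    A possible starting point (pool name, zvol name and size are placeholders):

    # zfs create -V 100G tank/ceph-osd

    The block device then shows up under /dev/zvol/tank/ceph-osd for the manual OSD setup.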
  5. [SOLVED] ZFS Raid

    But upgrading to RAID10 is possible.
  6. Creating CT on ZFS storage failed!

    You must do what you must; you just need to adjust your needs to Proxmox.
  7. Creating CT on ZFS storage failed!

    Set the mountpoint. Proxmox does not look at the current mountpoint of a ZFS volume; it has static built-in settings. Example: if you want to use the dataset zfs_pool/vm_volume, its mountpoint must be /zfs_pool/vm_volume, not something like /my_vm_directory.
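
    A one-line sketch of fixing it (dataset name taken from the example above):

    # zfs set mountpoint=/zfs_pool/vm_volume zfs_pool/vm_volume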
  8. PVE 5.2: ZFS "No space left on device" Avail 4,55T

    Your RAIDZ2 is bad for I/O, too. Look at the picture for the allocation overhead. Are you running out of space inside the VM (KVM?) or on the host?
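
    To check the host side, the standard space views help (no pool name assumed):

    # zpool list -v
    # zfs list -o space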
  9. [SOLVED] ZFS Raid

    First, can you print the results of # zpool status ?
  10. [SOLVED] ZFS Raid

    From a ZFS stripe you can convert to a ZFS stripe of mirrors, like RAID10; you only need to attach disks to the existing disks, as in the sketch below. BTW, google RAID controllers and ZFS to see how badly they work together.
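
    A rough sketch of the conversion (pool name tank and devices sda-sdd are placeholders; sda and sdb form the existing stripe):

    # zpool attach tank sda sdc    # turns the first stripe member into a mirror
    # zpool attach tank sdb sdd    # turns the second stripe member into a mirror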
  11. Meltdown and Spectre Linux Kernel fixes

    News about bugs:

    # ./spectre-meltdown-checker.sh
    Spectre and Meltdown mitigation detection tool v0.37+

    Checking for vulnerabilities on current system
    Kernel is Linux 4.15.17-1-pve #1 SMP PVE 4.15.17-9 (Wed, 9 May 2018 13:31:43 +0200) x86_64
    CPU is Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz...
  12. How to sharing data between containers?

    You have to share a host directory. Set the full path, like /lxc/110/mnt/Test777
  13. Shared disk between VM's

    Share the storage using 9p: https://forum.proxmox.com/threads/virtfs-virtio-9p-plans-to-incorporate.35315/#post-184993
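
    Inside the guest, such a share is usually mounted like this (the mount tag hostshare is a placeholder):

    # mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/host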
  14. Nfs with Zfs usage.

    I don't use NFS, but I found an old line in /etc/exports:

    /media/zfs_pool/cloud/mk 10.10.10.16(rw,sync,no_subtree_check)
  15. Nfs with Zfs usage.

    Set sharenfs to off on the ZFS dataset and add the export to /etc/exports instead.
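
    For example (dataset name is a placeholder):

    # zfs set sharenfs=off zfs_pool/cloud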
  16. Nfs with Zfs usage.

    Edit the /etc/exports file and reload the NFS exports:

    # exportfs -a
  17. How to sharing data between containers?

    If you want to configure Samba, use Google. If your two LXC containers are on the same server, you can try a folder share via bind mount points (see the sketch below): https://pve.proxmox.com/wiki/Linux_Container#_bind_mount_points
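
    A minimal sketch with pct (container IDs 110/111 and the paths are placeholders):

    # pct set 110 -mp0 /zfs_pool/shared,mp=/mnt/shared
    # pct set 111 -mp0 /zfs_pool/shared,mp=/mnt/shared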
  18. Nfs with Zfs usage.

    I guess the ZFS NFS and Samba export handling is broken. Try to edit the exports manually and re-export.
  19. Zfs Pool Resilvering

    Looking at your ZFS status, I suggest you run a scrub after the resilvering is done.
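
    For example (pool name is a placeholder):

    # zpool scrub tank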