Search results

  1. Additional ZFS Volume and VM Snapshots

    ZFS doesn't have an option to protect a zvol from deletion, but you can protect it in another way. Part 1: Snapshots. How do Proxmox and the console take snapshots on ZFS? Console -> zfs snap zvol@name. Proxmox -> zfs snap zvol@name -> puts the info in the VM config file. Looking at snapshots from the console you can see all... (a snapshot-listing sketch follows these results)
  2. Existing Raid 10 import

    If your RAID is made with ZFS, then you don't need to worry about HDD names.
  3. ZFS directory empty after reboot

    If your ZFS pool becomes damaged, ask yourself why that may have happened. ECC RAM? ZFS configuration?
  4. ZFS directory empty after reboot

    If all mountpoints are OK and all volumes are mounted, then I don't know why your files are missing.
  5. ZFS directory empty after reboot

    # zfs get mountpoint
    # zfs get mounted
  6. ZFS Cache and Log disk Failure

    A log device is needed to check for missing data during pool import. If your log device has crashed, you can import the pool manually by force (see the forced-import sketch after these results).
  7. PVE 5.2: ZFS "No space left on device" Avail 4,55T

    The error you think you hit is more related to Linux kernel 3.x and maybe older versions. Do this test:
    # rm -rf SRC; mkdir SRC; for i in $(seq 1 10000); do echo $i > SRC/$i; done; find SRC | wc -l
    # for i in $(seq 1 10); do cp -r SRC DST$i; find DST$i | wc -l; done
  8. [SOLVED] ZFS Raid

    # zpool attach
    Example: 1. Let's create a striped pool
    # zpool create test /zfs_test/file1 /zfs_test/file2
    # zpool status test
      pool: test
     state: ONLINE
      scan: none requested
    config:
        NAME    STATE   READ WRITE CKSUM
        test    ONLINE     0     0     0...
    (the attach step itself is sketched after these results)
  9. RAID-Z 1 ZFS - CEPH

    Try to create Ceph manually on a zvol.
  10. [SOLVED] ZFS Raid

    But it is possible to 'upgrade' to RAID10.
  11. Creating CT on ZFS storage failed!

    You must do what you must; you just need to adjust your needs to Proxmox.
  12. Creating CT on ZFS storage failed!

    Set the mountpoint. Proxmox does not look at the current mountpoint of a ZFS volume; it has static, built-in settings. Example: if you want to use the pool zfs_pool/vm_volume, its mountpoint must be /zfs_pool/vm_volume/, not something like /my_vm_directory/ (see the mountpoint sketch after these results).
  13. PVE 5.2: ZFS "No space left on device" Avail 4,55T

    Regarding bad I/O: your RAIDZ2 is bad for that. Look at the picture for the allocation overhead. Are you running out of space inside the VM (KVM?) or on the host?
  14. [SOLVED] ZFS Raid

    First, can you post the output of # zpool status
  15. [SOLVED] ZFS Raid

    From a ZFS stripe you can convert to a ZFS stripe of mirrors, like RAID10; you only need to attach disks to the existing disks (see the attach sketch after these results). BTW, google RAID controllers and ZFS to see why they work badly together.
  16. Meltdown and Spectre Linux Kernel fixes

    News about bugs
    # ./spectre-meltdown-checker.sh
    Spectre and Meltdown mitigation detection tool v0.37+
    Checking for vulnerabilities on current system
    Kernel is Linux 4.15.17-1-pve #1 SMP PVE 4.15.17-9 (Wed, 9 May 2018 13:31:43 +0200) x86_64
    CPU is Intel(R) Xeon(R) CPU E5-2603 v3 @ 1.60GHz...
  17. How to share data between containers?

    You have to share a host directory. Set the full path, like /lxc/110/mnt/Test777 (see the bind-mount sketch after these results).
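
For result 1, a minimal sketch of comparing snapshots visible from the console with those Proxmox has recorded; the VMID 100 and the dataset rpool/data/vm-100-disk-0 are hypothetical placeholders, not names from the original post.

    # zfs list -t snapshot -r rpool/data/vm-100-disk-0
    # cat /etc/pve/qemu-server/100.conf

The first command lists every snapshot of the zvol, including ones taken by hand from the console; the second shows only the snapshots Proxmox wrote into the VM config file.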
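
For result 6, a sketch of the forced import when the separate log device is gone, assuming a hypothetical pool name tank; -m lets zpool import proceed despite the missing log device, and -f forces the import.

    # zpool import -f -m tank

Data that only ever lived in the lost log device cannot be replayed, so check the pool with zpool status afterwards.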
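
For results 8 and 15, a sketch of converting the striped test pool into a RAID10-like stripe of mirrors by attaching one new device to each existing one; /zfs_test/file3 and /zfs_test/file4 are assumed extra backing files (they must exist and be at least as large as the originals) and are not part of the original example.

    # zpool attach test /zfs_test/file1 /zfs_test/file3
    # zpool attach test /zfs_test/file2 /zfs_test/file4
    # zpool status test

Once both attaches have resilvered, zpool status shows two mirror vdevs striped together, which is the ZFS equivalent of RAID10.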
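
For result 12, a sketch of setting the mountpoint Proxmox expects for the dataset named in that post and then verifying it.

    # zfs set mountpoint=/zfs_pool/vm_volume zfs_pool/vm_volume
    # zfs get mountpoint,mounted zfs_pool/vm_volume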
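
For result 17, a sketch of sharing a host directory with container 110 as a bind mount, treating /lxc/110/mnt/Test777 as the host-side path suggested in the post; the in-container path /mnt/Test777 and the mount point index mp0 are assumptions.

    # pct set 110 -mp0 /lxc/110/mnt/Test777,mp=/mnt/Test777

This writes an mp0: entry into /etc/pve/lxc/110.conf; another container that needs the same data can be given its own mount point entry pointing at the same host path.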