Search results

  1. High SSD wear after a few days

     ZVOL and "raw" VM files: there is no actual file, the volume is a device. Also, do not use raidz (worse, z2/z3) on SSDs, as it appears to have huge write amplification: https://groups.google.com/a/zfsonlinux.org/forum/#!topic/zfs-discuss/hUlryHtJMnw
     1. Use a "raid10"-like setup, which is striped...
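
    For reference, a striped-mirror ("raid10"-style) pool could be created along these lines; the pool and device names here are placeholders:

      # two mirror vdevs; ZFS stripes writes across them
      zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd
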
  2. High SSD wear after a few days

     That's strange. For each ~1MB written to the pool, ~120MB reach the drives. In a raidz config this would be expected to be ~3x. Don't use attribute 177 to count the writes, because an SSD has write amplification. Attribute 241 gives you the total number of 512-byte LBAs written, so ~820GB on each drive. For a...
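
    To read those attributes yourself (the drive path and the raw value below are only illustrative):

      smartctl -A /dev/sda | grep -E '^(177|241)'
      # attribute 241 counts 512-byte LBAs:
      # 1601562500 LBAs x 512 bytes ≈ 820 GB
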
  3. High SSD wear after a few days

     You seem to have a high write I/O profile. Udo's iostat returned 161GB written on each SSD, 61GB read, and 2MB/sec writes since the last reboot. Btw, did you buy these drives new? What's your current smartctl status (since 13 hours ago on your first post)?
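
    A quick way to re-check both figures (device name assumed):

      smartctl -a /dev/sda   # full SMART report, including the wear attributes
      iostat -m              # MB read/written per device since boot
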
  4. kernel module fuse for lxc

     It is built into the kernel, not a separate module:
     # grep fuse /lib/modules/4.2.3-2-pve/modules.builtin
     kernel/fs/fuse/fuse.ko
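
    The same check works on any kernel version; a sketch using the standard module path layout:

      grep fuse "/lib/modules/$(uname -r)/modules.builtin"
      # a hit here means the feature needs no modprobe; an empty result
      # suggests it ships as a loadable module instead
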
  5. Migrating XEN LVM VM to Proxmox KVM

     I don't know. If it is systemd, I think with systemctl --failed. If upstart, I don't know. Check the logs, check for tty[1-6] services, and so on.
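
    On a systemd guest, those checks might look like this:

      systemctl --failed                   # units that failed during boot
      systemctl status getty@tty1.service  # the per-VT login services
      journalctl -b -u getty@tty1.service  # log for the current boot
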
  6. ZFS 2x2TB HDD and SSD LOG + L2ARC - Slow writes, high IO Wait? Need your advice

     iostat still doesn't show /dev/sd? activity at all. Please try with iostat -kxz 1
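
    For reference, what those flags do:

      iostat -kxz 1   # -k: KB units, -x: extended stats, -z: hide idle devices, repeat every 1s
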
  7. Migrating XEN LVM VM to Proxmox KVM

     If you can ssh to it, check why getty didn't start on the VTs. I don't know what distribution you use, and this check is not generic across distributions.
  8. ZFS 2x2TB HDD and SSD LOG + L2ARC - Slow writes, high IO Wait? Need your advice

     Of course there are good and bad experiences at any given fragmentation level. And of course it was about free-space fragmentation, because we're talking about slow write performance. Fragmented files do not affect write performance, but fragmented free space does (more time to hunt for places to write...
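
    ZFS reports free-space fragmentation per pool directly; the pool name below is taken from later posts in the thread:

      zpool list -o name,capacity,fragmentation storagepool
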
  9. ZFS 2x2TB HDD and SSD LOG + L2ARC - Slow writes, high IO Wait? Need your advice

     It is not that bad. The 5-second txg flush period is also there so that writes can be re-ordered to be more sequential. File copying outside or inside the VM is asynchronous AFAIK, except when you run the whole VM in sync mode or set sync=always on its ZVOL. One issue might be the (pretty) big...
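
    Two quick checks along those lines (the ZVOL name is hypothetical):

      zfs get sync storagepool/vm-100-disk-1           # sync=always forces synchronous writes
      cat /sys/module/zfs/parameters/zfs_txg_timeout   # the txg flush period, 5s by default
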
  10. Migrating XEN LVM VM to Proxmox KVM

     Send ctrl-alt-f1, f2, f3... from the console GUI.
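
    The same keystrokes can also be sent from the host shell; assuming VM ID 100:

      qm sendkey 100 ctrl-alt-f1   # switch the guest to virtual terminal 1
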
  11. ZFS 2x2TB HDD and SSD LOG + L2ARC - Slow writes, high IO Wait? Need your advice

     Please re-do a file copy test (not dd from /dev/zero, because you are using compression) and capture the output of iostat during that operation.
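
    One possible way to run that test (file and target paths are examples):

      iostat -kxz 1 > /tmp/iostat.log &   # collect stats in the background
      cp /root/big.iso /storagepool/      # real data, unlike zeros, which compress away
      kill $!                             # stop iostat when the copy finishes
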
  12. which raid

     I was just noticing it. I'm not using it at all, as I'm all ZFS.
  13. Shrink disk size

     You've just broken your VM disk. You need to shrink the filesystem inside the VM first and then shrink the raw device. What you did was blindly copy a 100GB disk to a 30GB disk and then wonder why it is broken.
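
    A sketch of the safe order, assuming an ext4 guest and a raw image file (names and sizes are examples only):

      # 1) inside the VM: shrink the filesystem below the target size
      resize2fs /dev/vda1 28G
      # 2) on the host: only then shrink the image itself
      qemu-img resize --shrink vm-100-disk-0.raw 30G
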
  14. Migrating XEN LVM VM to Proxmox KVM

     Your image is a partition, not a disk. Your best bet would be to create a VM with a disk larger than your partition and then:
     - attach your partition file to the VM as a secondary disk (edit the /etc/pve/qemu-server/<VMID>.conf file)
     - boot a Live Linux ISO
     - partition the main VM disk
     - dd your...
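
    The final step would then copy the partition image onto the freshly created partition; a sketch with assumed device names, where the attached image shows up as the secondary disk /dev/sdb:

      dd if=/dev/sdb of=/dev/sda1 bs=1M status=progress
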
  15. which raid

     It looks to me like the current fdisk (proxmox, jessie) does know about GPT:
     root@hypervisor:~# fdisk -l /dev/sdm
     Disk /dev/sdm: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 512 bytes
     I/O size...
  16. ZFS 2x2TB HDD and SSD LOG + L2ARC - Slow writes, high IO Wait? Need your advice

     Copy a big ISO file into that folder while collecting iostat data?
  17. ZFS 2x2TB HDD and SSD LOG + L2ARC - Slow writes, high IO Wait? Need your advice

     The /storagepool folder is a ZFS dataset in the same pool (unlike your VM disks, which are ZVOLs).
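
    To see which is which, something like this should work:

      zfs list -t filesystem,volume -o name,type   # datasets vs. ZVOLs in one listing
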
  18. ZFS 2x2TB HDD and SSD LOG + L2ARC - Slow writes, high IO Wait? Need your advice

     If you copy a big file into the /storagepool folder, do things look better?
  19. ZFS 2x2TB HDD and SSD LOG + L2ARC - Slow writes, high IO Wait? Need your advice

     zpool list
     zpool get all storagepool
     zfs list -o name,compression,recordsize
