Recent content by tufkal

  1. New Install, Doing Storage Right (I hope?)

    Your example is so simple it makes me want to copy & paste it to a Wiki somewhere. Makes perfect sense. For some reason I thought that with SSDs, TRIM, and parity, there was some underlying big no-no for using SSDs in a parity RAID. I'm overthinking things now...
  2. New Install, Doing Storage Right (I hope?)

    ...ok something is bugging me, and since I have the attention of someone who knows quite a bit I'll just ask. I'm concerned about doing a RAIDZ of SSDs, no matter what the HBA/card. RAIDZ is glorified RAID5. Won't all that parity I/O tear those SSDs up super fast? Maybe I was not clear...
  3. New Install, Doing Storage Right (I hope?)

    OK ok ok!! I'm going to buy an HBA, your case is rock solid~ It's a reality check that a <$50 card can do everything I want from the 9650, but better. The plan is to ZFS RAIDZ the SSDs on an H310, and ZFS RAIDZ2 the spinners on another H310. There really are only a few VMs...
  4. New Install, Doing Storage Right (I hope?)

    Very good information; let me see if I can clear it up for you so you can make a proper recommendation. - 8 VMs, a mix of Debian and Windows 7, doing various activities (PBX, file server, managed backup server, file sync node, etc.) - Incredibly slow with the current setup of RAIDZ2 across the 8 spinners...
  5. New Install, Doing Storage Right (I hope?)

    Great code examples, now I'm torn... Do I go that route with a full big ZFS pool with SSD caches, or do I use the RAID card's RAID6 on the spinners and LVM the SSDs? All of the machines are idle 95% of the time, and only a few of them need access to the big 8-drive storage. If I...
  6. New Install, Doing Storage Right (I hope?)

    A bit of Googling has me interested. Instead of using SSDs for the VMs and 7200s for the data, this concept of ZIL/ARC and mixing them all together to get the best of both worlds. Unfortunately I know nothing about how to implement it. Seems like a lot of ZFS misinformation out there since...
  7. New Install, Doing Storage Right (I hope?)

    Greetings all, I currently have a small production server running <10 VMs that is having performance problems, most notably I/O wait (WA). All of my VMs are backed up to an NFS share on a NAS weekly, so my plan is to tear the box apart next weekend, reinstall PVE, and restore the backups...
  8. vzdump Output Explained? Finding bottleneck

    Bad choice of words, heh! When I said I'd trim it down, I meant I was going to shrink it down, not just run the literal TRIM process. If I am reading the documentation properly, once I have used fstrim and all the free space has been TRIMmed, I simply use 'qemu-img convert' and the resulting file...
  9. vzdump Output Explained? Finding bottleneck

    Thank you. While you and HBO both said the same thing, your answer laid out the process much better and is what I needed to know. The VM I am having the 'too long' backup problem with is indeed provisioned much larger than the data that resides on it, so I now see it spends a lot of time processing...
  10. vzdump Output Explained? Finding bottleneck

    Unfortunately, the start of the log doesn't show anything like read/write, and based on the numbers I don't think that's it. Here's a full log of a very small VM for reference: pastebin.ca/3968200 (gotta add h t t p s : / /, the forum won't let me post links). If I knew what those numbers meant...
  11. vzdump Output Explained? Finding bottleneck

    I have one particular VM that is taking a huge amount of time to back up, and I'm trying to figure out what the problem is. To start, it would be helpful if I understood the backup log. Example: INFO: status: 90% (1932821856256/2147483648000), sparse 61% (1311810846720), duration 52643, 585/0...
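
The status line quoted in that last post packs several counters into one string. As a rough sketch of how to read it (this is my interpretation of the field layout, not an official spec — in particular, treating the trailing 585/0 as read/write rates in MB/s is an assumption):

```python
import re

# One vzdump progress line, copied from the post above.
line = ("INFO: status: 90% (1932821856256/2147483648000), "
        "sparse 61% (1311810846720), duration 52643, 585/0")

# Assumed layout: percent (bytes_done/bytes_total), sparse percent
# (sparse_bytes), duration in seconds, then read/write rates.
m = re.search(
    r"status: (\d+)% \((\d+)/(\d+)\), sparse (\d+)% \((\d+)\), "
    r"duration (\d+), (\d+)/(\d+)",
    line,
)
done, total = int(m.group(2)), int(m.group(3))
sparse, duration = int(m.group(5)), int(m.group(6))

GiB = 2 ** 30
print(f"processed {done / GiB:.1f} of {total / GiB:.1f} GiB")
print(f"sparse (skipped as holes/zeros): {sparse / GiB:.1f} GiB")
print(f"average pace: {done / duration / 2**20:.0f} MiB/s over {duration} s")
```

On these numbers the backup has walked roughly 1800 GiB of a 2000 GiB image at about 35 MiB/s, with about 1220 GiB of it sparse — consistent with a thinly used but hugely provisioned disk taking a long time to scan.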