Search results

  1. Assign public IPs automatically to VMs

    IP assignment is usually a function of the guest OS not the host.
  2. Resize qcow2 - Reduce size of the disk

    The problem I always seem to run into is that the guest OS's filesystem needs to be reduced in size first, and you need the right utility to achieve this. Generally you can't just steal GBs from a guest OS without first using a shrinking tool on it, like GParted, to get everything out of the reclaimed space, and...
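The order described above matters: shrink inside the guest first, then shrink the image. A minimal sketch of the image-side step, assuming qemu-img is installed; the file name and sizes are examples, not from the thread:

```shell
#!/bin/sh
# Sketch only: demo.qcow2 stands in for the real VM disk image.
command -v qemu-img >/dev/null 2>&1 || exit 0  # skip where qemu-img is absent

qemu-img create -f qcow2 demo.qcow2 40G  # stand-in image to shrink

# Step 1 happens *inside the guest*: shrink partitions/filesystems (e.g. with
# GParted) so no data lives beyond the new size. Then, with the VM powered off:
qemu-img resize --shrink demo.qcow2 20G

qemu-img info demo.qcow2  # virtual size now reports 20G
```

The `--shrink` flag is required because shrinking is destructive if step 1 was skipped: anything past the new size is simply cut off.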
  3. creating central storage with 3 nodes - without extra SAN

    I think it's considered to still be in development, though it does support a 3 node mirror now without stacked layers, which could be good. I would be interested in hearing any reports of its performance on Proxmox.
  4. High SSD wear after a few days

    I have an 840 Evo in my gaming PC but it doesn't do enough to be a benchmark. Here is an endurance test on the 840 Pro; it comes up pretty well: http://techreport.com/review/27062/the-ssd-endurance-experiment-only-two-remain-after-1-5pb/2 The secret with commodity SSDs is to keep spares, and...
  5. High SSD wear after a few days

    Don't journal drives wear the fastest? Might not be a good reference point. What you need to do is work out if the volume of data being written to the SSD is what you expected, and if it's sent to the SSD in a way that maximizes its life. I know we don't have TRIM, but things like block size...
  6. High SSD wear after a few days

    Is there some way you can use zpool iostat or some other utility to double check and make sure the SMART reading is accurate?
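One way to run that cross-check, sketched below. The zpool and smartctl commands are the usual tools for it; the pool name, device path, and raw attribute value are made-up examples, and the 512-byte unit for attribute 241 is what Samsung consumer SSDs typically report, so verify it for your model:

```shell
#!/bin/sh
# Cross-check idea: compare pool-level write volume against the SMART counter.
# On the host you would run (names are examples):
#   zpool iostat -v rpool 5    # live per-vdev write bandwidth
#   smartctl -A /dev/sda       # raw value of attribute 241, Total_LBAs_Written
# Then convert the raw LBA count to bytes written:
LBAS_WRITTEN=1953125000        # example raw value from smartctl
tb_written=$(( LBAS_WRITTEN * 512 / 1000000000000 ))
echo "SMART claims ~${tb_written} TB written"
```

If the converted figure is wildly above what `zpool iostat` accumulates over a day, the SMART reading (or its unit) is the suspect.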
  7. migrating non-shared storage ZFS-backed VM

    What about if your disk image isn't a zvol, but just a raw or qcow2 image that is stored in a zfs filesystem directory, does that work just like non-zfs local storage?
  8. Slow performance with ZFS?

    For your interest, the Red drives are not the RE drives you mention; they are for home NAS users. WD's enterprise drives are the RE, SE, and XE; the home drives are Green, Red, Blue, and Black. The Green and Red are only 5400rpm or less because of their power saving; the Blue and Black are 7200rpm...
  9. Urgent: ZFS problem with guest VM Drives - Loosing connection to host-frozen etc

    Are there problems with consumer grade SSDs on Proxmox? I was thinking of buying some. Specifically Samsung 850 EVO.
  10. Slow performance with ZFS?

    The Red drives are slower than the Green; both are only 5400 rpm. Both are terrible options for ZFS. If you need speed, use the Black drives.
  11. Slow performance with ZFS?

    Not for L2ARC; only if you choose to run with SLOG, which benefits from a mirror. Remember that the SLOG should never be read unless there is a system crash: it's just a backup copy of data with the sync bit set, which sits along with everything else in the normal RAM queue to be asynchronously written...
  12. Proxmox 4.0 VE fresh install: can't shutdown VMs with host

    Send the UPS shutdown signals to the guests first, then the host.
  13. Proxmox 4.0 VE fresh install: can't shutdown VMs with host

    Some guests don't play cleanly with a shutdown of the host; it depends on their OS. I make it a rule to shut down each guest from within its own OS before finally issuing the host shutdown. Don't count on the host to do it for you.
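The ordering in these two replies can be scripted, for example from a UPS hook. A sketch, assuming a Proxmox node with the `qm` CLI; the VMIDs are examples, and `DRYRUN=echo` only prints the commands so you can review them first:

```shell
#!/bin/sh
# Shut each guest down cleanly before halting the host.
DRYRUN=echo   # set to empty on a real node so the commands actually run
for vmid in 100 101 102; do
    $DRYRUN qm shutdown "$vmid" --timeout 180  # clean in-guest shutdown
done
$DRYRUN shutdown -h now   # only after the guests are down
```

`qm shutdown` asks the guest OS (via ACPI or the guest agent) to stop, which is exactly the "from within its own OS" behaviour, just triggered centrally.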
  14. Latest Proxmox 4 update

    Is there a Changelog available?
  15. Slow performance with ZFS?

    With green drives you need to use a small SSD for L2ARC and possibly SLOG caches. 40 to 50MB/s is not surprising for green drives; even fast WD Black gaming drives are pushing to get up to 100MB/s and might give 80MB/s reliably. The SSD cache will help.
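Attaching those SSD devices to an existing pool is a one-liner each. A sketch, assuming a pool named tank and example partition paths (use stable /dev/disk/by-id names in practice):

```shell
#!/bin/sh
# Sketch only: needs a live ZFS pool called tank, so bail out elsewhere.
zpool list tank >/dev/null 2>&1 || exit 0

zpool add tank cache /dev/sdd1                 # L2ARC: no redundancy needed
zpool add tank log mirror /dev/sdb1 /dev/sdc1  # SLOG: a mirror is worthwhile
zpool status tank                              # verify the new cache/log vdevs
```

The asymmetry follows the reply above: losing an L2ARC device only costs cached reads, while an unmirrored SLOG lost at the wrong moment can cost the last sync writes.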
  16. Backup solutions other than vzdump ?

    Change the bwlimit in /etc/vzdump.conf to limit the strain on the node.
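For reference, the setting lives in /etc/vzdump.conf; the value below is an example (vzdump takes it in KiB/s):

```
# /etc/vzdump.conf
# Cap backup read bandwidth so running guests stay responsive.
# Example value: 51200 KiB/s is roughly 50 MiB/s.
bwlimit: 51200
```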
  17. Proxmox 4.0 ZFS disk configuration with L2Arc + Slog. 2nd opinion

    Most applications have asynchronous writes, which is the default. If you host some big databases, they will do synchronous writes. In a memory-constrained system like the OP's, L2ARC will significantly increase the amount of RAM required, by about 200 bytes per record, so you have to not go...
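That roughly-200-bytes-per-record figure adds up quickly; a back-of-envelope sketch using it (the exact header size varies by ZFS version, and the device size and recordsize here are examples):

```shell
#!/bin/sh
# Estimate how much ARC (RAM) the L2ARC headers themselves will pin.
L2ARC_GB=200       # size of the SSD cache device
RECORDSIZE_KB=8    # e.g. a zvol with an 8K volblocksize
records=$(( L2ARC_GB * 1024 * 1024 / RECORDSIZE_KB ))
overhead_mib=$(( records * 200 / 1024 / 1024 ))
echo "~${overhead_mib} MiB of RAM just for L2ARC headers"
```

With a larger recordsize (128K is typical for file data) the overhead shrinks proportionally, which is why small-block zvols are the painful case on a RAM-constrained box.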
  18. Proxmox 4.0 ZFS disk configuration with L2Arc + Slog. 2nd opinion

    You might be loading up the E3 a bit. I know they like to limit them to 32GB, and a lot of the support chips will only drive at SATA II speeds, so watch out for that. I like to give ZFS 1GB of ARC per VM, which sounds like it would be getting tight on your machine. If it were me I would be trying...
  19. [SOLVED] Backup REALLY slow after upgrade to Proxmox VE 4.0

    There is an error in your log which says:

        101: Nov 17 01:00:01 INFO: mode failure - some volumes does not support snapshots
        101: Nov 17 01:00:01 INFO: trying 'suspend' mode instead
        101: Nov 17 01:00:01 INFO: backup mode: suspend

    Where your previous log showed the snapshot was possible. Also...
  20. [SOLVED] Backup REALLY slow after upgrade to Proxmox VE 4.0

    What does the backup log say about the speed obtained?