Search results

  1. Migrating XEN LVM VM to Proxmox KVM

    virtio is the best, or, even better, for ZFS ZVOLs you will want virtio-scsi (select 'scsi' as the disk type and, under 'Options' > 'Controller Type', pick 'VIRTIO-SCSI'). It has TRIM/UNMAP support, so your zvols will shrink when deleting files in the guest (if the guest supports TRIM, that is).
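
    A minimal sketch of doing this from the Proxmox CLI, assuming a hypothetical VM ID 100 and a zvol-backed disk on a storage named 'local-zfs':

      # switch the VM to the VirtIO SCSI controller
      qm set 100 --scsihw virtio-scsi-pci
      # attach the disk as scsi0 with discard enabled so TRIM/UNMAP reaches the zvol
      qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on
      # inside the guest, a manual TRIM can be used to verify the zvol shrinks
      fstrim -av
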
  2. Migrating XEN LVM VM to Proxmox KVM

    hvc0 is Xen-specific; you need to re-enable tty[1-6]. For example:

      root@hosting:~# cat /etc/init/tty1.conf
      # tty1 - getty
      #
      # This service maintains a getty on tty1 from the point the system is
      # started until it is shut down again.
      start on stopped rc RUNLEVEL=[2345] and (...
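
    For reference, a sketch of a complete upstart getty job of that era, assuming an Ubuntu-style guest (adjust the tty name for tty2-tty6):

      # /etc/init/tty1.conf - respawn a getty on tty1 for runlevels 2-5
      start on stopped rc RUNLEVEL=[2345]
      stop on runlevel [!2345]

      respawn
      exec /sbin/getty -8 38400 tty1
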
  3. [BUG?] ZFS data corruption on Proxmox 4

    It's pretty easy to decide if ZoL is the issue or not. Install SmartOS (if you still want virtualization) or any other Illumos-based distribution (e.g. OmniOS). Restore the data and play with it. Please note that SmartOS disables C-States, so, if it works, it may be a lead (did you try that on...
  4. What is the procedure to replace failed HD in ZFS Raid Mirror conf?

    zpool add was a very bad idea. That adds a new "raid0" vdev instead of a mirror. The correct procedure is to do "zpool attach rpool /dev/sdb /dev/sdnew".
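
    A sketch of the full flow with hypothetical device names (the surviving disk is /dev/sdb, the replacement /dev/sdnew, the failed one /dev/sdold); the final detach is an assumption on top of the quoted advice, only needed if the failed disk is still listed in the pool:

      # attach the new disk as a mirror of the surviving one
      zpool attach rpool /dev/sdb /dev/sdnew
      # watch the resilver finish
      zpool status rpool
      # once resilvered, drop the failed disk if it still shows up
      zpool detach rpool /dev/sdold
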
  5. [BUG?] ZFS data corruption on Proxmox 4

    My understanding is that you are blaming ZFS because it works with 2 RAM sticks, but not with four. So the constant is ZFS and the variables are RAM sticks and motherboard. Is that correct?
  6. High SSD wear after a few days

    I think Proxmox is hardcoded with ashift=12 for rpool. rpool is hardly touched, so no big deal.
  7. High SSD wear after a few days

    That's too much data to grasp. What I would do is this:
      1. Reboot the server.
      2. Take a snapshot (short SMART data).
      3. Leave it 12-24 hours.
      4. Take another snapshot.
      5. Do an "iostat -dm" (this will show you read & written data in MB since the last reboot).
      6. Subtract 2 from 4 and map it to 5 to...
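
    A sketch of steps 2-5 as commands, assuming the SSD shows up as /dev/sda and exposes a Total_LBAs_Written or wear attribute (attribute names vary per vendor):

      # steps 2/4: snapshot the SMART attributes (run once now, once 12-24 hours later)
      smartctl -A /dev/sda | grep -i -e Total_LBAs_Written -e Wear
      # step 5: MB read/written per device since the last reboot
      iostat -dm
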
  8. High SSD wear after a few days

    The same, but I keep storage/vms and storage/containers separate. It is easy to do separate send/recv -R :) Just a matter of personal taste.
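
    A sketch of what that separation buys, with hypothetical dataset and host names:

      # recursive snapshot of just the VM datasets
      zfs snapshot -r storage/vms@nightly
      # replicate that whole subtree (child datasets and properties) in one go
      zfs send -R storage/vms@nightly | ssh backuphost zfs recv -F backup/vms
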
  9. ZFS 2x2TB HDD and SSD LOG + L2ARC - Slow writes, high IO Wait? Need your advice

    When you copy the file to the storage pool, there should be no activity on the log devices, because that is an async operation. Therefore I assume that the 60MB/s and 45MB/s writes to sd[ab] are ARC eviction to L2ARC, which is pretty high. Did you adjust ZFS parameters? By default it writes at ~8MB/sec to L2ARC.
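
    The ~8MB/sec figure matches the ZFS on Linux l2arc_write_max default; a sketch of how to inspect and raise it (value in bytes, 64MB here is an arbitrary example):

      # current L2ARC feed rate (defaults to 8388608 bytes = 8 MB/s)
      cat /sys/module/zfs/parameters/l2arc_write_max
      # raise it at runtime, e.g. to 64 MB/s
      echo 67108864 > /sys/module/zfs/parameters/l2arc_write_max
      # make it persistent across reboots
      echo "options zfs l2arc_write_max=67108864" >> /etc/modprobe.d/zfs.conf
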
  10. High SSD wear after a few days

    Anyway, going back to the original issue: you need to replicate the mirrored setup on the real host, with SSDs, and check the writes again (iostat and zpool iostat at the same time). Also take smartctl output 1-2 days apart to map against it.
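
    A sketch of running the two views side by side, assuming the data pool is called storage:

      # physical writes per block device, 5-second intervals
      iostat -dm 5
      # what ZFS itself reports writing, per vdev
      zpool iostat -v storage 5
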
  11. [BUG?] ZFS data corruption on Proxmox 4

    Is your CPU a Haswell? I'm asking because lots of people have issues with Haswell C-States.
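
    One common way to test that theory is to cap the C-states from the kernel command line; a sketch assuming a GRUB-based Proxmox host:

      # /etc/default/grub - keep the CPU out of the deep C-states
      GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=1 processor.max_cstate=1"
      # then regenerate the config and reboot
      update-grub
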
  12. High SSD wear after a few days

    You can create a file, but you will lose the (easy) incremental capabilities. "zfs send" outputs a ZFS stream that you can redirect to a file, SSH or whatever. The main issue is the zfs send -i part because you will output small .zfs files for incremental backups. When restoring, you will need...
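
    A sketch with hypothetical names, showing why the incremental chain gets awkward when it lives in plain files:

      # full stream into a file
      zfs snapshot storage/vms@full
      zfs send storage/vms@full > /backup/vms-full.zfs
      # incremental against the previous snapshot, a second (small) .zfs file
      zfs snapshot storage/vms@monday
      zfs send -i @full storage/vms@monday > /backup/vms-monday.zfs
      # restore: the files must be replayed in order
      zfs recv backup/vms < /backup/vms-full.zfs
      zfs recv backup/vms < /backup/vms-monday.zfs
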
  13. LXC and NAT

    Proxmox is an option for a virtualization platform. When picking it, I think it is assumed that there is basic knowledge about virtualization and/or containers, networking and so on. The free version is a "bring your own experience to the table". There is also the option to require professional...
  14. LXC and NAT

    The common wisdom is to use a "router" instance (VM) for this.
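
    A minimal sketch of what such a router VM does internally, assuming eth0 faces the public bridge and eth1 the internal one (interface names hypothetical):

      # let the VM forward packets between its two interfaces
      sysctl -w net.ipv4.ip_forward=1
      # masquerade everything leaving via the public side
      iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
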
  15. High SSD wear after a few days

    No, you don't need "software raid". That is ZFS's job. You boot the standard installer, pick the ZFS install type and pick two drives. It will mirror them and also install GRUB on both. I think it is almost exactly what you did before with 3 drives, but pick only 2. After install do this: zfs...
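
    The post is cut off there; a plausible sketch, assuming the same tuning the author applies to the data pool elsewhere in the thread and that the installer named the pool rpool:

      zfs set atime=off rpool
      zfs set compression=lz4 rpool
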
  16. High SSD wear after a few days

    It looks almost exactly like a DL160. I think there is space for a 7mm SSD between the two iron sheets (above the hard drives): http://en.community.dell.com/cfs-file/__key/communityserver-discussions-components-files/956/6153.c1100.jpg I think you have plenty of space under the cables coming from power...
  17. High SSD wear after a few days

    What kind of 1U case do you have? What server? No, the steps presented above are not OK. The install should be done in standard mode on the OS disks. Now that you've told me that you have only 4 slots, this is an issue. I do have a HP DL160 G6 with 4 front slots only, but there are 2 more SATA ports...
  18. High SSD wear after a few days

    You can use 4 large SSDs for VM storage. Let's call them /dev/sdc /dev/sdd /dev/sde /dev/sdf. You will create the pool like this for a "raid10" setup:

      # zpool create -o ashift=9 storage mirror /dev/sdc /dev/sdd mirror /dev/se /dev/sdf
      # zfs set atime=off storage
      # zfs set compression=lz4...
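
    The last line is cut off; a sketch of a plausible continuation, including registering the pool as Proxmox storage (the pvesm call is an assumption, not quoted from the post):

      # finish the property setting on the new pool
      zfs set compression=lz4 storage
      # make the pool usable for VM disks and container volumes in Proxmox
      pvesm add zfspool storage --pool storage --content images,rootdir
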
  19. High SSD wear after a few days

    Not good. The sector size of the flash devices is 512 bytes, so ashift should be 9. I assume this pool was created by the proxmox installer. For raidz this means at least wasted space. If you don't mind a suggestion, I would go with a pair of small SSDs (I use a single one, 32GB) for the root...
  20. High SSD wear after a few days

    zpool set cachefile=/etc/zfs/zpool.cache rpool
    zdb | grep ashift
