virtio is the best choice or, even better for ZFS zvols, virtio-scsi (select 'scsi' as the disk type and, under 'Options' > 'Controller Type', pick 'VIRTIO-SCSI'). It has TRIM/UNMAP support, so your zvols will shrink when you delete files in the guest (provided the guest supports TRIM, that is).
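For what it's worth, a quick way to verify from inside a Linux guest that discard actually reaches the zvol (the device name and mountpoint are only examples, adjust to your layout):

# lsblk -D /dev/sda
# fstrim -v /

Non-zero DISC-GRAN/DISC-MAX columns in the first command mean the disk advertises discard; after the second one, the referenced size of the zvol on the host should drop.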
hvc0 is Xen-specific.
You need to re-enable tty[1-6].
For example:
root@hosting:~# cat /etc/init/tty1.conf
# tty1 - getty
#
# This service maintains a getty on tty1 from the point the system is
# started until it is shut down again.
start on stopped rc RUNLEVEL=[2345] and (...
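If the tty2-6 conf files are missing on the guest, a rough way to recreate them on an Upstart-based system (just a sketch, adapt as needed):

root@hosting:~# for i in 2 3 4 5 6; do sed "s/tty1/tty$i/g" /etc/init/tty1.conf > /etc/init/tty$i.conf; done
root@hosting:~# start tty2

Repeat the "start" for the other ttys, or simply reboot.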
It's pretty easy to determine whether ZoL is the issue or not. Install SmartOS (if you still want virtualization) or any other Illumos-based distribution (e.g. OmniOS).
Restore the data and play with it.
Please note that SmartOS disables C-States, so, if it works, it may be a lead (did you try that on...
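If you want to test the C-state angle on the current Linux install as well, the usual approach (an assumption on my side, exact behaviour depends on BIOS and kernel) is to limit C-states via the kernel command line in /etc/default/grub, then run update-grub and reboot:

GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_idle.max_cstate=1 processor.max_cstate=1"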
My understanding is that you are blaming ZFS because it works with two RAM sticks, but not with four. So the constant is ZFS and the variables are the RAM sticks and the motherboard.
Is that correct?
That's too much data to grasp. What I would do is this.
1. Reboot the server
2. Take a snapshot (short SMART data)
3. Wait 12-24 hours
4. Take another snapshot
5. Do an "iostat -dm" (this will show you read & written data in MB since the last reboot)
6. Substract 2 from 4 and map it to 5 to...
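A sketch of what I mean for steps 2, 4 and 5, with /dev/sda as an example disk:

# smartctl -a /dev/sda > smart_sda_before.txt
# iostat -dm

Run the smartctl one again 12-24 hours later into a second file (that's step 4); the MB_read/MB_wrtn columns of iostat are totals since boot.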
When you copy the file to the storage pool there should be no activity on the log devices, because that is an async operation.
Therefore I assume that the 60MB/s and 45MB/s writes to sd[ab] are ARC evictions to L2ARC, which is pretty high.
Did you adjust any ZFS parameters? By default it writes to L2ARC at ~8MB/s.
Anyway, going back to the original issue: you need to replicate the mirrored setup on the real host, with SSDs, and check the writes again (iostat and zpool iostat at the same time). Also take smartctl output 1-2 days apart to map against it.
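To see whether the L2ARC feed rate was changed (assuming ZFS on Linux, where the module parameter is l2arc_write_max and 8388608 bytes is the ~8MB/s default), and to watch the pool view of the writes ("storage" is just an example pool name):

# cat /sys/module/zfs/parameters/l2arc_write_max
# zpool iostat -v storage 5

Run a plain "iostat -dmx 5" in another terminal at the same time so you can map the pool activity to the physical disks.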
You can create a file, but you will lose the (easy) incremental capabilities.
"zfs send" outputs a ZFS stream that you can redirect to a file, SSH or whatever.
The main issue is the "zfs send -i" part, because you will output small .zfs files for the incremental backups.
When restoring, you will need...
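A rough sketch of the full + incremental flow, with dataset, snapshot and file names made up by me:

# zfs snapshot tank/data@monday
# zfs send tank/data@monday > /backup/data-full.zfs
# zfs snapshot tank/data@tuesday
# zfs send -i @monday tank/data@tuesday > /backup/data-incr1.zfs
# zfs receive tank/restored < /backup/data-full.zfs
# zfs receive tank/restored < /backup/data-incr1.zfs

The incremental streams have to be received in order, on top of a dataset that already holds the previous snapshot.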
Proxmox is an option for a virtualization platform. When picking it, I think it is assumed that there is basic knowledge about virtualization and/or containers, networking and so on.
The free version is a "bring your own experience to the table" kind of deal. There is also the option to require professional...
No, you don't need "software raid". That is ZFS's job. You boot the standard installer, pick ZFS as the install type and pick two drives. It will mirror them and also install GRUB on both. I think it is almost exactly what you did before with 3 drives, just picking only 2.
After install do this:
zfs...
It looks almost exactly like the DL160. I think there is space for a 7mm SSD between the two iron sheets (above the hard drives): http://en.community.dell.com/cfs-file/__key/communityserver-discussions-components-files/956/6153.c1100.jpg
I think you have plenty of space under the cables coming from power...
What kind of 1U case do you have? What server?
No, the steps presented above are not OK. The install should be done in standard mode on the OS disks. Now that you've told me that you have only 4 slots, this is an issue. I do have an HP DL160 G6 with only 4 front slots, but there are 2 more SATA ports...
You can use 4 large SSDs for VM storage. Let's call them /dev/sdc /dev/sdd /dev/sde /dev/sdf. You will create the pool like this for a "raid10" setup:
# zpool create -o ashift=9 storage mirror /dev/sdc /dev/sdd mirror /dev/sde /dev/sdf
# zfs set atime=off storage
# zfs set compression=lz4...
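Once created, a quick sanity check (nothing fancy, just confirming the shape and the properties you just set):

# zpool status storage
# zfs get atime,compression storage

You should see two mirror vdevs with two SSDs each.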
Not good. The sector size of the flash devices is 512 bytes, so ashift should be 9. I assume this pool was created by the Proxmox installer. For raidz this means, at the very least, wasted space.
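To double-check what the drives actually report (the device name is just an example, use one of your pool members):

# blockdev --getss --getpbsz /dev/sdc
# smartctl -i /dev/sdc | grep -i sector

The first prints the logical and physical sector sizes; the second shows what the drive itself advertises.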
If you don't mind a suggestion, I would go with a pair of small SSDs (I use a single one, 32GB) for the root...