zpool status
zpool export rpool
zpool destroy rpool
zpool labelclear -f /dev/disk/by-id/xxx-disk1-to-disk4   ## for every disk and every partition
## and for safety:
wipefs /dev/sda ... sdd
parted /dev/disk/by-id/ata-HGST_HUS726020ALN610_xxx mklabel gpt
zpool create -n -O compression=on -o ashift=12 -f r10pool mirror /dev/disk/by-id/ata-HGST_HUS726020ALN610_00 /dev/disk/by-id/ata-HGST_HUS726020ALN610_01 mirror /dev/disk/by-id/ata-HGST_HUS726020ALN610_02 /dev/disk/by-id/ata-HGST_HUS726020ALN610_03
zpool create -O compression=on -o ashift=12 -f r10pool mirror /dev/disk/by-id/ata-HGST_HUS726020ALN610_00 /dev/disk/by-id/ata-HGST_HUS726020ALN610_01 mirror /dev/disk/by-id/ata-HGST_HUS726020ALN610_02 /dev/disk/by-id/ata-HGST_HUS726020ALN610_03
The work is done
I installed a single M.2 NVMe SSD in the small Fujitsu D3417-B Skylake mainboard with the newest BIOS
and installed Proxmox 4.3 on it with ext4, 33 GB for root.
There were no problems with booting (yeah!).
Only then did I connect the 4 hard disks (HGST with 4Kn sector size).
Thanks to spudger for the tip: I then deleted the old ZFS signatures:
Code:
zpool status
zpool export rpool
zpool destroy rpool
zpool labelclear -f /dev/disk/by-id/xxx-disk1-to-disk4   ## for every disk and every partition
## and for safety:
wipefs /dev/sda ... sdd
With parted I created a new partition table for every disk (I'm not sure if that is necessary):
Code:
parted /dev/disk/by-id/ata-HGST_HUS726020ALN610_xxx mklabel gpt
Then follows the big command to create the ZFS RAID 10.
No partitioning, just one large pool called rpool:
Code:
zpool create -n -O compression=on -o ashift=12 -f rpool mirror /dev/disk/by-id/ata-HGST_HUS726020ALN610_00 /dev/disk/by-id/ata-HGST_HUS726020ALN610_01 mirror /dev/disk/by-id/ata-HGST_HUS726020ALN610_02 /dev/disk/by-id/ata-HGST_HUS726020ALN610_03
By the way, the lowercase "-o" before compression in the wiki is wrong; compression is a filesystem property and needs an uppercase "-O", while the lowercase "-o" is for pool properties like ashift.
-n => Displays the configuration that would be used without actually creating the pool.
When everything looks alright, run it again without the "-n" option:
Code:
zpool create -O compression=on -o ashift=12 -f rpool mirror /dev/disk/by-id/ata-HGST_HUS726020ALN610_00 /dev/disk/by-id/ata-HGST_HUS726020ALN610_01 mirror /dev/disk/by-id/ata-HGST_HUS726020ALN610_02 /dev/disk/by-id/ata-HGST_HUS726020ALN610_03
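To double-check where the two options landed (a quick sketch, assuming the pool was created as above), the properties can simply be read back:
Code:
## ashift is a pool property (set with -o)
zpool get ashift rpool
## compression is a filesystem property (set with -O)
zfs get compression rpool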
And the last step I took via the GUI:
select - Datacenter - Storage => Add => ZFS => ID = Raid10, ZFS_Pool = rpool => Add
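The same step should also be possible from the shell with pvesm (a sketch; the storage ID is just the one from the GUI step above):
Code:
## CLI equivalent of Datacenter => Storage => Add => ZFS
pvesm add zfspool Raid10 -pool rpool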
The small problem I see is that backups and ISO files cannot be selected as content here.
Is there a way to also save ISO files and VZDump backups on this ZFS pool?
regards, maxprox
I would call the pool something else ("rpool" is by convention the root pool) - you can just export and import with a new name to do that.
You can store backups and ISO files on it by creating a separate dataset (e.g., "zfs create yourpool/backups") and then configuring a directory storage for the mount point ("/yourpool/datasetname" by default). I would recommend using a different storage for backups though (ideally on a different machine) - locally you can already do snapshots which are way more convenient IMHO..
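A minimal sketch of that suggestion (the names are placeholders; the dataset mounts under /yourpool/backups by default):
Code:
zfs create yourpool/backups
## directory storage pointing at the dataset's mountpoint, for backups and ISO images
pvesm add dir backups -path /yourpool/backups -content backup,iso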
zpool export rpool
root@oprox:# zpool import rpool r10pool
cannot import 'rpool': no such pool available
zpool import -f rpool r10pool
root@oprox:~# zpool status
pool: r10pool
state: ONLINE
scan: none requested
config:
        NAME                             STATE     READ WRITE CKSUM
        r10pool                          ONLINE       0     0     0
          mirror-0                       ONLINE       0     0     0
            ata-HGST_HUS726020ALN610_00  ONLINE       0     0     0
            ata-HGST_HUS726020ALN610_01  ONLINE       0     0     0
          mirror-1                       ONLINE       0     0     0
            ata-HGST_HUS726020ALN610_02  ONLINE       0     0     0
            ata-HGST_HUS726020ALN610_03  ONLINE       0     0     0
zfs create r10pool/dataset
root@oprox:~# zfs mount
r10pool /r10pool
r10pool/dataset /r10pool/dataset
root@oprox:~# ll /r10pool/dataset/
total 2.5K
drwxr-xr-x 5 root root 5 Dec 23 14:39 .
drwxr-xr-x 3 root root 3 Dec 23 14:37 ..
drwxr-xr-x 2 root root 2 Dec 23 14:39 dump
drwxr-xr-x 2 root root 2 Dec 23 14:39 images
drwxr-xr-x 4 root root 4 Dec 23 14:39 template
I noticed that the zpool was using /dev/sda2 and /dev/sdb2 for one mirror but /dev/sdc and /dev/sdd for the other mirror. Looking at the partition tables with fdisk I see two BIOS boot partitions (not unexpected) but also a "Solaris reserved 1" partition #9 on every drive (as shown here). The ones with the BIOS boot partition are 8 MB, the other two are 64 MB. Are these some kind of remnant from cylinder alignment? More to the point, do I need them at all if I use four identical disks with one partition each?
Also, the installer started the first partition at sector 2048, while fdisk defaults to 256. Is this a "just in case" thing or is there a good reason to skip that many?
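parted can report whether a partition start sits on the disk's optimal I/O boundary - a sketch, using one of the disks above as an example:
Code:
## "1 aligned" / exit status 0 means partition 1 starts on an optimal boundary
parted /dev/disk/by-id/ata-HGST_HUS726020ALN610_00 align-check optimal 1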
For the non-boot disks we leave the disk setup to ZFS - it seems ZFS sizes the reserved partition by a fixed sector count, so it ends up bigger on 4Kn disks. For the boot disks we always reserve 8 MB (like ZFS does with 512e disks).
I am not sure why you installed with ZFS (and then again with LVM-thin on other disks?) - you can just create a zpool with "zpool create" on any existing PVE installation.
Yes there is, have a look at the ubuntu wiki: https://wiki.ubuntuusers.de/Ubuntu_umziehen/

The intent here was to get Proxmox installed on (and booting from) a small but reliable ZFS array. Given that the 4.X installer will install to that, perhaps there is a recipe for moving just the boot to another device?
...
nvme0n1 259:0 0 119.2G 0 disk
├─nvme0n1p1 259:1 0 1007K 0 part
├─nvme0n1p2 259:2 0 127M 0 part
└─nvme0n1p3 259:3 0 119.1G 0 part
├─pve-root 251:0 0 29.8G 0 lvm /
├─pve-swap 251:1 0 4G 0 lvm
├─pve-data_tmeta 251:2 0 72M 0 lvm
│ └─pve-data 251:4 0 70.5G 0 lvm
└─pve-data_tdata 251:3 0 70.5G 0 lvm
└─pve-data 251:4 0 70.5G 0 lvm
root@oprox:~# dmesg | grep -i error
....
[ 0.581966] ACPI Error: [\_SB_.PCI0.LPCB.H_EC.ECAV] Namespace lookup failure, AE_NOT_FOUND (20150930/psargs-359)
[ 0.581970] ACPI Error: Method parse/execution failed [\_TZ.FNCL] (Node ffff880fed91f4d8), AE_NOT_FOUND (20150930/psparse-542)
[ 0.581976] ACPI Error: Method parse/execution failed [\_TZ.FN02._ON] (Node ffff880fed91f168), AE_NOT_FOUND (20150930/psparse-542)
[ 0.589970] ACPI Error: [\_SB_.PCI0.LPCB.H_EC.ECAV] Namespace lookup failure, AE_NOT_FOUND (20150930/psargs-359)
[ 0.589975] ACPI Error: Method parse/execution failed [\_TZ.FNCL] (Node ffff880fed91f4d8), AE_NOT_FOUND (20150930/psparse-542)
[ 0.589980] ACPI Error: Method parse/execution failed [\_TZ.FN02._ON] (Node ffff880fed91f168), AE_NOT_FOUND (20150930/psparse-542)
[ 0.610004] ACPI Error: [\_SB_.PCI0.LPCB.H_EC.ECAV] Namespace lookup failure, AE_NOT_FOUND (20150930/psargs-359)
$ smartctl /dev/nvme0 -x
SMART/Health Information (NVMe Log 0x02, NSID 0xffffffff)
Critical Warning: 0x00
Temperature: 45 Celsius
Available Spare: 100%
Available Spare Threshold: 10%
Percentage Used: 0%
Data Units Read: 1,303,319 [667 GB]
Data Units Written: 799,662 [409 GB]
Host Read Commands: 12,615,280
Host Write Commands: 5,916,573
Controller Busy Time: 92
Power Cycles: 23
Power On Hours: 373
Unsafe Shutdowns: 2
Media and Data Integrity Errors: 0
Error Information Log Entries: 0
Warning Comp. Temperature Time: 0
Critical Comp. Temperature Time: 0
Error Information (NVMe Log 0x01, max 64 entries)
No Errors Logged
root@oprox:~# cat /etc/cron.weekly/fstrim
#!/bin/sh
## trim the root / filesystem, which resides on the NVMe SSD
## /sbin/fstrim --all || true
LOG=/var/log/batched_discard.log
echo "*** $(date -R) ***" >> $LOG
/sbin/fstrim -v / >> $LOG
##/sbin/fstrim -v /home >> $LOG
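The script needs to be executable for run-parts/cron to pick it up; a quick way to test it once by hand (sketch):
Code:
chmod +x /etc/cron.weekly/fstrim
## run it manually and check the log
/etc/cron.weekly/fstrim
cat /var/log/batched_discard.log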
$ systemctl mask sleep.target suspend.target hibernate.target hybrid-sleep.target
$ swapoff -a
root@oprox:~$ zfs create -V 16G -b $(getconf PAGESIZE) \
-o logbias=throughput -o sync=always \
-o primarycache=metadata -o secondarycache=none \
-o com.sun:auto-snapshot=false r10pool/swap
$ mkswap -f /dev/zvol/r10pool/swap
$ swapon /dev/zvol/r10pool/swap
and the entry in /etc/fstab:
/dev/zvol/r10pool/swap none swap defaults 0 0
$ cat /proc/sys/vm/swappiness => 60
vm.swappiness = 1   (in /etc/sysctl.conf, so it persists across reboots)
$ sysctl vm.swappiness=1
$ cat /proc/swaps
$ cat /proc/sys/vm/swappiness
$ free -hm
root@oprox:~# cat /etc/modprobe.d/zfs.conf
## RAM to be used by the ZFS ARC (zfs_arc_min / zfs_arc_max):
## in bytes, here:
## 2 GB = 2147483648
## 4 GB = 4294967296
## 8 GB = 8589934592
## 12 GB = 12884901888
options zfs zfs_arc_min=2147483648
options zfs zfs_arc_max=12884901888
$ update-initramfs -u
$ arcstat.py 3 4
$ cat /proc/spl/kstat/zfs/arcstats
$ cat /sys/module/zfs/parameters/zfs_arc_max
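The modprobe options only take effect when the module is (re)loaded, hence the initramfs update and a reboot; the limit should also be adjustable on the fly (a sketch, using the 12 GB value from above):
Code:
## runtime change, not persistent across reboots
echo 12884901888 > /sys/module/zfs/parameters/zfs_arc_max
cat /sys/module/zfs/parameters/zfs_arc_max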