Proxmox 4 ZFS Install

matthew

Renowned Member
Jul 28, 2011
Is there a how-to on getting started with ZFS on Proxmox 4? A YouTube video?

I installed Proxmox 4 on a couple of SATA drives with ZFS RAID1 and downloaded the CentOS 7 container template to local storage. When I then tried to create a CentOS 7 container, I got this error:

Warning, had trouble writing out superblocks.TASK ERROR: command 'mkfs.ext4 -O mmp /var/lib/vz/images/100/vm-100-disk-1.raw' failed: exit code 144

So what steps do I need to take before this? I really wish the installer did this for me; I'm totally new to ZFS.
 
In ZFS you use the following command to create a virtual block device:

zfs create -V 8G rpool/mydisk

It will show up as /dev/zvol/rpool/mydisk, and from there you can partition/format mydisk with any kind of file system.
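The workflow above can be sketched as a short script. The pool name "rpool" and the volume name "mydisk" are just examples from this thread; the ZFS and mkfs steps are guarded so the script is a no-op on a machine without ZFS installed.

```shell
#!/bin/sh
# Sketch of the zvol workflow described above, assuming a pool
# named "rpool" already exists (as on a Proxmox ZFS install).
set -e

POOL=rpool
VOL=mydisk
DEV=/dev/zvol/$POOL/$VOL

if command -v zfs >/dev/null 2>&1; then
    # Create an 8 GiB virtual block device (zvol) inside the pool.
    zfs create -V 8G "$POOL/$VOL"
    # The zvol appears as an ordinary block device under /dev/zvol,
    # so it can be formatted with any filesystem.
    mkfs.ext4 "$DEV"
fi

echo "$DEV"
```

A zvol formatted this way behaves like any other block device, so it can also be mounted or handed to a guest as a raw disk.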

However, I am very new to Proxmox
 
So on the server I have:


# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 3.62T 859M 3.62T - 0% 0% 1.00x ONLINE -


# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/zvol/rpool/swap none swap sw 0 0
proc /proc proc defaults 0 0


# df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 3.2G 8.9M 3.2G 1% /run
rpool/ROOT/pve-1 3.5T 858M 3.5T 1% /
tmpfs 7.9G 43M 7.8G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
rpool 3.5T 128K 3.5T 1% /rpool
rpool/ROOT 3.5T 128K 3.5T 1% /rpool/ROOT
/dev/fuse 30M 12K 30M 1% /etc/pve
cgmfs 100K 0 100K 0% /run/cgmanager/fs


When I go in the GUI to Datacenter - Storage and add ZFS, it asks me to enter an ID and then select a ZFS pool. What do I use for these? The ZFS pool dropdown gives me the options: rpool, rpool/ROOT and rpool/ROOT/pve-1.
 
You can use the command zfs create rpool/[new dataset] to create a separate dataset, which you can later snapshot and back up.

Don't mix it with your ROOT and boot OS directories.
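Putting that advice together with the later posts in this thread, the CLI equivalent might look like the sketch below. The dataset name "rpool/storage" and storage ID "zfs-storage" are the example names used later in the thread, not anything Proxmox mandates, and each step is guarded so the script does nothing on a machine without ZFS or Proxmox tools.

```shell
#!/bin/sh
# Hedged sketch: create a dedicated dataset for guest storage,
# separate from rpool/ROOT, and register it with Proxmox VE.
set -e

DATASET=rpool/storage
STORAGE_ID=zfs-storage

if command -v zfs >/dev/null 2>&1; then
    # A child dataset of rpool, kept apart from the root filesystem.
    zfs create "$DATASET"
fi

if command -v pvesm >/dev/null 2>&1; then
    # CLI equivalent of Datacenter - Storage - Add - ZFS in the GUI.
    pvesm add zfspool "$STORAGE_ID" -pool "$DATASET" \
        -content images,rootdir
fi

echo "$STORAGE_ID on $DATASET"
```

Because the dataset is separate from rpool/ROOT, it can be snapshotted and sent/received independently of the host OS.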
 
Under Datacenter - Storage I created the ID "zfs-storage" and selected the ZFS pool "rpool". I then disabled Container and Disk image on local. Is this the correct way to do this? Is there a better way?
 
I think I did it that way. Now I have something like this after creating a 200GB CentOS 7 container. I also created /backups on a 2TB third drive in this machine. Does this look right?

Datacenter - Storage

backups : Directory : VZDump backup file : /backups
local : Directory : ISO Image, Container Template : /var/lib/vz
storage : ZFS : Disk image, Container

# zpool list
NAME SIZE ALLOC FREE EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
rpool 3.62T 1.32G 3.62T - 0% 0% 1.00x ONLINE -


# df -h
Filesystem Size Used Avail Use% Mounted on
udev 10M 0 10M 0% /dev
tmpfs 3.2G 8.9M 3.2G 1% /run
rpool/ROOT/pve-1 3.5T 1.1G 3.5T 1% /
tmpfs 7.9G 43M 7.8G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
rpool 3.5T 128K 3.5T 1% /rpool
rpool/ROOT 3.5T 128K 3.5T 1% /rpool/ROOT
/dev/fuse 30M 12K 30M 1% /etc/pve
cgmfs 100K 0 100K 0% /run/cgmanager/fs
/dev/sda1 1.8T 68M 1.7T 1% /backups
rpool/storage 3.5T 128K 3.5T 1% /rpool/storage
rpool/storage/subvol-100-disk-1 200G 268M 200G 1% /rpool/storage/subvol-100-disk-1


# df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
udev devtmpfs 10240 0 10240 0% /dev
tmpfs tmpfs 3284820 9020 3275800 1% /run
rpool/ROOT/pve-1 zfs 3753684608 1112576 3752572032 1% /
tmpfs tmpfs 8212040 43680 8168360 1% /dev/shm
tmpfs tmpfs 5120 0 5120 0% /run/lock
tmpfs tmpfs 8212040 0 8212040 0% /sys/fs/cgroup
rpool zfs 3752572160 128 3752572032 1% /rpool
rpool/ROOT zfs 3752572160 128 3752572032 1% /rpool/ROOT
/dev/fuse fuse 30720 12 30708 1% /etc/pve
cgmfs tmpfs 100 0 100 0% /run/cgmanager/fs
/dev/sda1 ext4 1922728752 68960 1824967736 1% /backups
rpool/storage zfs 3752572160 128 3752572032 1% /rpool/storage
rpool/storage/subvol-100-disk-1 zfs 209715200 274048 209441152 1% /rpool/storage/subvol-100-disk-1
 
How do I tell if compression is already enabled? How do I see the existing ZFS settings for my pools and datasets?
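For questions like this, ZFS properties can be inspected with `zfs get`. The sketch below shows the general pattern; the dataset names come from the listings earlier in the thread, and the commands are guarded so the script is harmless on a machine without ZFS.

```shell
#!/bin/sh
# Sketch: inspecting ZFS properties on the pool from this thread.
POOL=rpool

if command -v zfs >/dev/null 2>&1; then
    # Show the compression property for the pool and all children:
    zfs get -r compression "$POOL"
    # Show every property of a single dataset:
    zfs get all "$POOL/ROOT/pve-1"
    # To enable lz4 compression (inherited by child datasets):
    # zfs set compression=lz4 "$POOL"
fi

echo "inspected $POOL"
```

`zpool get all rpool` does the same at the pool level, for pool-wide settings rather than per-dataset ones.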
 
I decided to test this issue on Proxmox 3.4, and it seems I can create OpenVZ containers on local /var/lib/vz just fine, without creating a ZFS storage plugin.

I also no longer get this error - everything just works: Warning, had trouble writing out superblocks.TASK ERROR: command 'mkfs.ext4 -O mmp /var/lib/vz/images/100/vm-100-disk-1.raw' failed: exit code 144

If I reinstall Proxmox 4.1, I seem to get that error with or without the ZFS plugin - but then again, containers are LXC now. So is it maybe an LXC issue?