Installing Appliance-Based VMs with a ZFS-local Installation

juiceman84

Active Member
Jan 27, 2017
Hi Proxmox Forum,

*This is my first time posting, so please be gentle if I do not abide by all the rules*

I searched through the forum but could not find an answer to this question, although the link below was very helpful:

https://forum.proxmox.com/threads/storage-local-and-local-zfs.31761/#post-157887

I am looking for some guidance on how to install a KVM FortiGate in Proxmox. Prior to installing ZFS, I used a normal directory for my VM image storage. When creating a VM, I would create a "placeholder" VM disk, then scp the disk image for the VM to the Proxmox server, start up the VM, and be good to go.
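Roughly, that old workflow looked like this (VM ID 100, the image file name, and the default "local" directory storage path are just examples, not my actual values):

scp fortios.qcow2 root@pve:/var/lib/vz/images/100/vm-100-disk-1.qcow2    # overwrite the placeholder disk, then start the VM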

Now with local ZFS, I am unable to directly access the VM's disk from the file system: it appears to live in its own ZFS dataset that is not mounted into the file system. For example:

root@pve:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     20.4G   249G   104K  /rpool
rpool/ROOT                 719M   249G    96K  /rpool/ROOT
rpool/ROOT/pve-1           719M   249G   719M  /
rpool/backups               96K   249G    96K  /rpool/backups
rpool/data                5.76G   249G    96K  /rpool/data
rpool/data/vm-101-disk-1  5.76G   249G  5.76G  -                <- not mounted / not accessible from SSH
rpool/images              3.18G   249G  3.18G  /rpool/images    <- mounted and accessible from SSH

What I did manually was create a separate pool (see "rpool/images") and add it as a "Directory" storage to make it accessible from Proxmox. Once I did this, I was able to see the directory within Proxmox, similar to:

root@pve:/rpool/images/images/100# ls
vm-100-disk-1.qcow2 vm-100-disk-2.qcow2

root@pve:/rpool/images/images/100# cd ..


As you can see below, I can access the "101" directory, but when I list the contents of that directory, it shows up blank.

root@pve:/rpool/images/images# cd 101
root@pve:/rpool/images/images/101# ls -al
total 1
drwxr----- 2 root root 2 Jan 19 15:37 .
drwxr-xr-x 6 root root 6 Jan 25 16:39 ..
root@pve:/rpool/images/images/101#
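For completeness, the workaround itself roughly came down to the following commands (the storage name "images" matches my config below, but the exact options are from memory and may not be exact):

zfs create rpool/images
pvesm add dir images --path /rpool/images --content images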


I am fine with the workaround, but the major problem is the storage performance of my VM when I install it on a directory on top of the ZFS pool. Below is the content of my VM's configuration file:

root@pve:/etc/pve/qemu-server# cat 100.conf
bootdisk: scsi0
cores: 1
ide2: none,media=cdrom
memory: 1024
name: fortios56
net0: virtio=B6:CB:87:CF:C8:63,bridge=vmbr0
net1: virtio=BE:FB:F1:5F:28:63,bridge=vmbr0,tag=4000
numa: 0
onboot: 1
ostype: l26
scsi0: images:100/vm-100-disk-2.qcow2,cache=writeback,size=2G
scsi1: images:100/vm-100-disk-1.qcow2,cache=writeback,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=e7da37a1-0714-4339-9c71-0dc3598112d5
sockets: 1

The details of my install are as follows:


Hardware:
Dell R710 with H200 (Flashed to IT Mode)
2 x 300GB 10K HDDs
32 GB of RAM and dual x5660
ZFS

I am currently on Proxmox VE 4.4 (kernel 4.4.35-1-pve):
==============================
root@pve:~# pveversion -v
proxmox-ve: 4.4-76 (running kernel: 4.4.35-1-pve)
pve-manager: 4.4-1 (running version: 4.4-1/eb2d6f1e)
pve-kernel-4.4.35-1-pve: 4.4.35-76
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-101
pve-firmware: 1.1-10
libpve-common-perl: 4.0-83
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-70
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.0-9
pve-container: 1.0-88
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.6-2
lxcfs: 2.0.5-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
==============================

root@pve:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     20.4G   249G   104K  /rpool
rpool/ROOT                 719M   249G    96K  /rpool/ROOT
rpool/ROOT/pve-1           719M   249G   719M  /
rpool/backups               96K   249G    96K  /rpool/backups
rpool/data                5.76G   249G    96K  /rpool/data
rpool/data/vm-101-disk-1  5.76G   249G  5.76G  -
rpool/images              3.18G   249G  3.18G  /rpool/images
rpool/iso                 2.29G   249G  2.29G  /rpool/iso
rpool/swap                8.50G   257G    64K  -
 
Hi,

rpool/data/vm-101-disk-1 5.76G 249G 5.76G
This is a zvol, i.e. an emulated block device.
You can access it when the VM is not running.

(see "rpool/images")
This is not a pool; it is a dataset within rpool.

We recommend using the ZFS pool plugin rather than the directory plugin,
and using no cache with the ZFS pool plugin; performance is better that way.

In general, if you keep the root pool and VM data on the same pool, you lose performance.
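If such a storage is not already defined (the installer usually creates one on rpool/data), it can be added like this; the storage name "local-zfs" and the options are only an example:

pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir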
 
Hi Wolfgang,

Thanks a ton for your response! I will give this a try when I get access back to my environment and confirm.

I agree with what you mentioned about performance; it is totally true. Running a VM on a native zvol with no cache is way faster than running it on a directory created on top of the zpool with writeback enabled.
 
Hi Wolfgang,

I finally got around to attempting to mount this VM's disk after creation, but I am running into some issues. I have been trying to Google my way through this one, but to no avail.

When I attempt to mount the zfs dataset, I am met with the following:

root@pve1:/rpool/data# zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  556G   719M   555G         -     0%     0%  1.00x  ONLINE  -

root@pve1:/rpool/data# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     9.21G   529G    96K  /rpool
rpool/ROOT                 717M   529G    96K  /rpool/ROOT
rpool/ROOT/pve-1           717M   529G   717M  /
rpool/data                 160K   529G    96K  /rpool/data
rpool/data/vm-100-disk-1    64K   529G    64K  -
rpool/swap                8.50G   538G    64K  -

root@pve1:/rpool/data# zfs set mountpoint=/tmp/vm100 rpool/data/vm-100-disk-1
cannot set property for 'rpool/data/vm-100-disk-1': 'mountpoint' does not apply to datasets of this type

I am unsure why I am getting this message. Do you have any insight into how I can mount this type of dataset?

Thanks!
 
By the way, I did some more research and I believe I am starting to understand why this potentially cannot be mounted. According to the following link, there are two types of datasets that can be created: filesystems and volumes. I do not know how to confirm it, but I suspect that the type Proxmox creates is a volume.
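(If I understand the ZFS tools correctly, this could be checked with the command below; the dataset name is taken from my zfs list output above.)

zfs get type rpool/data/vm-100-disk-1    # should report "volume" rather than "filesystem"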

With that said, what I am trying to accomplish is to create a virtual appliance in Proxmox from a qcow2 file provided by a vendor. I have no problem converting that file from qcow2 to raw, but I do not know what to do with it once it is in that format. My original thought was to get access to the existing raw file and overwrite it, but now I am not sure if that is possible.
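(For reference, the conversion itself is just something like the following; the file names are examples.)

qemu-img convert -f qcow2 -O raw fortios.qcow2 fortios.raw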

Please let me know your thoughts when you get a chance.

Thanks!
 

Yes, ZFS has file system datasets and zvol datasets. We use zvols for VMs, as they are exposed as block devices via the kernel (and you then put a partition table / file system / ... on them inside the VM).
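On the host, such a zvol shows up as a block device under /dev/zvol/, for example:

ls -l /dev/zvol/rpool/data/vm-100-disk-1    # symlink to a /dev/zdN device node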

If you want to convert an existing raw or qcow2 disk image, you can follow these steps (a rough CLI sketch follows after the list):
  1. move/copy image to correct place on a directory storage (e.g., /var/lib/vz/images/VMID/vm-VMID-disk-1.qcow2 , where VMID is the ID of your VM)
  2. run "qm rescan -vmid VMID" to make PVE check for non-referenced disks for this VM ID (now the disk should be added as unused disk in the VM configuration)
  3. add the disk as used disk (in the GUI, simply double click on the unused disk entry and select the options you want)
  4. move the disk to your ZFS storage (in the GUI, use the "move disk" button, on the CLI, use "qm move_disk ...")
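A rough CLI sketch of those steps (VMID 100, the image file name, and the storage names "local" and "local-zfs" are placeholders for your actual values):

mkdir -p /var/lib/vz/images/100                          # directory storage path for VMID 100
cp fortios.qcow2 /var/lib/vz/images/100/vm-100-disk-1.qcow2
qm rescan -vmid 100                                      # disk now shows up as "unused" in the config
qm set 100 -scsi1 local:100/vm-100-disk-1.qcow2          # attach the unused disk (CLI variant of step 3)
qm move_disk 100 scsi1 local-zfs                         # step 4: move it onto the ZFS storage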
 
Hi Fabian,

This is outstanding and worked absolutely as you described!

Thanks for responding and for providing the steps to get this accomplished!
 
