Hi Proxmox Forum,
*This is my first time posting, so please be gentle if I do not abide by all the rules*
I searched the forum but could not find an answer to this question, although the link below was very helpful:
https://forum.proxmox.com/threads/storage-local-and-local-zfs.31761/#post-157887
I am looking for some guidance on how to install a KVM FortiGate in Proxmox. Prior to installing ZFS, I used a plain directory for my VM image storage: when creating the VM, I would create a "placeholder" VM disk, scp the real disk image to the Proxmox server over the placeholder, start the VM, and be good to go.
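That old workflow looked roughly like this (from memory; /var/lib/vz is the default "local" directory storage, and the VM ID and image name are just examples):
# create VM 100 in the GUI with a small placeholder disk, then from my workstation:
scp fortios.qcow2 root@pve:/var/lib/vz/images/100/vm-100-disk-1.qcow2
# back on the Proxmox host, boot the VM off the copied image:
qm start 100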
Now with local ZFS, I am unable to access the VM's disk through the file system at all: it appears to live in its own ZFS dataset that is not mounted anywhere (see my notes after the listing). For example:
root@pve:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     20.4G   249G   104K  /rpool
rpool/ROOT                 719M   249G    96K  /rpool/ROOT
rpool/ROOT/pve-1           719M   249G   719M  /
rpool/backups               96K   249G    96K  /rpool/backups
rpool/data                5.76G   249G    96K  /rpool/data
rpool/data/vm-101-disk-1  5.76G   249G  5.76G  -              <- not mounted/not accessible from SSH
rpool/images              3.18G   249G  3.18G  /rpool/images  <- mounted and accessible from SSH
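From what I have read, a VM disk on ZFS storage is a zvol, which never gets a mountpoint but does show up as a block device under /dev/zvol, so presumably the image has to be written onto it directly rather than copied over as a file. I am not sure this is the supported approach (the device path below is inferred from the dataset name):
# the zvol is a block device, not a file in a mounted file system:
ls -l /dev/zvol/rpool/data/vm-101-disk-1
# so an image would apparently have to be written onto it directly, e.g.:
qemu-img convert -O raw fortios.qcow2 /dev/zvol/rpool/data/vm-101-disk-1
Is that the intended way, or is there something cleaner?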
As a workaround, I manually created a separate dataset (see "rpool/images" above) and added it to Proxmox as a "directory" storage; the rough commands are sketched after the listing below. Once I did this, I was able to see the directory within Proxmox, similar to:
root@pve:/rpool/images/images/100# ls
vm-100-disk-1.qcow2 vm-100-disk-2.qcow2
root@pve:/rpool/images/images/100# cd ..
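For reference, the workaround setup was roughly this (commands from memory; "images" is just the storage ID I picked):
# create the dataset, which gets mounted at /rpool/images:
zfs create rpool/images
# register it with Proxmox as a plain directory storage for VM images:
pvesm add dir images --path /rpool/images --content images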
As you can see below, I can cd into the "101" directory, but listing its contents shows it is empty; after the listing I note how I have been tracking where each disk actually lives.
root@pve:/rpool/images/images# cd 101
root@pve:/rpool/images/images/101# ls -al
total 1
drwxr----- 2 root root 2 Jan 19 15:37 .
drwxr-xr-x 6 root root 6 Jan 25 16:39 ..
root@pve:/rpool/images/images/101#
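To keep track of where each VM disk really lives, I have been resolving volume IDs from the VM config with pvesm (the volume ID here is taken from my 100.conf below; I assume pvesm handles zvol-backed volumes the same way, returning a /dev/zvol path):
# resolve a volume ID to its path on the host:
pvesm path images:100/vm-100-disk-1.qcow2
# on my box this prints /rpool/images/images/100/vm-100-disk-1.qcow2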
I am fine with the workaround, but the major problem is the storage performance of my VM when its disks sit as qcow2 files on the directory backed by the ZFS dataset. Below is the output of my VM's configuration file, followed by a crude way to gauge the slowdown:
root@pve:/etc/pve/qemu-server# cat 100.conf
bootdisk: scsi0
cores: 1
ide2: none,media=cdrom
memory: 1024
name: fortios56
net0: virtio=B6:CB:87:CF:C8:63,bridge=vmbr0
net1: virtio=BE:FB:F1:5F:28:63,bridge=vmbr0,tag=4000
numa: 0
onboot: 1
ostype: l26
scsi0: images:100/vm-100-disk-2.qcow2,cache=writeback,size=2G
scsi1: images:100/vm-100-disk-1.qcow2,cache=writeback,size=32G
scsihw: virtio-scsi-pci
smbios1: uuid=e7da37a1-0714-4339-9c71-0dc3598112d5
sockets: 1
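I have not benchmarked this rigorously; the quickest check I know of is something like the following (zeros compress away if the dataset has compression enabled, so treat the number as a rough indicator only, and figures from inside the VM would be more meaningful):
# sequential write onto the dataset-backed directory, flushed to disk before dd exits:
dd if=/dev/zero of=/rpool/images/ddtest bs=1M count=1024 conv=fdatasync
rm /rpool/images/ddtest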
Details of my install:
Hardware:
Dell R710 with an H200 (flashed to IT mode)
2 x 300 GB 10K HDDs
32 GB of RAM, dual Xeon X5660
Storage: ZFS
I am currently on Proxmox VE 4.4 (kernel 4.4.35-1-pve):
==============================
root@pve:~# pveversion -v
proxmox-ve: 4.4-76 (running kernel: 4.4.35-1-pve)
pve-manager: 4.4-1 (running version: 4.4-1/eb2d6f1e)
pve-kernel-4.4.35-1-pve: 4.4.35-76
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-101
pve-firmware: 1.1-10
libpve-common-perl: 4.0-83
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-70
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.0-9
pve-container: 1.0-88
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u2
lxc-pve: 2.0.6-2
lxcfs: 2.0.5-pve1
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
==============================
root@pve:~# zfs list
NAME                       USED  AVAIL  REFER  MOUNTPOINT
rpool                     20.4G   249G   104K  /rpool
rpool/ROOT                 719M   249G    96K  /rpool/ROOT
rpool/ROOT/pve-1           719M   249G   719M  /
rpool/backups               96K   249G    96K  /rpool/backups
rpool/data                5.76G   249G    96K  /rpool/data
rpool/data/vm-101-disk-1  5.76G   249G  5.76G  -
rpool/images              3.18G   249G  3.18G  /rpool/images
rpool/iso                 2.29G   249G  2.29G  /rpool/iso
rpool/swap                8.50G   257G    64K  -