Proxmox ZFS Pool

krogac
Jul 18, 2018
Hello, I have a new Proxmox server installation. I chose ZFS when creating the disks.
After the installation, why do I see a local-lvm storage in the Proxmox web GUI?

Code:
 df -h
Filesystem        Size  Used Avail Use% Mounted on
udev               32G     0   32G   0% /dev
tmpfs             6.3G  9.5M  6.3G   1% /run
rpool/ROOT/pve-1  1.6T  911M  1.6T   1% /
tmpfs              32G   63M   32G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
tmpfs              32G     0   32G   0% /sys/fs/cgroup
rpool             1.6T     0  1.6T   0% /rpool
rpool/ROOT        1.6T     0  1.6T   0% /rpool/ROOT
rpool/data        1.6T     0  1.6T   0% /rpool/data
/dev/fuse          30M   52K   30M   1% /etc/pve

Code:
zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:
    NAME        STATE     READ WRITE CKSUM
    rpool       ONLINE       0     0     0
      sda3      ONLINE       0     0     0


Code:
zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool              913M  1.57T   104K  /rpool
rpool/ROOT         911M  1.57T    96K  /rpool/ROOT
rpool/ROOT/pve-1   911M  1.57T   911M  /
rpool/data          96K  1.57T    96K  /rpool/data
 
Please use code tags for command outputs (it makes them easier to read).

* Where do you see the local-lvm?
* Is this a clustered environment? (keep in mind that the storage definitions are cluster-wide)
* Please post the contents of your '/etc/pve/storage.cfg'
 
OK, I have edited the post and added code tags.
1. I see local-lvm in the web GUI console and when I try to add a disk.
2. Yes, this is a clustered environment. I need this server as a replication server.

Code:
cat /etc/pve/storage.cfg 
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup
lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images
dir: ISO
    path /msa/ISO
    content iso
    shared 1
dir: VM
    path /msa/VM
    content images
    shared 1
dir: NASBACKUP
    path /mnt/NASBACKUP
    content backup
    maxfiles 2
    shared 0
 
1. I see local-lvm in the web GUI console and when I try to add a disk.
Yes, because you have one defined in your '/etc/pve/storage.cfg' - it probably comes from one of the other nodes in the cluster.
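
If you want to check which storage definitions are actually active on this particular node, you can run the following (just a generic sketch, the output will of course differ on your system):

Code:
# list all defined storages and their status as seen from this node
pvesm status
# show the cluster-wide storage definitions
cat /etc/pve/storage.cfg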

2. Yes, this is a clustered environment. I need this server as a replication server.
For storage replication to work, all nodes involved need to have a ZFS storage - see https://pve.proxmox.com/pve-docs/chapter-pvesr.html
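
As a sketch, a matching ZFS storage entry in '/etc/pve/storage.cfg' could look like this (the storage id 'local-zfs' and the dataset 'rpool/data' are only examples - adjust them to your pool layout):

Code:
zfspool: local-zfs
    pool rpool/data
    content images,rootdir
    sparse 1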

The rpool you listed in your zfs output above is not in your '/etc/pve/storage.cfg'.
I guess this happened because you joined the node with the rpool to a cluster which does not have it defined (the contents of /etc/pve are taken from the existing cluster, and those from the new node are discarded) - see https://pve.proxmox.com/pve-docs/chapter-pvecm.html
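
If that is what happened, you can simply define the storage again, e.g. with pvesm (the storage id and node name below are placeholders):

Code:
# re-add the ZFS storage, restricted to the node that actually has the rpool
pvesm add zfspool local-zfs --pool rpool/data --content images,rootdir --nodes yournode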

Hope this helps!
 
Thanks for the reply.
What is better to do: change the storage from ZFS to LVM, or can I keep this configuration?
 
I tried to create a new VM on this node and I got this error:

Code:
kvm: -drive file=/msa/VM/images/134/vm-134-disk-0.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on: file system may not support O_DIRECT
kvm: -drive file=/msa/VM/images/134/vm-134-disk-0.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on: Could not open '/msa/VM/images/134/vm-134-disk-0.qcow2': Invalid argument
TASK ERROR: start failed: command '/usr/bin/kvm -id 134 -name IMSRV -chardev 'socket,id=qmp,path=/var/run/qemu-server/134.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -chardev 'socket,id=qmp-event,path=/var/run/qmeventd.sock,reconnect=5' -mon 'chardev=qmp-event,mode=control' -pidfile /var/run/qemu-server/134.pid -daemonize -smbios 'type=1,uuid=90130139-23c7-45ab-8960-193e51b9745b' -smp '2,sockets=1,cores=2,maxcpus=2' -nodefaults -boot 'menu=on,strict=on,reboot-timeout=1000,splash=/usr/share/qemu-server/bootsplash.jpg' -vnc unix:/var/run/qemu-server/134.vnc,x509,password -cpu kvm64,+lahf_lm,+sep,+kvm_pv_unhalt,+kvm_pv_eoi,enforce -m 4048 -device 'pci-bridge,id=pci.2,chassis_nr=2,bus=pci.0,addr=0x1f' -device 'pci-bridge,id=pci.1,chassis_nr=1,bus=pci.0,addr=0x1e' -device 'vmgenid,guid=3f53f224-6406-4041-b776-318be5aab5a8' -device 'piix3-usb-uhci,id=uhci,bus=pci.0,addr=0x1.0x2' -device 'usb-tablet,id=tablet,bus=uhci.0,port=1' -device 'VGA,id=vga,bus=pci.0,addr=0x2' -device 'virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3' -iscsi 'initiator-name=iqn.1993-08.org.debian:01:e3a584921a3c' -drive 'file=/msa/ISO/template/iso/debian-9.4.0-amd64-netinst.iso,if=none,id=drive-ide2,media=cdrom,aio=threads' -device 'ide-cd,bus=ide.1,unit=0,drive=drive-ide2,id=ide2,bootindex=200' -device 'virtio-scsi-pci,id=scsihw0,bus=pci.0,addr=0x5' -drive 'file=/msa/VM/images/134/vm-134-disk-0.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on' -device 'scsi-hd,bus=scsihw0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0,id=scsi0,bootindex=100' -netdev 'type=tap,id=net0,ifname=tap134i0,script=/var/lib/qemu-server/pve-bridge,downscript=/var/lib/qemu-server/pve-bridgedown,vhost=on' -device 'virtio-net-pci,mac=A6:F0:11:D9:D4:44,netdev=net0,bus=pci.0,addr=0x12,id=net0,bootindex=300' -machine 'type=pc'' failed: exit code 1"
 
kvm: -drive file=/msa/VM/images/134/vm-134-disk-0.qcow2,if=none,id=drive-scsi0,format=qcow2,cache=none,aio=native,detect-zeroes=on: file system may not support O_DIRECT
This could be because you are trying to create a VM with a disk image on a ZFS dataset instead of a volume.
* What is mounted on '/msa/VM/' on the node where you're trying to create the VM?
* What is the config of the VM?

What is better to do: change the storage from ZFS to LVM, or can I keep this configuration?
You can have 2 different storages on your 2 nodes - but you need to be aware that each storage is then only available on its own node.

I would probably keep things as equal as possible across all nodes.
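
To avoid confusion on nodes that don't have a particular storage, you can restrict each storage definition to the nodes where it actually exists (the node names below are placeholders):

Code:
# limit the LVM-thin storage to the node(s) that have the 'pve' volume group
pvesm set local-lvm --nodes node1
# limit the ZFS storage to the node with the rpool
pvesm set local-zfs --nodes node2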
 
Is the directory on ZFS?
If yes, then this does not work - for guests on ZFS you need to use the zfspool plugin (because there is no O_DIRECT support on ZFS).
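
Assuming you add a zfspool storage as sketched above, you could move the disk there; alternatively, as a workaround on the directory storage, any cache mode other than 'none' avoids O_DIRECT. Both commands below are only sketches - adjust the storage id and VMID to your setup:

Code:
# move the disk onto a zfspool storage (it becomes a raw zvol instead of a qcow2 file)
qm move_disk 134 scsi0 local-zfs --delete
# or keep the qcow2 file where it is and switch the cache mode away from 'none'
qm set 134 --scsi0 VM:134/vm-134-disk-0.qcow2,cache=writeback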
 
