zfs - basic configuration question [SOLVED]

KevinH

Hello fellow Proxmox users,

For the last five months I have been experimenting with ZFS as a root filesystem on a testbox. Since it's a testbox, there is nothing on it that really matters: a couple of VMs, a webserver and a mailserver for a test domain I'm using to play around with. I also host some dedicated game servers on it from time to time for friends (Terraria, Don't Starve).

For anyone interested I have added some output of lshw below.

Main problem: through the web interface I "accidentally" removed the reference to the ZFS pool called "data". Accidentally in quotes because I did it on purpose, but now I'm stuck. What I was trying to do was make the Summary under Datacenter show the right amount of storage. Somehow it showed double the available space, and I figured that was because I had two ZFS storage entries pointing at the same pool under homelab1 -> Disks -> ZFS.

I don't remember whether this is the default data pool created during the Proxmox installation or one I created myself. The right amount of storage is now displayed in the Summary section, but I don't know how to proceed from here to get the VMs back online.

Output of zpool list:
Code:
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool   464G  13.8G   450G        -         -     3%     2%  1.00x    ONLINE  -

Output of zfs list:

Code:
NAME                       USED  AVAIL     REFER  MOUNTPOINT
rpool                     13.8G   436G      104K  /rpool
rpool/ROOT                3.75G   436G       96K  /rpool/ROOT
rpool/ROOT/pve-1          3.75G   436G     3.75G  /
rpool/data                10.1G   436G       96K  /rpool/data
rpool/data/vm-100-disk-0  3.56G   436G     3.56G  -
rpool/data/vm-101-disk-0  6.52G   436G     6.52G  -

When I look at the device node under /dev:
Code:
root@homelab1: ~ # ls -l /dev/rpool/data/vm-100-disk-0
lrwxrwxrwx 1 root root 9 Jun 19 07:33 /dev/rpool/data/vm-100-disk-0 -> ../../zd0

Code:
root@homelab1: ~ # ls -lah /dev/zd*
brw-rw---- 1 root disk 230,  0 Jun 19 10:39 /dev/zd0
brw-rw---- 1 root disk 230,  1 Jun 19 10:39 /dev/zd0p1
brw-rw---- 1 root disk 230,  2 Jun 19 10:39 /dev/zd0p2
brw-rw---- 1 root disk 230,  5 Jun 19 10:39 /dev/zd0p5
brw-rw---- 1 root disk 230, 16 Jun 19 07:33 /dev/zd16
brw-rw---- 1 root disk 230, 17 Jun 19 07:33 /dev/zd16p1
brw-rw---- 1 root disk 230, 18 Jun 19 07:33 /dev/zd16p2
brw-rw---- 1 root disk 230, 21 Jun 19 07:33 /dev/zd16p5

The rpool/data dataset is still there. I can cfdisk /dev/zd0, mount zd0p1 somewhere and view its contents, so nothing's lost. What I can't figure out is how to get Proxmox to recognise this as a device I can add to a VM. The device that used to be in the configuration file of VM 100 is gone. What should I add to it? What are the steps I should take in the web interface? I was a bit too cavalier in my assumption that I could just mount it somewhere and google a couple of commands.
How would you proceed?

If there is a need for more information I'll provide it.

Code:
root@homelab1: ~ # cat /etc/pve/qemu-server/100.conf
agent: 1
balloon: 1024
bootdisk: scsi0
cores: 2
cpu: host
ide2: none,media=cdrom
memory: 2048
name: WEB1-eos
net0: virtio=76:BE:D5:B7:09:82,bridge=vmbr0,firewall=1
numa: 0
onboot: 1
ostype: l26
scsihw: virtio-scsi-pci
shares: 2000
smbios1: uuid=4b7eb630-e9b7-4379-92d4-93f852f67841
sockets: 1
startup: order=1,up=30,down=300
vmgenid: d482b94e-fe3c-4875-8380-f7d28cdf8257

Code:
homelab1
    description: Desktop Computer
    product: MS-7693
    vendor: MSI
    version: 3.0
    width: 64 bits
    capabilities: smbios-2.7 dmi-2.7 smp vsyscall32
    configuration: boot=normal chassis=desktop
  *-core
       description: Motherboard
       product: 970A-G43 (MS-7693)
       vendor: MSI
       physical id: 0
       version: 3.0
       serial: To be filled by O.E.M.
       slot: To be filled by O.E.M.
     *-firmware
          description: BIOS
          vendor: American Megatrends Inc.
          physical id: 0
          version: V10.6
          date: 01/08/2016
          size: 64KiB
          capacity: 8MiB
          capabilities: pci upgrade shadowing cdboot bootselect socketedrom edd acpi usb biosbootspecification uefi
     *-cpu
          product: AMD FX(tm)-6300 Six-Core Processor
          vendor: Advanced Micro Devices [AMD]
          version: AMD FX(tm)-6300 Six-Core Processor
          slot: CPU 1
          size: 1398MHz
          capacity: 3500MHz
          width: 64 bits
          clock: 200MHz
          capabilities: lm fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp x86-64 constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb cpb hw_pstate ssbd vmmcall bmi1 arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold cpufreq
          configuration: cores=6 enabledcores=6 threads=6
     *-memory
          description: System Memory
          physical id: 26
          slot: System board or motherboard
          size: 32GiB

        *-raid
             description: RAID bus controller
             product: SB7x0/SB8x0/SB9x0 SATA Controller [RAID5 mode]
             vendor: Advanced Micro Devices, Inc. [AMD/ATI]
             physical id: 11
             bus info: pci@0000:00:11.0
             logical name: scsi0
             version: 40
             width: 32 bits
             clock: 66MHz
             capabilities: raid bus_master cap_list emulated
             configuration: driver=ahci latency=128
             resources: irq:19 ioport:f040(size=8) ioport:f030(size=4) ioport:f020(size=8) ioport:f010(size=4) ioport:f000(size=16) memory:fe50b000-fe50b3ff
           *-disk
                description: ATA Disk
                product: Samsung SSD 860
                physical id: 0.0.0
                bus info: scsi@0:0.0.0
                logical name: /dev/sda
                version: 3B6Q
                serial: S4XBNF1M942919H
                size: 465GiB (500GB)
                capabilities: gpt-1.00 partitioned partitioned:gpt
                configuration: ansiversion=5 guid=f7f62559-8a1c-46a8-b8d9-f002e07fbf4e logicalsectorsize=512 sectorsize=512
              *-volume:0
                   description: BIOS Boot partition
                   vendor: EFI
                   physical id: 1
                   bus info: scsi@0:0.0.0,1
                   logical name: /dev/sda1
                   serial: 7316aeb6-3160-453b-a551-d15f71e9eb6c
                   capacity: 1006KiB
                   capabilities: nofs
              *-volume:1
                   description: Windows FAT volume
                   vendor: mkfs.fat
                   physical id: 2
                   bus info: scsi@0:0.0.0,2
                   logical name: /dev/sda2
                   version: FAT32
                   serial: 407d-fa66
                   size: 510MiB
                   capacity: 511MiB
                   capabilities: boot fat initialized
                   configuration: FATs=2 filesystem=fat
              *-volume:2
                   description: OS X ZFS partition or Solaris /usr partition
                   vendor: Solaris
                   physical id: 3
                   bus info: scsi@0:0.0.0,3
                   logical name: /dev/sda3
                   serial: a596e915-f558-4a0f-bcf1-052a5075388d
                   capacity: 465GiB
*-network
       description: Ethernet interface
       physical id: 1
       logical name: vmbr0
       serial: 4c:cc:6a:d5:26:c6
       capabilities: ethernet physical
       configuration: broadcast=yes driver=bridge driverversion=2.3 firmware=N/A ip=192.168.1.2 link=yes multicast=yes
 
Please post your storage config (/etc/pve/storage.cfg).
Once the ZFS storage is added again, you should be able to add the disk as 'unused' via qm rescan. (See man qm for details.)
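Roughly, the workflow would look like this (just a sketch; it assumes the VM ID is 100 and that you re-add the storage under the name 'data'):

Code:
# once the ZFS storage entry exists again in /etc/pve/storage.cfg:
qm rescan --vmid 100                    # registers found volumes as unusedN entries in 100.conf
qm set 100 --scsi0 data:vm-100-disk-0   # reattach the volume as the boot disk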
 
Hello Mira, thanks for your quick reply.

Code:
root@homelab1: ~ # cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content snippets,images,iso,vztmpl,rootdir
        maxfiles 2
        shared 0

What should I add to this? Another dir: data entry, but with which path? I'm unsure what path to enter.

(I'll read through the qm man page again when I get home, @work atm.)
 
In the GUI, select Datacenter -> Storage. There, add a new ZFS storage. Choose a name (ID) for the storage, and for ZFS Pool select rpool/data.
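If you prefer the shell, the equivalent should be something along these lines (a sketch, assuming you also name the storage 'data'):

Code:
pvesm add zfspool data --pool rpool/data --content images,rootdir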
 
I followed your instructions and named the ZFS storage "data".

This was added to /etc/pve/storage.cfg:


Code:
zfspool: data
        pool rpool/data
        content rootdir,images
        mountpoint /rpool/data
        sparse 0

I went to add a hard disk to VM 100, selected storage "data", and a new entry appeared in /etc/pve/qemu-server/100.conf:
Code:
scsi0: data:vm-100-disk-1,size=32G

Changed it to:
Code:
scsi0: data:vm-100-disk-0,size=20G

And the VM starts without a problem!
Very nice. =) Thank you!
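One thing I still want to clean up: the 32G volume the GUI created in that step (vm-100-disk-1) is presumably still sitting unused on the pool. Something like this should take care of it (destructive, so I'll double-check the name first):

Code:
# confirm the leftover zvol from the GUI step actually exists
zfs list -t volume -r rpool/data
# then remove it through the Proxmox storage layer
# (or directly with: zfs destroy rpool/data/vm-100-disk-1)
pvesm free data:vm-100-disk-1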

The only thing is, now the total amount of storage is doubled again.
Is there a (quick) way I can fix this? I'm decent at programming and scripting; I don't know if Proxmox is open to suggestions from the community, but I might take a look at it this evening.

It might be as easy as traversing the symlinks to their physical disk, checking its total size, and dividing the total amount of storage shown under "Storage" by the number of entries in the storage list that resolve to this disk.

Now that I look at it again, it's weird: I thought the total storage was doubled, but somehow it's reaching the conclusion that the total amount of storage is 819.14 GiB. That's slightly less than double.

Code:
root@homelab1: ~ # df -h
Filesystem        Size  Used Avail Use% Mounted on
udev               16G     0   16G   0% /dev
tmpfs             3.2G  9.1M  3.2G   1% /run
rpool/ROOT/pve-1  374G  3.8G  370G   2% /
tmpfs              16G   43M   16G   1% /dev/shm
tmpfs             5.0M     0  5.0M   0% /run/lock
tmpfs              16G     0   16G   0% /sys/fs/cgroup
rpool             370G  128K  370G   1% /rpool
rpool/ROOT        370G  128K  370G   1% /rpool/ROOT
rpool/data        370G  128K  370G   1% /rpool/data
/dev/fuse          30M   24K   30M   1% /etc/pve
tmpfs             3.2G     0  3.2G   0% /run/user/1000

It doesn't make sense to me.
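If I had to guess, though, the Datacenter summary simply adds up the capacity of every enabled storage, and both local (a directory on the root dataset) and data (rpool/data) are backed by the same pool, so the same free space is counted twice. pvesm status should show the per-storage numbers that feed into that sum:

Code:
# per-storage totals as Proxmox sees them (both storages sit on the same rpool)
pvesm status
# roughly: local (~374 GiB on the root dataset) + data (~446 GiB on rpool/data) ≈ 820 GiB,
# which is in the ballpark of the 819.14 GiB shown in the summary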