Hi there...
I've installed Proxmox on my test lab:
- 4-core Xeon
- 32 GB DDR3 ECC
- 2x 600 GB SAS (hardware PCIe RAID1) - boot drive with Proxmox itself + the VMs' boot VDs...
- 6x 1 TB 2.5" mixed SATA drives (IT mode, via the motherboard's integrated SATA ports)
My goal was to launch a VM NAS of some sort...
I've installed Proxmox on the HW RAID volume, along with an OpenMediaVault VM. Then, in the web GUI, I created a ZFS pool (RAIDZ1, the ZFS take on RAID5) using the 6x 1 TB SATA drives. On the ZFS pool I created another VD as a dedicated storage drive for OMV.
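For reference, I believe this is roughly the CLI equivalent of what the GUI did (a sketch, not the exact commands it ran; ashift=12 and the 8k volblocksize are my assumptions about the defaults):
Code:
# Pool creation - disk IDs taken from my zpool status below
zpool create -o ashift=12 ZFSQNAP raidz1 \
    /dev/disk/by-id/ata-ST1000LM035-1RK172_WL1969PY \
    /dev/disk/by-id/ata-WDC_WD10SPZX-08Z10_WD-WX61AA8DRJ83 \
    /dev/disk/by-id/ata-ST1000LM035-1RK172_WL1XHFF3 \
    /dev/disk/by-id/ata-ST1000LM035-1RK172_WL1LD5Q6 \
    /dev/disk/by-id/ata-WDC_WD10JPVX-60JC3T1_WD-WXC1A38D5T75 \
    /dev/disk/by-id/ata-TOSHIBA_MQ01ABD100_96PNT6KDT
# The VD for OMV is a zvol, created thick-provisioned by default
# (volblocksize=8k is my guess at what Proxmox used):
zfs create -V 4T -o volblocksize=8k ZFSQNAP/vm-100-disk-0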
What made me wonder was a warning during a Proxmox reboot:
systemctl:
Code:
zfs-import-cache.service loaded active exited Import ZFS pools by cache file
● zfs-import@ZFSQNAP.service loaded failed failed Import ZFS pool ZFSQNAP
zfs-mount.service loaded active exited Mount ZFS filesystems
zfs-share.service loaded active exited ZFS file system shares
zfs-volume-wait.service loaded active exited Wait for ZFS Volume (zvol) links in /dev
zfs-zed.service loaded active running ZFS Event Daemon (zed)
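My guess is that zfs-import@ZFSQNAP.service fails only because zfs-import-cache.service has already imported the pool by the time it runs, but I haven't verified that; something like this should tell:
Code:
# Check why the per-pool import unit failed (my guess: pool already imported)
systemctl status zfs-import@ZFSQNAP.service
# If it's just a duplicate of the cache-file import, it can be disabled:
systemctl disable zfs-import@ZFSQNAP.service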
ZFS itself looks OK:
Code:
root@PROXTEMP:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
ZFSQNAP 2.79M 4.22T 153K /ZFSQNAP
root@PROXTEMP:~# zpool status
pool: ZFSQNAP
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
ZFSQNAP ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
ata-ST1000LM035-1RK172_WL1969PY ONLINE 0 0 0
ata-WDC_WD10SPZX-08Z10_WD-WX61AA8DRJ83 ONLINE 0 0 0
ata-ST1000LM035-1RK172_WL1XHFF3 ONLINE 0 0 0
ata-ST1000LM035-1RK172_WL1LD5Q6 ONLINE 0 0 0
ata-WDC_WD10JPVX-60JC3T1_WD-WXC1A38D5T75 ONLINE 0 0 0
ata-TOSHIBA_MQ01ABD100_96PNT6KDT ONLINE 0 0 0
errors: No known data errors
So, after adding a ~4 TB VD to OMV, I created a share and started a file transfer... It crashed after about 30 GB with a notice that there's not enough space...
Afterwards I couldn't create any VD on ZFS, not even a tiny one.
Code:
Aug 12 12:36:08 PROXTEMP pvedaemon[7136]: VM 100 creating disks failed
Aug 12 12:36:08 PROXTEMP pvedaemon[7136]: zfs error: cannot create 'ZFSQNAP/vm-100-disk-0': out of space
Aug 12 12:36:08 PROXTEMP pvedaemon[1990]: <root@pam> end task UPID:PROXTEMP:00001BE0:00035613:6114F997:qmconfig:100:root@pam: zfs error: cannot create 'ZFSQNAP/vm-100-disk-0': out of space
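Could it be that the thick-provisioned zvol's refreservation, inflated by RAIDZ1 padding overhead at a small volblocksize, claimed more than the pool's 4.22T? Just a guess on my part, but next time these properties should show it:
Code:
# How much space does the zvol actually reserve? On RAIDZ1 with a small
# volblocksize, refreservation can end up noticeably larger than volsize.
zfs get volsize,volblocksize,refreservation,usedbyrefreservation ZFSQNAP/vm-100-disk-0
# Space accounting breakdown for the pool, including usedbyrefreservation:
zfs list -o space ZFSQNAP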
Finally I was forced to remove the VD and create a new one, but I still hit the out-of-space issue...
That was yesterday. Today (without a reboot) I can create VDs normally, so... any ideas?
The ZFS pool and all disks seem to be healthy...
Code:
root@PROXTEMP:~# zfs get written
NAME PROPERTY VALUE SOURCE
ZFSQNAP written 153K -
ZFSQNAP/vm-100-disk-0 written 89.5K -
root@PROXTEMP:~# zpool status
pool: ZFSQNAP
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
ZFSQNAP ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
ata-ST1000LM035-1RK172_WL1969PY ONLINE 0 0 0
ata-WDC_WD10SPZX-08Z10_WD-WX61AA8DRJ83 ONLINE 0 0 0
ata-ST1000LM035-1RK172_WL1XHFF3 ONLINE 0 0 0
ata-ST1000LM035-1RK172_WL1LD5Q6 ONLINE 0 0 0
ata-WDC_WD10JPVX-60JC3T1_WD-WXC1A38D5T75 ONLINE 0 0 0
ata-TOSHIBA_MQ01ABD100_96PNT6KDT ONLINE 0 0 0
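If it helps, I can also run a scrub to verify the data end-to-end rather than just trusting the device states:
Code:
# Read and verify every block in the pool; results show up in zpool status
zpool scrub ZFSQNAP
zpool status ZFSQNAP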
A summary of the other disks:
Code:
root@PROXTEMP:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 558G 0 disk
├─sda1 8:1 0 1007K 0 part
├─sda2 8:2 0 512M 0 part
└─sda3 8:3 0 557.5G 0 part
├─pve-swap 253:0 0 16G 0 lvm [SWAP]
├─pve-root 253:1 0 30G 0 lvm /
├─pve-data_tmeta 253:2 0 5G 0 lvm
│ └─pve-data-tpool 253:4 0 485.6G 0 lvm
│ ├─pve-data 253:5 0 485.6G 1 lvm
│ ├─pve-vm--100--disk--0 253:6 0 30G 0 lvm
│ ├─pve-vm--101--disk--0 253:7 0 30G 0 lvm
│ └─pve-vm--102--disk--0 253:8 0 60G 0 lvm
└─pve-data_tdata 253:3 0 485.6G 0 lvm
└─pve-data-tpool 253:4 0 485.6G 0 lvm
├─pve-data 253:5 0 485.6G 1 lvm
├─pve-vm--100--disk--0 253:6 0 30G 0 lvm
├─pve-vm--101--disk--0 253:7 0 30G 0 lvm
└─pve-vm--102--disk--0 253:8 0 60G 0 lvm
sdb 8:16 0 931.5G 0 disk
├─sdb1 8:17 0 931.5G 0 part
└─sdb9 8:25 0 8M 0 part
sdc 8:32 0 931.5G 0 disk
├─sdc1 8:33 0 931.5G 0 part
└─sdc9 8:41 0 8M 0 part
sdd 8:48 0 931.5G 0 disk
├─sdd1 8:49 0 931.5G 0 part
└─sdd9 8:57 0 8M 0 part
sde 8:64 0 931.5G 0 disk
├─sde1 8:65 0 931.5G 0 part
└─sde9 8:73 0 8M 0 part
sdf 8:80 0 931.5G 0 disk
├─sdf1 8:81 0 931.5G 0 part
└─sdf9 8:89 0 8M 0 part
sdg 8:96 0 931.5G 0 disk
├─sdg1 8:97 0 931.5G 0 part
└─sdg9 8:105 0 8M 0 part
zd0 230:0 0 3.9T 0 disk
root@PROXTEMP:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 16G 0 16G 0% /dev
tmpfs 3.2G 1.1M 3.2G 1% /run
/dev/mapper/pve-root 30G 8.4G 20G 30% /
tmpfs 16G 43M 16G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
ZFSQNAP 4.3T 256K 4.3T 1% /ZFSQNAP
/dev/fuse 128M 16K 128M 1% /etc/pve
tmpfs 3.2G 0 3.2G 0% /run/user/0
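In case the raw vs. usable numbers matter here: zpool list reports raw capacity (all six disks, parity included), while zfs list reports usable space after RAIDZ1 parity, so comparing the two might be telling:
Code:
# Raw pool capacity per vdev vs. usable space after parity
zpool list -v ZFSQNAP
zfs list -o name,used,avail,refer ZFSQNAP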