Hi all,
I'm trying ZFS on a temporary server to hold my home fileserver's data during a fresh install. The PVE host has 6x 1TB disks in raidz2, and the fileserver VM was filled via rsync. After a while (the fileserver holds approx. 1.7TB of data) the PVE host's root filesystem filled up to 100%, and now I need some help...
The strange thing is that the VM should only be using around 2.6TB:
Code:
cat /etc/pve/qemu-server/310.conf
balloon: 0
boot: c
bootdisk: scsi0
cores: 2
ide2: local:iso/debian-stretch-DI-rc1-amd64-netinst.iso,media=cdrom,size=296M
memory: 2048
name: fileserver
net0: virtio=CA:6B:34:63:6A:91,bridge=vmbr0
numa: 0
ostype: l26
scsi0: local-zfs:vm-310-disk-1,size=6G
scsi1: local-zfs:vm-310-disk-2,size=2600G
scsihw: virtio-scsi-pci
smbios1: uuid=14c8f391-40a0-4a9d-ac49-f2dedb5810c0
sockets: 1
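The provisioned sizes only add up to a little over 2.6TB (6G + 2600G), so everything beyond that must be overhead on the ZFS side. The zvol's actual settings can be checked with the standard get command (volblocksize and refreservation being the interesting properties here):

Code:
zfs get volsize,volblocksize,refreservation,used rpool/data/vm-310-disk-2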
The zpool doesn't have enough free space:

Code:
root@pve-temp:~# zfs list -t all -r -o space
NAME                      AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
rpool                         0  3.51T         0    192K              0      3.51T
rpool/ROOT                    0  2.54G         0    192K              0      2.54G
rpool/ROOT/pve-1              0  2.54G         0   2.54G              0          0
rpool/data                    0  3.50T         0    192K              0      3.50T
rpool/data/vm-310-disk-1      0  3.15G         0   3.15G              0          0
rpool/data/vm-310-disk-2      0  3.50T         0   3.50T              0          0
rpool/swap                7.33G  7.44G         0    110M          7.33G          0
root@pve-temp:~# df -h
Filesystem        Size  Used  Avail Use% Mounted on
udev               10M     0    10M   0% /dev
tmpfs             1.6G   11M   1.6G   1% /run
rpool/ROOT/pve-1  2.6G  2.6G     0 100% /
tmpfs             3.9G   46M   3.9G   2% /dev/shm
tmpfs             5.0M     0   5.0M   0% /run/lock
tmpfs             3.9G     0   3.9G   0% /sys/fs/cgroup
rpool             128K  128K     0 100% /rpool
rpool/ROOT        128K  128K     0 100% /rpool/ROOT
rpool/data        128K  128K     0 100% /rpool/data
/dev/fuse          30M   16K    30M   1% /etc/pve
root@pve-temp:~# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
            sdc2    ONLINE       0     0     0
            sdd2    ONLINE       0     0     0
            sde2    ONLINE       0     0     0
            sdf2    ONLINE       0     0     0
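My only guess so far: raidz2 padding overhead from a small volblocksize. Assuming ashift=12 and the default 8K volblocksize (both assumptions - see the check below), each 8K volume block on a 6-disk raidz2 takes 2 data sectors plus 2 parity sectors, padded up to a multiple of 3 sectors, i.e. 24K raw; after the usual raidz space deflation (2/3 on this layout) that shows up as about 16K USED per 8K written, so roughly 2x. 1.7TB of guest data x 2 ≈ 3.4T, which is close to the 3.50T USEDDS above. Both properties could be verified with:

Code:
zpool get ashift rpool
zfs get volblocksize rpool/data/vm-310-disk-2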
Code:
pveversion -v
proxmox-ve: 4.4-78 (running kernel: 4.4.35-2-pve)
pve-manager: 4.4-5 (running version: 4.4-5/c43015a5)
pve-kernel-4.4.35-1-pve: 4.4.35-77
pve-kernel-4.4.35-2-pve: 4.4.35-78
lvm2: 2.02.116-pve3
corosync-pve: 2.4.0-1
libqb0: 1.0-1
pve-cluster: 4.0-48
qemu-server: 4.0-102
pve-firmware: 1.1-10
libpve-common-perl: 4.0-85
libpve-access-control: 4.0-19
libpve-storage-perl: 4.0-71
pve-libspice-server1: 0.12.8-1
vncterm: 1.2-1
pve-docs: 4.4-1
pve-qemu-kvm: 2.7.1-1
pve-container: 1.0-90
pve-firewall: 2.0-33
pve-ha-manager: 1.0-38
ksm-control-daemon: 1.2-1
glusterfs-client: 3.5.2-2+deb8u3
lxc-pve: 2.0.6-5
lxcfs: 2.0.5-pve2
criu: 1.6.0-1
novnc-pve: 0.5-8
smartmontools: 6.5+svn4324-1~pve80
zfsutils: 0.6.5.8-pve13~bpo80
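By the way, as an emergency measure I assume I could make the root filesystem writable again by dropping the swap zvol's refreservation (the 7.33G USEDREFRESERV above would go back to the pool, at the cost of swap being able to run out of space later):

Code:
zfs set refreservation=none rpool/swap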
Why does the guest use more space than it was allocated - and why so much at all, when only 1.7TB is actually in use?

Any hints?
Udo