Hi,
We are running a new install of Proxmox VE 4.3-12
Our configuration is dual E5-2630v4 CPUs with 256GB RAM and 8x Samsung PM863a drives.
We did our install using the Proxmox 4.3 ISO installer and set up a RAIDZ2 configuration with our 8 drives, and everything seemed to go smoothly.
We've found some strange stats regarding disk usage.
We have moved a number of VMs onto local storage.
We have moved 2,740GB of volumes to our host, but it's showing that almost twice that amount of storage has been used. We have no snapshots or anything like that in use either.
The moving was done via live migration from a Ceph cluster to Proxmox local-zfs.
The two volumes that look strangest are:
rpool/data/vm-425-disk-1 894G 819G 894G -
rpool/data/vm-425-disk-2 866G 819G 866G -
According to the UI, vm-425-disk-1 is only 500G and vm-425-disk-2 is 400G.
I'm unsure what to make of it. However, as you can see, we have used nearly 4TB of storage with only 2,740GB of volumes.
Another example:
rpool/data/vm-426-disk-1 170G 819G 170G -
In the UI this volume is only 100G.
Same with:
rpool/data/vm-426-disk-2 175G 819G 175G -
In the UI this volume is only 100G as well.
Please see the attachment for easier formatting. Any ideas what's going on here?
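To put a number on the "almost twice" claim, here is a quick sanity check comparing the logical sizes shown in the UI against the USED column from the zfs list output above (a rough sketch; GiB assumed throughout, and only the four volumes quoted above are included):

```python
# Logical sizes per the Proxmox UI vs. USED per `zfs list`, in GiB,
# for the four volumes quoted above.
logical = {"vm-425-disk-1": 500, "vm-425-disk-2": 400,
           "vm-426-disk-1": 100, "vm-426-disk-2": 100}
used = {"vm-425-disk-1": 894, "vm-425-disk-2": 866,
        "vm-426-disk-1": 170, "vm-426-disk-2": 175}

total_logical = sum(logical.values())  # 1100 GiB
total_used = sum(used.values())        # 2105 GiB
print(f"inflation: {total_used / total_logical:.2f}x")  # roughly 1.9x
```

So the space consumed is consistently about 1.9x the logical size across all four volumes, not just the two big ones.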
Thanks,
Quenten