Thanks for the reply; sorry, I'll put it in code tags:
root@prometheus3:~# zfs get all rpool/data/vm-131-disk-1
NAME PROPERTY VALUE SOURCE
rpool/data/vm-131-disk-1 type volume -
rpool/data/vm-131-disk-1 creation...
Interesting, I did not know that with an 8-disk RAID-Z1 everything below 32k is a waste. So it also depends on how many disks the server has and whether it's a RAID 10 or a RAID-Z1.
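The waste from a small volblocksize on RAID-Z1 comes from parity plus padding: each block's allocation is rounded up to a multiple of (parity + 1) sectors. A rough sketch of the math (a simplified model I'm using for illustration; real allocation also depends on compression and the exact on-disk layout):

```shell
# Simplified RAID-Z1 allocation model: parity per stripe row, plus
# padding up to a multiple of (parity + 1) sectors.
ashift=12; ndisks=8; parity=1       # assumed: 4k sectors, 8-disk raidz1
sector=$((1 << ashift))
for vbs_k in 8 16 32 64; do
  d=$(( (vbs_k * 1024 + sector - 1) / sector ))               # data sectors
  rows=$(( (d + ndisks - parity - 1) / (ndisks - parity) ))   # stripe rows
  total=$(( d + rows * parity ))                              # data + parity
  alloc=$(( (total + parity) / (parity + 1) * (parity + 1) )) # pad to multiple of p+1
  echo "volblocksize=${vbs_k}k -> allocates $((alloc * sector / 1024))k"
done
```

Under this model an 8k block allocates 16k (100% overhead), while a 32k block allocates 40k (25%), which is why everything below 32k is wasteful on that layout.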
Let's say I run a RAID 10 striped mirror (4 disks) with an MSSQL VM: would the recommended block size for the virtual disk be 4k?
Thanks for the reply. Currently I have it at the default 8k volblocksize with thin provisioning:
root@prometheus3:~# zfs get reservation,refreservation rpool/data/vm-131-disk-1
NAME PROPERTY VALUE SOURCE
rpool/data/vm-131-disk-1 reservation none default...
Thanks for the reply. So if I understood correctly, it's always best to leave the volblocksize at 8k on Proxmox and leave Windows NTFS and Linux ext4 at their default 4k; and even if I format the Windows NTFS storage with 64k clusters, I would still leave the 8k volblocksize on Proxmox?
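For reference, the volblocksize that newly created zvols get is set per storage in Proxmox, not per VM. Assuming a zfspool storage named local-zfs (swap in your own storage name), something like this should show and change it:

```shell
# Show the storage definitions (look for a 'blocksize' line under your zfspool entry)
cat /etc/pve/storage.cfg

# Set the block size used for NEWLY created zvols on this storage.
# Existing disks keep their volblocksize; to change one you have to
# recreate it (e.g. by moving the disk to another storage and back).
pvesm set local-zfs --blocksize 16k
```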
Thanks for the reply, but if the VM's data storage for MSSQL is 64k NTFS, shouldn't the volblocksize on Proxmox be the same?
So the lower the volblocksize on ZFS/Proxmox, the better?
Do these block sizes matter for IOPS? What I have been seeing is that normally there is going to be an alignment issue, since Windows and Linux use 4k by default (except when installing MSSQL, where NTFS should be formatted with 64k clusters).
Hi,
Currently I have Proxmox with RAID 10 and a Windows VM. The disk size in the Windows VM is 128 GB with 3 GB used, but Proxmox says 87 GB, which is odd because I have discard enabled.
rpool/data/vm-131-disk-1 87.9G 481G 87.9G -
rpool/data/vm-131-disk-1...
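To narrow down where the space goes, it usually helps to compare the zvol's logical size with what ZFS actually charges, and to make sure the guest really issues TRIM (a sketch; the dataset name is taken from the output above):

```shell
# On the Proxmox host: logical size vs. space actually accounted
zfs get volsize,used,referenced,refreservation rpool/data/vm-131-disk-1

# Inside a Linux guest: trim once, or enable the periodic timer
fstrim -av
systemctl enable --now fstrim.timer
```

Note that discard only reaches ZFS if the virtual disk has discard=on and sits on a bus that passes TRIM through (e.g. VirtIO SCSI); inside a Windows guest, `Optimize-Volume -DriveLetter C -ReTrim` in PowerShell forces a retrim.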
I was curious: how come the default volblocksize on Proxmox is 8k when the VMs are normally 4k (Windows NTFS or ext4)? Wouldn't there always be an alignment issue wasting space, or is 8k better?
So I think I solved this issue: I had to create the zvol with a 64k volblocksize, and inside the VM I had to use ashift=16, and now it seems to be showing the data correctly.
Thanks for the reply. So I would need to use a 32k block size for rpool/data/vm-145-disk-2 so that the VM shows the correct storage?
As for the ZFS pool inside of the VM, would it need ashift=12 and the same block size?
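For what it's worth, ashift is a per-vdev, creation-time property, so inside the VM it has to be chosen when the pool is created. A sketch (the pool name and device are examples, not from the thread):

```shell
# Inside the VM: create the pool with 4k sectors (ashift=12).
# 'tank' and /dev/sdb are example names - adjust to your setup.
zpool create -o ashift=12 tank /dev/sdb

# Verify it; ashift cannot be changed after creation.
zpool get ashift tank
```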
Thank you so much for the reply. Currently my host has 8 disks:
root@prometheus2:~# lsblk -o NAME,PHY-SEC
NAME PHY-SEC
sda 512
├─sda1 512
├─sda2 512
└─sda3 512
sdb 512
├─sdb1 512
├─sdb2 512
└─sdb3 512
sdc 512
├─sdc1...
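Since the drives report 512-byte physical sectors, it's worth checking what ashift the pool was actually created with (pools are commonly created with ashift=12, i.e. 4k, even on 512b drives, which is usually fine):

```shell
# Pool-level ashift as a property
zpool get ashift rpool

# Per-vdev ashift from the cached pool configuration
zdb -C rpool | grep ashift
```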
Thanks for the reply. I did read a lot, but snapshots are what really help a lot inside of the VM.
My rpool is a RAID-Z1:
root@prometheus2:~# zfs get all rpool
NAME PROPERTY VALUE SOURCE
rpool type filesystem -
rpool creation...
Hi
I was wondering if someone could shed some light on an issue I'm having.
Currently I have Proxmox working with ZFS and a few VMs. One VM runs Ubuntu with ext4 for the OS, but I created another virtual disk for it, and within that VM there is also a ZFS pool for snapshots (Zentyal).
Now here is the issue: as the...
Hi
I'm currently trying to import a VM from VirtualBox to Proxmox, but I'm getting this issue:
root@prometheus2:~# qm importovf 167 GSM-TRIAL-21.04.4-VirtualBox.ova local-zfs
GSM-TRIAL-21.04.4-VirtualBox.ova:1: parser error : Start tag expected, '<' not found
GSM-TRIAL-21.04.4-VirtualBox.ovf
so then...
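The parser error happens because an .ova is a tar archive wrapping the .ovf descriptor and its disk images, while `qm importovf` expects the .ovf file itself. Extracting the archive first should work (VM ID, filename, and storage taken from the command above):

```shell
# An OVA is just a tar archive: unpack it to get the .ovf + disk image(s)
tar -xvf GSM-TRIAL-21.04.4-VirtualBox.ova

# Then import the extracted descriptor
qm importovf 167 GSM-TRIAL-21.04.4-VirtualBox.ovf local-zfs
```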