Hello,
I'm evaluating PBS for production and am using it for some non-business-critical VMs / Proxmox hosts.
Loving it, great work!
My question:
We do have an offline/offsite backup.
I know this can or should become a PBS remote at some point, but for now it's simply safe storage with VM dumps.
Is there...
In a Linux VM, using a .raw disk on a directory storage with discard on and mkfs.ext4 -b 4096 -E stripe-width=2, I just copied a 17 GB file from the HDD mirror to itself at 100 MB/s.
From there to a .raw disk on the SSD mirror, the copy ran at 270 MB/s (timed copy).
Maybe I'm fine with this!
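A minimal way to reproduce this kind of timed-copy measurement on the host (paths and sizes below are placeholders, not the 17 GB file from the test above):

```shell
# create an incompressible test file (random data, so ZFS/lz4
# compression cannot shortcut the writes)
dd if=/dev/urandom of=/tmp/copytest.bin bs=1M count=64 status=none

# wall-clock time of the copy gives a rough throughput figure
time cp /tmp/copytest.bin /tmp/copytest.copy

# confirm the copy is byte-identical
cmp /tmp/copytest.bin /tmp/copytest.copy && echo "copy OK"
```

Dividing the file size by the elapsed time gives the MB/s figure quoted above.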
If i...
Thanks for your reply! The thing is, I don't have a RAID controller and need encryption, which is why I use ZFS.
I could drop it, get a RAID controller and use thin LVM, true.
The idea was to skip the expensive RAID controller by using some RAM ^^
After I found some time for testing, I followed your tip and used .raw disks on the directory storage with discard=on.
Inside the Windows VM I can now reach proper speeds (200 MB/s+ file copies etc.).
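For reference, discard can also be enabled per disk from the CLI; a sketch, assuming VM ID 223 and a directory storage named "dir1" (both made-up names for illustration):

```shell
# update the disk entry with discard enabled so guest TRIM/UNMAP
# punches holes in the underlying .raw file; ssd=1 presents the
# disk to the guest as SSD-backed
qm set 223 --scsi0 dir1:223/vm-223-disk-0.raw,discard=on,ssd=1

# inside a Linux guest, trimming can then be triggered manually
fstrim -av
```

Windows guests issue UNMAP on their own (or via "Optimize Drives"), so no manual step is needed there.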
@guletz apologies, I don't have the time right now to read more into ZFS, so may I ask you:
Is it...
Thanks, I will try to add a second volume and observe.
BTW: I could still redo the pools if it saves me the trouble of altering many VMs.
Especially because it is planned to transfer existing VMs from other Proxmox hosts.
Adding to this: I'm running Cisco VMs there where I cannot alter the...
root@pve-d:/etc/pve/local/qemu-server# zfs get all vmdata/encrypted/vm-223-disk-0
NAME                            PROPERTY  VALUE   SOURCE
vmdata/encrypted/vm-223-disk-0  type      volume  -
vmdata/encrypted/vm-223-disk-0  creation...
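Instead of `zfs get all`, the handful of properties most relevant to VM disk performance can be queried directly; a sketch against the same zvol:

```shell
# volblocksize and compression usually explain zvol write behaviour;
# compressratio shows what compression actually achieves on this disk
zfs get volblocksize,compression,compressratio,encryption \
    vmdata/encrypted/vm-223-disk-0
```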
It's the onboard SATA connectors of this board:
Supermicro mainboard H11SSL-i
AMD EPYC 7282 (2.80 GHz, 16-core, 64 MB)
128 GB (4x 32 GB) ECC Reg DDR4 RAM 2 Rank
4x960 GB SATA III Intel SSD 3D-NAND TLC 2.5" (D3 S4510) => RaidZ1 vmdata
2x6 TB SATA III Western Digital Ultrastar 3.5" 7.2k (512e) =>...
root@pve-d:/etc/pve/local/qemu-server# zpool status
  pool: hddmirror
 state: ONLINE
  scan: scrub repaired 0B in 0 days 04:31:55 with 0 errors on Sun Oct 11 04:55:56 2020
config:

        NAME        STATE     READ WRITE CKSUM
        hddmirror...
Hello guys,
I hope you can help me with this, since I need to go into production soon.
tl;dr:
Write speed on both ZFS pools is fine on the PVE host, but inside VMs there are major issues with "larger writes".
What is the correct disk configuration for VMs on a ZFS pool, for Win10 and Linux?
I've...
Yup! With ashift=12 I get better compression ratios and smaller disk usage.
Also a comparison between the unencrypted "rpool" and the encrypted "vmdata"; I'm not sure if the size difference can be attributed to encryption or to the different SSD models:
rpool/data/vm-222-disk-0 2.05G 369G 2.05G -...
Cool tip, will do!
@testing: measuring real data copy times seems hard, since with those fast SSDs there will be a lot of other bottlenecks, I suppose, but copying and then comparing the smartctl written-data counters seems cool, will do that.
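The smartctl comparison would look roughly like this; a sketch, assuming the drive exposes a written-data attribute such as Total_LBAs_Written (the attribute name varies by vendor, so check the full `smartctl -A` output first):

```shell
# read the written counter before and after the test copy;
# the delta is what actually hit the device
smartctl -A /dev/sdb | grep -i written
# ... run the test copy here ...
smartctl -A /dev/sdb | grep -i written
```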
Thanks for the tips!
Edit: on different SSDs (Micron 5300) on the root pool with ashift=12...
Hello,
reporting back and thinking of redoing the SSD mirror.
After a bit of research I created the "hddmirror" with ashift=12 and the 4x SSD "vmdata" with ashift=13:
zpool create -f -o ashift=13 vmdata mirror sdb sdc mirror sdd sde
I think ashift=13 was a major rookie mistake, maybe i...
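If the vmdata pool does get redone, recreating it with ashift=12 keeps the same striped-mirror layout; a sketch using the device names from the original command (destructive, since ashift cannot be changed after creation):

```shell
# destroy the old pool and recreate it with 4 KiB alignment;
# ashift is fixed at creation time, so this wipes all data on it
zpool destroy vmdata
zpool create -f -o ashift=12 vmdata mirror sdb sdc mirror sdd sde
```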
I've been reading into ashift and its purpose for HDDs, and I meant to set up a mirrored RAID to compare performance, but quickly learned that:
You can't really use dd if=/dev/zero with compression enabled, since it will not actually write anything :D .
Also, repeated hdparm runs benefit greatly from the ZFS cache...
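A compression-proof write test can read from /dev/urandom instead of /dev/zero and force the data to disk before dd reports its rate; a sketch (the output path is a placeholder, on a real test it would point into the pool's mountpoint):

```shell
# random data is incompressible, so lz4 cannot shortcut the write;
# conv=fdatasync flushes to stable storage before dd prints its rate
dd if=/dev/urandom of=/tmp/zfsbench.bin bs=1M count=32 conv=fdatasync
```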
Thanks for all the replies!
This is a big one, thanks for the info! Will read into it.
@1: I assume the move-disk command is just very aggressive; I mis-configured my first Proxmox server with RAID5 HDDs and heavy IO limitations, so I might be overcautious here. I can imagine that...
Hello there, I'll be setting up a new Proxmox server in the near future.
Supermicro mainboard H11SSL-i
AMD EPYC 7282 (2.80 GHz, 16-core, 64 MB)
128 GB (4x 32 GB) ECC Reg DDR4 RAM 2 Rank
4x960 GB SATA III Intel SSD 3D-NAND TLC 2.5" (D3 S4510) => RaidZ1 vmdata
2x6 TB SATA III Western Digital...
In my setup, 2x 1 TB consumer SSDs ran in RAID1 continuously for 2 years (low load, a Windows AD and 2-3 GB of writes per day from an ELK stack), but the day before yesterday one failed, and since the resilver the RAID controller has been warning about pre-failure on the other one.
No more than ~300 GB of that 1 TB was ever written.
They...