ZFS + GlusterFS + GlusterFS tiering, plus some tuning for better small-file IO. By the way, where do you want to use that storage system: on the virtualization side or in a web server? If in a web server, DRBD + OCFS2.
This is not a Proxmox issue, it is a storage issue. NFS means Network File System, so first learn what that is; NFS also has many special features. QCOW2 is just a virtual disk image format name, it does not bring any extra feature by itself, and you cannot use some of QCOW2's special features on Proxmox anyway..
NFS based system...
For all your speed issues, I have written an FIO test line here. Test all your pools with this tool and you will see your disks' real speed, and do not forget to enable AHCI mode in your computer's BIOS..
fio --randrepeat=1 --ioengine=libaio --direct=0 --gtod_reduce=1 --name=test --filename=test --bs=1M...
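The line above is cut off in the quote; as a rough sketch, a complete invocation could look like the following, run from inside the pool's mountpoint (the size, queue depth and workload type here are my own example values, adjust them for your pool):

    fio --randrepeat=1 --ioengine=libaio --direct=0 --gtod_reduce=1 --name=test --filename=test --bs=1M --iodepth=64 --size=4G --readwrite=randwrite

Delete the test file afterwards, and repeat with different --bs / --readwrite values to see both sequential and random behaviour.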
On this machine performance is not that important for me, and I cannot add more RAM due to the CPU limit... I have another two servers for all real-scenario testing..
I think swap usage is quite normal for a virtualization system if the real hardware does not have enough physical memory for everything.. This picture is from my homelab HP MicroServer Gen8, and this server only has 16 GB ECC buffered RAM (I want to grow it, but this is a CPU limitation anyway). I am using this server for my homelab for...
SLOG is for FSYNC. Another problem: if you do not have an SLC- or MLC-based SSD, your SSD cannot give you enough IO. You can test that: please change the sync option on your pool from standard to always.. Also you can change the primary and secondary cache from all to metadata, then you can see how, without the ZIL, ZFS...
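To show what I mean, these are roughly the ZFS commands for that test (the pool name "tank" is just an example, use your own):

    zfs set sync=always tank
    zfs set primarycache=metadata tank
    zfs set secondarycache=metadata tank
    # and to go back to the defaults when the test is done:
    zfs set sync=standard tank
    zfs set primarycache=all tank
    zfs set secondarycache=all tank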
After all, I forgot to write this in my last message: Proxmox uses the same SWAP area for all guests, so use your SSD for the SWAP area to cover unexpected RAM requests from guests, because KSM and memory ballooning are not gods, they cannot create more RAM for your system from nothing...
Disable the Discard option on your guest and activate the compression feature on ZFS. For the scheduler, do not change anything; that is for SSD disks, not for spinning disks. Also, if you use SATA, your limit is a queue depth of 32 at the same time, that is all...
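For the compression part, a minimal sketch on ZFS (the pool name "tank" is an assumption; lz4 is the usual cheap choice):

    zfs set compression=lz4 tank
    # check that it is on and how much you are saving:
    zfs get compression,compressratio tank

Only data written after enabling it gets compressed, existing blocks stay as they are.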
Everyone says ZFS needs one gigabyte of RAM for one TB of data; it is a big lie...
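If RAM is tight you can simply cap the ARC yourself instead of following that rule; a sketch for limiting it to 4 GB (the value is just an example):

    # persistent: 4 GiB = 4294967296 bytes
    echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
    update-initramfs -u
    # or set it live without a reboot:
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max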
You do not need to install onto a disk behind a RAID card; you do not need much performance for Proxmox itself.. By the way, Proxmox and KVM use the same SWAP area for all guests, so you only need a high-speed disk for the swap area. Basically, install Proxmox on a USB disk and use another disk for the SWAP area...
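A minimal sketch of putting swap on a separate fast disk (the device name /dev/sdb1 is an assumption, replace it with your own partition):

    mkswap /dev/sdb1
    swapon /dev/sdb1
    # make it permanent:
    echo "/dev/sdb1 none swap sw 0 0" >> /etc/fstab
    swapon --show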
OCFS2: you need to build an OCFS2 cluster on Proxmox.
1. Build the OCFS2 cluster (it is very, very basic).
2. Mount the OCFS2 folder on all hosts with the same folder name, like /mnt/SANSTORAGE.
3. Add that folder as a directory storage and select the shared option (see the sketch below). Then all hosts can access that storage area at the same...
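A rough sketch of steps 2 and 3, assuming the o2cb cluster from step 1 is already online (the device, storage ID and content types are my own example values):

    # on every host:
    mkdir -p /mnt/SANSTORAGE
    mount -t ocfs2 /dev/mapper/sanlun /mnt/SANSTORAGE
    # once, on any host, register it as shared directory storage (or do the same in the GUI):
    pvesm add dir SANSTORAGE --path /mnt/SANSTORAGE --shared 1 --content images,iso

The --shared flag only tells Proxmox the path holds the same data on every node; OCFS2 itself is what actually makes it safe to write from all of them.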
That was caused by your RAID card... I had that experience with my RAID card too. Sometimes the second try is OK, sometimes it does not work... For a basic solution, install Proxmox on a USB 3 disk and do not create a SWAP area on the USB at installation time; afterwards you can create the SWAP area on high-speed storage...
Proxmox does not need high-speed storage (only put the swap area on high-speed storage, or use zram instead of disk swap if you have enough CPU power..). Also please do not forget, Proxmox is based on Debian with the Linux kernel, which means the kernel is loaded into RAM at boot time... So Proxmox uses the disk for LOG...
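A minimal zram swap sketch, assuming the zram kernel module is available (the 4G size and priority are example values):

    modprobe zram
    echo 4G > /sys/block/zram0/disksize
    mkswap /dev/zram0
    # higher priority than any disk swap, so it is used first:
    swapon -p 100 /dev/zram0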
https://pve.proxmox.com/wiki/Cluster_Manager
You can grow your cluster up to 32 nodes.. For management? What do you want from Proxmox? What are your requirements or expectations, and what is your storage infrastructure? After all, a node means CPU and RAM, and KVM easily transfers memory over the network...
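Joining nodes is simple; a hedged sketch with example names and addresses (the Cluster Manager wiki above is the authoritative reference):

    # on the first node:
    pvecm create mycluster
    # on each additional node, pointing at an existing member:
    pvecm add 192.168.1.10
    # check members and quorum:
    pvecm status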
I am talking about 12~20 gigabit of real-time traffic... more than 14K sessions on just one country-level TV page.. I survived that structure with dual A10 load balancers...
If you have more time to manage the HOST and GUEST, why not?
For your purpose, the best is management software for the HOST and cluster... Also, management software does not only mean managing HOST / CLUSTER / GUEST; management software also means DRS, sDRS or a similar system for easily using the total system resources...
In a Proxmox system, the cluster is exactly the hosts themselves, which means the cluster is managed by all the hosts.. But you can use a different Ethernet interface for guest migration; I think this feature came with 6.**. If for your cluster system you really have a very big host group, it is like professional DC services... I...
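For the dedicated migration network I mean something like this line in /etc/pve/datacenter.cfg (the subnet is an example and the exact syntax may differ by PVE version, so check the docs for yours):

    migration: secure,network=10.10.10.0/24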