Hi everyone! I've been studying the difference between SLOG and ZIL. As I understand it, the SLOG speeds up writes to the drive and the ZIL speeds up reads.
I've read the article from servethehome about setting up cache drives...
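For what it's worth, the usual framing is slightly different: the ZIL (ZFS Intent Log) is where synchronous writes land, and it always exists inside the pool; a SLOG is just a separate, faster device dedicated to holding the ZIL, while reads are cached by the ARC in RAM and optionally by an L2ARC device. A minimal sketch of adding each (the pool name "tank" and the device paths are placeholders; use /dev/disk/by-id paths on a real system):

# Dedicated SLOG device: holds the ZIL, accelerates synchronous writes
zpool add tank log /dev/disk/by-id/nvme-example-slog

# L2ARC device: extends the read cache beyond RAM
zpool add tank cache /dev/disk/by-id/nvme-example-cache

# Confirm the new vdev roles
zpool status tank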
Hi,
I'm running a PVE cluster at home, mostly for fun and some local services. I deployed it on a couple of Dell OptiPlex machines (for the low power consumption), each of which has one NVMe slot and one SATA slot.
Initially, I put a 1TB NVMe SSD inside each OptiPlex, partitioned with ZFS, and 16...
Looking for some thoughts and feedback before I make any changes to my system.
At home I run Proxmox with several VMs, some of them Windows for my homelab testing needs; there are often I/O delays and slowness when updating or installing programs. I have a pair of SSDs on the system already...
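Before changing hardware it's usually worth confirming where the latency comes from; a quick sketch, assuming the VMs sit on a ZFS pool named rpool (the pool name is an assumption):

# Per-device bandwidth and operations on the pool, refreshed every 5 seconds,
# while a Windows update or install is running
zpool iostat -v rpool 5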
[cross-posted on STH]
I work in a research lab that recently purchased a Threadripper 3970X workstation from a system integrator.
It is a much better deal than a Dell/Intel build, which would have cost us twice as much.
The plan is to run Proxmox as the base hypervisor and run multiple Windows and...
Hi, I'm new to Proxmox VE. I've put 4 x 1TB nearline SAS drives in a Dell Precision T5500 on a PERC H200 and installed PVE on them in RAID10. They feel a little slow, and I want to accelerate them with a Sun F20 PCIe SSD accelerator. Proxmox sees 4 x 24GB SSDs and I want to use 2 of them in...
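If the plan is to use two of the F20's flash modules as a SLOG, mirroring them is the usual advice, since losing a lone SLOG can cost in-flight sync writes. A sketch, assuming the pool is named rpool and the modules appear as sde through sdh (hypothetical device names; prefer /dev/disk/by-id paths):

# Two modules as a mirrored SLOG
zpool add rpool log mirror /dev/sde /dev/sdf

# The other two as L2ARC (a read cache needs no redundancy)
zpool add rpool cache /dev/sdg /dev/sdh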
I know this subject has been discussed so many times... I have read so many tutorials, "best practices" and "suggestions"... Sometimes they advise exactly opposite things... After reading yet another one, I decided to ask the community for advice before I start :)
I am building a Proxmox VE setup based...
Help! :)
I have four servers with three drive bays each, populated with either 2x Samsung SM863 or 2x Intel S3610 drives. The drives are set up as a ZFS mirrored vdev, with both ROOT and data on the pair. One bay is open on each server.
Normally the I/O load on each server is low - iowait is...
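To put numbers on that baseline before deciding what goes in the open bay, something like this (iostat comes from the sysstat package; the kstat path is as on ZFS on Linux):

# Per-device latency and utilisation, 5-second intervals
iostat -x 5

# ARC hit/miss counters: if reads are already served from RAM,
# an L2ARC in the spare bay will not buy much
awk '$1 == "hits" || $1 == "misses" {print $1, $3}' /proc/spl/kstat/zfs/arcstats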
I have an extra 120 GB SSD lying around and I would like to set it up for ZIL and L2ARC, but want to clarify something first.
In my setup, I have two ZFS pools. They are:
- 2x 480 GB SSDs in mirror (rpool) - Created during Proxmox install
This pool has Proxmox and all containers/VMs on it...
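With a single spare SSD the common pattern is to partition it and hand each role a slice; a rough sketch, assuming the 120 GB disk shows up as /dev/sdc (hypothetical) and the target is rpool. A SLOG rarely needs more than a few GB, and note that an unmirrored SLOG puts in-flight sync writes at risk if the SSD dies:

# Small partition for the SLOG, remainder for L2ARC
sgdisk -n 1:0:+16G /dev/sdc
sgdisk -n 2:0:0    /dev/sdc

zpool add rpool log   /dev/sdc1
zpool add rpool cache /dev/sdc2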
We have a Proxmox install where everything is up and running and up to date (5.2.5)… but with ridiculously slow write performance on the disks. For a dual-socket machine with 256GB of RAM and 10K SAS disks this is really bad…
The system was formatted with two SSDs (mirrored) for the system on...
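A quick way to confirm whether sync writes are the bottleneck is to run fio with and without an fsync after every write; the directory and sizes below are placeholders:

# Sync-heavy 4k writes (fsync per write) - the pattern a SLOG would accelerate
fio --name=synctest --directory=/rpool/data --rw=write --bs=4k --size=1G --fsync=1

# Same workload without forced syncs, for comparison
fio --name=asynctest --directory=/rpool/data --rw=write --bs=4k --size=1G

If the first run is dramatically slower, the pool is sync-bound and a fast SLOG (or fixing the controller cache settings) is where to look.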
Install Proxmox on a Dell R510 server (12 SCSI/SATA bays) with the following criteria:
UEFI boot
ZFS mirrored boot drives
Large Drive support (> 2TB)
SSD-backed caching (a Sun F20 flash accelerator with 96GB of cache on 4 chips)
Home File Server
There were lots of “gotchas” in the process...
Hi,
I'm trying to find out whether my Proxmox system (with ZFS) would benefit from adding a dedicated M.2 SSD for the SLOG. Can I somehow profile my system and count the sync writes (O_SYNC), or better, find out whether sync writes are a bottleneck? I don't want to benchmark, I want to get the info on...
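On ZFS on Linux the kernel already keeps ZIL counters, so this can be checked without benchmarking; a sketch (the kstat path and field names are as in current OpenZFS and may differ between versions):

# ZIL activity counters for the whole system
cat /proc/spl/kstat/zfs/zil

# Sample the commit counter a minute apart: if it climbs steadily under
# normal load, the workload is issuing sync writes a SLOG could absorb
grep zil_commit_count /proc/spl/kstat/zfs/zil
sleep 60
grep zil_commit_count /proc/spl/kstat/zfs/zil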
Hello,
I've installed the latest Proxmox 4.1 via ISO on one of my dedicated servers. The specs are:
CPU1: Intel(R) Core(TM) i7 CPU 950 @ 3.07GHz (Cores 8)
Memory: 24061 MB
Disk /dev/sda: 750 GB (=> 698 GiB)
Disk /dev/sdb: 750 GB (=> 698 GiB)
Disk /dev/sdc: 120 GB (=> 111 GiB)...