@Docop2
Mostly it was based on the following article:
Set the CPU affinity (we only used the 'real' physical cores at the time)
Disable Processor Vulnerability Mitigation
Disable C-States 2 and 3
That's it. That's all we did at the time.
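For reference, a rough sketch of how those three changes can be applied (the VM ID, the core list and the GRUB-based boot are just assumptions on my side, and the --affinity flag needs a reasonably recent qm):

# pin the VM to physical cores of socket 0 (VM ID 100 and cores 0-7 are examples)
qm set 100 --affinity 0-7

# on the PVE host: disable mitigations and the deeper C-states via the kernel cmdline
# (edit /etc/default/grub, then apply and reboot)
GRUB_CMDLINE_LINUX_DEFAULT="quiet mitigations=off intel_idle.max_cstate=1 processor.max_cstate=1"
update-grub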
Testing a Video Monitoring System on stock Windows 10 Pro 22H2, we did only what is available within the Proxmox GUI, plus disabling Processor Vulnerability Mitigation and C-States 2 and 3, on a dual-processor system:
Before: 2000ish fps with 20% frames dropped
After: 3000ish fps with 0% frames...
Our team is working on a version of kiler's script for multi-socket systems with homogeneous CPUs, which is our only use case for now.
I don't have a specific ETA, but we'll post it here once everything is battle-tested, maybe a couple of weeks from now.
Hey Kiler, awesome work!
Just stumbled on your post and it's very helpful. I'm trying to reduce CPU latency for a Windows VM running a proprietary Video Management System, which is currently ingesting nearly 4000 frames per second using pure CPU power.
Your script, out of the box, only...
Hey guys, I was wondering...
Why do we use git-scm/bugzilla for managing the code?
Would you consider switching to more 'mainstream' alternatives to help beginners? (Maybe we would get more people developing for Proxmox this way.)
What about some in-depth series to help newcomers getting...
Jesus Christ, this took almost 2 full days to discover.
I can also confirm the issue and the solution (intel_iommu=off).
Tested on HPE Proliant ML310e Gen8 v2 with the SmartArray P420 (pci-e version).
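For anyone else landing here, roughly what the fix looks like on a GRUB-booted host (that part is an assumption about your setup):

# /etc/default/grub on the PVE host
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=off"
# apply and reboot
update-grub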
Is there a way to configure an advanced ZFS pool during install?
We would like to install it on 12 disks as 2x raidz2 vdevs with 6 disks each. Basically a RAID60-like pool.
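If the installer can't express that, the layout we're after would look roughly like this when built from the shell afterwards (pool and device names are placeholders, not what the installer would use):

zpool create -o ashift=12 tank \
    raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
    raidz2 /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl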
Hey guys, I'm wondering, could one do an active-backup bond of two LACP bonds?
Like this:
auto lo
iface lo inet loopback
iface eno1 inet manual
iface eno2 inet manual
iface eno3 inet manual
iface eno4 inet manual
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon...
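The full /etc/network/interfaces I have in mind looks roughly like this (a sketch, untested; option values are just examples):

auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad

auto bond1
iface bond1 inet manual
        bond-slaves eno3 eno4
        bond-miimon 100
        bond-mode 802.3ad

auto bond2
iface bond2 inet manual
        bond-slaves bond0 bond1
        bond-miimon 100
        bond-mode active-backup
        bond-primary bond0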
It is not. I can transfer 2 streams of sequential data at 100 MB/s without breaking a sweat on that "other server".
Doing transfers between VMs generates the same absurdly high loads.
I'm experiencing huge CPU loads when doing sequential reads/writes. The server becomes unresponsive until the transfer ends.
My setup:
2x Xeon E5 2620 v3
96GB RAM
12x 2TB SAS 7200RPM in RAIDZ2
1x ZIL
1x L2ARC
4x 1Gbps in LACP
I can do various bonnie++ benchmarks with outstanding results (~600...
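While a transfer is running, this is what I've been watching it with (nothing fancy; arc_summary may be arc_summary.py depending on the ZFS version):

zpool iostat -v 1    # per-vdev bandwidth and IOPS
iostat -x 1          # per-device utilisation and await
top                  # is it really CPU time, or iowait?
arc_summary          # ARC size and hit rates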
Hey guys, I was digging a little more into ZFS and got some questions:
- Isn't the IOPS shown for a single HDD too high? (zpool iostat -v 1)
- Under high I/O load the PVE web GUI becomes unresponsive; is there a 'fix'? Maybe more ARC? See the sketch after this list. (12x 2TB disks in RAIDZ2 with ZIL and L2ARC SSDs, limited to 24GB of ARC)
-...
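On the 'maybe more ARC' point, the knob I mean is the ZFS module option; the value below is just an example for 24 GiB:

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=25769803776    # 24 GiB in bytes
# refresh the initramfs and reboot for it to take effect
update-initramfs -u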
Indeed snapshots are not backups.
But they're very handy in various cases.
Yesterday one of our clients accidentally updated a ton of records without a WHERE clause (lol), and thanks to the snapshots nobody even noticed.
Some months ago we rolled back another client who got pwned by some ransomware.
About...
Sup guys
I was thinking, what about offering ZFS auto-snapshot and scrubbing via the GUI, with some kind of cron support?
Do you think it is worth developing that?
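Until something like that lands, a rough equivalent by hand could look like this (package choice and schedule are just examples on my side):

# snapshots: the zfs-auto-snapshot package installs its own cron jobs
apt install zfs-auto-snapshot

# scrubbing: a plain cron entry, e.g. every Sunday at 02:00
# /etc/cron.d/zfs-scrub
0 2 * * 0 root /sbin/zpool scrub rpool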
Hi folks!
This is maybe not the right place to ask, but:
Why use a private git server instead of GitHub?
I think it would make it a lot friendlier for newcomers to help with development.
Thanks!
I'm on PVE 5.2-11 with pve-container 2.0.29 and I must be missing something.
I have an Ubuntu 18.04 container with the nesting and mounting features enabled.
Installed snapd and bam:
-- Unit snapd.service has finished shutting down.
Nov 27 15:50:31 gsm systemd[1]: snapd.service: Start request...
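For clarity, this is roughly how the container is configured (the container ID and the mount fstype list are just my example):

# /etc/pve/lxc/100.conf
features: nesting=1,mount=nfs;cifs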