I found out that if a field is not listed, it should be defined in the API Data field.
As there are many DNS providers and API endpoints, Proxmox VE automatically generates the credential form for some providers. For the others you will see...
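For providers without a generated form, the API Data field typically takes the same KEY=value environment variables that the underlying acme.sh DNS plugin expects, one per line. A hedged example for the Cloudflare plugin (token and zone ID are placeholders):

```
CF_Token=<your-cloudflare-api-token>
CF_Zone_ID=<your-zone-id>
```

The variable names to use for a given provider can be looked up in that plugin's acme.sh documentation.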
Another vote for OPNsense. I find the AdGuard plugin (an OPNsense community plugin) an excellent ad blocker, as well as a way to stop or limit teens from reaching sites and other social media I might want to restrict. Great for monitoring as well. Better than...
Makes sense. As mentioned, it is well known that several LSI HBAs have severe ASPM issues. And now that I think about it, the first Intel 10G controller with working ASPM is the X710. So consequently, the much older 82599 is showing problems.
All in...
Yes, it's quite simple. You create a dedicated VLAN and attach a small routing VM to that network which does the NAT.
For example, if your server has 192.168.1.10, the copy can keep the same IP and the router VM does NAT, e.g...
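A minimal sketch of such a router VM, assuming one NIC in the isolated VLAN and one in the LAN; interface roles and the outside address 192.168.2.10 are assumptions for illustration:

```shell
# Enable routing on the small NAT VM
sysctl -w net.ipv4.ip_forward=1

# 1:1 NAT: expose the clone (192.168.1.10 inside the VLAN)
# under a different address on the LAN side, e.g. 192.168.2.10
iptables -t nat -A PREROUTING  -d 192.168.2.10 -j DNAT --to-destination 192.168.1.10
iptables -t nat -A POSTROUTING -s 192.168.1.10 -j SNAT --to-source 192.168.2.10
```

This way the clone keeps its original IP inside the VLAN while being reachable from outside without an address conflict.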
Can you mention your test bed (host spec, OS and version) and any tuning you have done?
Because I am getting around 100k in RHEL 9.6 with an AMD 9534F processor with Ceph replication.
Also, please provide the full fio command so I can cross-verify.
Thanks.
There are plenty of recommendations. With 5 nodes you have the advantage that you can take one node into maintenance while a second one fails at the same time, but only if you raise the availability of your pool from 3/2 to 4/2.
In production...
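Raising the pool from 3/2 to 4/2 as described would look like this; the pool name is a placeholder:

```
ceph osd pool set <pool> size 4
ceph osd pool set <pool> min_size 2
```

Note that size 4 means every object is stored four times, so plan raw capacity accordingly.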
Noted, thank you. How should I then implement your proposal?
The easy way would be to create a shared folder among the Docker Swarm VMs through NFS, but I have read that this also has performance issues.
CephFS works, but it's metadata-heavy: great for shared POSIX files, not ideal for DB-like/container write-intensive workloads without careful MDS sizing/tuning.
To save the trouble of tuning and to reduce risk, I'll use RBD volumes (fast, block-level).
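Provisioning such an RBD volume could look like this; the pool and image names and the size are assumptions for illustration:

```shell
# Create a 50 GiB RBD image in the pool
rbd create dockerswarm/appdata --size 50G

# On the node that will run the service:
rbd map dockerswarm/appdata        # prints the block device, e.g. /dev/rbd0
mkfs.ext4 /dev/rbd0                # first use only
mount /dev/rbd0 /mnt/appdata
```

Keep in mind a plain RBD image is single-writer, so a Swarm service using it has to be pinned to the node where the image is mapped (or managed through a volume plugin that handles mapping on failover).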
Hi,
I have a very similar problem to https://forum.proxmox.com/threads/freeze-on-pfsense-vm-running-in-pve-9.171557/. I also upgraded PVE 8 to 9 and started experiencing freezes of my FreeBSD 13.x and 14.x VMs (FreeBSD 9.1 works OK). It's not the guest OS...
Thank you for your reply.
From what I have read so far, this is what I will do: I will create a new pool (named dockerswarm) and assign the OSDs of the NVMe drives to it.
I have read comments that CephFS is not the best solution for a Docker...
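The NVMe-only pool described above can be sketched with a CRUSH device-class rule, assuming the NVMe OSDs already carry the `nvme` device class; the rule name and PG count are assumptions:

```shell
# CRUSH rule that selects only OSDs with device class "nvme"
ceph osd crush rule create-replicated nvme-only default host nvme

# Replicated pool bound to that rule, then tagged for RBD use
ceph osd pool create dockerswarm 128 128 replicated nvme-only
ceph osd pool application enable dockerswarm rbd
```

`ceph osd crush class ls` shows which device classes the OSDs were assigned.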
I found the solution for this myself. You have to do the following in the Windows VM:
1. Click the Start menu and type Component Services
2. Double click Computers > My Computer > Distributed Transaction Coordinator
3. Right click Local DTC >...
I have (or well, had) 3 nodes running in a cluster.
Two of these nodes have only one SATA port and just use the default LVM configuration (one with a 256 GB SSD, the other with a 512 GB SSD).
The other node has a second SATA port so I figured...
Before I go that route I would take two (or three!) USB-to-NVMe or -to-SATA Adapters and put some small known-good devices in there.
That's not really professional, and I have always said that USB is not a good idea in this context. I had run a...
I normally run the PVE host install on onboard NVMe without RAID, for performance, since the hosts are already in an HA cluster.
Using an SD card sounds very risky to me.
I am in the process of moving from Docker to Docker Swarm in order to better utilize my hardware resources. For this, I have created a Swarm cluster and am investigating my alternatives for shared storage for the swarm cluster.
Currently, on...
ZFS "deadman" events usually point to I/O stalls, i.e. drives not responding within the expected time.
Are all disks connected through the chipset SATA ports (no port multipliers)?
Try checking for slow links with dmesg | grep ata or zpool status -v.
Also, any...
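The checks mentioned above can be run like this; device names and the exact messages will differ per system:

```shell
# Look for ATA link resets / timeouts that precede the deadman events
dmesg | grep -iE 'ata[0-9]+|timeout|reset'

# Show per-device read/write/checksum errors and pool state
zpool status -v

# Per-vdev latency over 5 s intervals can reveal a single slow disk
zpool iostat -v 5
```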