It's not an "issue"; there's just a lack of communication from within the guest to the host. VMs that can run the agent will produce better feedback here. Proxmox has to treat any memory that has been "touched" as used, without further information about what it was used for. I would say that...
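If it helps, a minimal sketch of getting that better feedback (VMID 101 and the Debian package commands are just placeholders for whatever guest you run):

    # On the Proxmox host: enable the guest agent option for the VM
    qm set 101 --agent enabled=1

    # Inside a Debian/Ubuntu guest: install and start the agent
    apt install qemu-guest-agent
    systemctl enable --now qemu-guest-agent

    # Back on the host: confirm the agent responds
    qm agent 101 ping

Once the agent is talking, the memory usage shown in the GUI reflects what the guest reports rather than "touched" pages.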
Another follow-up.
All of the same problems returned a few weeks later. Totally random loss of communication on some networks between nodes. Ceph and Corosync seem to work fine on the E810 + S5232F's, but VLANs are a disaster. Comms will work randomly on some nodes and not on others. It's like...
We had some growing pains getting the E810s working, but in the end we did get them working. I don't think any of the issues we faced were actually related to Proxmox; everything was related to low-level hardware/port/operational/firmware settings on the cards and switches.
FEC mode may need...
Following up...
We disabled the LLDP agent and also disabled SR-IOV on both the motherboards and the cards, since we don't use that feature anyway (rough commands below), then moved the networks back over to the new E810s.
The cluster and all networking have been working now for several weeks and through a normal monthly...
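For anyone searching later: on the ice driver the card's firmware LLDP agent is exposed as an ethtool private flag, so disabling it looked roughly like this (interface names here are placeholders for your E810 ports):

    # Check the current private flags on the port
    ethtool --show-priv-flags enp65s0f0

    # Turn off the firmware LLDP agent so LLDP frames are handled by the OS/switch instead
    ethtool --set-priv-flags enp65s0f0 fw-lldp-agent off

    # Repeat for the second port, and persist it (e.g. a post-up line in /etc/network/interfaces)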
Hi Jsterr,
We have disabled the LLDP agent on the E810s. Will make another attempt to move networks over later this week.
Anyone have any thoughts on FEC? Should we be using it (RS mode) or none? I wonder if that could cause issues here... Will follow up later when we know more. Thanks
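In case it helps anyone following along, FEC can be inspected and forced per port with ethtool; something along these lines (interface name is a placeholder, and the mode has to match whatever the switch side is set to):

    # Show the configured and active FEC encoding on the port
    ethtool --show-fec enp65s0f0

    # Force RS-FEC (commonly required on 100G DAC/SR4 links)
    ethtool --set-fec enp65s0f0 encoding rs

    # Or disable FEC entirely if the switch port has it off
    ethtool --set-fec enp65s0f0 encoding off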
Hi jsterr,
The 15-character limit on bridge port names forced the use of Linux VLANs for this config. We tested the 2-digit VLANs bridged directly to the interface with .XX and it made no difference. We also tested SDN and it made no difference; same problems. I do not believe our interfaces file...
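For reference, the shape of the config is basically this (interface names and VLAN IDs are illustrative, not our exact file):

    # /etc/network/interfaces (illustrative excerpt)
    auto vlan20
    iface vlan20 inet manual
        vlan-raw-device enp129s0f0np0
        # named Linux VLAN sidesteps the 15-character limit that nicname.XXXX would hit

    auto vmbr20
    iface vmbr20 inet manual
        bridge-ports vlan20
        bridge-stp off
        bridge-fd 0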
Hello Proxmox Community!
We are attempting to upgrade the core networking from 10Gb to 100Gb, with a Dell S5232F at the helm for the cluster and a Dell S5248F for all of our network access switches to uplink to, intending to use 100Gb uplinks between the switches. The switches have SONiC open...
I currently have ~400GB of DB/WAL space per 16TB drive on our cluster. A single 2TB NVMe drive is servicing 4 x 16TB spinners in each server. It performed really well at first, but once they had been in service a while the performance has suffered, especially in Windows guests.
I intend to...
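A quick sanity check worth running in a layout like this (generic Ceph CLI, nothing cluster-specific) is whether the DB partitions are filling up and spilling over onto the spinners:

    # Per-OSD data vs metadata (DB) usage
    ceph osd df tree

    # Any BLUEFS_SPILLOVER warnings mean RocksDB has overflowed onto the slow device
    ceph health detail | grep -i spillover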
Hi Superfish,
The particular problems I have had with Ceph write performance are mostly related to Windows driver/cache stack issues. I get pretty good performance when running benchmarks directly on a pool.
Your problem sounds to me like "sync-write" performance limitations of...
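An easy way to see the sync-write ceiling for yourself is a generic fio run like the one below (paths and sizes are placeholders; point it at a test disk on the pool, not production data):

    # 4k writes with an fsync after every write, which is roughly what a database
    # or a write-through Windows cache setting will demand of Ceph
    fio --name=syncwrite --filename=/mnt/cephtest/fio.tmp --size=2G \
        --bs=4k --rw=write --ioengine=libaio --direct=1 --fsync=1 \
        --numjobs=1 --runtime=60 --time_based --group_reporting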
I would double check that your VM boot disks are indeed on that SSD.
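A quick way to check, if it helps (the VMID and storage name here are placeholders):

    # List the VM's disks and which storage backs them
    qm config 101 | grep -E '^(scsi|virtio|sata|ide)[0-9]'

    # Confirm what that storage actually is (type, pool, free space)
    pvesm status
    pvesm list local-ssd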
Ceph is software-defined storage across a cluster of servers. I think it requires a minimum of 3 servers for testing / proof of concept, 4 minimum for low-impact stuff like homelabs, and 5+ for production clusters hosting commercial...
VM boot speed is primarily driven by the disk performance backing the virtual boot disk. If your VM boot disks are all backed by those spinning drives, boot speeds of the VMs will be slow. I always back all VM boot disks with a Ceph pool that has an "SSD-only device class" rule. VMs boot...
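For anyone wanting to replicate that, the device-class rule is just the following (the pool name is a placeholder):

    # Create a replicated CRUSH rule restricted to OSDs with the "ssd" device class
    ceph osd crush rule create-replicated replicated-ssd default host ssd

    # Point the pool holding VM boot disks at that rule
    ceph osd pool set vm-boot-pool crush_rule replicated-ssd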
I would suggest leaving the proxmox configuration alone, and rather than trying to "fix" something that isn't a problem, simply create a proxy that does what you want it to do.
I use haproxy (on pfsense) to direct incoming host.domain requests to their appropriate servers and ports.
Don't try...
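A stripped-down version of the idea, as a plain haproxy.cfg sketch (hostnames, IPs and ports are made up; on pfSense you would build the equivalent through the HAProxy package GUI):

    # Route by the requested hostname instead of touching the Proxmox config
    frontend https_in
        bind *:443 ssl crt /etc/ssl/private/wildcard.pem
        acl is_pve  hdr(host) -i pve.example.com
        acl is_app  hdr(host) -i app.example.com
        use_backend pve_backend if is_pve
        use_backend app_backend if is_app

    backend pve_backend
        # Proxmox web UI keeps its default port 8006 behind the proxy
        server pve1 192.168.1.10:8006 ssl verify none

    backend app_backend
        server app1 192.168.1.20:443 ssl verify none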
Does the USB-C hub have any storage in it? Like, was a thumb drive or flash drive installed in it? Wondering if maybe one of the required partitions was inadvertently installed on external media...
Just stopping in to thank the Proxmox Dev team and all contributors for doing great work as always. Both homelab and production clusters at work upgraded to 8 following the provided instructions without any issues. I really appreciate the addition of an enterprise ceph repo.
The option to define an upstream gateway for an interface only appears when the interface is configured for static IP assignment. With DHCP, the upstream gateway should be "provided"; however, if you are the administrator of both the pfSense you're configuring and the upstream gateway, you might...
Aha!
Thanks Neobin!
I wonder if I'm observing two separate issues here, or are these related? I believe both issues cropped up around the same time.
When "host" is selected for these CentOS 7 VMs, I get those virtio mapping errors during the boot sequence of the VM; when "EPYC-Rome" is...