Hi Jsterr,
We have disabled the LLDP agent on the E810s. We'll make another attempt to move networks over later this week.
Does anyone have any thoughts on FEC? Should we be using it (RS mode) or none? I wonder if that could be causing issues here. Will follow up later when we know more. Thanks
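For anyone following along, this is roughly what we ran (a sketch; the interface name is an example, and the `disable-fw-lldp` private flag is specific to Intel's ice driver used by the E810):

```shell
# Disable the E810's on-NIC (firmware) LLDP agent so LLDP frames
# reach the OS instead of being consumed by the NIC firmware
ethtool --set-priv-flags enp65s0f0 disable-fw-lldp on

# Show the active and supported FEC modes on the link
ethtool --show-fec enp65s0f0

# Force RS-FEC, which 100G optics/DACs typically require
ethtool --set-fec enp65s0f0 encoding rs
```

Note that both ends of the link (switch port and NIC) have to agree on the FEC mode; a mismatch is a classic cause of links that come up but pass no traffic.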
Hi jsterr,
The 15-character limit on bridge port names forced us to use Linux VLANs for this config. We tested the 2-digit VLANs bridged directly to the interface with .XX and it made no difference. We also tested SDN and it made no difference; same problems. I do not believe our interfaces file...
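For reference, a minimal sketch of the Linux-VLAN-plus-bridge layout I'm describing (the interface name, VLAN ID, and bridge name below are examples, not our actual config):

```
auto enp65s0f0.20
iface enp65s0f0.20 inet manual

auto vmbr20
iface vmbr20 inet manual
        bridge-ports enp65s0f0.20
        bridge-stp off
        bridge-fd 0
```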
Hello Proxmox Community!
We are attempting to upgrade the core networking from 10Gb to 100Gb with a Dell S5232F at the helm for the cluster and a Dell S5248F for all of our network access switches to uplink to, intending to use 100Gb uplinks between the switches. The Switches have SONiC open...
I currently have ~400GB of DB/WAL space per 16TB drive on our cluster. A single 2TB NVMe drive services 4 x 16TB spinners in each server. It performed really well at first, but after the drives had been in service a while, performance suffered, especially in Windows guests.
I intend to...
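For context, the OSD layout was created along these lines, one OSD per spinner with its DB/WAL on a partition of the shared NVMe (a sketch; the device paths are examples):

```shell
# One OSD per spinner, with RocksDB/WAL offloaded to the shared NVMe
ceph-volume lvm create --bluestore --data /dev/sda --block.db /dev/nvme0n1p1
ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p2
```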
Hi Superfish,
The particular problems I have had with Ceph write performance are mostly related to Windows driver/cache stack issues. I get pretty good performance when running benchmarks directly on a pool.
Your problem sounds to me like the "sync-write" performance limitation of...
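One way to see the sync-write gap for yourself is to run the same fio workload twice, once buffered and once forcing every write to stable storage (a sketch; the file path and sizes are examples):

```shell
# Buffered 4k random writes: what an optimistic benchmark measures
fio --name=buffered --rw=randwrite --bs=4k --size=1G \
    --filename=/mnt/testpool/fio.dat

# Same workload with --sync=1 (O_SYNC); on HDD-backed pools this is
# typically dramatically slower, and it's what sync-heavy guests see
fio --name=sync --rw=randwrite --bs=4k --size=1G --sync=1 \
    --filename=/mnt/testpool/fio.dat
```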
I would double check that your VM boot disks are indeed on that SSD.
Ceph is software-defined storage spread across a cluster of servers. I think it requires a minimum of 3 servers for testing / proof of concept, 4 for low-impact stuff like homelabs, and 5+ for production clusters hosting commercial...
Slo
VM boot speed is primarily driven by the disk performance backing the virtual boot disk. If your VM boot disks are all backed by those spinning drives, boot speeds of the VM's will be slow. I always back all VM boot disks with a ceph pool that has an "ssd only device class" rule. VM's boot...
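Setting up that SSD-only pool is roughly the following (a sketch; the rule and pool names are examples):

```shell
# Create a replicated CRUSH rule restricted to OSDs with device class "ssd"
ceph osd crush rule create-replicated ssd-only default host ssd

# Point the pool backing the VM boot disks at that rule
ceph osd pool set vm-boot crush_rule ssd-only
```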
I would suggest leaving the proxmox configuration alone, and rather than trying to "fix" something that isn't a problem, simply create a proxy that does what you want it to do.
I use haproxy (on pfsense) to direct incoming host.domain requests to their appropriate servers and ports.
Don't try...
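The idea in plain haproxy terms looks something like this (a sketch; hostnames, ports, and backend addresses are examples, and on pfsense all of this is configured through the HAProxy package GUI rather than a raw config file):

```
frontend https-in
    bind *:443
    # Route by the requested hostname
    acl is_pve hdr(host) -i pve.example.com
    use_backend pve_servers if is_pve
    default_backend web_servers

backend pve_servers
    server pve1 192.168.1.10:8006 ssl verify none

backend web_servers
    server web1 192.168.1.20:80
```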
Does the USB-C hub have any storage in it? Like, was a thumb drive or flash drive installed in it? Wondering if maybe one of the required partitions was inadvertently installed on external media...
Just stopping in to thank the Proxmox Dev team and all contributors for doing great work as always. Both homelab and production clusters at work upgraded to 8 following the provided instructions without any issues. I really appreciate the addition of an enterprise ceph repo.
The option to define an upstream gateway for an interface only appears with the interface configured for static IP assignment. With DHCP, the upstream gateway should be "provided," however, if you are the administrator of both the pfsense you're configuring and the upstream gateway, you might...
Aha!
Thanks Neobin!
I wonder if I'm observing 2 separate issues here or are these related? I believe both issues cropped up around the same time.
When "host" is selected for these CentOS7 VM's, I get those virtio mapping errors during the boot sequence of the VM, when "EPYC-Rome" is...
Bumping this. The "EPYC-ROME" option does not work on EPYC Rome host hardware, and it's causing problems with some VM's when set to "host."
I waited for a round of updates to see if this would "Self resolve" but latest enterprise repo kernel was installed yesterday. Issue persisting.
I was experimenting with various possible causes....
Switched the CPU type of the VM from "host" to "EPYC-ROME," which is what these servers are (7402P CPUs), and got the following error:
/dev/rbd32
/dev/rbd33
swtpm_setup: Not overwriting existing state file.
kvm: warning: host doesn't support...
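For anyone trying to reproduce this, the CPU type can also be switched from the CLI (a sketch; the VMID 100 is an example):

```shell
# Set the VM's emulated CPU type to EPYC-Rome instead of "host"
qm set 100 --cpu EPYC-Rome

# Confirm what the VM is currently configured with
qm config 100 | grep ^cpu
```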
Hello Proxmox Devs & Users!
Interesting problem this week. I went to reboot our Security Onion nodes and ran into some problems...
at VM boot:
Then, after a while of waiting:
Updated our cluster last week:
proxmox-ve: 7.4-1 (running kernel: 5.15.104-1-pve)
pve-manager: 7.4-3 (running...
Good question... Curious if anyone else is still struggling with these issues.
I thought we had this problem whipped, but within a few months of adding the NVMe drives for WAL/DB, write speed from Windows guests to the spinning pool collapsed again. It's just terrible. It seems to be...
Thanks very much fabian!
That gives me a path forward. I'm basically going to do option 2 there, and archive the datastore whose backups I can mostly delete as soon as a new backup is made in "A".
In a few months, when we have a good recent history established in the new datastore, I'll...
I have created a bit of a mess for myself and looking for a way forward that makes sense.
At one time, I had a single datastore on our PBS server and a weekly backup schedule.
I wanted to move some VM's to a daily backup schedule, and for some reason I created a separate datastore on...
Interesting! Thanks for sharing that brudy!
I prefer not to modify anything on production proxmox servers unless the modification is part of the administration guide / documentation. In other words, the modification I make to fix something like this, should be in the scope of visibility and...