Recent content by AllanM

  1. Intel E810-C 100G + Dell S532F-ON = Headaches

    Hi Jsterr, We have disabled the LLDP agent on the E810s and will make another attempt to move networks over later this week. Does anyone have thoughts on FEC? Should we be using it (RS mode) or none? I wonder if that could be causing issues here... Will follow up later when we know more. Thanks
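    For reference, on Linux the E810's FEC mode can be inspected and forced with ethtool; a minimal sketch, assuming a recent ethtool and an interface named enp65s0f0 (the name is illustrative, adjust to your NIC):

    ```shell
    # Show the active and supported FEC modes on the port
    ethtool --show-fec enp65s0f0

    # Force RS (Reed-Solomon) FEC to match a switch port set to "RS"
    ethtool --set-fec enp65s0f0 encoding rs

    # Or disable FEC if the switch side is set to "None"
    ethtool --set-fec enp65s0f0 encoding off
    ```

    Whichever mode is chosen, both ends of the link generally need to agree, or the link will not come up.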
  2. Intel E810-C 100G + Dell S532F-ON = Headaches

    Hi jsterr, The 15-character limit on bridge port names forced us to use Linux VLANs for this config. We tested the 2-digit VLANs bridged directly to the interface with .XX and it made no difference. We also tested SDN and it made no difference; same problems. I do not believe our interfaces file...
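    For anyone hitting the same 15-character name limit, a minimal /etc/network/interfaces sketch of the Linux-VLAN workaround described above, assuming a port named enp65s0f0 and VLAN 100 (both names are illustrative):

    ```text
    # Raw 100G port
    auto enp65s0f0
    iface enp65s0f0 inet manual

    # Linux VLAN interface; the .100 form keeps the name under the kernel's
    # 15-character IFNAMSIZ limit
    auto enp65s0f0.100
    iface enp65s0f0.100 inet manual

    # Bridge carrying that VLAN for guests
    auto vmbr100
    iface vmbr100 inet manual
        bridge-ports enp65s0f0.100
        bridge-stp off
        bridge-fd 0
    ```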
  3. Intel E810 NIC's

    We had to switch the FEC mode on the switch ports our E810s are connected to over to "RS" mode to get a link light.
  4. Intel E810-C 100G + Dell S532F-ON = Headaches

    Hello Proxmox Community! We are attempting to upgrade the core networking from 10Gb to 100Gb, with a Dell S5232F at the helm for the cluster and a Dell S5248F for all of our network access switches to uplink to, intending to use 100Gb uplinks between the switches. The switches have SONiC open...
  5. Poor write performance on ceph backed virtual disks.

    I currently have ~400GB of DB/WAL space per 16TB drive on our cluster. A single 2TB NVMe drive services 4 x 16TB spinners in each server. It performed really well at first, but once they had been in service a while the performance suffered, especially in Windows guests. I intend to...
  6. Poor write performance on ceph backed virtual disks.

    Hi Superfish, The particular problems I have had with Ceph write performance are mostly related to Windows driver/cache stack issues. I get pretty good performance when running benchmarks directly on a pool. Your problem sounds to me like the "sync-write" performance limitations of...
  7. Cpu cores and threads.

    I would double-check that your VM boot disks are indeed on that SSD. Ceph is software-defined storage across a cluster of servers. I think it requires 3 servers minimum for testing / proof of concept, 4 minimum for low-impact stuff like homelabs, and 5+ for production clusters hosting commercial...
  8. Cpu cores and threads.

    Slow VM boot speed is primarily driven by the performance of the disk backing the virtual boot disk. If your VM boot disks are all backed by those spinning drives, the VMs will boot slowly. I always back all VM boot disks with a Ceph pool that has an "ssd only device class" rule. VMs boot...
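    An "ssd only" rule like the one mentioned above can be built with Ceph's device-class support; a sketch, assuming the default CRUSH root and a replicated pool named vm-ssd (the rule and pool names are illustrative):

    ```shell
    # Create a CRUSH rule that only selects OSDs with device class "ssd",
    # replicating across hosts
    ceph osd crush rule create-replicated ssd-only default host ssd

    # Create a pool governed by that rule and mark it for RBD,
    # then point Proxmox VM disk storage at it
    ceph osd pool create vm-ssd 128 128 replicated ssd-only
    ceph osd pool application enable vm-ssd rbd
    ```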
  9. Possible to change Proxmox default port now?

    I would suggest leaving the Proxmox configuration alone and, rather than trying to "fix" something that isn't a problem, simply creating a proxy that does what you want it to do. I use haproxy (on pfSense) to direct incoming host.domain requests to their appropriate servers and ports. Don't try...
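    As a rough illustration of that approach, an haproxy snippet that routes by requested hostname while Proxmox keeps its default port 8006 internally; the hostname, certificate path, and backend address are placeholders:

    ```text
    frontend https_in
        mode http
        bind *:443 ssl crt /etc/haproxy/certs/
        # Route on the Host header instead of exposing nonstandard ports
        use_backend proxmox if { req.hdr(host) -i pve.example.com }
        default_backend webserver

    backend proxmox
        mode http
        # Proxmox web UI stays on its default port behind the proxy
        server pve1 192.168.1.10:8006 ssl verify none
    ```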
  10. Proxmox won't boot after unplugging usb-c hub.

    Does the USB-C hub have any storage in it? Like, was a thumb drive or flash drive installed in it? Wondering if maybe one of the required partitions was inadvertently installed on external media...
  11. Proxmox VE 8.0 released!

    Just stopping in to thank the Proxmox Dev team and all contributors for doing great work as always. Both homelab and production clusters at work upgraded to 8 following the provided instructions without any issues. I really appreciate the addition of an enterprise ceph repo.
  12. How to configure Proxmox and PfSense VM so that all network requests go through PfSense

    The option to define an upstream gateway for an interface only appears when the interface is configured for static IP assignment. With DHCP, the upstream gateway should be "provided"; however, if you administer both the pfSense you're configuring and the upstream gateway, you might...
  13. CentOS 7 based VM's won't boot anymore

    Aha! Thanks Neobin! I wonder if I'm observing 2 separate issues here, or are these related? I believe both issues cropped up around the same time. When "host" is selected for these CentOS 7 VMs, I get those virtio mapping errors during the VM boot sequence; when "EPYC-Rome" is...
  14. CentOS 7 based VM's won't boot anymore

    Bumping this. The "EPYC-Rome" option does not work on EPYC Rome host hardware, causing problems with some VMs if set to "host." I waited for a round of updates to see if this would resolve itself, but the latest enterprise repo kernel was installed yesterday and the issue persists.
  15. CentOS 7 based VM's won't boot anymore

    I was experimenting with various possible causes... Switched the CPU type of the VM from host to EPYC-Rome, which is what these servers are (7402P CPUs), and got the following error: /dev/rbd32 /dev/rbd33 swtpm_setup: Not overwriting existing state file. kvm: warning: host doesn't support...
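    For anyone reproducing this, the VM's CPU model can be switched from the CLI as well as the GUI; a sketch, assuming VM ID 101 (illustrative):

    ```shell
    # Set the virtual CPU model explicitly instead of passing "host" through
    qm set 101 --cpu EPYC-Rome

    # Or fall back to the broadly compatible default model while debugging
    qm set 101 --cpu kvm64
    ```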
