Hello all,
Before I start down the road of trying this from scratch, I figured I'd ask whether anyone else has tried it, and if so, whether they have any notes they'd be willing to share.
I freely admit that this is probably something pretty exotic, but I see a lot of potential value in it.
Granted, I was using 5GbE USB adapters, but I was having all kinds of issues running them on USB 3.0 ports. The trouble started once traffic got above 2.5 Gbps on my machine, though this wasn't on ProxMox; it was a CentOS machine with the latest 5.11 kernel at the time. Once I moved the interfaces to the 3.2 ports, they...
That's what I'm taking out of this as well...
It's also a good opportunity to reduce my VM footprint and stop procrastinating on moving stuff into containers. So many of these VMs are only running a single application that I could probably shrink the cluster's disk space usage by 55%...
Backups are important, m'kayyyy
Yo.... seriously....... You just saved my sanity!! I was going through my old NAS for old VMs, bringing them up one at a time to see how much data loss I was going to have to deal with, and then I saw the response from @sippe.
As you suggested, I added the...
I'm guessing the NIC is losing its connection somewhere at the USB level (cable or port)... I had similar issues when using USB-to-5GbE adapters. Try the following:
Are you in a USB-3.2 port?
Can you swap the USB cable with a different one?
Check dmesg for entries from the USB NIC. Make sure the same disconnect/reset messages aren't repeating (see the quick sketch below for one way to check).
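Here's a rough sketch of how I eyeball that, not anything official, just a quick script that counts repeated kernel messages from the adapter. The driver name "r8152" is an assumption (common for Realtek-based 2.5/5GbE USB NICs); swap in whatever dmesg actually shows for your adapter, and note you may need to run it as root.

# Rough sketch: count repeated dmesg lines from a USB NIC to spot link flapping.
# DRIVER is a hypothetical value -- replace "r8152" with your adapter's driver name.
import subprocess
from collections import Counter

DRIVER = "r8152"  # assumption: Realtek-based USB NIC; adjust as needed

# Grab the kernel log (may require root on some distros).
out = subprocess.run(["dmesg"], capture_output=True, text=True).stdout

# Keep only lines mentioning the NIC driver, case-insensitively.
hits = [ln for ln in out.splitlines() if DRIVER.lower() in ln.lower()]

# Strip the leading "[timestamp]" so identical messages collapse together.
counts = Counter(ln.split("]", 1)[-1].strip() for ln in hits)

# Anything repeating more than once is worth a closer look.
for msg, n in counts.most_common(10):
    if n > 1:
        print(f"{n:4d}x  {msg}")

If the same reset/disconnect line shows up over and over, it's almost certainly the cable or the port, which is why I'd try the 3.2 port and cable swap first.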
Please sticky this thread.
Also, I would suggest moving the RocksDB Resharding notice from the bottom of the instructions to the top of the https://pve.proxmox.com/wiki/Ceph_Octopus_to_Pacific document. It's just anecdotal, but at this point I've come across a number of posts about crashes...
The Juniper QFX 5100 would be my recommendation if you only need VXLAN bridging. They were announced as EOL not too long ago, so you should be able to find them on the second-hand market. If you're going new, go with either the 5110 or the 5120.
Hello All,
I am new to ProxMox, and could use your help. I've done some troubleshooting, but don't know what to do next. Hoping someone can provide some guidance.
I was running a 4-host, 30-guest ProxMox 6.4 HCI cluster with Ceph Octopus for about a month, and it was working pretty well. I...