I didn't bother with templates. I created a kickstart file with a local kernel and initrd that calls my RHEL provisioning engine website (which I wrote in a relatively short amount of time). I just lay down a new OS for each of my guests rather than cloning, which works out quite well...
I get what you're saying now. You didn't happen to change the interface information in the UI and then change it manually in the OS before rebooting, did you? The UI lays down a temporary file in the /etc/network directory (interfaces.new, I think?) and if that file exists, on next boot it will copy...
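If you want to check whether a change is still staged before rebooting, something like this will show it (assuming the filename is as I remember it; verify on your version):

    # See whether the GUI has staged a pending network change
    ls -l /etc/network/interfaces.new
    # Compare it against the live config before rebooting
    diff /etc/network/interfaces /etc/network/interfaces.new
    # If you don't want it applied on next boot, just remove it
    rm /etc/network/interfaces.new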
It's not a glitch. If your NIC has a PXE ROM on it, it's just going through the standard PXE boot process. During hardware startup, if your network interface has a higher boot priority than your hard disk, the PXE interface will attempt to boot, which requires it to do a DHCP request...
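If you want to confirm that's what's happening, you can watch for the DHCP request from another machine on the same segment (or a mirror port) while the box goes through its PXE phase; the interface name here is just an example:

    # Watch for DHCP/BOOTP traffic coming from the PXE ROM during startup
    tcpdump -ni eno1 port 67 or port 68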
Something I ran into regarding MTU sizes with my Proxmox configuration: I have a SAN bridge (Linux bridge) on vmbr1 that includes eno1 (either a bnx2x or ixgbe interface, depending on the server) as a member interface. I had played around with MTU sizes and was getting sporadic results.
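As a sketch (the address and the 9000 value are just examples), the member port and the bridge both need a matching MTU in /etc/network/interfaces, otherwise you get exactly this kind of sporadic behaviour:

    auto eno1
    iface eno1 inet manual
        post-up ip link set eno1 mtu 9000

    auto vmbr1
    iface vmbr1 inet static
        address 10.10.10.11
        netmask 255.255.255.0
        bridge_ports eno1
        bridge_stp off
        bridge_fd 0
        post-up ip link set vmbr1 mtu 9000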
It really depends on your environment. I never want anything to sit in what I consider a fail mode on my network. 169.254.x.x leaps out as a broken address in my environments, so I just make sure it's configured right in the first place.
To be fair, I have never touched an M5300. I *have*...
U stands for untagged. An untagged interface assumes the VLAN tagging of the VLAN it's associated with on the switch itself, provided it's not supplying a VLAN tag at the interface itself. Many switches support dual-mode tagging as well, meaning you can have an untagged and a tagged interface...
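On the OVS side the same concepts map onto the port's vlan_mode; a rough sketch (port names and VLAN IDs are made up):

    # Pure untagged (access) port on VLAN 10
    ovs-vsctl set port vnet0 tag=10 vlan_mode=access
    # "Dual mode": untagged on VLAN 10, tagged for VLANs 20 and 30
    ovs-vsctl set port vnet1 tag=10 trunks=20,30 vlan_mode=native-untagged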
Thanks for the education. It was a pretty cursory examination of the Perl code. :)
It looked like the locking mechanism got passed off to the storage plugin. I'm sure I just misread the code.
Incidentally, that's some of the better Perl code I've seen in a long time. Kudos for that.
It uses an auth cookie (PVEAuthCookie) that's generated by the server. I assume it's a session cookie, since if I close and reopen my browser, I have to log in again.
They also have a cross-site request forgery prevention token (CSRFPreventionToken), but I'm not exactly sure why it's necessary. Their API...
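For reference, the flow against the API looks roughly like this (the host, credentials, and the <ticket>/<token> values are placeholders): the ticket you get back is what goes into the PVEAuthCookie, and the CSRF token has to be sent as a header on anything that modifies state.

    # Request a ticket (the response contains both the ticket and a CSRFPreventionToken)
    curl -k -d "username=root@pam&password=secret" \
        https://pve.example.com:8006/api2/json/access/ticket

    # Read-only calls only need the cookie
    curl -k -b "PVEAuthCookie=<ticket>" \
        https://pve.example.com:8006/api2/json/nodes

    # Writes (POST/PUT/DELETE) also need the CSRF token header
    curl -k -b "PVEAuthCookie=<ticket>" \
        -H "CSRFPreventionToken: <token>" \
        -X POST https://pve.example.com:8006/api2/json/nodes/pve1/qemu/100/status/start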
One side or the other probably has the DF (don't fragment) bit set. Your routing infrastructure may be blocking path MTU discovery. Essentially, either it can't discover what the maximum MTU of the path is, path MTU discovery is disabled, you're blocking the ICMP messages it relies on, or you have don't-fragment enabled somewhere...
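A quick way to test this from either end is to set the don't-fragment bit yourself and walk the payload size down until it gets through (Linux ping syntax; 1472 = 1500 minus 28 bytes of IP/ICMP headers, and the address is just an example):

    # Fails with "Frag needed" (or silently) if something in the path can't pass 1500-byte frames
    ping -M do -s 1472 10.0.0.20
    # Step the size down to find the largest payload that makes it through
    ping -M do -s 1442 10.0.0.20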
How exactly are you mounting the SMB share, and how are you presenting it to Proxmox? CIFS uses Windows locking mechanisms when it can. If you don't do anything to prevent it, it will use opportunistic locks, which allow client-side caching of files. You may want to look up "veto oplock...
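If the share is served by a Samba server you control, the relevant knobs live in smb.conf; a sketch (share name, path, and file patterns are examples) using the "veto oplock files" parameter:

    [vmstore]
        path = /srv/vmstore
        # Don't hand out opportunistic locks for disk-image files
        veto oplock files = /*.qcow2/*.raw/*.vmdk/
        # Or disable oplocks on the share entirely
        oplocks = no
        level2 oplocks = no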
I've now run for 5 days with the changes made and it's stable. I haven't added any shared LVMs to the infrastructure though. Kind of afraid of blowing up what I have. :)
No no, not necessarily LACP. That's only needed if you're aggregating a pair of interfaces. You need to do VLAN trunking on the interfaces of interest between the switches and OVS. You trunk whichever VLANs you want presented between each of the switches, and they become part of the broadcast domain...
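In Proxmox terms that ends up in /etc/network/interfaces, roughly like this (interface names and VLAN IDs are examples); the physical port becomes an OVS port on the bridge, trunking the VLANs you care about:

    allow-vmbr2 eno2
    iface eno2 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr2
        ovs_options trunks=10,20,30

    allow-ovs vmbr2
    iface vmbr2 inet manual
        ovs_type OVSBridge
        ovs_ports eno2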
You're complicating things. Let your hosts (cluster nodes) all be members of all of the VLANs you're interested in. They don't need to have IP addresses on those VLANs; they just need to be in the broadcast domain of those VLANs. Assuming your hosts (nodes) are all connected to a switching...
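Each guest then just gets a NIC on that bridge with the tag for whatever VLAN it should live in, e.g. (VM ID, bridge, and VLAN are made up):

    qm set 100 --net0 virtio,bridge=vmbr2,tag=20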
It's missing all of the tunable parameters they added for the 5.x drivers. In particular, it's missing all of the IRQ and RSS steering parameters that are pretty much required in order to get anywhere near 10GbE out of the interfaces. The 5.x driver was quite a long way ahead of the 4.x driver...
pve-kernel-4.10.15-1 from the ISO install has ixgbe driver 5.0.4 (correct version).
I updated my host, and the ixgbe driver in pve-kernel-4.10.17-1 has been downgraded to 4.4.0-k, which is a very old and outdated ixgbe driver.
Any chance this can get rolled back into the 4.10.17-1 kernel...
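To see which ixgbe version a given kernel is actually carrying (the interface name is an example):

    # Module version shipped with the installed kernel
    modinfo ixgbe | grep -i ^version
    # Driver and version bound to the live interface
    ethtool -i eno1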
Proxmox provides their own kernels and maintains their own build environment for them; the kernel sources come from the Ubuntu kernel sources. You can find the kernel build environment in the Proxmox Git repo. As long as you backport any changes they make specifically in their kernel...
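A rough sketch of that, assuming the pve-kernel repo layout is still what I remember (the submodule and patch handling differs between releases, so check the repo's README):

    git clone git://git.proxmox.com/git/pve-kernel.git
    cd pve-kernel
    # drop your driver backport in with the other patches, then build the .debs
    make
    dpkg -i pve-kernel-*.deb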