The issue you are seeing is not related to LACP and not caused by the physical network. What you are running into is a VM topology and platform problem that becomes visible after an ESXi to Proxmox migration.
The i440fx machine type is a legacy...
Hi everyone,
I’m still fairly new to Proxmox and currently in the testing phase as part of a migration from VMware. While experimenting in my lab, I ran into a couple of design questions and wanted to get some feedback from people with more...
In principle, software inside the VM can use the PCI(e) devices (passed through to the VM) to potentially read all the memory of the Proxmox host (via the devices still connected to the host).
Whether this actually works to read the host memory...
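If you want to check how things are grouped on your own host, a generic sketch (plain sysfs plus lspci, nothing Proxmox-specific) looks like this:

```
# List every IOMMU group and the PCI devices inside it. Devices that
# share a group with a passed-through device are the ones a guest
# could potentially reach via DMA.
for g in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${g##*/}:"
    for d in "$g"/devices/*; do
        echo -e "\t$(lspci -nns "${d##*/}")"
    done
done
```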
One last question.
This only applies if the ACS patch is active, right?
If you pass through a GPU to a VM using the standard IOMMU groups (without patching), then there's no risk. Right?
To answer that question you need to understand what your services are doing.
I'll try to give a simple overview:
Monitors have 2 tasks:
1) Vote on the condition of the cluster: is the OSD in/out and up/down? Do we have the quorum to decide that...
Yes, 3 monitors are fine for small to medium clusters (and 15 OSD nodes is definitely still that category for Ceph ;)). 5 is plenty; adding more is too much and much more likely to cause issues than to help in any fashion. The managers are only...
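If you want to sanity-check quorum on your own cluster, the standard Ceph CLI is enough; something like:

```
ceph mon stat          # monitor count, who is in quorum
ceph quorum_status     # detailed quorum view (JSON)
```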
You can do this with ZFS, albeit manually (not from the webUI). You could also create two 5-way mirror vdevs with 5 disks each, then create a RAID0 across those two vdevs. Something like choosing striping "vertically" or "horizontally". No idea how it would...
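As a rough sketch of that layout (pool name and device paths are made up), it would be something like:

```
# Two 5-way mirror vdevs; ZFS stripes across vdevs, so this behaves
# like a RAID0 over two 5-disk mirrors.
zpool create tank \
    mirror /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
    mirror /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj
```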
@MechThumbs
Just joined the forum like 5 minutes ago.
Searched for X99 Designare, thinking not a chance in hell I'd get a hit. But lo and behold, I found this post.
Nearly identical rigs, what are the odds. I pulled this thing from the...
Even after all of that:
truenas_admin@dlk0entsto801[~]$ ss -tulpn | grep 3260
tcp LISTEN 0 256 0.0.0.0:3260 0.0.0.0:*
tcp LISTEN 0 256 [::]:3260 [::]:*
This is the...
@warlocksyno
After several hours of AI research, here is our conclusion:
The TrueNAS Python middleware is simply refusing to generate the correct PORTAL_GROUP syntax required for IP isolation.
Since the middleware keeps overwriting your manual...
You always get this because devices connected to/via the B550 chipset are not properly isolated.
You are not patching the kernel. You are enabling the "break the groups" override that is already in the Proxmox kernel.
This is unsafe because it makes it look like...
Thank you, that I didn't know.
Thank you, that is clear to me, even though I find it very difficult to accurately assess the risk for my specific use case.
In my opinion it is very good that the members of this forum tell us that it is risky.
In...
For anyone wanting to get max_outputs (heads) on virtio: you need to modify your conf file and use QEMU arguments instead of the Proxmox GUI/config. It would be nice to have this as a front-end option.
You need to set VGA to none, then...
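A minimal sketch of what that ends up looking like in /etc/pve/qemu-server/<vmid>.conf (the head count here is just an example; check `qemu-system-x86_64 -device virtio-vga,help` for the properties your build supports):

```
# Disable the Proxmox-managed display...
vga: none
# ...and add the device via raw QEMU args so max_outputs can be set
args: -device virtio-vga,max_outputs=4
```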
I have sudden daily hangups with kernel 6.17.4 (but not with 6.17.2) on an Intel(R) Celeron(R) N4500 @ 1.10GHz.
I have no logs so far. Logs were in RAM only, and I kernel-pinned back to 6.17.2.
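In case it helps anyone, pinning can be done with proxmox-boot-tool (the version string below is illustrative; take the real one from the list command):

```
proxmox-boot-tool kernel list               # show installed kernels
proxmox-boot-tool kernel pin 6.17.2-1-pve   # keep booting this one
# Optional: make the journal persistent so the next hang leaves logs
# (with the default Storage=auto, journald uses this directory if present)
mkdir -p /var/log/journal && systemctl restart systemd-journald
```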
This is what you have to do to get IOMMU groups.
And in most cases you get one big IOMMU chipset group with B550 boards.
Then you can break this isolation and get virtualized, separate IOMMU groups for the chipset group by patching the kernel with:
"quiet...
How many mgr daemons do you have? Are they running different versions of Ceph?
You only need one mgr daemon, and make sure it's on the same version as your monitors.
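Both are quick to check from any node with the standard Ceph CLI:

```
ceph versions    # running version per daemon type (mon, mgr, osd, ...)
ceph mgr stat    # name of the active mgr and number of standbys
```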
We reverted our settings as per your suggestion and applied only the below. When these settings were applied, HA was not disabled (JFYI).
totem {
    cluster_name: proxmox-prod
    config_version: 80
    interface {
        linknumber: 0
    }
    ip_version...
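After a change like this it's worth confirming the links actually came up, e.g. with the standard tooling:

```
corosync-cfgtool -s    # link/ring status as corosync sees it
pvecm status           # quorum and membership from the Proxmox side
```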