Ah that works yes! Thanks @d.oishi!
I still feel it would also be logical to simply add a network bridge to a resource pool (as a resource, much like a storage) to make it available inside that pool.
Anyway: this works, thanks again!
We use per-resource-pool VM templates; the templates come with NICs already defined, including the virtual bridges they connect to and the allowed VLANs. These NICs cannot be edited by users after they have full-cloned their VM from that template.
Tedious part...
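A minimal sketch of the full-clone step described above, assuming hypothetical IDs (template VMID 9000, new VMID 123) and a hypothetical pool name, none of which come from the thread:

```shell
# Full-clone a VM from a template and place it directly in a resource pool.
# 9000 = template VMID, 123 = new VMID, pool1 = resource pool (all examples).
qm clone 9000 123 --full --name test-vm --pool pool1
```

With `--pool`, the clone is made a member of the pool at creation time, so pool users see it without a separate permission step.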
Hi,
Wondering about resource pools, and resource separation. In our PoC, I created some test resource pools, added users, VMs and some storages to them. Now, when creating a new VM (as a resource pool member) I see correctly the available...
Hi Stefan,
Thanks for the quick follow-up! Looking at the output of qm config 3346 that you requested, I noticed it myself: firewall=1 was missing on the net0 device.
After turning it on, the firewall started behaving as expected! Apologies, and thanks!
Note: input policy is set to DROP, on both the DC and the VM level.
I have now even created an explicit VM-level DROP-rule for port tcp/5403, and the behaviour has not changed. It seems the firewall rules don't apply for VMs in the same subnet as...
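For reference, a sketch of checking and enabling the per-NIC firewall flag that resolved this (VMID 3346 is from the thread; the MAC address and bridge are illustrative placeholders):

```shell
# Show the current net0 definition; firewall=1 must be present
# for VM-level firewall rules to apply on that NIC.
qm config 3346 | grep '^net0:'

# qm set --net0 replaces the whole NIC definition, so repeat the
# existing model/MAC/bridge values and append firewall=1.
qm set 3346 --net0 virtio=BC:24:11:00:00:01,bridge=vmbr0,firewall=1
```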
Hi,
Trying to understand something. I created a VM for a QDevice, in the same /24 as my Proxmox hypervisors:
pve1: 192.168.33.44
pve2: 192.168.33.45
qdevice: 192.168.33.46
I understand from the docs that firewalling on the VM level should still...
Yeah, we have now reverted to doing that as well. But the vmbr names are now nice and descriptive, while the bonds are just numbered. It would be nice to eliminate that restriction.
Thanks for checking and confirming back.
Ok, that's a pity; we would have liked to use meaningful names, including a VLAN identifier. We will have to use the comment field instead.
Does the same limitation apply to vmbr? I notice those DO get...
Yeah, I added the bonds manually, not through the GUI. Then later I edited a network setting through the GUI, and all the bond definitions were rewritten (and broken, so after a reboot the complete system was unreachable).
Hi,
To our surprise, we see that Proxmox rewrites (parts of) the /etc/network/interfaces file from this working version:
auto MNGT_bond0
iface MNGT_bond0 inet manual
bond-miimon 100
bond-mode 802.3ad
bond-xmit-hash-policy layer3+4...
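For comparison, a sketch of an equivalent LACP bond stanza using the plain bondX naming that the Proxmox network editor expects (the slave interface names here are examples, not from the thread; it is the custom MNGT_ prefix that the GUI rewrite does not preserve):

```
auto bond0
iface bond0 inet manual
        bond-slaves eno1 eno2
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4
```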
AFAIK NetApp has one of the fastest and most stable NFS server implementations in the industry.
If you already have that I would definitely run some benchmarks on it.
NetApp has some docs on using their arrays with Proxmox: https://docs.netapp.com/us-en/netapp-solutions-virtualization/proxmox/index.html
Your options are essentially NFS, SMB/CIFS, LVM+iSCSI, LVM+NVMe over TCP, LVM+FC, or to come up with...
Ceph is the only "all features" supported shared storage solution easily available for PVE. It's also the most heavily worked on for other virtualization platforms such as XCP-NG and various flavors of KVM. Your decision tree going forward heavily depends on WHY...