Thanks, it works. Three LXCs on three cluster nodes on a new SDN virtual VLAN bridge, in parallel to a "traditionally" configured OVS VLAN device - really as easy as you said, and no Murphy yet, great job!
(*EDIT*: I had to reboot all already running VMs and LXCs to regain net connectivity for...
Thanks Alexandre, ifupdown2 2.0.1-1+pve8 seems to work as intended - I just installed it and made some changes, applying without rebooting works.
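For anyone following along, the live apply with ifupdown2 boils down to two commands (a sketch, assuming the changes were made in /etc/network/interfaces):

```shell
# Edit /etc/network/interfaces, then apply it live with ifupdown2 --
# no reboot and no full networking restart needed:
ifreload -a            # re-apply the config to all interfaces
ifquery --check -a     # optional: verify running state matches the config
```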
How big is the risk of bringing down the cluster when trying out SDN in a lab environment?
@spirit Thank you, this makes kind of sense - so configuration "in parallel" to existing VLAN configs should work, if I understand it right.
Hopefully I'll find some time to upgrade my old virtual 3-node cluster from 5.x to 6.x (@t.lamprecht :) ) , so I'll be able to test.
I know, I know... But it's not only setting up the virtual cluster (which has been on my bucket list for quite a long time), it's also configuring a plausible OVS VLAN environment on top of which I could test the SDN implementation :)
edit: just found an old virtual cluster (5.x)
@spirit @t.lamprecht ifupdown2: ok, thanks.
Alexandre, could you elaborate a bit further on how this would happen? Or should we rather wait for the documentation to cover this part? Unfortunately I don't have a test cluster right now to try SDN "over" (or in parallel to) already configured OVS...
Bonjour Alexandre and thanks for your work.
Two questions:
1. How does this work with already configured OVS ports and VLANs? I guess it's not recommended to activate SDN on top of OVS - or what's your opinion?
2. In the past I had some troubles when installing ifupdown2, but it's too long ago...
Hello,
I think what you're looking for is a VDI broker solution, which would basically be a product entirely independent of PVE. Such a solution allows you to dynamically launch VMs on demand from a predefined VM template. But most VDI brokers I know are not really free, or not even open...
Maybe your NAS is running an old SMB version. I think Windows 10, for example, dropped support for SMBv1.
https://en.wikipedia.org/wiki/Server_Message_Block
@Stoiko Ivanov
I'm not sure they have Proxmox on their radar yet. Btw - I'm not affiliated in any way with X.org; I'm simply a happy Proxmox user and would love to see you become an even greater brick in the entire open source wall.
@LnxBil
Same for me. As mentioned, maybe there...
Hi Proxmox Team,
X.org is complaining about the high costs of their cloud-hosted GitLab instance (~$90k overall). So one of the solutions would be to host GitLab on premises. But they're "afraid" of the high OPEX for maintenance...
Some time ago I stumbled over a guide showing group separation for IOMMU, but I didn't manage to make it work: https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/
According to that guide the GRUB line would look something like this...
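For context, guides of that kind typically suggest a kernel command line along these lines - a hedged sketch from memory, not the exact line from that post, and the `pcie_acs_override` parameter only works on kernels carrying the ACS override patch:

```shell
# /etc/default/grub -- typical passthrough parameters from such guides
# (Intel CPU assumed; AMD guides use amd_iommu instead):
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"
# afterwards: update-grub, then reboot
```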
Hi Fabian,
thanks for the suggestions.
One suggestion would be to allow a setting (e.g. "Suspend and restart non-migratable VMs? y/n"), where VMs on local (or otherwise non-shared) storage would go into suspend mode and would automagically be resumed as soon as the host is back online.
I know, it's easy for...
First of all I would like to express my joy at the elegance of the solution. Using the HA option and moving the VMs back to the shut-down/rebooted node is a very simple but effective approach.
Is this the first step towards a "maintenance" mode for nodes? Could this mean maintenance mode...
Uploading ISOs is done in the WebUI in the context of storages. Click on one of your storages (local or NFS). If it's configured to host "ISO image" content (see "Datacenter/Storage"), then you are allowed to upload an ISO (e.g. under "Storage View/host/storage/content").
Hope...
Not really, see screenshot.
I chose "IvyBridge" as the processor emulation because it matches the node with the oldest CPU in my cluster, but I could also have used the default kvm64.
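From the CLI the same choice can be made per VM, roughly like this (the VM ID 100 is just a placeholder):

```shell
# Pin the vCPU model to the oldest CPU generation in the cluster so the
# VM can migrate to any node (hypothetical VM ID):
qm set 100 --cpu IvyBridge
# or fall back to the conservative default model:
qm set 100 --cpu kvm64
```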
Hi,
using a cluster in HA (with Ceph) is not recommended when the latency between the hosts is higher than ~2-3 ms (corosync being quite sensitive about that), afaik.
But if the 2nd fire compartment (Brandabschnitt) is really "close", an HA cluster works great, without the need for additional replication. Then you...
I had a similar case some time ago - I couldn't login to the 20190926 version (no keyboard issue), so I had to use the older version from 2017, which worked as always. Strangely, after a totally unrelated reboot of all cluster nodes and later on a retry to download and use the 20190926 version...
Hello,
as far as I know, there is no simple way of doing that.
What I did (more or less):
In oVirt
- Used an oVirt report to create an Excel list with the inventory of all targeted VMs (RAM, CPUs, oVirt DiskUID, etc.)
- Extended the Excel list with a formula to generate the VM creation command...
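The second step can also be scripted instead of using a spreadsheet formula. A minimal sketch, assuming a CSV export of the inventory with name, memory (MB) and core count (the inline data and starting VM ID are made up; the commands are only printed for review, not executed):

```shell
# Turn an inventory CSV (name,memory_mb,cores) into "qm create" commands.
# VM IDs simply start at 100; adjust to taste before running anything.
vmid=100
while IFS=, read -r name mem cores; do
  echo "qm create $vmid --name $name --memory $mem --cores $cores"
  vmid=$((vmid + 1))
done <<'EOF'
vm01,4096,2
vm02,8192,4
EOF
```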