Search results

  1. proxmox 7.0 sdn beta test

    Thanks, it works. Three LXCs on three cluster-nodes on a new SDN virtual VLAN-bridge in parallel to a "traditionally" configured OVS VLAN device, really easy as you said - and no Murphy yet, great job! (*EDIT*: I had to reboot all already running VMs and LXCs to regain net connectivity for...
  2. proxmox 7.0 sdn beta test

    Thanks Alexandre, ifupdown2 2.0.1-1+pve8 seems to work as intended - I just installed it and made some changes, applying without rebooting works. How big are the risks of bringing down the cluster, if trying out SDN in a lab environment?
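For reference, a minimal sketch of the "applying without rebooting" workflow the post describes, using ifupdown2's reload command (commands are wrapped in a dry-run helper that only prints them; on a real node you would run them directly):

```shell
#!/bin/sh
# Dry-run sketch of applying network changes with ifupdown2 (no reboot needed).
# The helper only prints each command instead of executing it.
run() { echo "$@"; }

run ifquery -a --check   # compare the running state against /etc/network/interfaces
run ifreload -a          # apply the edited configuration live
```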
  3. proxmox 7.0 sdn beta test

    @spirit Thank you, this makes kind of sense - so configuration "in parallel" to existing VLAN configs should work, if I understand it right. Hopefully I'll find some time to upgrade my old virtual 3-node cluster from 5.x to 6.x (@t.lamprecht :) ) , so I'll be able to test.
  4. proxmox 7.0 sdn beta test

    I know, I know... But it's not only setting up the virtual cluster (which has been on my bucket list for quite a long time), but also the configuration of a plausible OVS VLAN environment, on top of which I could test the SDN implementation :) edit: just found an old virtual cluster (5.x)
  5. proxmox 7.0 sdn beta test

    @spirit @t.lamprecht ifupdown2: ok, thanks. Alexandre, could you elaborate a bit further on how this would happen? Or shall we rather wait for the documentation to cover this part? Unfortunately I haven't got a test cluster right now to test SDN "over" (or in parallel to) already configured OVS...
  6. proxmox 7.0 sdn beta test

    Bonjour Alexandre and thanks for your work. Two questions: 1. How does this work with already configured OVS Ports and VLANs? I guess it's not recommended to activate this SDN over OVS, or what's your opinion? 2. In the past I had some troubles when installing ifupdown2, but it's too long ago...
  7. Starting VM with RDP

    Hello, I think what you're looking for is a VDI broker solution, which would basically be a product totally independent from PVE. Such a solution allows you to dynamically launch VMs on demand from a predefined VM template. But most VDI brokers I know are not really free, or not even open...
  8. Help with Windows 10 VM

    Maybe your NAS is running an old SMB version - I think Windows 10, for example, dropped support for SMBv1. https://en.wikipedia.org/wiki/Server_Message_Block
  9. Remote Spice access *without* using web manager

    Similar error for me. pve-manager/6.1-7/13e58d5e (running kernel: 5.3.18-2-pve)
  10. X.org organisation GitLab instance

    @Stoiko Ivanov I'm not sure they already have Proxmox on their radar. Btw - I'm not affiliated in any way with X.org, I'm simply a happy Proxmox user and would love to see you being an even greater brick in the entire open source wall. @LnxBil Same for me. As mentioned, maybe there...
  11. X.org organisation GitLab instance

    Hi Proxmox Team, X.org is complaining about the high costs for their cloud hosted GitLab instance (~$90k overall). So one of the solutions would be to host GitLab on premises. But they're "afraid" of the high OPEX for maintenance...
  12. GPU Passthrough success but monitor unable to detect GPU

    Some time ago I stumbled over a guide showing group separation for IOMMU, but I didn't manage to make it work: https://www.reddit.com/r/homelab/comments/b5xpua/the_ultimate_beginners_guide_to_gpu_passthrough/ According to that guide the GRUB line would look something like this...
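    The guide's exact GRUB line is elided above; a commonly used form for Intel systems (an assumption here, including the ACS override parameter that such guides add for IOMMU group separation; AMD systems use `amd_iommu=on` instead) looks like:

```
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"
```

    After editing /etc/default/grub, run `update-grub` and reboot for the change to take effect.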
  13. Shutdown policy "migration" / maintenance mode

    Hi Fabian, thanks for the suggestions. One suggestion would be to allow a setting (e.g. "Suspend and restart non-migratable VMs? y/n"), where VMs on local etc. storage would go into suspend mode and would automagically be resumed as soon as the host's back up online. I know, it's easy for...
  14. Shutdown policy "migration" / maintenance mode

    First of all I would like to express my joy at the elegance of the solution. Using the HA option and moving the VMs back to the shut-down/rebooted node is a very simple but effective solution. Is this the first step towards a "maintenance" mode for nodes? Could this mean maintenance mode...
  15. CEPH: outdated OSDs after minor upgrade

    Hi - imo either choose each OSD in turn and press "Restart" (for each one), or reboot the nodes. Don't forget to take the usual precautions :)
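    The per-OSD restart can also be done from the shell; a dry-run sketch (the OSD ids are placeholders, and `ceph-osd@<id>.service` is the standard Ceph systemd unit naming):

```shell
#!/bin/sh
# Print the restart command for each given OSD id (dry run - remove the echo
# indirection to actually restart them, one at a time, waiting for HEALTH_OK
# between restarts).
restart_cmds() {
    for id in "$@"; do
        echo "systemctl restart ceph-osd@$id.service"
    done
}

restart_cmds 0 1 2
```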
  16. Uploading ISO Images via the web front end

    Uploading ISOs is done in the WebUI through the context of storages. Click on one of your storages (local or NFS). If it's configured to be able to host "ISO image" content (context "Datacenter/Storage"), then you are allowed to upload an ISO (context e.g. "Storage View/host/storage/content"). Hope...
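    As an alternative to the WebUI upload, an ISO can be copied straight into a directory storage; a sketch assuming the default `local` storage path of a stock installation (the command is only printed here, not executed):

```shell
#!/bin/sh
# Build the scp command for pushing an ISO into the default 'local' storage's
# ISO directory (path assumed for a stock Proxmox VE installation).
iso_dir="/var/lib/vz/template/iso"
upload_cmd() { echo "scp $1 root@$2:$iso_dir/"; }

upload_cmd debian-12.iso pve-node   # dry run: prints the command only
```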
  17. RancherOS

    Not really, see screenshot. I chose "IvyBridge" as processor emulation, because it matches the machine with the lowest CPU version in my cluster, but I could have used default kvm64.
  18. Replizierung der VMs (Replication of VMs)

    Hi, using a cluster in HA (with CEPH) is not recommended when latency between the hosts is higher than ~2-3 ms (corosync being quite sensitive about that) afaik. But if the 2nd fire compartment (Brandabschnitt) is really "close", an HA cluster works great, without the need for additional replication. Then you...
  19. I can't login into centos 7 when I use official template.

    I had a similar case some time ago - I couldn't login to the 20190926 version (no keyboard issue), so I had to use the older version from 2017, which worked as always. Strangely, after a totally unrelated reboot of all cluster nodes and later on a retry to download and use the 20190926 version...
  20. Migrate VMs from Redhat Virtualization to Proxmox

    Hello, as far as I know, there is no simple way of doing that. What I did (more or less): In oVirt - used an oVirt report to create an Excel list with the inventory of all targeted VMs (RAM, CPUs, oVirt DiskUID, etc.) - extended the Excel list with a formula to generate the VM creation command...
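    The "formula to generate the VM creation command" step can be sketched in shell; the column layout (name,memory_mb,cores) and the starting VMID are assumptions, and the generated `qm create` commands are printed for review rather than executed:

```shell
#!/bin/sh
# Turn an exported inventory (name,memory_mb,cores per line) into 'qm create'
# commands. VMIDs are assigned sequentially starting from 100 (an assumption).
gen_cmds() {
    vmid=100
    while IFS=, read -r name mem cores; do
        echo "qm create $vmid --name $name --memory $mem --cores $cores"
        vmid=$((vmid + 1))
    done
}

printf '%s\n' 'web01,4096,2' 'db01,8192,4' | gen_cmds
```

    The disk images exported from oVirt would then be attached in a separate step, e.g. with `qm importdisk`.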

About

The Proxmox community has been around for many years and offers help and support for Proxmox VE, Proxmox Backup Server, and Proxmox Mail Gateway.
We think our community is one of the best thanks to people like you!
