Some hints for implementing your own VDI solution:
- http://guacamole.apache.org/
- Each target VM to be remote controlled will need either RDP or e.g. VNC running, so that it can actually be reached. For Windows VMs, RDP is built in; Linux VMs need e.g. a VNC service installed (see the sketch after this list).
-...
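As a minimal sketch of that second point, assuming a Debian/Ubuntu guest and TigerVNC (package names differ per distro):

```
# Inside the Linux VM: install a VNC server that Guacamole can connect to
apt install tigervnc-standalone-server

# Set a VNC password, then start display :1 (tcp/5901), reachable beyond localhost
vncpasswd
vncserver :1 -localhost no
```

Guacamole then just needs an RDP or VNC connection entry pointing at the VM's IP and port.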
Hello,
where from, and to what service? From the open internet to RDP, http(s), or some other service on those machines?
In general this is totally independent of whether a machine is a VM (Proxmox or any other tech) or a bare-metal machine.
As your screenshot shows, you're using 10.100.10.20/24 on proxmox2 instead of 10.10.10.20/24.
It's "100" vs. "10" in the second octet of your CIDR entry.
NATting the VMs is not the challenge: as long as the router is configured to NAT the subnet, the VMs get out. I guess what you mean is how to get in from the internet to those VMs. DynDNS, and on the router port-forward the needed service ports to the VMs, or for multiple http(s) webservers... A minimal port-forward sketch follows below.
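Such a port forward, sketched on a Linux router/NAT box (interface name and VM IP are just examples):

```
# Forward incoming RDP (tcp/3389) on the WAN interface to one VM
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 3389 \
  -j DNAT --to-destination 10.10.10.21:3389

# Masquerade the VM subnet on the way out
iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o eth0 -j MASQUERADE
```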
I'm hoping for this too. K3s e.g. is so easy to install, just a few lines, but the br_netfilter and overlay modules (?) cannot be loaded inside the container, which kills the k3s-agent service.
Is there a workaround for 6.x?
edit: Seems to be coming in Linux kernel 5.3...
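In the meantime, a hedged workaround sketch: LXC containers share the host kernel, so the modules have to be loaded on the PVE host, and the container needs the nesting feature (whether this is enough for a given k3s version is an assumption):

```
# On the Proxmox host: load the modules the container cannot load itself
modprobe br_netfilter
modprobe overlay

# Make it persistent across reboots
printf 'br_netfilter\noverlay\n' >> /etc/modules
```

Plus, in /etc/pve/lxc/<vmid>.conf (<vmid> is a placeholder): `features: nesting=1,keyctl=1`.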
This site is even more detailed about this issue, and it nicely explains port forwarding as well :)
https://raymii.org/s/tutorials/Proxmox_VE_One_Public_IP.html
Don't be too harsh on Proxmox, it may "simply" be a KVM thing :)
Is your VM a UEFI one? If yes, afaik there's an option to enable (or disable, depending on the case) UEFI for VMs. Maybe this helps.
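On PVE that switch can also be flipped from the CLI (VM ID 100 is just an example):

```
# Switch the VM to UEFI (OVMF); it then needs an EFI vars disk
qm set 100 --bios ovmf
qm set 100 --efidisk0 local-lvm:1

# Or back to legacy BIOS (the default)
qm set 100 --bios seabios
```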
An idea would be to try to migrate this VM to a simple Debian host (even a desktop) with GUI (no Proxmox)...
Afaik, qemu-guest-agent for Windows only provides the KVM/QEMU-to-guest communication channels (e.g. reporting the VM's IP) and VSS integration, but does not include the virtio drivers for storage, graphics, NIC etc.
A good URL to follow is https://pve.proxmox.com/wiki/Windows_VirtIO_Drivers, but I think you already...
Linux has the KVM/virtio storage drivers integrated into the kernel; Windows doesn't.
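The usual way to get those drivers into the guest is attaching the virtio-win ISO from that wiki page (VM ID and storage name are examples):

```
# Attach the VirtIO driver ISO as a CD-ROM to the Windows VM
qm set 101 --ide3 local:iso/virtio-win.iso,media=cdrom
```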
Try this, maybe this works:
1. Shut down the healthy XEN VM, take a snapshot, boot the XEN VM (CLI sketch below)
2. Uninstall all Citrix/Xen software via "uninstall programs" & apply xxx.reg to remove the XEN/Citrix devices - don't reboot/shut down yet...
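Step 1 as a CLI sketch on the XenServer side, assuming the xe toolstack ('myvm' is a placeholder name):

```
# Shut down, snapshot, then boot the VM again
xe vm-shutdown vm=myvm
xe vm-snapshot vm=myvm new-name-label=pre-migration
xe vm-start vm=myvm
```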
Btw, other (really expensive) products offering one-click upgrades have solved the "CPU-pinned VM problem" in a very simple way: before upgrading the cluster, an alert window pops up: "Please shut down all CPU-pinned VMs, as these cannot be rebalanced to other nodes!" ;)
So as a consequence...
Yes - thank you, Proxmox team. One of the most outstandingly flawless and trouble-free major upgrades of any HCI cluster (3-node Ceph setup) I've ever been through. I directly compare it to my awful upgrade sessions with oVirt/Gluster in the past, e.g., which had caused me a lot...
I would simply like to say I went down almost exactly the same path as @devinacosta (unfortunately I'm no RHCA ;) ). Healing Gluster on oVirt/HCI took me on several occasions either a week+ to get to a healed state, or even meant redeploying the entire HCI cluster. For about a year now, after having...