Can my hotspot send Internet into my hypervisor machine?

eiger3970

Well-Known Member
Sep 9, 2012
I have no Internet at home.

My mobile phone has a hotspot, which gives my home desktop Internet access.

How do I get Internet to my home Hypervisor machine?

I could connect a Unifi Nano into the Hypervisor machine's NIC.

I could install a Wi-Fi NIC in the Hypervisor machine's motherboard. This would be easiest, as it is what I have done with my home desktop.

Not sure if I could bridge my home Desktop to the Hypervisor machine's NIC?
 
Hi!

> I could connect a Unifi Nano into the Hypervisor machine's NIC.
This should be possible; UniFi APs normally support a WLAN uplink and so could act as a wireless bridge between your hotspot and the LAN port of the Proxmox VE host (or a common network switch).

> I could install a Wi-Fi NIC in the Hypervisor machine's motherboard. This would be easiest, as it is what I have done with my home desktop.
This is also possible, but you need a few extra steps to get network access working for the VMs or containers with this method. A few possibilities are described here: https://pve.proxmox.com/wiki/WLAN
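As a sketch of what those extra steps look like in the routed/NAT variant from that wiki page (the 10.10.10.0/24 subnet, the wlp7s0 interface name, and the SSID/PSK are placeholders of mine; adapt them to your hardware):

```shell
# /etc/network/interfaces (fragment) - routed/NAT setup for a Wi-Fi uplink,
# roughly following https://pve.proxmox.com/wiki/WLAN

auto wlp7s0
iface wlp7s0 inet dhcp          # Wi-Fi uplink gets its address from the hotspot
        wpa-ssid YourHotspotSSID
        wpa-psk  YourPresharedKey

auto vmbr0
iface vmbr0 inet static         # guests attach here; no physical bridge port
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
        # enable forwarding and masquerade guest traffic out over Wi-Fi
        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o wlp7s0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o wlp7s0 -j MASQUERADE
```

Guests attached to vmbr0 would then use static addresses in 10.10.10.0/24 with 10.10.10.1 as gateway (or you run a DHCP server such as dnsmasq on the bridge).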
 
> Not sure if I could bridge my home Desktop to the Hypervisor machine's NIC?
In general, yes, such things are possible too, with either the hypervisor or the desktop acting as a router or bridge for the other.
A possible disadvantage: the desktop would always need to be running (or the PVE host, if you use that one).


Oh, and you would not necessarily need a UniFi AP; you could make do with a Raspberry Pi. Even the older models have >100 Mbps of bandwidth, which should be plenty for any hotspot - the UniFi AP could be a bit of overkill.
 
Thanks. I use a virtualised router, pfSense, so I thought I should have mentioned that.
Someone told me the Wi-Fi NIC will not work on Proxmox?
 
> Someone told me the Wi-Fi NIC will not work on Proxmox?
It works, but not in the same way a wired NIC does.
With a purely wired connection you would just add it as a bridge port and be done. For Wi-Fi NICs that is a problem, as the Wi-Fi AP will drop packets coming from guest VMs/CTs, because it only knows the MAC address of the host itself.
So you need some extra setup work, as described in the linked wiki article: https://pve.proxmox.com/wiki/WLAN
 
> I use pfSense as my virtualised router, so I just need the wireless Internet to route through the NIC to the virtualised router.
To add yet another option: you could use PCI (or USB) passthrough of the Wi-Fi NIC to the pfSense VM, as long as pfSense supports that specific NIC - then it could handle everything itself.
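As a rough sketch of that passthrough route (VM ID 100 and the PCI address are placeholders I've made up; IOMMU support must also be enabled in the BIOS and on the kernel command line first):

```shell
# find the Wi-Fi NIC's PCI address
lspci -nn | grep -i -e network -e wireless

# hand the device at 0000:07:00.0 to VM 100 (replace both values with yours)
qm set 100 -hostpci0 0000:07:00.0
```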

In conclusion, there are quite a few possibilities. If you have the time and interest, I'd suggest playing around with them and going with the one that works and feels the least complex.
 
> To add yet another option: you could use PCI (or USB) passthrough of the Wi-Fi NIC to the pfSense VM, as long as pfSense supports that specific NIC - then it could handle everything itself.
>
> In conclusion, there are quite a few possibilities. If you have the time and interest, I'd suggest playing around with them and going with the one that works and feels the least complex.
Thank you, I agree. The least complex seems to be the Wi-Fi NIC in a PCIe slot, passed through to the virtualised pfSense router, I guess? I tried that before, but had trouble getting the Wi-Fi set up on Proxmox.
 
Looks good. I Google-translated it to English. I'm on Proxmox 6, so I can't use your Proxmox 4.x program. Thanks anyway. I use pfSense as my virtualised router, so I just need the wireless Internet to route through the NIC to the virtualised router.
If you send me your Firewall.pm file at ssa.codex@gmail.com, I can correct it. It should work, but I haven't tried it on such old versions.
 
Thanks, but I'm looking for a hypervisor that is a bit easier; I'm researching Docker and ESXi. I just need temporary Internet access to install ZoneMinder. Virtualised pfSense is easy, as it's an ISO file I can transfer via scp from my home desktop (which has Internet) to the hypervisor machine (which has none).
 
Trying to install iwd on Proxmox, but I get an error:
Code:
root@proxmox:/home# dpkg -i /var/cache/apt/archives/iwd_0.14-2_amd64.deb
(Reading database ... 44237 files and directories currently installed.)
Preparing to unpack .../archives/iwd_0.14-2_amd64.deb ...
Unpacking iwd (0.14-2) over (0.14-2) ...
Setting up iwd (0.14-2) ...
Job for iwd.service failed because the control process exited with error code.
See "systemctl status iwd.service" and "journalctl -xe" for details.
Processing triggers for dbus (1.12.20-0+deb10u1) ...
root@proxmox:/home# systemctl status iwd.service
● iwd.service - Wireless service
   Loaded: loaded (/lib/systemd/system/iwd.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2021-01-14 14:41:45 AEST; 3min 2s ago
  Process: 19722 ExecStart=/usr/libexec/iwd (code=exited, status=127)
 Main PID: 19722 (code=exited, status=127)

Jan 14 14:41:45 proxmox systemd[1]: iwd.service: Service RestartSec=100ms expired, scheduling restart.
Jan 14 14:41:45 proxmox systemd[1]: iwd.service: Scheduled restart job, restart counter is at 5.
Jan 14 14:41:45 proxmox systemd[1]: Stopped Wireless service.
Jan 14 14:41:45 proxmox systemd[1]: iwd.service: Start request repeated too quickly.
Jan 14 14:41:45 proxmox systemd[1]: iwd.service: Failed with result 'exit-code'.
Jan 14 14:41:45 proxmox systemd[1]: Failed to start Wireless service.
root@proxmox:/home# journalctl -xe
--
-- A start job for unit pvesr.service has begun execution.
--
-- The job identifier is 5103.
Jan 14 14:44:00 proxmox systemd[1]: pvesr.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit pvesr.service has successfully entered the 'dead' state.
Jan 14 14:44:00 proxmox systemd[1]: Started Proxmox VE replication runner.
-- Subject: A start job for unit pvesr.service has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit pvesr.service has finished successfully.
--
-- The job identifier is 5103.
Jan 14 14:45:00 proxmox systemd[1]: Starting Proxmox VE replication runner...
-- Subject: A start job for unit pvesr.service has begun execution
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit pvesr.service has begun execution.
--
-- The job identifier is 5158.
Jan 14 14:45:00 proxmox systemd[1]: pvesr.service: Succeeded.
-- Subject: Unit succeeded
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- The unit pvesr.service has successfully entered the 'dead' state.
Jan 14 14:45:00 proxmox systemd[1]: Started Proxmox VE replication runner.
-- Subject: A start job for unit pvesr.service has finished successfully
-- Defined-By: systemd
-- Support: https://www.debian.org/support
--
-- A start job for unit pvesr.service has finished successfully.
--
-- The job identifier is 5158.
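For reference, the `status=127` in the systemctl output above usually means the executable (or a shared library it links against) could not be found, rather than a crash inside iwd itself. A quick demonstration of the convention, plus two hedged follow-up checks (the /usr/libexec/iwd path is taken from the unit output above):

```shell
# Exit status 127 is the shell's "command not found" code: the binary
# (or a shared library it needs) could not be located.
sh -c 'no_such_command_xyz' 2>/dev/null
echo "exit status: $?"   # prints: exit status: 127

# For iwd, the equivalent checks would be:
#   ls -l /usr/libexec/iwd
#   ldd /usr/libexec/iwd | grep 'not found'
```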
 
Proxmox > Shell >
Bash:
root@proxmox:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq master vmbr0 state DOWN group default qlen 1000
    link/ether 40:16:7e:37:21:af brd ff:ff:ff:ff:ff:ff
3: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr1 state UP group default qlen 1000
    link/ether 40:16:7e:37:21:b0 brd ff:ff:ff:ff:ff:ff
4: wlp7s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether ac:12:03:d7:ac:c1 brd ff:ff:ff:ff:ff:ff
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 40:16:7e:37:21:af brd ff:ff:ff:ff:ff:ff
    inet6 fe80::4216:7eff:fe37:21af/64 scope link
       valid_lft forever preferred_lft forever
6: vmbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 40:16:7e:37:21:b0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.140/24 brd 192.168.1.255 scope global vmbr1
       valid_lft forever preferred_lft forever
    inet6 fe80::4216:7eff:fe37:21b0/64 scope link
       valid_lft forever preferred_lft forever
7: tap142i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr1 state UNKNOWN group default qlen 1000
    link/ether 96:51:ac:60:e5:07 brd ff:ff:ff:ff:ff:ff
8: tap145i0: <BROADCAST,MULTICAST,PROMISC,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master fwbr145i0 state UNKNOWN group default qlen 1000
    link/ether e6:00:3e:98:5b:57 brd ff:ff:ff:ff:ff:ff
9: fwbr145i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether a6:d5:c0:df:ac:58 brd ff:ff:ff:ff:ff:ff
10: fwpr145p0@fwln145i0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master vmbr0 state UP group default qlen 1000
    link/ether fa:83:cd:ec:8f:25 brd ff:ff:ff:ff:ff:ff
11: fwln145i0@fwpr145p0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master fwbr145i0 state UP group default qlen 1000
    link/ether a6:d5:c0:df:ac:58 brd ff:ff:ff:ff:ff:ff
 
I can't get the Wi-Fi PCIe NIC to change status from DOWN to UP.
This is messing up the network configuration between Proxmox and the virtualised pfSense.
The LAN NIC is connected, but pfSense reads that NIC as WAN vtnet0, when WAN should be vtnet1 - no second interface is found?
 
The Debian forum says this is a Proxmox failure.
Moving on to wpa_supplicant until Proxmox makes iwd work.
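For anyone following along, a minimal wpa_supplicant workflow on the host looks roughly like this (the SSID, passphrase, and the wlp7s0 interface name are placeholders):

```shell
# generate a config containing the hashed PSK
wpa_passphrase 'YourHotspotSSID' 'YourPassphrase' > /etc/wpa_supplicant/wlp7s0.conf

# bring the interface up and associate in the background
ip link set wlp7s0 up
wpa_supplicant -B -i wlp7s0 -c /etc/wpa_supplicant/wlp7s0.conf

# then request an address from the hotspot
dhclient wlp7s0
```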
 
I think Internet sharing might be better?
My home network topology is:
Internet > mobile > SIM > mobile hotspot on > Wi-Fi NIC > machine0 > machine0's Ethernet NIC > Ethernet CAT6e > machine1's Ethernet NIC > machine1.

The guides usually forget the configuration for the client computer (Proxmox, in this sharing scenario).

If I change Proxmox's /etc/network/interfaces file to match the Internet computer's setup, the connection is lost.
Also, the Proxmox GUI is not showing the network setup.
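For the client side that the guides skip: judging by the `ip a` output below, the Internet computer is already sharing on 10.42.0.1/24 (which looks like NetworkManager's built-in sharing, including NAT and DHCP), so Proxmox mainly needs an address in that subnet, the sharing host as gateway, and a DNS server. A hedged one-off sketch (10.42.0.10 is a placeholder address of mine):

```shell
# on the Proxmox host, over its wired bridge to the sharing desktop
ip addr add 10.42.0.10/24 dev vmbr0
ip route replace default via 10.42.0.1
echo 'nameserver 10.42.0.1' > /etc/resolv.conf
```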

Internet computer:
Bash:
$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 88:d7:f6:c9:08:eb brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.120/24 brd 192.168.1.255 scope global noprefixroute enp3s0
       valid_lft forever preferred_lft forever
    inet6 fe80::87c9:bc68:3f90:4deb/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: wlx503eaae4a007: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 50:3e:aa:e4:a0:07 brd ff:ff:ff:ff:ff:ff
    inet 172.20.10.2/28 brd 172.20.10.15 scope global dynamic noprefixroute wlx503eaae4a007
       valid_lft 67422sec preferred_lft 67422sec
    inet6 fe80::d4e0:2248:6428:6ce/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
4: enp0s20f0u5c4i2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether fe:b6:d8:0f:d3:95 brd ff:ff:ff:ff:ff:ff
    inet 10.42.0.1/24 brd 10.42.0.255 scope global noprefixroute enp0s20f0u5c4i2
       valid_lft forever preferred_lft forever
    inet6 fe80::e295:34f6:8b0f:cd85/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

Proxmox:
Bash:
root@proxmox:~# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 40:16:7e:37:21:af brd ff:ff:ff:ff:ff:ff
3: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master vmbr0 state UP group default qlen 1000
    link/ether 40:16:7e:37:21:b0 brd ff:ff:ff:ff:ff:ff
4: wlp7s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether ac:12:03:d7:ac:c1 brd ff:ff:ff:ff:ff:ff
5: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 40:16:7e:37:21:b0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.140/24 brd 192.168.1.255 scope global vmbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::4216:7eff:fe37:21b0/64 scope link
       valid_lft forever preferred_lft forever

Proxmox /etc/network/interfaces:
Bash:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

iface enp1s0 inet manual

iface enp2s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.140/24
        gateway 192.168.1.170
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0

allow-hotplug wlp7s0
iface wlp7s0 inet dhcp
        wpa-ssid :-)
        wpa-psk 4e52a35fa5e49362c9966033cd882cc7be0eac2ca4b052feb898055b3a92601b
 
