For testing and experimentation, I installed Proxmox VE as a virtual machine on my TrueNAS server, and I noticed that I cannot ping or reach anything on the TrueNAS host from the virtualized PVE.
My TrueNAS IP: 10.0.1.10/24
Virtualized PVE: 10.0.1.203/24
My TrueNAS network setup is straightforward, just one VirtIO interface, the same as my Proxmox VM; nothing fancy.
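For context, the bridge configuration inside the PVE guest that matches the ip addr output below would look roughly like this in /etc/network/interfaces (a sketch reconstructed from that output, not copied from the machine):
Code:
auto lo
iface lo inet loopback

iface ens3 inet manual

auto vmbr0
iface vmbr0 inet static
        address 10.0.1.203/24
        gateway 10.0.1.1
        bridge-ports ens3
        bridge-stp off
        bridge-fd 0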
From PVE I can reach everything on my home network and the internet, except the TrueNAS host:
Code:
root@pvm:~# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master vmbr0 state UP group default qlen 1000
link/ether 00:a0:98:37:a4:63 brd ff:ff:ff:ff:ff:ff
altname enp0s3
3: vmbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:a0:98:37:a4:63 brd ff:ff:ff:ff:ff:ff
inet 10.0.1.203/24 scope global vmbr0
valid_lft forever preferred_lft forever
inet6 fe80::2a0:98ff:fe37:a463/64 scope link
valid_lft forever preferred_lft forever
root@pvm:~# ip route
default via 10.0.1.1 dev vmbr0 proto kernel onlink
10.0.1.0/24 dev vmbr0 proto kernel scope link src 10.0.1.203
root@pvm:~# ping 10.0.1.1
PING 10.0.1.1 (10.0.1.1) 56(84) bytes of data.
64 bytes from 10.0.1.1: icmp_seq=1 ttl=64 time=2.01 ms
64 bytes from 10.0.1.1: icmp_seq=2 ttl=64 time=0.445 ms
^C
--- 10.0.1.1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.445/1.227/2.009/0.782 ms
root@pvm:~# ping 10.0.1.10
PING 10.0.1.10 (10.0.1.10) 56(84) bytes of data.
From 10.0.1.203 icmp_seq=1 Destination Host Unreachable
From 10.0.1.203 icmp_seq=2 Destination Host Unreachable
From 10.0.1.203 icmp_seq=3 Destination Host Unreachable
^C
--- 10.0.1.10 ping statistics ---
4 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3047ms
pipe 3
root@pvm:~# ping 10.0.1.20
PING 10.0.1.20 (10.0.1.20) 56(84) bytes of data.
64 bytes from 10.0.1.20: icmp_seq=1 ttl=64 time=0.676 ms
64 bytes from 10.0.1.20: icmp_seq=2 ttl=64 time=1.46 ms
^C
--- 10.0.1.20 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 0.676/1.067/1.459/0.391 ms
root@pvm:~# ping google.com
PING google.com (172.217.16.46) 56(84) bytes of data.
64 bytes from muc03s08-in-f46.1e100.net (172.217.16.46): icmp_seq=1 ttl=58 time=9.57 ms
64 bytes from waw02s14-in-f14.1e100.net (172.217.16.46): icmp_seq=2 ttl=58 time=11.0 ms
^C
--- google.com ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1002ms
rtt min/avg/max/mdev = 9.573/10.304/11.035/0.731 ms
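Note that "Destination Host Unreachable" reported by 10.0.1.203 itself (rather than by a router) means ARP resolution for 10.0.1.10 failed on the local segment, so this is a layer-2 problem, not a routing one. A quick way to confirm that from the PVE VM (arping comes from the iputils-arping package on Debian, if it isn't already installed):
Code:
# inspect the ARP cache entry for the NAS; FAILED or INCOMPLETE means no L2 reply
ip neigh show 10.0.1.10
# probe the NAS directly with ARP requests on the bridge interface
arping -I vmbr0 -c 3 10.0.1.10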
I know that running Proxmox as a VM is not advised, but I wanted a testing and learning environment before I go deep into my move to a 3-node Proxmox cluster, and the inability to reach my host NAS data (some services, NFS) blocks a lot of my test ideas.
EDIT - solved:
It was a TrueNAS misconfiguration, not anything Proxmox related. For future reference: https://web.archive.org/web/2022061...ireference/virtualization/accessingnasfromvm/
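For anyone who hits the same thing: the linked TrueNAS docs cover the usual cause, which is that the host and its VMs share one physical NIC, so host-to-VM traffic needs a bridge on the TrueNAS side that contains the physical interface, with the NAS IP moved onto that bridge. Roughly what that amounts to on FreeBSD-based TrueNAS CORE (illustrative only; em0 is a placeholder NIC name, and the change should really be made through the web UI under Network > Interfaces so it persists across reboots):
Code:
# create a bridge and add the physical NIC as a member
ifconfig bridge0 create
ifconfig bridge0 addm em0 up
# move the NAS address from the NIC to the bridge
ifconfig em0 inet 10.0.1.10 -alias
ifconfig bridge0 inet 10.0.1.10/24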