> However with Proxmox 5.0, only the most recently created bridge passes traffic (that I can see). So pvemanager (and GUI running on remaining 4.4 machine), which uses the bridge on v1901, cannot see the 5.0 machines, but they do form part of the quorum via v1910. Also how I have SSH access.
If...
you don't need a physical NIC to create a virtual switch, for instance if you want to create a private network for VM-to-VM communication only.
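As an illustration, such a host-internal bridge can be declared in /etc/network/interfaces with no physical port attached (the bridge name and address below are made up; depending on your ifupdown version the keys may be spelled `bridge-ports` instead of `bridge_ports`):

```
auto vmbr1
iface vmbr1 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
```

VMs attached to vmbr1 can then reach each other, but no traffic leaves the host.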
yes this forum is the right one :)
this kind of problem tends to happen with either a buggy BIOS or buggy drivers
make sure your BIOS is up to date
after the reboot, check whether the output of
dmesg -T
contains something that could give you a hint about the problem
> # ip addr show eno52
9: eno52: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
link/ether 3c:a8:2a:e7:87:bf brd ff:ff:ff:ff:ff:ff
> obviously ethernet cable is connected...
maybe not? the NO-CARRIER flag in the output above means the kernel does not detect a link on that port
I don't have this hardware available so I cannot help much, but as...
this can have a number of causes:
* first please check if the boot sequence is completed
systemctl list-jobs should return
No jobs running
* then if the boot sequence is finished, maybe your VM is not configured to display a login prompt on the console
run the command
systemctl status...
the VirtIO NIC is a 10 Gbit NIC, so network traffic between two VMs on the same node should reach that speed.
no network hardware is involved when doing so; it is only work for the host CPU
I cannot reproduce those results: from FreeBSD VM to Linux VM I got
------------------------------------------------------------
[ 3] local 192.168.16.24 port 61552 connected with 192.168.16.75 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-60.0 sec 180 GBytes 25.8...
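As a sanity check on those figures (assuming iperf's "GBytes" are binary gibibytes, i.e. 2^30 bytes), 180 GBytes transferred in 60 seconds works out to the reported bandwidth:

```shell
# 180 GiB * 8 bits/byte, over 60 seconds, expressed in decimal Gbit/s
awk 'BEGIN { printf "%.1f Gbit/s\n", 180 * 2^30 * 8 / 60 / 1e9 }'
# prints: 25.8 Gbit/s
```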
This is possible by editing the config from the command line (pcnet is a very old NIC model; I see it is recommended for virtualizing Windows 3.1 ... )
For this:
find out the config file of your VM like /etc/pve/qemu-server/505.conf
remove the existing net0 device if any
add the following...
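For illustration only (the MAC address and bridge name below are made up; adapt them to your setup), a virtio net0 entry in the .conf file typically looks like:

```
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
```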
OK these numbers are a bit better but still not that exceptional, so no wonder your backups are taking time
there is no magic for making a fast NFS server apart from using SSDs and 10 GbE ethernet :)
do you still have the "task blocked for more than 120s" messages? I would suggest looking at the...
also if you want to debug what is wrong at startup you can use
# systemd-analyze blame
5min 154ms networking.service
177ms postfix@-.service
8ms systemd-logind.service
7ms dev-hugepages.mount
7ms systemd-remount-fs.service
6ms...
the problem is that the getty service, which displays the login prompt, needs the network configuration to be applied before it runs.
you can check that by entering the started container with
pct enter <CTID>
systemctl list-jobs
you will see
47 networking.service start...
The Debian 9 template is working fine for me.
Can you post here your container configuration ?
You can also enter the container namespace, bypassing the login/console with pct enter <CTID>
Then you should check that you have at least one agetty process. This process provides you with a login...
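As a sketch, the check boils down to counting getty processes; the pipeline below runs against a captured sample process list (the three process names are illustrative) rather than a live container:

```shell
# Inside the container you would run: ps -e -o comm= | grep -c getty
# Simulated here with a sample process list; a healthy container
# shows at least one getty/agetty process:
printf 'systemd\nagetty\nsshd\n' | grep -c getty
# prints: 1
```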