To change from e1000 to virtio you need to stop and start the VM; a reboot from inside the guest is not enough.
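A rough sketch of what I mean (the VM ID, MAC and bridge are just placeholders, keep your own):
qm set 103 -net0 virtio=3A:A8:A8:DB:EC:F5,bridge=vmbr0   # switch the NIC model to virtio
qm stop 103    # a reboot inside the guest keeps the old emulated NIC
qm start 103   # a full stop/start recreates the VM with the new one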
Do you get any error when you run "service networking restart" and/or "ifconfig eth0 up" in the VM?
That's all? If that's it, it really looks like a DNS issue, but it's strange that DNS seems to work fine for resolving Google (does it also resolve unknown domains quickly?). You should get something like:
root@node02:~# telnet localhost 22
Trying 127.0.0.1...
Connected to localhost.localdomain.
Escape...
Thanks. But currently no plans for out-of-the-box integration, I guess? I don't like the commercial philosophy of CloudLinux, so I'm not very excited to buy a license from them. Don't get me wrong, it's not that I absolutely don't want to pay for software (or support) I use (I also pay for PVE...
e1000 is your network driver. Try changing it to virtio (which also gives better performance in most setups). The order is just a syntax issue and both are valid; if you want, you can change it in /etc/pve/qemu-server/VMID.conf.
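In that file the relevant line would change from something like this (MAC address and bridge are just placeholders, keep your own):
net0: e1000=3A:A8:A8:DB:EC:F5,bridge=vmbr0
to:
net0: virtio=3A:A8:A8:DB:EC:F5,bridge=vmbr0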
On the VM without internet connection, can you ping the gateway IP?
Is SSHd running (ps waux | grep ssh)? What happens when you run "telnet localhost 22"?
Is your (old) IP hardcoded in /etc/ssh/sshd_config? If yes, change it and run "service ssh restart".
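If it is hardcoded, it's usually a ListenAddress line, roughly like this (the IP is just an example):
# old ISP address; change it to the new IP or remove the line entirely
ListenAddress 203.0.113.10
and then "service ssh restart" as said.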
What are the nameservers in /etc/resolv.conf? If they are from your old ISP you may need to change them...
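For example, with Google's public resolvers /etc/resolv.conf would simply contain:
nameserver 8.8.8.8
nameserver 8.8.4.4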
Since the Linux 4.x kernel there is a built-in possibility to upgrade (live patch) the Linux kernel without a reboot. Since PVE 4.x uses a Linux 4.x kernel, are there plans to support this feature out-of-the-box? And if yes, is there an ETA for this?
I think this feature is less important on HA clusters, but can...
I think that's a bad argument. It's not good to shout at someone who posts in a different language, but you can tell them they need to post in English and only answer questions in English. If someone shouts at another user there are mods (Proxmox staff) to take care of it. If I post in Dutch, do we have a...
With all due respect, I think this is a very bad development and a big disadvantage for all non-German users. With this new German forum it's possible that the solution for a problem you face is already available, but you don't know it because it's posted in a language you don't understand...
I'm also using the X520(-DA2) 10Gb NICs in one of my clusters (on all 5 nodes) on PVE 4.2-5 without any problem. So I don't think that's the problem here.
Huh? I've already upgraded a couple of 4.1 clusters to 4.2 with live migration without any problem. As far as I know, only live migration from 3.x to 4.x isn't possible, because of the corosync 1.x to 2.x change.
The only thing I've done is live migrate the VMs to another node, then upgrade and reboot the empty node...
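For reference, a rough sketch of that per-node procedure (the VM ID and node name are just examples):
qm migrate 103 node02 --online    # live migrate running VMs off the node
apt-get update && apt-get dist-upgrade    # upgrade the now-empty node
reboot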
So, the next version will be:
virtio0: SSD-cluster:vm-103-disk-1,size=40G
net0: virtio=3A:A8:A8:DB:EC:F5,bridge=vmbr0209
ide2: none,media=cdrom
Like it was in 4.1-22, correct?
Okay :) I know 100% for sure it works with 4.2 when you change to migration unsecure. I also know for sure it doesn't work with 4.1 and older (tested it this morning with a 4.1-22 cluster: it didn't work; after upgrading to 4.2-5 it did).
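If you want to test it yourself: as far as I remember (please verify the exact option name for your version), the switch goes cluster-wide in /etc/pve/datacenter.cfg:
migration_unsecure: 1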
Why is it that with almost every update there are VM config format changes? And if they are needed, why not automatically change the config format for all VMs that already exist? For example:
On PVE 4.1-22 it was
virtio0: SSD-cluster:vm-103-disk-1,size=40G
net0: virtio=3A:A8:A8:DB:EC:F5,bridge=vmbr0209...
Works like it should here. I can use my keyboard (1.JPG) and start the install without a problem (2.JPG).
The only difference is that I've used the rear USB ports and you used the front ones. It should make no difference, but to be sure you can check whether you also have problems with the rear USB ports.
How are your BIOS...