@Lucas Brugneroto
pve 5.0 has a command qm importdisk, which will import a vmdk image to any pve supported storage
you need to create a VM first, and then import the disk to that VM
see
qm help importdisk
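a rough sketch of the workflow, assuming VMID 100, a vmdk under /root and the local-lvm storage (all of these are just examples):
# create an empty VM first (name / memory / NIC are only placeholders)
qm create 100 --name imported-vm --memory 2048 --net0 virtio,bridge=vmbr0
# import the vmdk into the target storage; it shows up as an unused disk on VM 100
qm importdisk 100 /root/disk.vmdk local-lvm
after the import you attach the unused disk to the VM (GUI hardware tab or qm set) and select it as boot disk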
hi james
bridge setup does not work like this
you need only one bridge, vmbr0 (our default setup), which is like a virtual switch.
then when creating your VMs, you add their NICs to vmbr0, similar to plugging a cable into the virtual switch
IP address configuration is done inside the guest, not...
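for reference, a minimal /etc/network/interfaces sketch of such a vmbr0 on the host; the addresses and the physical port eth0 are only examples:
# the host's own address lives on the bridge, guests just attach their NICs to it
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0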
@jassmith87
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1711251
the fix is included in our 4.10.17-24 kernel, available in the pve-test repository
what kind of port is the enp12s0 device of your bridge vmbr1000 connected to?
is that a trunk port? (i.e. the port gets all tagged frames)
in that case it is enough to add the VLAN id to the container net0 config (see the example below)
is the port assigned to VLAN 1000?
then you should not need to do anything
as...
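for the trunk-port case, something along these lines should work, assuming CTID 200 and that vmbr1000 is the bridge in question (ip=dhcp is just an example):
# tag=1000 makes the container's traffic leave the bridge tagged with VLAN 1000
pct set 200 -net0 name=eth0,bridge=vmbr1000,ip=dhcp,tag=1000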
the kernel pve-kernel-4.10.17-4-pve on the pvetest repository may fix the virtio connectivity issue
this kernel fixes a bug which occurs with virtio guests having to process a large number of connections
it would be interesting to get feedback on whether the pve-test kernel I mentioned fixes the issue or not; that way the workaround mentioned would not be needed
if you can ping the container, verify that the port where your application is running is open with a port scanner
I use nmap for that
testing, for instance, if port 8080 is open on IP 192.168.16.75:
nmap -p 8080 192.168.16.75
PORT STATE SERVICE
8080/tcp open unknown
of course you want that...
rmmod virtio_net
modprobe virtio_net
Had to restart the network. ens18 was missing. ifup didn't work (already configured):
/etc/init.d/networking restart
yes, you have to ifdown, then ifup in that case
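so roughly, assuming the NIC is ens18 as above:
# reload the virtio_net module, then bring the interface down and up again
rmmod virtio_net && modprobe virtio_net
ifdown ens18 && ifup ens18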
@helloworld @nowrap
can you test if the kernel pve-kernel-4.10.17-4-pve on the pvetest...
by the way I see that your default gateway is outside your subnetwork
pve is able to autoadjust this by adding a route to the gateway via the configured device
what does the routing table look like on the container?
you should have something like
default via 37.187.173.254 dev eth0 onlink...
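to check this inside the container (the gateway address here is just the one from the example above):
# show the routing table
ip route
# a missing default route can be added by hand; onlink allows a gateway outside the subnet
ip route add default via 37.187.173.254 dev eth0 onlink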
could it be that the virtual NIC of the container is attached to the wrong bridge ?
please check with
pct config CTID | grep net0
that the virtual NIC is connected to a bridge connected to the outside world
example on my system:
container config
pct config 103 | grep net0
net0...
two remarks:
* remember you have to add up the wkB/s numbers on all your disks to get the throughput of your RAID-Z2 setup. Or use zpool iostat (see the sketch after these remarks)
* during normal VM use, you have a workload which is rather a mix of random reads and writes, like loading the /bin/ls binary from its sectors on disk...
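a quick zpool iostat sketch, assuming your pool is called rpool (adjust the name):
# per-vdev / per-disk read and write bandwidth, refreshed every 2 seconds
zpool iostat -v rpool 2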
oh yes, actually this documentation is available on each installed pve host, so you can replace the intern.lab address with your PVE hostname; that way you will have the documentation which matches the installed version of PVE.
@nowrap: if unloading / reloading virtio_net helps, try to find a way to reproduce the issue
Btw if using a load balancer on the VM, it might make sense to use the Multiqueue option on the VM NIC
see https://pve4.intern.lab:8006/pve-docs/chapter-qm.html#qm_network_device
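setting it from the CLI could look like this, assuming VMID 101 and 4 queues (note that qm set -net0 replaces the whole net0 line, so carry over your existing bridge and other options):
# omitting the MAC here lets PVE generate a new one; keep the old MAC if the guest depends on it
qm set 101 -net0 virtio,bridge=vmbr0,queues=4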
Most of these freezes happen when the IO wait is too high (the disks are not able to handle the IO you're asking for)
check if the IO wait in the host summary is low at all times
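on the host itself you can also watch it live, for instance with iostat from the sysstat package:
# %iowait plus per-disk utilisation, updated every 2 seconds
iostat -x 2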
well it is not only a question of performance
what about throughput vs data consistency, for instance?
however for distributed storage we recommend Ceph; this is the storage for which you will get the most support from us and the Proxmox community
Hi, if you want to add multiple IP addresses to a device, you do this via Ethernet aliases. See
https://www.cyberciti.biz/faq/linux-creating-or-adding-new-network-alias-to-a-network-card-nic/
The question now is why would you want to do that? If you add private addresses on an interface which is...
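a minimal alias sketch in /etc/network/interfaces, with placeholder addresses:
# second address on the same physical NIC, using the classic eth0:0 alias notation
auto eth0:0
iface eth0:0 inet static
        address 192.168.1.50
        netmask 255.255.255.0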
good analysis!
did the WRITE DMA commands timing out make the kernel switch the associated mount to read-only? (necessitating a reboot afterwards)
or did the VM just hang in your case?