This ended up being a proprietary storage server issue. They use a special driver that doesn't seem to work with virtio. It works fine with vmxnet3...but now there's no possibility of 10Gb speed. :-(
Hey all,
I'm setting up a lab with a 3-node Proxmox/Ceph cluster.
WAN is being provided by a pfsense VM on a different PVE (running multiple pfsense instances for different uses) where the rules are set to block all traffic in the lab LAN except for a specific range of management IP addresses...
Hey all,
I have a 3-node cluster (6.0.4), each node with dual 10Gb Intel NICs. On both Windows and CentOS 7.x VMs I have VirtIO 10Gb networking installed.
I have zero issues with Windows. On CentOS I find that any files moved to/from a network share fail checksum verification. There also seems to be...
I just dropped in an OPNsense VM on PVE to replace an EOL WatchGuard appliance at my small office. It works great! I found no docs on how to do this, I just pieced it together from several sources as I had need. I still have some loose ends to tidy up, but so far I'm very happy with it.
Good...
I've found the issue to be the Vivaldi web browser running on my Debian laptop.
If I use it to connect with noVNC, the mouse/pointer has issues. If I do the same with Firefox on my laptop, I have no issues.
Here's what I've found out so far.
I run Debian with the Cinnamon DE on my laptop and use Vivaldi as my browser. On Debian, Manjaro, and CentOS test VMs I could launch a noVNC web page and control the distro's desktop until I tried a right click. After that, almost all clicking functionality (left...
I had to rebuild my Proxmox server recently and now when I install any Linux distro VM, I lose mouse control as I do the first right click. Any ideas?
Kernel Version Linux 4.15.15-1-pve #1 SMP PVE 4.15.15-6
PVE Manager Version pve-manager/5.1-52/ba597a64
Thanks!
Can anyone look at my config below and tell me why my FreeNAS VM is very slow to POST when it starts?
This old thread talks about kernel bugs, but it's surely fixed by now: https://forum.proxmox.com/threads/proxmox-4-0-pci-passthrough-broken-in-several-ways.24178/
Thanks!
# pveversion -v...
Not yet. It's hard to find the time with all the other hats I wear.
I must be thinking it's harder to set up than it is. I'll eventually get around to giving it a try. Thanks!
So there's no configuration I need on the Proxmox 10Gb interface to make the other VLANs accessible? Does anyone have an example config I can look at to wrap my head around this? Thanks!
Hey all,
I have a functioning 10Gb Proxmox server.
I have multiple VLANs configured on my switch.
Proxmox/VMs are only communicating with a single VLAN.
If I wanted to have all VLANs connect to the server via the 10Gb port I can tag the port on the switch, but what kind of config do I need on...
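For the trunked setup described above, one common approach is a VLAN-aware Linux bridge on the 10Gb port. A minimal /etc/network/interfaces sketch, assuming the 10Gb NIC is named enp1s0f0 and the management address/VLAN range shown here (both are assumptions, not from the thread):

```
auto vmbr0
iface vmbr0 inet static
    # Management IP for the host itself (assumed addressing)
    address 192.168.10.5/24
    gateway 192.168.10.1
    bridge-ports enp1s0f0
    bridge-stp off
    bridge-fd 0
    # Make the bridge VLAN-aware and allow the tagged range through
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

With this in place, tag the switch port as a trunk and set the desired VLAN tag per VM on its virtual NIC; the bridge itself needs no per-VLAN sub-interfaces.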
Hello all,
I've replaced esxi on a RS140 with Proxmox.
For some reason, when I install Debian MATE, I quickly lose mouse right-click.
I've installed qemu-guest-agent and enabled the QEMU Guest Agent option on the VM. That still didn't help.
I then tried Ubuntu MATE, which had the same issue until I decided to...
The switch is a Dell n3024 with the following config:
switch#show interfaces port-channel 1
Channel Ports Ch-Type Hash Type Min-links Local Prf
------- ----------------------------- -------- --------- --------- ---------
Po1 Active: Gi1/0/19, Gi1/0/20...
@Symbol Just so you know, I did not see that kind of read speed with multiple VMs when using layer2+3.
With 3 VMs writing, my CPUs (24 x Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz, 2 sockets) jump to 14.6% and then quickly drop to 5%.
Changing bond_xmit_hash_policy to encap3+4 did not make a difference.
So would changing the LAG to something other than LACP get me any more bandwidth?
UPDATE:
Changed to bond_xmit_hash_policy layer3+4 and I'm seeing sustained 325MBps aggregate read speed!
I'm only seeing 80MBps aggregate...
I know it's not the switch. The same switch sustains maximum bandwidth with physical clients.
I'll try the bond_xmit_hash_policy encap3+4 when I get a chance...currently traveling.
Do you think it's worth trying a different type of LAG than LACP?
Thanks!
Hey everyone,
I'm currently building a "proof-of-concept" for work using Proxmox.
I have a 4x1Gb LACP config (see below), but I'm getting slow performance from my VMs.
I'm using a 400MBps NAS that performs perfectly when tested from a non-VM 10Gb client.
On Proxmox I have 3 Windows 2012 R2 VMs with...
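For reference, the layer3+4 hash policy that resolved this later in the thread can be set on the bond in /etc/network/interfaces. A sketch, assuming four NICs named eno1-eno4 and the addressing shown (both assumptions; substitute your own):

```
auto bond0
iface bond0 inet manual
    bond-slaves eno1 eno2 eno3 eno4
    bond-miimon 100
    # 802.3ad = LACP, matching the switch-side port-channel
    bond-mode 802.3ad
    # Hash on L3/L4 so different flows can land on different links
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 10.0.0.10/24
    gateway 10.0.0.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
```

Note that LACP never speeds up a single flow past one link (1Gb here); the hash policy only determines how multiple concurrent flows are spread across the four links, which is why several VMs writing at once benefit.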
I'm getting 12 emails about 12 disks that are being passed through to FreeNAS:
Subject:
SMART error (FailedOpenDevice) detected on host: hostname
Body:
This message was generated by the smartd daemon running on:
host name: hostname
DNS domain: local.local
The following warning/error was...
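Since the host can't open disks that are passed through to the FreeNAS VM, one way to silence these emails is to tell smartd to skip those devices. A sketch of /etc/smartd.conf, with device names assumed for illustration (substitute the actual passed-through disks):

```
# /etc/smartd.conf
# Ignore disks passed through to the FreeNAS VM (example device
# names -- replace with your own). Entries with -d ignore must
# appear before the DEVICESCAN line to take effect.
/dev/sdb -d ignore
/dev/sdc -d ignore

# Continue monitoring everything else on the host
DEVICESCAN -a -m root
```

Restart smartd afterwards (e.g. `systemctl restart smartd`) for the change to take effect; FreeNAS can monitor SMART on those disks itself.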