Hi All,
I have converted a Qnap TS-470 Pro, a TS-259/459 Pro, and a TS-251 to PVE.
I have ordered http://www.amazon.com/gp/product/B00ODYFA2M?psc=1&redirect=true&ref_=oh_aui_detailpage_o00_s00 to replace the original 512 MB eUSB DOM.
I also tested a SanDisk Ultra Fit 16 GB, and it works as well.
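For anyone following along, writing the PVE installer image to the replacement DOM is the usual dd routine. This is only a sketch: the ISO filename and /dev/sdX are placeholders, so verify the target device first.

```
# Write the Proxmox VE installer to the new USB DOM.
# /dev/sdX and the ISO name are placeholders -- confirm the
# target device with `lsblk` before running, as dd overwrites it.
dd if=proxmox-ve_4.1.iso of=/dev/sdX bs=1M
sync
```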
So...
Those are very interesting issues you had with ZFS on Linux, because I have been using ZFS in a production environment for the past 3 years, and recently I converted 4 vSphere ESXi 6.0 + ZFS nodes to PVE 4.1 + ZFS nodes without much hassle. I currently plan to add another 2 nodes to the current...
That article only applies to the Amazon environment:
https://www.openstack.org/summit/vancouver-2015/summit-videos/presentation/high-performance-neutron-using-sr-iov-and-intel-82599
CPU usage... and also network throughput...
Does anyone know how to pass an SR-IOV VF NIC to LXC in PVE 4.1?
The VF NICs highlighted with *** are the ones that I want to pass to LXC:
root@pve-dol1:/sys/class/net/eth6/device# lspci |grep Eth
03:00.0 Ethernet controller: Intel Corporation Ethernet Connection X552/X557-AT 10GBASE-T
03:00.1 Ethernet...
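One way this may be doable is to hand the VF to the container as a physical interface via the raw LXC keys in the container config. This is a hedged sketch: the container ID 100 and the VF name eth6v0 are assumptions, and on PVE 4.1's LXC 1.x the keys are lxc.network.*, while newer LXC versions use lxc.net.0.* instead.

```
# /etc/pve/lxc/100.conf  (hypothetical container ID)
# Pass the VF (assumed host name: eth6v0) into the container as a
# physical NIC; it moves out of the host's network namespace.
lxc.network.type = phys
lxc.network.link = eth6v0
lxc.network.name = eth0
lxc.network.flags = up
```

A phys interface can only live in one namespace at a time, which fits a per-VF assignment naturally.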
I have the same question, but I want to pass through an SR-IOV VF NIC of an Intel X520... I know it needs to be mounted into /dev of the LXC container, but there is no /dev/eth on the PVE host.
ifconfig -a (I have an Intel I350 with SR-IOV VFs enabled):
root@pve-dol1:~# ifconfig -a
bond0 Link encap:Ethernet HWaddr ba:dd:fa:01:df:d8
inet6 addr: fe80::b8dd:faff:fe01:dfd8/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0...
thanks sili
The Intel X520-DA2 is assigned to eth6 and eth7:
auto lo
iface lo inet loopback
auto eth2
iface eth2 inet manual
auto eth3
iface eth3 inet manual
auto eth6
iface eth6 inet static
address 192.168.0.228
netmask 255.255.255.0
auto eth7
iface eth7 inet static...
Hi all,
In PVE 4.1, the ixgbe driver does not work with the Intel X520-DA2. If I use an Ubuntu 15.10 live CD or CentOS 7.2, both distributions work, but PVE 4.1 does not. dmesg does not output any error message and shows that ixgbe is loaded and the NICs are assigned eth0 and eth1; however, after PVE 4.1 boots...
The SRPTOOLS setup is more difficult than the SRP target side. The default srptools package from the Debian 8.1 repo does not work; the key component, ibsrpdm -c, fails to probe and return any IB device, which is very frustrating.
The workaround is to use the srptools from Mellanox; the package that I use is...
I don't know how many people are still using SRPT, but I am using it as my main protocol to share LUNs instead of iSCSI/iSER.
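For context, once the tools can enumerate devices, the initiator side of SRP is driven through the kernel's ib_srp sysfs interface. A hedged sketch follows: every identifier below is a placeholder, and the srp-mlx4_0-1 path depends on your particular HCA.

```
# Load the SRP initiator module
modprobe ib_srp
# Add a target by writing its parameters to the sysfs node.
# All values here are placeholders -- in practice `ibsrpdm -c`
# prints the exact connection string for each discovered target.
echo "id_ext=...,ioc_guid=...,dgid=...,pkey=ffff,service_id=..." \
    > /sys/class/infiniband_srp/srp-mlx4_0-1/add_target
```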
Package Info
root@nas:~# pveversion --verbose
proxmox-ve: 4.1-28 (running kernel: 4.2.6-1-pve)
pve-manager: 4.1-2 (running version: 4.1-2/78c5f4a2)
pve-kernel-4.2.6-1-pve: 4.2.6-28...
sigxcpu... the following test was done on the same system with the same hardware, without any SSD as LOG (ZIL), just using Ubuntu 15.10:
root@nas:/vmdisk# dd if=/dev/zero of=zerofile.000 bs=4k count=16000k; sleep 30 ; dd if=zerofile.000 of=/dev/null bs=1M
16384000+0 records in
16384000+0...
System 1: Proxmox VE4.1
root@nas:~# zfs get all rpool
NAME PROPERTY VALUE SOURCE
rpool type filesystem -
rpool creation Mon Dec 21 2:18 2015 -
rpool used 67.0G -
rpool available...
In PVE 4.1, for an LXC container to mount a host file system, you need to add the config line
mp0: /target/test,mp=/target
However, I found that it stops at mp9 and does not process mp10 and beyond.
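The same bind mounts can also be managed with pct instead of editing the config file by hand. A minimal sketch, assuming a container with ID 100:

```
# Bind-mount a host directory into container 100 (hypothetical ID);
# this writes "mp0: /target/test,mp=/target" into its config.
pct set 100 -mp0 /target/test,mp=/target
```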
Another huge performance issue I found is that somehow ZFS performance seems to be capped!
I ran the following command on...