Search results

  1. elurex

    GUIDE: Proxmox VE and pfSense (dual NIC)

    http://kaven.no/nb/blog/1510/dual-nic-proxmox-ve-40-beta-and-pfsense-22 My only suggestion for the best pfSense performance would be to PCI passthrough the WAN and LAN Ethernet ports to the VM, or to use SR-IOV and pass a VF to pfSense.
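
    A minimal sketch of those two options on the Proxmox side, assuming the NIC sits at 03:00.0 and the pfSense guest is VMID 100 (both hypothetical):

        # pass the whole physical port through to the pfSense VM
        qm set 100 -hostpci0 03:00.0
        # or, with SR-IOV enabled on the port, pass one virtual function instead
        qm set 100 -hostpci0 03:10.0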
  2. elurex

    [SOLVED] PCI passthrough of LSI HBA not working properly

    Can you post the IOMMU grouping info and also the output of lspci -vnn?
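
    For reference, the information being asked for can be collected like this:

        # list every PCI device together with the IOMMU group it belongs to
        find /sys/kernel/iommu_groups/ -type l
        # list all PCI devices with their [vendor:device] IDs, including the LSI HBA
        lspci -vnn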
  3. elurex

    Configuring Proxmox VE 4 as an iSCSI target

    I actually had success compiling SCST on PVE, and my goal is to use SRPT... but I believe iSCSI is easier than SRPT... I used an older svn revision of SCST, 6187 (current is 66xx), because later versions use OFED 3.0 whereas the in-tree kernel only supports 2.4.
  4. elurex

    Proxmox 4.0 ZFS disk configuration with L2Arc + Slog. 2nd opinion

    If you install PVE 4 with ZFS as the root filesystem, you can do live backups with zfs snapshot and then use zfs send/recv or zfssync; it's better and faster than rsync. To take full advantage of ZFS you really need more RAM, and RAM to ZIL is 1:1, so your ZIL should be 32 GB, and RAM to L2ARC is 1:2, and if...
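
    A minimal sketch of the snapshot-plus-send/recv backup being described; the dataset name matches a default PVE-on-ZFS install, and the target host and pool are assumptions:

        # take a consistent point-in-time snapshot of the root dataset
        zfs snapshot rpool/ROOT/pve-1@backup-20151101
        # stream it to another ZFS box (hostname and destination pool are examples)
        zfs send rpool/ROOT/pve-1@backup-20151101 | ssh backuphost zfs recv backup/pve-1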
  5. elurex

    Proxmox 4 ZFS Install

    Yes, lz4 is a great compression choice.
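
    For context, enabling it on an existing pool is a one-liner (pool name rpool assumed):

        # new writes are compressed with lz4; existing data is not rewritten
        zfs set compression=lz4 rpool
        zfs get compression,compressratio rpool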
  6. elurex

    [SOLVED] PVE4 ovs-bridge not active

    Issue solved: the OVS bridge must have an IP/mask/gateway... it's different from virsh...
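
    A sketch of what that looks like in /etc/network/interfaces on PVE with Open vSwitch installed; the addresses and the eth0 uplink are assumptions:

        allow-vmbr0 eth0
        iface eth0 inet manual
            ovs_type OVSPort
            ovs_bridge vmbr0

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10
            netmask 255.255.255.0
            gateway 192.168.1.1
            ovs_type OVSBridge
            ovs_ports eth0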
  7. elurex

    [SOLVED] PVE4 ovs-bridge not active

    Does anyone have a clue why my PVE 4 OVS bridge vmbr0 is not active, and how to make it active? On Ubuntu I know it's done with virsh net-define vmbr0.xml and then virsh net-autostart vmbr0, but how do I do it in PVE? root@pve-dol1:~# ovs-vsctl show 7d96264f-ce54-4efd-9137-ea70c221a72d Bridge...
  8. elurex

    zfs

    Just a suggestion: mirror /dev/sda <--- 2 TB drive, /dev/sdb <--- 2 TB drive; cache /dev/sdc1 <--- 150 GB out of 300; log /dev/sdc2 <--- 150 GB out of 300. This is because you only have 32 GB of RAM, and typically cache and log...
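
    A sketch of the commands that would build that layout; the pool name tank and the exact device paths are assumptions:

        # two 2 TB drives as a mirrored vdev
        zpool create tank mirror /dev/sda /dev/sdb
        # first half of the 300 GB SSD as L2ARC (cache), second half as SLOG (log)
        zpool add tank cache /dev/sdc1
        zpool add tank log /dev/sdc2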
  9. elurex

    PVE4 kernel issue, SR-IOV disabled on PVE4 kernel

    Tried: root@pve-kac:/sys/bus/pci/devices/0000:03:00.0# cat sriov_totalvfs 64 root@pve-kac:/sys/bus/pci/devices/0000:03:00.0# echo 4 > sriov_numvfs -bash: echo: write error: Operation not supported. Still failed... it must be an issue in the 4.1.5 driver code.
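
    If the sysfs write keeps being refused, one hedged workaround (not confirmed in the thread) is the legacy max_vfs module parameter of the ixgbe driver:

        # ask the ixgbe driver itself to create 4 VFs per port at load time
        echo "options ixgbe max_vfs=4" > /etc/modprobe.d/ixgbe.conf
        update-initramfs -u
        reboot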
  10. elurex

    ZFS boot RAIDZ with 3 SSD

    Be careful about the SSD TRIM issue on ZFS.
  11. elurex

    PVE4 kernel issue, SR-IOV disabled on PVE4 kernel

    I will try it from the sysfs PCI bus path instead of the net class path.
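
    The two sysfs paths being compared both end up at the same sriov_numvfs attribute; the interface name and PCI address here are just examples:

        # via the network class tree (follows the device symlink)
        echo 4 > /sys/class/net/eth6/device/sriov_numvfs
        # via the PCI bus tree directly
        echo 4 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs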
  12. elurex

    How to install Proxmox VE 4 as mirrored ZFS ?

    zpool add rpool cache /dev/sda3 and zpool add rpool cache /dev/sdb3: your cache will then be striped (RAID0 mode), but please make sure both partitions are the same size. zpool add rpool log mirror /dev/sda4 /dev/sdb4: your log will then be mirrored (RAID1 mode), but please make sure both partitions are the same...
  13. elurex

    How to install Proxmox VE 4 as mirrored ZFS ?

    During the install you can select the ZFS pool type RAID1; the SSD partitions used as cache or ZIL are added later via the command line. You can GPT-partition your SSD into different partitions (sda1, sda2, sda3...). An example is below: NAME STATE READ WRITE CKSUM...
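
    A hedged sketch of carving those extra partitions with sgdisk before adding them to the pool; the sizes and partition numbers are assumptions:

        # partition 3 (150 GB) for the cache, partition 4 (remaining space) for the log
        sgdisk -n 3:0:+150G /dev/sda
        sgdisk -n 4:0:0 /dev/sda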
  14. elurex

    PVE4 kernel issue, SR-IOV disabled on PVE4 kernel

    With PVE 4.0 and kernel 4.2.3 it failed and complained that SR-IOV is not supported. root@pve:~# uname -r 4.2.3-1-pve root@pve:~# dmesg|grep ixgbe [ 4.070688] ixgbe 0000:03:00.0 eth6: MAC: 5, PHY: 6, PBA No: 020000-000 [ 4.070692] ixgbe 0000:03:00.0: 0c:c4:7a:74:09:18 [ 4.070695] ixgbe...
  15. elurex

    PVE4 kernel issue, SR-IOV disabled on PVE4 kernel

    On the same server hardware (SuperServer 5028D-TN4T), installing Ubuntu 14.04.3 with kernel 4.2.3 gives working SR-IOV on the Intel X557, but it fails on PVE 4.0 with kernel 4.2.2-1-pve; how do I fix this issue? I can enable SR-IOV for my 10 GbE X557 NIC and successfully PCI passthrough...
  16. elurex

    Proxmox 4 ZFS Install

    At the command line: zfs create rpool/storage. In the web GUI: created the "ID" "zfs-storage" and selected the "ZFS Pool" "rpool/storage".
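
    The web GUI step can also be done from the shell with pvesm; the storage ID below mirrors the one in the post:

        # create the dataset, then register it as a ZFS storage in PVE
        zfs create rpool/storage
        pvesm add zfspool zfs-storage -pool rpool/storage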
  17. elurex

    Proxmox 4 ZFS Install

    You can use the command zfs create rpool/[new directory] to create a separate dataset which you can later snapshot/back up; don't mix it with your ROOT and boot OS directories.
  18. elurex

    PVE4 & SRP storage from another ZFS pool via SRPT

    That's why I am not fond of the Ceph solution... I guess I will stick with SRP and an InfiniBand network.
  19. elurex

    Prox4 - PCIe passthrough - losing PCIe devices on boot

    Try to avoid HighPoint and also Marvell controllers... Compiling and patching the kernel can be easy on regular Ubuntu, but it is very difficult on PVE, since the PVE kernel is already modified/forked from the Ubuntu repository. The logic of the kernel patch is pretty simple: if 1103:0641, exclude it from the IOMMU, do not...
