Search results

  1. virtual machines kernel panic with Proxmox 5.1

    Perfect, I got: 00:05.0 SCSI storage controller: Red Hat, Inc Virtio SCSI. Thanks again for your time, best regards.
  2. virtual machines kernel panic with Proxmox 5.1

    VM config:
    root@pve5:~# qm config 105
    boot: cdn
    bootdisk: scsi0
    cores: 4
    ide2: none,media=cdrom
    memory: 8192
    name: dibbi1-xl
    net0: virtio=...,bridge=vmbr0
    net1: virtio=...,bridge=vmbr1
    numa: 0
    ostype: l26
    scsi0: stovm:105/vm-105-disk-1.qcow2,size=52G
    scsi1...
    (See the controller sketch after the results list.)
  3. virtual machines kernel panic with Proxmox 5.1

    The kernel panic I got was in a VM with the VirtIO SCSI controller. With lspci I see: 00:05.0 SCSI storage controller: LSI Logic / Symbios Logic 53c895a. In the Proxmox guide I read that with VirtIO SCSI, Proxmox emulates an LSI 53C895A controller by default. Could you please explain what you meant...
  4. virtual machines kernel panic with Proxmox 5.1

    Hi Tom, thank you for your answer. Since Proxmox 4.4 I have been using the default VirtIO SCSI controller, which seems to be the recommended one (this is the first kernel panic I have got in years). 90% of my VMs are Debian-based (mostly jessie and stretch). Why do you advise using the older...
  5. virtual machines kernel panic with Proxmox 5.1

    Hi, I also got a kernel panic (Proxmox 5.1) on a Debian 9.3 VM with PostgreSQL 10 (an OS disk and a data disk, both SCSI). I recently upgraded the VM kernel to Debian 4.9.65-3+deb9u2 (2018-01-04). I am trying to understand what is causing the problem. From what I see it could be related to...
  6. Poor network performance with Zeroshell VM

    Latest tests: OpenVPN with TCP and UDP showed similar values (8-11 MB/s); UDP performed better but still close to the TCP values. OpenVPN without encryption (--cipher none): 12.5 MB/s. In another datacenter, with an OpenVPN server (TCP, encrypted) installed on a brand-new physical server, I got 24 MB/s...
  7. Poor network performance with Zeroshell VM

    I did some tests with iperf3. It seems to be a problem with the VPN logical/software interface (maybe related to OpenVPN?). On the public physical NIC (ethx) I get 65 MB/s, which is a reasonable value (on the private NIC I get 90 MB/s). On the VPN logical NIC (VPN99) I get the same 10-12 MB/s. Could be...
    (See the iperf3 sketch after the results list.)
  8. Poor network performance with Zeroshell VM

    Tests with different CPU core counts and drivers:
    VMXNET3 driver, 2 cores (CPU set to default kvm64) -> 10 MB/s (CPU 70% under file transfer, 5% in normal use)
    E1000 driver, 1 core (CPU set to default kvm64) -> 6 MB/s (CPU about 130% under file transfer, 7% in normal use)
    4 cores (CPU set to default...
    (See the NIC-model sketch after the results list.)
  9. Poor network performance with Zeroshell VM

    VPN log:
    2018-01-24 16:44:39 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key
    2018-01-24 16:44:39 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication
    2018-01-24 16:44:39 Control Channel: TLSv1.2, cipher TLSv1/SSLv3 ECDHE-RSA-AES256-GCM-SHA384, 2048...
    (See the cipher sketch after the results list.)
  10. Poor network performance with Zeroshell VM

    This is the speed of a file transfer from a physical server to my PC through the VPN (Zeroshell VM) - encrypted performance.
  11. Poor network performance with Zeroshell VM

    This is the VM config with the virtio driver (which I generally use in Linux VMs):
    bootdisk: scsi0
    cores: 1
    ide2: none,media=cdrom
    memory: 4092
    name: zeroshell
    net0: virtio=...,bridge=vmbr0
    net1: virtio=..,bridge=vmbr1
    net2: virtio=...,bridge=vmbr2
    numa: 0
    ostype: l26
    scsi0...
  12. Poor network performance with Zeroshell VM

    Hi everyone, I am using an OpenVPN server (Zeroshell 3.8.2, a Linux distribution) in a Proxmox VM environment (enterprise repo). The problem is that the VM has really poor network performance on a full-gigabit network. The best (unexpected) results are with the Intel e1000 and vmxnet3 drivers, but we are...
  13. Proxmox cluster on a shared SAN storage

    As advised, I developed a small virtual environment with a 3-node Proxmox cluster and a FreeNAS 9.3. FreeNAS is configured as an iSCSI target (40 GB zvol), which is connected to the Proxmox cluster (not LUNs directly). On top of it there is LVM. The 40 GB zvol is automatically “formatted” with...
    (See the LVM-over-iSCSI sketch after the results list.)
  14. Proxmox cluster on a shared SAN storage

    Hi everyone, I am currently using a 4-node Proxmox 3.4 cluster on NFS storage, mainly used for web applications. I would like to expand the cluster and, above all, move it to SAN storage. The problem is that I cannot find much info (manuals, books, forums, etc.) on how to develop a SAN...
  15. Performance of PostgreSQL VM on local storage vs NFS

    Thank you very much for your replies. I just bought "Mastering Proxmox"; Ceph could be a good solution, and I am going to read about it. I don't like local storage either, but with RAID 1, a spare, and physical backups I don't think it's so bad...
  16. Performance of PostgreSQL VM on local storage vs NFS

    Hi everyone, I would like to virtualize a PostgreSQL database in a production environment (KVM), and I would like to know if any of you have been running it on local disk (for example RAID 1 SAS, where Proxmox 3.3 is already installed) or on NFS storage (or another storage solution..). I read...
    (See the pgbench sketch after the results list.)
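
Controller sketch: results 1-5 revolve around which SCSI controller the VM emulates. A minimal sketch of how the controller model is switched on the host and then verified from inside the guest (assuming VM ID 105 from result 2; the change only takes effect after a full VM stop and start):

    # on the Proxmox host: switch VM 105 to the VirtIO SCSI controller
    qm set 105 --scsihw virtio-scsi-pci
    # inside the guest, after a stop/start: check which controller is seen
    lspci | grep -i scsi
    # expected (per result 1): 00:05.0 SCSI storage controller: Red Hat, Inc Virtio SCSI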
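
NIC-model sketch: results 8, 11, and 12 compare the e1000, vmxnet3, and virtio NIC models. A hedged sketch of how a model is swapped on an existing interface (VM ID 100 is a placeholder; Proxmox generates a MAC address if none is given, and the guest sees the new NIC after a stop/start):

    # paravirtualized virtio model on net0
    qm set 100 --net0 virtio,bridge=vmbr0
    # or the emulated Intel e1000, for comparison
    qm set 100 --net0 e1000,bridge=vmbr0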
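
iperf3 sketch: result 7 measures throughput per interface with iperf3. A minimal sketch of that kind of per-path test (the addresses are hypothetical; the idea is to point the client first at an address on the physical NIC, then at one on the VPN interface, so the two paths can be compared):

    # on the Zeroshell VM: run the server
    iperf3 -s
    # from a client: test the physical path
    iperf3 -c 192.0.2.10
    # from a client: test the VPN path (an address bound to VPN99)
    iperf3 -c 10.8.0.1
    # add -R to reverse direction and test the other way
    iperf3 -c 10.8.0.1 -R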
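
Cipher sketch: the log in result 9 shows the data channel using BF-CBC, a dated 64-bit-block cipher, while the TLS control channel already negotiates AES-GCM. A common suggestion (not confirmed as the fix in the thread) is moving the data channel to AES, which can benefit from AES-NI hardware acceleration. In OpenVPN configuration syntax, on both server and client:

    # data-channel cipher; GCM mode requires OpenVPN 2.4+
    cipher AES-256-GCM
    # for older 2.3 peers, AES-256-CBC is the nearest alternative
    ;cipher AES-256-CBC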
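
LVM-over-iSCSI sketch: result 13 layers LVM on top of an iSCSI-attached zvol. A rough sketch of the usual sequence (the device name, volume group name, and storage ID are assumptions; pvcreate/vgcreate run once on a single node, and the shared flag tells Proxmox the volume group is visible to every node):

    # on one node only: initialize the iSCSI-backed device for LVM
    pvcreate /dev/sdb
    vgcreate vg_san /dev/sdb
    # register the volume group as shared LVM storage for the cluster
    pvesm add lvm san-lvm --vgname vg_san --shared 1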
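
pgbench sketch: results 15 and 16 ask how PostgreSQL behaves on local RAID 1 versus NFS. One way to compare the two backends under an identical workload is pgbench (the database name and scale factor are placeholders; run the same commands on a VM whose disk sits on each storage type and compare the reported TPS):

    # initialize a test database at scale factor 50 (roughly 750 MB)
    createdb bench
    pgbench -i -s 50 bench
    # 10 clients, 2 worker threads, 60 seconds
    pgbench -c 10 -j 2 -T 60 bench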
