Search results

  1. virtualbox install guide fail - can't access web gui

    verify the web server is listening (see the sketch after this list): apt install net-tools, then netstat -al | grep 8006. Should give you results like this: tcp 0 0 0.0.0.0:8006 0.0.0.0:* LISTEN. You might try restarting the web server: service pveproxy restart. And either you made a typo, or you are...
  2. virtualbox install guide fail - can't access web gui

    You may have the wrong URL. The page should be at https://ipaddress:8006. Other than that... need more info, are you running 4.4 stable? If you are running one of the betas, you might need to set up the testing repository and run updates to patch a potential bug. On the pinging, yes most likely...
  3. Bug in 5.0b2 disk move

    That worked, thanks Tom.
  4. Bug in 5.0b2 disk move

    I have a 5 GB VM on ZFS storage. I moved the drive live to another ext4 directory store on the same host using the GUI (~1 TB free on storage). The task got to 100% in ~3 minutes, but it continues 3 hours later, still at 100%; the data transfer size keeps increasing by very small amounts, and the...
  5. USB install better options (needed!)

    I have to agree with this, the instructions in the wiki are terrible - suggesting I download a 75 MB utility or OSForensics, which is totally cryptic in how it works - that is ridiculous when Rufus is only 2 MB. Thankfully the Rufus DD method works, confirmed on 5.0 beta2 (see the dd sketch after this list).
  6. OVS adding VLANs without reboot

    Can someone verify my understanding of OVS vs the old Linux network stack: if I set a Linux bridge to VLAN aware, I can add as many VMs with different VLAN IDs as I want, at any time, without rebooting the host (see the sketch after this list). Correct me on this part: on OVS, the documentation seems to state I have to...
  7. VLAN trunking not working

    I am at a loss, for some reason I cannot get trunking to work from my VMs to my switch. I made VLAN 11 on my switch, made the port for eth1 a member, and set the port mode to "TagALL/Trunk". In Proxmox I made the following (see the sketch after this list): iface eth1 inet manual auto vmbr1 iface vmbr1 inet manual...
  8. 10GB network performance issue

    Yes, I tried netcat also and a raw nc file transfer is giving 690 MB/s, but a ZFS send through netcat only gets 133 MB/s (might as well be 1 Gbit LAN)... also tried mbuffer, which comes out around 230 MB/s. Example (see the sketch after this list): zfs send rpool/data/vm-161-disk-1@rep_test161_2016-09-08_21:38:57 | nc -q 0 -w 20 pve2...
  9. 10GB network performance issue

    So I already installed the HPN patch with the NONE cipher option; supposedly that turns off all encryption except the initial handshake: scp -c NONE vm-116-disk-1.qcow2 pve2:/mnt/data2/ That gives me 340-370 MB/s. I don't understand why it's still so slow if no encryption is happening, I guess I...
  10. 10GB network performance issue

    Shoot - sorry, my fault, edited to MB... 330 MB/s still sounds low to me on a server that has 1 GB/s+ of local performance on the filesystem. I cp'd 35 GB between 2 arrays on the same RAID controller in 56 seconds = 625 MB/s - which is worst case because it's killing the RAID card cache to read/write to...
  11. 10GB network performance issue

    Thanks for the comment Adam, but I missed your point there: your example shows 291 MB/s, I am getting ~330 MB/s, but on iperf I can move 11 GB in close to 10 seconds. Your higher-end CPU appears to show the same encryption bottleneck as mine, or worse. Anyone know why I can test out AES-128-GCM @... (see the AES-NI check after this list)
  12. Software RAID6 as Local VM Storage Very Slow

    ext4 on LVM is the default install from the ISO... lvm-thin adds some other handy features that are probably preferred (see the lvm-thin sketch after this list)... if you decide on thin, look for that option in the storage GUI when you add the directory.
  13. 10GB network performance issue

    Some more testing on this... still not sure if OpenSSL is using AES-NI (see the check after this list), but found other details: root@pve1:~# apt-get install -y gnutls-bin root@pve1:~# gnutls-cli --benchmark-ciphers Checking cipher-MAC combinations, payload size: 16384 SALSA20-256-SHA1 0.21 GB/sec...
  14. 10GB network performance issue

    Ouch... arcfour gave ~150 MB/s on an scp, so that didn't work well. The CPU is a Xeon E5-2620 v3 (6 cores / 2.4-3.2 GHz). I know it's low end, but it appears to support most major features; it was spec'd because the VM workload did not require much - it's only one small CentOS, a 2k8, and an XP - but I need fast sync...
  15. 10GB network performance issue

    And I see pve-zsync is also encrypting... is there a way to disable encryption for zfs send? Doh! - of course the send is piping through ssh. Any thoughts on sending through netcat instead (see the netcat sketch after this list)?... I'm not big on Perl, so not sure how far I can get on zsync.
  16. Software RAID6 as Local VM Storage Very Slow

    Shut down all VMs, then run pveperf /mnt/raid6dir to check your baseline reads. Run this to check writes: dd if=/dev/zero of=/mnt/raid6/somefile bs=1024k count=8192 conv=fdatasync. Run "top" on PVE as you start your VM... watch the GUI for the IO delay graph on the host machine summary (see the sketch after this list). For Linux VM RAID6...
  17. 10GB network performance issue

    I have two Dell R730s, 1 CPU, 32 GB RAM, 6x SSD RAID, and 3 different 10 Gb NICs to choose from, going through a Nortel 5530 with 2x 10 Gb XFP, 9k frames, and the switch ports VLAN'd (untagged) from all other ports. iperf tests look great and dd performance on the box locally is good (see the iperf sketch after this list), but an scp from box to...
  18. [SOLVED] Import/convert/export raw images to ZFS volume

    Note for others in this situation: qcow files can be converted more quickly using an nbd device. Pre-build your VM in the GUI, then overwrite the ZFS dev with dd (see the sketch after this list): modprobe nbd max_part=63; qemu-nbd -c /dev/nbd0 /mnt/oldstorage/images/100/vm-100-disk-1.qcow2; dd if=/dev/nbd0...
  19. VZDump backup failed

    I am having the same issue: 2 VMs back up just fine, 100% always, but one always fails with the same vma_queue write error at around 75%. Backing up to a local 5 TB SATA drive. Running PVE 4.2-15/6669ad2c, kernel 4.4.13-1-pve.
  20. Anyone tested Open Compute nodes?

    Just curious, as a lot of the Facebook servers seem to be hitting the grey market now. I know power and rack size can be an issue if you don't buy the whole rack, but these might be good lab machines for testing.
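
Command sketches for selected results

For the web GUI check in result 1, a minimal sketch of the same verification using ss (part of iproute2, so nothing extra to install), plus the restart step from the snippet; hostnames are illustrative.

    # Confirm the Proxmox web proxy is listening on TCP 8006
    ss -tlnp | grep 8006
    # Expect a LISTEN line for *:8006 owned by pveproxy; if there is none:
    systemctl status pveproxy
    systemctl restart pveproxy
    # Then browse to https://<host-ip>:8006 (note https and the port)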
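
For the "DD method" in result 5, this is the plain dd equivalent on a Linux machine; the ISO filename and /dev/sdX are placeholders, and writing to the wrong device destroys its contents.

    # Identify the USB stick first and double-check the device name
    lsblk
    # Write the installer ISO to the whole device (not a partition)
    dd if=proxmox-ve_5.0.iso of=/dev/sdX bs=1M
    # Flush buffers before unplugging the stick
    sync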
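
For the VLAN-aware Linux bridge discussed in result 6, a sketch of what /etc/network/interfaces can look like; the interface names and addresses are assumptions. Once the bridge is VLAN aware, VMs with different tags are added per-VM with no host reboot.

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
        bridge_vlan_aware yes
        # each VM NIC carries its own VLAN tag; the host config never changes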
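
For the trunk setup in result 7, one way the host side can look using the traditional per-VLAN bridge style; the snippet's config is truncated, so this is an assumption rather than the poster's exact setup. It needs the vlan package, and the switch port must carry VLAN 11 tagged.

    auto eth1
    iface eth1 inet manual

    # 802.1q sub-interface for VLAN 11 on the trunked NIC
    auto eth1.11
    iface eth1.11 inet manual
        vlan-raw-device eth1

    # Bridge for VLAN 11; attach the VM NICs (untagged inside the guest) here
    auto vmbr1v11
    iface vmbr1v11 inet manual
        bridge_ports eth1.11
        bridge_stp off
        bridge_fd 0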
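
For the zfs send over netcat pipeline in results 8 and 15, a sketch of both ends; the port, dataset and snapshot names are placeholders loosely based on the snippet. The stream is unencrypted, so only use this on a trusted network.

    # On the receiving host (pve2): listen first
    # (traditional netcat syntax; the OpenBSD variant drops the -p)
    nc -l -p 8023 | zfs receive -F rpool/data/vm-161-disk-1

    # On the sending host: pipe the snapshot straight into netcat
    zfs send rpool/data/vm-161-disk-1@rep_test161 | nc -q 0 -w 20 pve2 8023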
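
For the lvm-thin option in result 12, a sketch of what thin provisioning looks like at the LVM level; the volume group name and sizes are made up, and newer installers usually create a thin pool for you.

    # Create a thin pool inside an existing volume group "pve"
    lvcreate -L 100G -T pve/thinpool

    # Carve a thin (sparse) volume out of the pool for a VM disk
    lvcreate -V 32G -T pve/thinpool -n vm-101-disk-0

    # Thin volumes only consume pool space as data is written
    lvs -a pve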
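
For the AES-NI question in results 11 and 13, two quick checks: whether the CPU advertises the instruction at all, and what OpenSSL achieves with and without its AES-NI code path (the ia32cap mask is the commonly cited one for disabling AES-NI and PCLMULQDQ).

    # Does the CPU expose AES-NI?
    grep -m1 -o aes /proc/cpuinfo

    # OpenSSL throughput via the EVP interface (uses AES-NI when available)
    openssl speed -evp aes-128-gcm

    # Same benchmark with AES-NI masked off, for comparison
    OPENSSL_ia32cap="~0x200000200000000" openssl speed -evp aes-128-gcm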
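
For the baseline test in result 16, the same steps written out as commands; the mount point follows the snippet and the test file name is a throwaway example.

    # Read/fsync baseline on the RAID6 mount point
    pveperf /mnt/raid6dir

    # Write test: ~8 GB of zeros, flushed to disk before dd reports a rate
    dd if=/dev/zero of=/mnt/raid6dir/ddtest.bin bs=1024k count=8192 conv=fdatasync
    rm /mnt/raid6dir/ddtest.bin

    # Watch CPU and IO wait while a VM boots
    top
    # (and the IO delay graph on the host's Summary page in the GUI)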
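
For the 10 Gb link tests in result 17, the usual iperf pair plus a jumbo-frame sanity check; host names, the interface name and the stream count are placeholders (iperf2 syntax).

    # Confirm jumbo frames are actually applied on the 10 Gb NIC
    ip link show eth2 | grep mtu

    # On one host: run the server
    iperf -s

    # On the other: 30-second test with 4 parallel streams
    iperf -c pve2 -P 4 -t 30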
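
For the qcow2-to-zvol copy in result 18, the truncated sequence written out end to end; the VM ID, paths and zvol name follow the snippet but are examples. A single "qemu-img convert -O raw <qcow2> /dev/zvol/..." is an alternative that skips nbd entirely.

    # Expose the qcow2 image as a block device
    modprobe nbd max_part=63
    qemu-nbd -c /dev/nbd0 /mnt/oldstorage/images/100/vm-100-disk-1.qcow2

    # Overwrite the zvol that the GUI pre-created for the new VM
    dd if=/dev/nbd0 of=/dev/zvol/rpool/data/vm-100-disk-1 bs=1M

    # Detach the nbd device when done
    qemu-nbd -d /dev/nbd0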
