Search results

  1. Poor network performance on guest

    Nope, 2x six-core CPUs, both with HT. The results have been pretty variable, and have improved since I moved the PVE host to "new" (old, but more powerful, and new to me) hardware, but are still significantly less than before. The chain from FreeNAS to VM is pretty short: FreeNAS <-> switch...
  2. Poor network performance on guest

    I'm afraid I don't understand the question. The guest appears to be using the virtio_net driver, if that addresses your question. If not, how could I better answer it?
  3. Poor network performance on guest

    root@pve:~# cat /etc/network/interfaces
    # network interface settings; autogenerated
    # Please do NOT modify this file directly, unless you know what
    # you're doing.
    #
    # If you want to manage part of the network configuration manually,
    # please utilize the 'source' or 'source-directory' directives...
  4. Poor network performance on guest

    tl;dr: I'm seeing poor network performance on a CentOS 6.7 guest over 10GbE. Bare metal on the same hardware would see 6+ Gb/sec using iperf; now the average is closer to 1 Gb/sec. Before installing Proxmox, I ran my home network on a CentOS 6.7-based firewall/server/router (SME 9.0...
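
    For reference, a minimal iperf check between the guest and another host on the 10GbE segment might look like this (the hostname and test length are illustrative, not from the thread):

        # On the far end (e.g. the FreeNAS box), start an iperf server:
        iperf -s

        # On the CentOS guest, run a 30-second TCP test against it:
        iperf -c freenas.local -t 30
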
  5. License key - 1 socket -> 2

    Great, thanks. I've contacted them.
  6. License key - 1 socket -> 2

    Yes, I understand that. My question is how to do that.
  7. License key - 1 socket -> 2

    I could write a bit of background, but I guess it isn't really relevant. I installed PVE 4.0 on a single-socket server, and bought a community license key for it. I then found that that hardware didn't have the grunt that I needed, so I moved the drives to a two-socket box. Everything's...
  8. Urgent: High cpu usage in Proxmox ve 4 with ZFS

    I've also inadvertently found that a scrub seems to trigger this. System 1: X9SCL-F motherboard, i3-3240 CPU, 16 GB RAM, 1 running CentOS 6 VM with 4 GB/balloon to 8 GB, 1 running CentOS 6 VM with 512 MB/balloon to 1 GB, two-disk ZFS mirrored pool with no SLOG device. Scrub results in ~100%...
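
    For anyone trying to reproduce this, a scrub can be started and watched by hand (the pool name is illustrative; the PVE installer names its root pool rpool):

        zpool scrub rpool     # start a scrub on the pool
        zpool status rpool    # check scrub progress while watching host CPU (e.g. in top)
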
  9. Urgent: High cpu usage in Proxmox ve 4 with ZFS

    ...but of course limiting the ARC size (especially limiting it to only 512 MB) means you're not doing nearly as much caching of your reads. I'd been running into the same issue with a CentOS 6 guest. Whenever the guest would try to run a backup, it would behave as described here--the host CPU...
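
    Capping the ARC is done with a ZFS module option on the host; a minimal sketch using the 512 MB figure mentioned above (the value is in bytes and takes effect after a reboot):

        # /etc/modprobe.d/zfs.conf -- limit the ARC to 512 MB
        options zfs zfs_arc_max=536870912

        # Then, if the root filesystem is on ZFS, refresh the initramfs:
        update-initramfs -u
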
  10. Caching when using ZFS storage?

    True, but the example in the wiki is when a file is being used as a virtual disk. In that case, using no cache will give an error, since ZFS doesn't support the O_DIRECT flag. But when using zvols for storage, this issue doesn't arise.
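
    The cache mode is set per virtual disk; a sketch using qm (the VM ID, storage name, and disk name are illustrative):

        # Re-specify a zvol-backed disk with no host caching:
        qm set 100 --virtio0 local-zfs:vm-100-disk-1,cache=none
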
  11. Caching when using ZFS storage?

    Thanks. I'm familiar with ARC and L2ARC (the latter of which I don't have on my system, at least at this time), but those are read caches, and the options seem to deal with write cache.
  12. ZFS filesystem cannot store ISO's?

    I don't know the reason for that design decision, but you can certainly create local storage on the ZFS volume, and store the ISOs there.
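
    A sketch of that approach, assuming the pool is named rpool (the dataset and storage names are illustrative): create a dataset, then register it as directory storage that accepts ISO images:

        zfs create rpool/isos
        pvesm add dir isos --path /rpool/isos --content iso
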
  13. Caching when using ZFS storage?

    Thanks for the info. I was kind of hoping there would be an "80+% of the time, you want X" answer, but it sounds like that isn't to be.
  14. Caching when using ZFS storage?

    Running Proxmox VE 4.0-57 on ZFS, and I've created local ZFS storage for my VMs, so they're using zvols for their virtual disks. How should cache be set for those? The default is no cache, but other options are write back, write through, and direct sync. I don't see a page on the wiki that...
  15. pve-zsync real time??

    What do you mean by "replicate in real time"? Do you mean that both servers would be completely in sync at all times (i.e., that server B would have a "live" copy of what's on server A)? If so, then no, ZFS replication can't do that. The default configuration for pve-zsync syncs every...
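
    For reference, a minimal pve-zsync job looks something like this (the VM ID, destination host, and pool are illustrative); the sync interval comes from the cron entry the tool installs, which can be edited to run more or less often:

        pve-zsync create --source 100 --dest 192.168.1.50:tank/backup --maxsnap 7 --verbose
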
  16. gpg error on apt-get update, packages from jessie main missing

    I am getting the same error. This is an installation of Proxmox VE 4.0 from the ISO, with a community subscription. apt-get clean; apt-get update results in a bunch of stuff that I can't keep the forum software from trying to combine into a single paragraph. But the end is: Reading package...
  17. Urgent: High cpu usage in Proxmox ve 4 with ZFS

    The wiki doesn't distinguish among host filesystems or storage types when it says "as long as your guest supports it, go for virtio"; it just makes that a blanket statement. There's certainly nothing on that page that says it's "not for zfs storage", and given the many ways storage can be...
  18. Urgent: High cpu usage in Proxmox ve 4 with ZFS

    If you've configured ZFS storage, there will be no format--you won't be able to choose raw/vmdk/qcow2. The system will create zvols for your VMs. For the bus choice, the wiki (http://pve.proxmox.com/wiki/Installation#Virtual_Machines_.28KVM.29) says to use virtio. In the thread I started the...
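
    What that looks like in the resulting VM config (an illustrative excerpt; the VM ID, storage name, and size are assumptions):

        # /etc/pve/qemu-server/100.conf (excerpt)
        virtio0: local-zfs:vm-100-disk-1,size=32G
        # Note there is no format= option: the disk is a zvol, not a file
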
  19. Proxmox pfsense best Practice?

    I'm working on something similar, though I have three NICs, and I'll be using a different software package as the router. Here's what I understand: When you install Proxmox, you'll configure the network, which will be on eth0/vmbr0. Your additional network card will be unconfigured. Once you...
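
    A sketch of what the extra bridge would look like in /etc/network/interfaces on PVE 4.x (the interface name is illustrative):

        auto vmbr1
        iface vmbr1 inet manual
                bridge_ports eth1
                bridge_stp off
                bridge_fd 0
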
  20. How to migrate VMS on ZFS file system or could i move them by creating cluster?

    pve-zsync looks like it should do what you're looking for: http://pve.proxmox.com/wiki/PVE-zsync