As LnxBil says, that's not a good benchmark, especially if you're using ZFS compression: writing zeros to a compressed ZFS dataset yields misleadingly good numbers, because the zeros compress to almost nothing and you mostly end up measuring the compressor rather than the disks.
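The effect is easy to demonstrate without ZFS at all; here's a quick sketch using gzip as a stand-in compressor (file names and sizes are arbitrary):

```shell
# Zeros compress to almost nothing, so writing them to compressed storage
# measures the compressor, not the disks. Random data shows the contrast.
dd if=/dev/zero    of=/tmp/zeros.bin  bs=1M count=64 2>/dev/null
dd if=/dev/urandom of=/tmp/random.bin bs=1M count=64 2>/dev/null
gzip -kf /tmp/zeros.bin /tmp/random.bin
# zeros.bin.gz ends up a few tens of KB; random.bin.gz stays close to 64 MB
ls -l /tmp/zeros.bin.gz /tmp/random.bin.gz
```

For a realistic benchmark of compressed storage, write incompressible data instead (fio, for example, has options like buffer_compress_percentage to control this), or run the test against a dataset with compression disabled.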
I ran into the same thing with an AlphaSSL wildcard certificate. As stated in this post https://forum.proxmox.com/threads/certificate-startup-problems.25815/#post-129349, adding AlphaSSL's intermediate cert to the end of the file /etc/pve/pve-root-ca.pem resolved this problem for me.
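The underlying failure mode (a missing intermediate certificate) can be reproduced with a throwaway chain; everything below (names, files, the three-tier root → intermediate → server layout) is made up for illustration, not taken from the post:

```shell
# Build a disposable chain: root CA -> intermediate CA -> server cert.
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.pem \
  -subj "/CN=Demo Root CA" -days 1
printf 'basicConstraints=CA:TRUE\n' > ca.ext
openssl req -newkey rsa:2048 -nodes -keyout int.key -out int.csr \
  -subj "/CN=Demo Intermediate CA"
openssl x509 -req -in int.csr -CA root.pem -CAkey root.key -CAcreateserial \
  -extfile ca.ext -out int.pem -days 1
openssl req -newkey rsa:2048 -nodes -keyout srv.key -out srv.csr \
  -subj "/CN=pve.example.com"
openssl x509 -req -in srv.csr -CA int.pem -CAkey int.key -CAcreateserial \
  -out srv.pem -days 1
# Verification fails while the intermediate is absent from the trust file...
openssl verify -CAfile root.pem srv.pem || echo "chain broken without intermediate"
# ...and succeeds once the intermediate is appended, which is the same fix
# as appending AlphaSSL's intermediate to /etc/pve/pve-root-ca.pem above.
cat int.pem >> root.pem
openssl verify -CAfile root.pem srv.pem
```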
I ran into this as well with 4.2.8-1-pve. I use an mdraid1 boot device with ZFS storage for the VMs, with no LVM in my case. Rebooting into the 4.2.6-1 kernel worked for me as well.
Yes. I have a Supermicro X10DRW-i with the LSI 3108 add-on module, and Proxmox 3.x and the 4.0 betas work fine with it, recognizing attached disks both in JBOD mode and when configured as RAID arrays on the controller. It's supported by the megaraid_sas module.
You probably want to use vmbr0 instead of eth0 in your NAT rule. I prefer SNAT in these situations, so you could alternatively try this:
post-up iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j SNAT --to-source 178.251.229.125
post-down iptables -t nat -D POSTROUTING -s...
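In context, a symmetric pair in /etc/network/interfaces would look something like this (the SNAT addresses are copied from the rule above; the bridge stanza details such as the netmask are illustrative and should match your own setup):

```
auto vmbr0
iface vmbr0 inet static
        address 178.251.229.125
        netmask 255.255.255.0
        # install the SNAT rule when the bridge comes up,
        # and remove the same rule on the way down
        post-up   iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j SNAT --to-source 178.251.229.125
        post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j SNAT --to-source 178.251.229.125
```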
Take a look at the docs for lxc.mount.entry. The config files for LXC containers are in /etc/pve/lxc/$CTID/config. You can add a line like:
lxc.mount.entry = /some/dir/on/host container/mountpoint none bind 0 0
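As a concrete sketch, assuming a container with CTID 101 and a host directory /srv/shared (both hypothetical), the entry in the container's config could look like:

```
# /etc/pve/lxc/101/config  (CTID 101 is an example)
# Bind /srv/shared on the host to /mnt/shared inside the container.
# Note the target path is relative to the container rootfs (no leading
# slash); create=dir asks LXC to create the mountpoint if it's missing.
lxc.mount.entry = /srv/shared mnt/shared none bind,create=dir 0 0
```

Restart the container for the entry to take effect.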