....maybe this is similar to your problem? http://forum.proxmox.com/threads/10575-PROBLEM-Two-USB-keys-(same-brand-model)-trying-to-be-forwarded-to-two-different-VM?p=59145#post59145
...and follow that thread to the end for newer PVE...
You are dead wrong, sorry.
Their goal is a business and this means growth.
Even if you think that buying a license means insurance for future development, you're dead wrong.
All the "Big Five" are scouring the world with vast amounts of cash, buying up competitors (and their products) or...
iSCSI will be the fastest option, although with NFS you should also be able to saturate a GBit link.
Take a look into napp-it for a web-based management GUI -> http://www.napp-it.org/index_en.html
..what about using a ZFS based NAS/Filer with iSCSI?
Use ZFS-Volumes on the NAS and export these as targets via iSCSI.
ZFS will give you the ability to snapshot the targets this way.
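To make the zvol-plus-iSCSI idea concrete, here is a hedged sketch. Pool name, volume name and size are made-up examples, and the COMSTAR commands at the end assume an illumos/OmniOS filer (the platform napp-it targets); check your own filer's docs for the exact target setup.

```shell
# Create a 100G ZFS volume (zvol) to back one VM disk
# (pool "tank" and the size are example values)
zfs create -V 100G tank/vm-disk1

# Snapshot the zvol at any time, e.g. before an upgrade inside the VM
zfs snapshot tank/vm-disk1@pre-upgrade

# On an illumos/OmniOS filer, export the zvol via COMSTAR as an iSCSI LU:
sbdadm create-lu /dev/zvol/rdsk/tank/vm-disk1
stmfadm add-view <GUID-printed-by-sbdadm>   # placeholder: use the LU GUID from the previous command
itadm create-target
```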
By the looks of it, the PVE installer does not see the boot device as a CD-ROM.
I cannot comment on using this particular PXE Installer/Server environment,
but you could always try an alternate install based on Debian Wheezy -> http://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Wheezy
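For reference, the Wheezy route boils down to something like the following sketch. Package names and the kernel version are examples from the PVE 3.x era and may have moved on; follow the wiki page linked above for the authoritative steps.

```shell
# Add the Proxmox repository and its key (PVE 3.x on Wheezy)
echo "deb http://download.proxmox.com/debian wheezy pve" \
    > /etc/apt/sources.list.d/pve.list
wget -O- "http://download.proxmox.com/debian/key.asc" | apt-key add -
apt-get update

# Install the PVE kernel first, reboot into it...
apt-get install pve-firmware pve-kernel-2.6.32-26-pve

# ...then the PVE packages themselves
apt-get install proxmox-ve-2.6.32 ntp ssh lvm2 postfix ksm-control-daemon vzprocps
```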
If you can...
...I *think* I've seen another topic around that issue floating around.
The fix - in simple words - is to set startup and shutdown sequences for each component in the correct order.
...see: http://forum.proxmox.com/threads/15387-proxmox-ve-zfs-does-not-mount-on-boot?p=80005#post80005
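A hedged sketch of what "correct order" can mean on Wheezy's sysvinit: make sure the ZFS pools are imported and mounted before the PVE daemons start. The script names below are assumptions; check what your zfsonlinux packages actually installed.

```shell
# Find the init scripts the ZoL packages installed
grep -l 'Provides:.*zfs' /etc/init.d/*

# In /etc/init.d/pvedaemon (and pve-manager, pvestatd), extend the LSB
# header so PVE waits for the ZFS mounts, e.g.:
#   # Required-Start: ... zfs-mount
# then rebuild the boot ordering:
insserv -v
```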
...cannot comment on this, as I am not using PVE+ZFS in a production environment and/or with heavy load.
You might have a good point for VZ Containers and ZFS not being supported.
I also like ZFS and found it very reliable, even on its Linux implementation.
But I can follow your arguments...
AFAIK it will use the RAM as cache...all the RAM there is...but I've never seen it cause the system to swap, and I've never seen ZFS prevent another application from starting with a "not enough memory" error.
Well, with ZFS the only thing that works better than RAM is ...even more RAM...
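If you do want to keep the ARC from eating all the RAM (e.g. to leave headroom for VMs), ZFS on Linux lets you cap it via the `zfs_arc_max` module parameter. The 4 GiB value below is just an example:

```shell
# Cap the ARC at 4 GiB (value is in bytes; pick what fits your host)
cat > /etc/modprobe.d/zfs.conf <<'EOF'
options zfs zfs_arc_max=4294967296
EOF
# Takes effect after reloading the zfs module or rebooting.
```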
+1 for this option.
As you already know your way around ZFS and tuning it for VM storage, this sounds appropriate, doesn't it?
I'd start with the Debian Wheezy install of Proxmox and - before adding PVE - install ZFS-on-Linux and apply that with LVM to the storage array as required by PVE.
..answering my own post..
- If not on a subscription, disable the enterprise repos and enable the no-subscription repos, as described in the wiki -> http://pve.proxmox.com/wiki/Package_repositories#Proxmox_VE_No-Subscription_Repository
then do:
# apt-get update
# apt-get dist-upgrade
(this should get you...
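The repo switch from the wiki, sketched as commands (PVE 3.x on Wheezy; the file names match the default PVE layout but verify on your box):

```shell
# Disable the enterprise repo...
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list

# ...enable the no-subscription repo...
echo "deb http://download.proxmox.com/debian wheezy pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list

# ...and upgrade
apt-get update && apt-get dist-upgrade
```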
Well, I did not check versions, but the OP stated that this was a fresh and clean install of PVE 3.1, and that the kernel version in use was reported as this one.
However, if the repo is left out of the list intentionally...what is the right procedure to get access to the matching kernel headers...
-> instructions as per wiki did not work? -> http://pve.proxmox.com/wiki/ZFS
Edit: bummer!...what a coincidence..looks like you're not the only one... -> http://forum.proxmox.com/threads/16326-pve-headers-install-problem
Edit2: check your /etc/apt/sources.list ..the pve repos might be missing
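In case it helps, the headers package has to match the running PVE kernel exactly; a quick sketch:

```shell
# Check which PVE kernel is running...
uname -r                               # e.g. 2.6.32-26-pve

# ...and install the matching headers package
apt-get install pve-headers-$(uname -r)
```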
...this is not a real big deal.
Based on your first post, I've gathered that you only want to employ a single disk.
If you're happy with /boot *not* being on ZFS, go the Debian way and use ZFS for all filesystems on LVM, including root and storage.
...if you only have that *one* and single disk, you could try and setup LVM on that disk first.
You may need to install Proxmox via the Debian install method ...not sure whether the Wheezy installer nowadays supports booting from LVM, though.
second option is to employ a small USB stick for boot
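A hedged sketch of the single-disk layout: a small ext /boot partition plus one LVM PV for everything else. The device name `sdX`, the volume group name `pve` and all sizes below are placeholders:

```shell
# Assumed partitioning (done beforehand with fdisk/parted):
#   /dev/sdX1  ->  ~512 MB, ext3/ext4, mounted on /boot
#   /dev/sdX2  ->  LVM physical volume

pvcreate /dev/sdX2
vgcreate pve /dev/sdX2
lvcreate -L 20G -n root pve            # root filesystem
lvcreate -L 4G  -n swap pve            # swap
lvcreate -l 100%FREE -n data pve       # VM storage
```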
disable the card's option ROM completely (enter its setup at host boot) or flash the card without the ROM file.
I don't know if there are such options in the VM BIOS at all, like with VMware's products.
not in that combination, but in general it should work.
...are you sure that you have a VT-d capable system (CPU, mainboard and BIOS settings)?
If so, it should work....maybe try relocating the M1015 to another slot that does not sit behind a PCIe bridge.
As for the USB errors...disable the USB...
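Before blaming the card, a few quick checks that VT-d is actually active on the host (a sketch; the exact dmesg wording varies by kernel):

```shell
# CPU reports hardware virtualization at all?
grep -E 'vmx|svm' /proc/cpuinfo >/dev/null && echo "CPU virt: ok"

# Kernel actually enabled the IOMMU?
dmesg | grep -e DMAR -e IOMMU

# Boot parameter present? (intel_iommu=on for Intel systems)
cat /proc/cmdline
```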
...my 2 cents:
With a real file server (virtualized or not) you want a RAID-like setup for your disks....otherwise you possibly just need a file-sharing solution: install Samba on the host and you're done.
Using virtual filesystems to tunnel all that to the VM is IMHO not a good...
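To illustrate the "Samba on the host" route: a minimal share definition, appended to /etc/samba/smb.conf. Share name, path and user are made-up examples:

```ini
[share]
   path = /srv/share
   read only = no
   guest ok = no
   valid users = alice
```

Reload Samba afterwards (e.g. `service samba reload` on Wheezy) and the share is reachable from the network.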