I can create containers now. I had installed Proxmox from the installation GUI with ZFS RAID 1 on two disks, and I didn't do anything post-install to the disk layout. I just didn't realise I had to manually create a ZFS pool for containers.
Yep, doing that now. Creating... done. Lack of sleep and not understanding the new disk system got me into this corner. Gentle suggestion: some kind of reminder in the GUI that containers can't be created on the default local storage if ZFS was used for installation might be good.
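For anyone else landing here, this is roughly what "doing that" looks like on the host. The dataset name rpool/data and the storage ID data1 are just illustrative; your pool name may differ. This is a sketch, not the official procedure:

```shell
# Create a child dataset so container disks don't live on the root filesystem.
zfs create rpool/data

# Register it with Proxmox as a ZFS storage that allows container root disks.
pvesm add zfspool data1 --pool rpool/data --content rootdir
```

After that, the new storage should be selectable in the CT-create dialog.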
thanks again...
root@tesla:~# pveversion
pve-manager/4.0-48/0d8559d0 (running kernel: 4.2.2-1-pve)
I'm following this thread after I tried to create a container and got exactly the same result as in this thread:
https://forum.proxmox.com/threads/23884-Can-t-create-LXC-container-on-ZFS-local
which led me to here...
Thanks for your patience. Here's the storage.cfg from my original post:
dir: local
        path /var/lib/vz
        content images,iso,rootdir,vztmpl
        maxfiles 0

zfspool: data1
        pool rpool
        content rootdir,images
I added and removed containers, which seems to toggle...
Note this is all a stock install, right out of the box; these are the first actions I've taken since install:
- create a CT as per normal under 3.4 etc. - watch it fail
- find out I need to create ZFS storage for it, try to add it, can't choose anything but local, reboot
- try again with all combos...
I also tried under /zpool/ directly, the only thing I hadn't tried. Doesn't work, with thin provisioning on or off, with "Disk Images" deselected leaving only "Container", and after rebooting. Any help appreciated.
And I destroyed the ZFS storage because it was on /rpool/ROOT/pve-1, and put it on /rpool/ROOT, rebooted; still can't choose it. I forgot to remove "Disk Images" from the content to leave only "Container", so I removed that, rebooted again, and STILL can't choose ZFS from the CT-create dialog. Looking around...
Didn't work for me; I still can't choose anything but local for containers. In fact, I went back and made sure ONLY "Container" (not "Containers", as Proxmox support wrote here) was allowed in the content list for the ZFS storage, rebooted again, and still can't select it.
Of course. I just didn't want to have to modify them, and wanted to use a trusted image from Proxmox, with its reputation behind it, instead ;) I've modified it slightly for my uses. Thanks.
True enough, sorry, but I saw many posts here pointing to such images for 64-bit use - are there any created by Proxmox that are suitable for use as 64-bit containers? Is there really such a huge loss of RAM under 64-bit that they're not supplied?
Would be great to have them in the templates...
The Debian 7.0 minimal template here: http://wiki.openvz.org/Download/template/precreated
i.e. http://download.openvz.org/template/precreated/debian-7.0-x86_64-minimal.tar.gz
is actually Debian 7.6 per /etc/debian_version
(so why not name it as such?)
Same deal with the 'full' image...
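You can check what a template actually contains without extracting the whole thing. A quick sketch, using the filename linked above:

```shell
# Print a single file from inside the template tarball to stdout (-O),
# without unpacking it: here, the Debian release recorded in the image.
tar -xzOf debian-7.0-x86_64-minimal.tar.gz etc/debian_version
```

Note the member path may or may not have a leading "./" depending on how the tarball was built; `tar -tzf` will show the exact paths.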
Old thread, sorry, but this is the first I've seen that discusses what I need - and this is relevant.
You have to manually edit the fstab on each container to mount the shared mountpoint? I actually don't want it shared; I wanted to split out an SSD as a separate mount on each container - which...
Hmm, true enough. How about a bind mount to another dir for each image, then? The bind mount would only be for the external viewpoint; the dirs in private/### would be left as-is (the VZ ID #). Any issues with that?
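For what it's worth, the bind-mount idea might look like this on the host. The CT ID 116, the name CustomerX, and the /srv/ct-by-name directory are all invented for illustration:

```shell
# Expose a container's private dir under a human-readable name (root required).
mkdir -p /srv/ct-by-name/CustomerX
mount --bind /var/lib/vz/private/116 /srv/ct-by-name/CustomerX

# To survive reboots, an /etc/fstab line along these lines:
#   /var/lib/vz/private/116  /srv/ct-by-name/CustomerX  none  bind  0  0
```

The numeric path stays untouched, so Proxmox tooling keeps working; only the alias is added.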
I previously posted about renaming instances from numbers (100, 101, etc.) to names. Got no reply :/
One lesser solution is to rename the root trees for the CTs and the images of VMs. So I've done mv 100 name; ln -s name 100 in /var/lib/vz/private; I think that'll work.
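The rename-plus-symlink workaround above, sketched with throwaway paths (a temp dir stands in for /var/lib/vz/private, and "CustomerX" is an invented name):

```shell
cd "$(mktemp -d)"     # stand-in for /var/lib/vz/private
mkdir 100             # the container's numeric directory
mv 100 CustomerX      # give it a human-readable name
ln -s CustomerX 100   # the numeric path Proxmox expects still resolves
readlink 100          # prints: CustomerX
```

Whether Proxmox tools follow the symlink in every code path is untested here; worth trying on a disposable CT first.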
What'd be nice is if in...
Would be even better to not use ID numbers at all. When working on the box, looking at config files or dirs, instead of "116" I could see it's "CustomerX" and know what I'm dealing with.
Is there any special reason they're numbers instead of words/tags? Is there any way they could be alphanumeric tags...