Worked perfectly. If I had looked a bit closer before posting I would have found it there as well.
https://pve.proxmox.com/wiki/Convert_OpenVZ_to_LXC
Also, when I do not assign a container any IP info at all, I assume it is simply bridged? When I do that, I can get IPv6 to work but cannot get IPv4 to work. This is on CentOS 7 again.
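(For what it's worth, a container interface with no IP settings is still attached to the bridge, but the guest then needs its own IPv4 config. A minimal static sketch for CentOS 7 — the addresses below are placeholders, substitute your own:)

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 -- placeholder addresses
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.0.2.10
PREFIX=24
GATEWAY=192.0.2.1
```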
I think I did it that way. Now I have something like this after creating a 200GB CentOS 7 container. I also created /backups on a 2TB third drive in this machine. Does this look right?
Datacenter - Storage
backups : Directory : VZDump backup file : /backups
local : Directory : ISO Image...
Under datacenter storage I created "ID" "zfs-storage" and selected "ZFS Pool" "rpool". I then disabled container and disk_image on local. Is this the correct way to do this? Is there a better way?
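That sounds right to me; for comparison, the resulting /etc/pve/storage.cfg on the node should look roughly like this (a sketch only — the exact keys vary by Proxmox version):

```shell
# Sketch of /etc/pve/storage.cfg after those changes
zfspool: zfs-storage
        pool rpool
        content images,rootdir

dir: local
        path /var/lib/vz
        content iso,vztmpl
```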
Any progress with this?
It seems that when I created a new container with both IPv4 and IPv6, the IPv6 address did not appear in ifconfig. After editing the network settings in Proxmox and then putting them back, it showed up in ifconfig. Although it shows up, I cannot ping past the IPv6 gateway. Doing service network...
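In case it helps to narrow this down, confirming from inside the container that the address and a default route actually exist is a reasonable first step before pinging the gateway (standard iproute2 commands, nothing Proxmox-specific):

```shell
# Inside the container: confirm the v6 address on eth0 and a default route
ip -6 addr show
ip -6 route show
```

If both look right but the gateway still doesn't answer, the problem is more likely on the bridge or upstream than in the container.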
So did you get it working? Step-by-step directions for setting up Proxmox 4 with ZFS and RAID, along with adding LXC containers, would be great. Perhaps a YouTube video from ISO install to first container creation?
So on the server I have:
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool  3.62T   859M  3.62T         -     0%     0%  1.00x  ONLINE  -
# cat /etc/fstab
# <file system> <mount point> <type> <options> <dump> <pass>
/dev/zvol/rpool/swap none swap...
Is there a how-to on getting started with ZFS on Proxmox 4? A YouTube video?
I installed Proxmox 4 on a couple of SATA drives with ZFS RAID1. Downloaded the CentOS 7 container template to local. I then tried to create a CentOS 7 container and got this error.
Warning, had trouble writing out...
Say I have a server with four 4TB drives. I want to use one for backups and the other three in a RAID1 array for my containers. How would I set that up on Proxmox with ZFS? Totally new to ZFS, but I have used Linux software RAID before.
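Not an expert, but roughly: with ZFS you would build either a three-way mirror (RAID1-like, ~4TB usable, any two drives can fail) or a raidz1 (RAID5-like, ~8TB usable, one drive can fail) from the three data drives, and use the fourth as a plain directory for backups. A sketch only — the device names and pool name here are assumptions, check yours with lsblk:

```shell
# Option 1: three data drives as a ZFS three-way mirror (~4TB usable)
zpool create tank mirror /dev/sdb /dev/sdc /dev/sdd

# Option 2: the same drives as raidz1 (~8TB usable, one-drive redundancy)
# zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

# Fourth drive as a plain ext4 directory for vzdump backups
mkfs.ext4 /dev/sde
mkdir -p /backups
mount /dev/sde /backups
```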
I installed Proxmox 4 on a 120GB SSD. I then created a 4TB software raid 1 partition on 3 enterprise sata drives as /dev/md0 and formatted with ext4.
I then altered fstab.
I commented out
#/dev/pve/data /var/lib/vz ext3 defaults 0 1
Added
/dev/md0 /var/lib/vz ext4 defaults 0 0
Should...
I see this.
ls -la /var/lib/vz/images/100
-rw-r----- 1 root root 214748364800 Oct 20 09:45 vm-100-disk-1.raw
Does that mean that when I create a 200GB container it will take up 200GB of disk space?
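Probably not: on a directory store the raw image is normally created sparse, so ls shows the full apparent size while the actual allocation only grows as the container writes data (compare ls -l with du). A quick generic demonstration of the difference, not Proxmox-specific:

```shell
# Create a sparse 1 GB file: apparent size 1 GB, almost nothing allocated
truncate -s 1G sparse.raw
ls -l sparse.raw    # apparent size: 1073741824 bytes
du -k sparse.raw    # actual on-disk usage: close to 0 KB
rm sparse.raw
```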
Strange that when I look at the backup it looks like this.
ls -ls /backups/dump
163936 -rw-r--r-- 1 root...
I am assuming that is the *.raw file under images? In the past with OpenVZ you could oversubscribe disk space. If I set a container to 2TB but it only used 80GB on disk, I would have that space to use elsewhere while that container did not use it. Is that no longer the case? That was nice...
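If the containers sit on ZFS-backed storage rather than raw files, the thin-provisioning behaviour you describe still exists: a dataset only consumes what is actually written, and a quota caps its apparent size. A sketch with an illustrative dataset name (Proxmox's own naming scheme may differ):

```shell
# Illustrative dataset name -- thin by default, capped by quota
zfs create rpool/data/subvol-100-disk-1
zfs set quota=2T rpool/data/subvol-100-disk-1
zfs get used,quota rpool/data/subvol-100-disk-1
```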
I am installing DirectAdmin on an LXC container. How do I ensure there are enough inodes available? As I recall, on OpenVZ you could specify this. Also, how can you specify the number of users a container is allowed to have? I believe under OpenVZ this was called UGID?
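As far as I know there is no per-container inode knob in LXC the way OpenVZ had (beancounters); the container gets whatever inode table mkfs created inside its image, so the practical check is df -i from inside the container before installing DirectAdmin:

```shell
# Inode totals, used and free for the container's root filesystem
df -i /
```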
I created a CentOS 7 LXC container. I assigned IPv4 and IPv6 space to it. IPv6 did not work. Looking at the container, it added the IPv4 address to /etc/sysconfig/network-scripts/ifcfg-eth0 and added the IPv6 address to /etc/sysconfig/network-scripts/ifcfg-eth0:0. Moving the IPv6 address to...
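For reference, on CentOS 7 the IPv6 settings normally belong in ifcfg-eth0 itself via the IPV6* keys, not in an :0 alias file (aliases are an IPv4-era mechanism and the v6 keys are ignored there). A sketch of the lines that would move, with placeholder addresses:

```shell
# Appended to /etc/sysconfig/network-scripts/ifcfg-eth0 (placeholder addresses)
IPV6INIT=yes
IPV6ADDR=2001:db8::10/64
IPV6_DEFAULTGW=2001:db8::1
```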