Can't create LXC container on ZFS local

gkovacs

Active Member
Dec 22, 2008
Budapest, Hungary
This is my first round of testing LXC in PVE 4. Had to stop kind of early on...

- Installed PVE 4 on ZFS RAID10 (4 disks)
- Updated to latest packages + reboot
- Downloaded Debian 8 LXC template
- Trying to create CT on local storage

Code:
Formatting '/var/lib/vz/images/100/vm-100-disk-2.raw', fmt=raw size=10737418240
Discarding device blocks:    4096/2621440               done                            
Creating filesystem with 2621440 4k blocks and 655360 inodes
Filesystem UUID: 13b2c657-77b9-43d6-b1a9-6f4ca55f47dd
Superblock backups stored on blocks: 
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables:  0/80     done                            
Writing inode tables:  0/80     done                            
mke2fs 1.42.12 (29-Aug-2014)
Creating journal (32768 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information:  0/80     
Warning, had trouble writing out superblocks.
TASK ERROR: command 'mkfs.ext4 -O mmp /var/lib/vz/images/100/vm-100-disk-2.raw' failed: exit code 144
The RAW disks get created under /var/lib/vz, but the container creation process stops with Error: unexpected status.

Apart from solving the above problem, a more graceful way of handling errors during the container creation process would probably benefit users.
 

gkovacs

Ok, so after some forum reading I found out that I need to create a new storage (type ZFS) for containers, with its root set to rpool (otherwise it won't work). This is far from user-friendly: there are no warnings during installation or container creation, and people expect simple functions (like Create CT) to work out of the box.

I suggest that the installer create a storage called "containers" automatically if installation was on ZFS, and that the container creator either select it by default, or at least show a warning or error (red outline) on the container storage field when it is set to local.
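For reference, the kind of storage described above can also be defined directly in /etc/pve/storage.cfg instead of through the GUI. A minimal sketch (the storage name "zfs-containers" is just an example, not a name from this setup):

Code:
zfspool: zfs-containers
        pool rpool
        content rootdir

This is the same entry the web interface creates under Datacenter > Storage > Add > ZFS.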
 

tom

Proxmox Staff Member
Aug 29, 2006
Okay, so storing containers on ZFS local storage does not work.
Hopefully restoring OpenVZ backups to LXC containers stored on newly created ZFS storage will.
Just use the ZFS plugin for LXC containers, it works great.
 

sigxcpu

Member
May 4, 2012
Bucharest, Romania
I can attest to that. I always used "dir" on ZFS for containers before 4.0. Now that I have them separate, each on its own dataset, it's beautiful. Actually, that made me migrate back from SmartOS :)
 

gkovacs

Just use the ZFS plugin for LXC containers, it works great.
I guess you mean the storage plugin. So if I understand correctly, I have to restore the LXC containers with the "--storage" option pointing to the newly created ZFS-type storage if I want them to work.

So let's say my ZFS type storage is called "zfs-containers", then I would restore my OpenVZ backups to LXC with the following command:
Code:
pct restore 100 vzdump-openvz-100-2015_10_12-11_22_33.tar --storage zfs-containers
You might want to add this to the wiki at https://pve.proxmox.com/wiki/Convert_OpenVZ_to_LXC
 

MikeP

Member
Feb 14, 2016
Ok, so after some forum reading I found out that I need to create a new storage (type ZFS) for containers, with its root set to rpool (otherwise it won't work). This is far from user-friendly: there are no warnings during installation or container creation, and people expect simple functions (like Create CT) to work out of the box.
And, how do you do that? -- create a new storage, type zfs, with root set to rpool.
I've got a zfs mirror SSD set for boot, but another ZFS mirror that I want to put basically everything on. I get the error and can't make a container.
Thanks in advance.
 

fabian

Proxmox Staff Member
Jan 7, 2016
PVE's LXC storage plugin: it is what is used when you create a new storage of type ZFS in the web interface (and subsequently used in pct/qm commands, LXC and VM config files, etc.).
 

MikeP

PVE's LXC storage plugin: it is what is used when you create a new storage of type ZFS in the web interface (and subsequently used in pct/qm commands, LXC and VM config files, etc.).
Oh, I see. I had just considered that part of the system, rather than something I needed to add as a plugin, the way Firefox and Chrome plugins must be downloaded and installed separately.
 

MikeP

I've got a zfs mirror SSD set for boot, but another ZFS HDD mirror that I want to put basically everything on. I get the error "failed: exit code 144" and can't make a container or a VM disk image, "failed: exit code 1".
Thanks in advance.

Code:
root@pve2:/# zpool list
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
rpool    111G  1.11G   110G         -     0%     0%  1.00x  ONLINE  -
z3tred  2.72T   190M  2.72T         -     0%     0%  1.00x  ONLINE  -

root@pve2:/# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
rpool                      15.9G  91.7G    96K  /rpool
rpool/ROOT                 1.10G  91.7G    96K  /rpool/ROOT
rpool/ROOT/pve-1           1.10G  91.7G  1.10G  /
rpool/swap                 14.7G   106G    64K  -
z3tred                      190M  2.63T   104K  /z3tred
z3tred/backups               96K  2.63T    96K  /z3tred/backups
z3tred/containerstorage      96K  2.63T    96K  /z3tred/containerstorage
z3tred/containertemplates   189M  2.63T   189M  /z3tred/containertemplates
z3tred/diskimages            96K  2.63T    96K  /z3tred/diskimages
z3tred/iso                   96K  2.63T    96K  /z3tred/iso

root@pve2:/# pvesm status
local                dir  1    97287552     1157248    96130304  1.69%
sambadisk1           dir  1  2930081788  2505639992   424441796 86.01%
zbackups             dir  1  2827814784           0  2827814784  0.50%
zcontainerstorage    dir  1  2827814784           0  2827814784  0.50%
zcontainertemplates  dir  1  2828008576      193792  2827814784  0.51%
zdiskimages          dir  1  2827814784           0  2827814784  0.50%
ziso                 dir  1  2827814784           0  2827814784  0.50%
 

fabian

Proxmox Staff Member
Staff member
Jan 7, 2016
3,399
528
113
You need to add your ZFS pools as "zfspool" storage, not as "dir" storage in PVE. PVE will then automatically create subvolumes for snapshots, container and VM images, etc.
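For comparison, this is roughly how the two kinds of entries look in /etc/pve/storage.cfg (a sketch using names from the listings above). A "dir" entry only points at a mount point, so PVE writes raw files there; a "zfspool" entry references the pool itself, so PVE can create a dataset per guest:

Code:
dir: zcontainerstorage
        path /z3tred/containerstorage
        content rootdir

zfspool: z3tred
        pool z3tred
        content rootdir,images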
 

MikeP

Member
Feb 14, 2016
47
4
8
49
You need to add your ZFS pools as "zfspool" storage, not as "dir" storage in PVE. PVE will then automatically create subvolumes for snapshots, container and VM images, etc.
Ok, Thanks.
In Datacenter, Storage, Add, ZFS, the only pool shown is /rpool and its subs. My other ZFS pool didn't appear.

I tried this and got an error:
Code:
root@pve2:~# pvesm add zfs -type zfspool -pool z3tred
400 not enough arguments

root@pve2:~# pvesm add zfs z3tered -type zfspool --pool z3tred
missing value for required option 'blocksize'

Finally, I got to this command, which seems to be working:
Code:
root@pve2:~# pvesm add zfspool z3tred -pool z3tred -content images,rootdir -sparse
 
