Container won't create

sienar

Well-Known Member
Jul 19, 2017
Hi all, I'm new to PVE and the forums, so please go easy on me. I've searched for this issue here and on Google, and none of the solutions work for me, or maybe they aren't explained well enough. I'm having trouble creating containers on my test system. I installed PVE on a mirrored ZFS pool, and I can create containers there. I also created a separate ZFS pool with several disks in raidz2; we'll call that my main storage pool. I've created ZFS storage for it in the Datacenter Storage settings, I've created Directory storage on a ZFS dataset in that pool, and I've tried every setting suggested in every post I've found with similar errors, but nothing lets me create a container on the main storage pool. Can anyone point me in the right direction on a fix for this?
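For reference, here's roughly what the two storage definitions look like in /etc/pve/storage.cfg (a sketch from memory; the ZFS storage ID matches what I named it in the GUI, and the directory storage ID is approximate):

Code:
zfspool: ZFS_pxhost01_VMStorage
        pool red/pxhost01_VMStorage
        content images,rootdir

dir: pxhost01_CTStorage
        path /red/pxhost01_CTStorage
        content images,rootdir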

Below is a screenshot of the datacenter storage config and the output of the task that fails to create the container:

http://imgur.com/veozOJU

Code:
Task viewer: CT 103 - Create
Output
Formatting '/red/pxhost01_CTStorage/images/103/vm-103-disk-1.raw', fmt=raw size=8589934592
mke2fs 1.43.4 (31-Jan-2017)
Discarding device blocks:    4096/2097152               done                           
Creating filesystem with 2097152 4k blocks and 524288 inodes
Filesystem UUID: e9455c9d-fc84-44b8-9e4c-d4825033d556
Superblock backups stored on blocks:
    32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables:  0/64     done                           
Writing inode tables:  0/64     done                           
Creating journal (16384 blocks): done
Multiple mount protection is enabled with update interval 5 seconds.
Writing superblocks and filesystem accounting information:  0/64     
Warning, had trouble writing out superblocks.
TASK ERROR: command 'mkfs.ext4 -O mmp -E 'root_owner=0:0' /red/pxhost01_CTStorage/images/103/vm-103-disk-1.raw' failed: exit code 144
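In case it helps with debugging, the failing step can be re-run by hand outside the task worker (the mkfs command is copied from the task log above; the dd line is just a generic write sanity check, nothing PVE-specific):

Code:
# re-run the exact format command from the task log
mkfs.ext4 -O mmp -E 'root_owner=0:0' /red/pxhost01_CTStorage/images/103/vm-103-disk-1.raw
echo $?   # the task reported exit code 144
# sanity-check that plain synced writes to the dataset succeed
dd if=/dev/zero of=/red/pxhost01_CTStorage/testfile bs=1M count=100 conv=fsync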
 
Anybody have any advice, or are containers just fundamentally broken in PVE 5.0?
 
I think I mentioned in the post that I've tried both configurations for storing containers on the secondary ZFS pool. No matter what I try, I can't get a container to work on the secondary (large, non-boot) ZFS pool.
 
I have an LXC container on a secondary ZFS pool on a server running Proxmox 5, so the feature does work.

If you configure the pool as ZFS storage (not as a directory), what does the error log say?
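It would also help if you posted the output of these two commands so we can see exactly how your storages are defined (standard PVE commands):

Code:
pvesm status
cat /etc/pve/storage.cfg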
 
If you were able to open my screenshot (sorry for the awkward way of posting it; that's the only way a first-time poster can include an image, I guess): I have a storage location called ZFS_pxhost01_VMStorage, which is ZFS storage with Disk Image and Container content enabled. When I point the Create CT wizard at that location, the error I get is this:

Code:
Task viewer: CT 103 - Create
Output

cannot share 'red/pxhost01_VMStorage/subvol-103-disk-1': smb add share failed
TASK ERROR: zfs error: filesystem successfully created, but not shared

I will point out that the samba packages are not installed on my PVE 5.0 instance; they were not installed automatically when the system was set up. I was actually going to ask in a separate thread whether that is normal, and whether installing samba via apt-get is the correct way to get it running on PVE.
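For what it's worth, this is how I checked that samba is absent (plain Debian checks, nothing Proxmox-specific):

Code:
dpkg -l | grep -i samba   # returns nothing on this install
systemctl status smbd     # "Unit smbd.service could not be found."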
 
If it helps, here are the package versions that were installed by the installer:

Code:
proxmox-ve: 5.0-16 (running kernel: 4.10.17-1-pve)
pve-manager: 5.0-23 (running version: 5.0-23/af4267bf)
pve-kernel-4.10.15-1-pve: 4.10.15-15
pve-kernel-4.10.17-1-pve: 4.10.17-16
libpve-http-server-perl: 2.0-5
lvm2: 2.02.168-pve2
corosync: 2.4.2-pve3
libqb0: 1.0.1-1
pve-cluster: 5.0-12
qemu-server: 5.0-14
pve-firmware: 2.0-2
libpve-common-perl: 5.0-16
libpve-guest-common-perl: 2.0-11
libpve-access-control: 5.0-5
libpve-storage-perl: 5.0-12
pve-libspice-server1: 0.12.8-3
vncterm: 1.5-2
pve-docs: 5.0-9
pve-qemu-kvm: 2.9.0-2
pve-container: 2.0-14
pve-firewall: 3.0-2
pve-ha-manager: 2.0-2
ksm-control-daemon: 1.2-2
glusterfs-client: 3.8.8-1
lxc-pve: 2.0.8-3
lxcfs: 2.0.7-pve2
criu: 2.11.1-1~bpo90
novnc-pve: 0.6-4
smartmontools: 6.5+svn4324-1
zfsutils-linux: 0.6.5.9-pve16~bpo90
 
Using LXC in Proxmox is tricky overall: the storage where you create your root FS has to be a ZFS storage, and not, as I was using, a Directory located on a ZFS store.

But the really bad thing is that the official Proxmox Debian 9 template is broken and will no longer start if you update it to the current release 9.1 -->
unsupported debian version '9.1'
 

Sounds like you have the "sharesmb" property set somewhere on that pool, and it gets inherited by newly created datasets. This is not a Proxmox error, but a misconfiguration.
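You can check where it is set and clear it along these lines (pool and dataset names taken from your log; the exact commands are a sketch):

Code:
# show sharesmb on the pool and every dataset below it;
# the SOURCE column tells you whether the value is local or inherited
zfs get -r -o name,value,source sharesmb red
# turn it off wherever SOURCE says "local"
zfs set sharesmb=off red
# or drop a local override so the dataset inherits the default again
zfs inherit sharesmb red/pxhost01_VMStorage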
 

That is correct. I actually ended up figuring that out on my own. Given that it worked on the boot mirror, I figured there had to be something wrong with the dataset I was trying to use, so I created a new one, and containers worked there. When I compared the properties of the two datasets, the only difference was the sharesmb setting. Once that was set back to off, containers could be created on the original dataset as well.
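For anyone who finds this later, comparing the locally-set properties of the two datasets is what exposed it. Roughly (the second dataset name is just a placeholder for the new one I created):

Code:
# diff only the locally-set properties of the two datasets
diff <(zfs get -H -s local -o property,value all red/pxhost01_VMStorage) \
     <(zfs get -H -s local -o property,value all red/new_dataset)
# the fix
zfs set sharesmb=off red/pxhost01_VMStorage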

Thanks all for the input and assistance!
 
