Hi everyone,
So I've had the following setup working for quite some time.
1.) ZFS pool on the host, shared via NFS to a QEMU VM.
2.) QEMU VM mounts the NFS share, reverse-encrypts the file system using encfs, and exports it via NFS to an Ubuntu Server LXC container (a rough sketch of the commands involved follows this list).
3.) LXC container runs cloud backup software, which syncs the encfs-encrypted directory structure to the offsite backup.
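For context, the encfs reverse mount and NFS export on the VM are set up roughly like this; the paths and export options below are placeholders rather than my exact configuration:
Code:
# on the QEMU VM: present an encrypted view of the plaintext NFS mount
# (allow_other lets the kernel NFS server read the FUSE mount)
$ encfs --reverse -o allow_other /mnt/plain /mnt/encrypted-view

# /etc/exports on the VM: export the encrypted view to the container
# (an fsid is usually needed when exporting a FUSE filesystem)
/mnt/encrypted-view  container-ip(rw,sync,no_subtree_check,fsid=2)

# re-export
$ exportfs -ra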
My reason for this setup was that I didn't trust the security of the offsite backup. (My philosophy: never trust anything if you aren't the sole holder of the key.) So I encrypt everything before presenting it to the third-party backup software.
This has worked fine for almost a year, until my recent periodic update cycle, when I ran all the updates in my LXC containers and VMs and on the host, then rebooted.
Today my offsite backup notification emailed me to say that my file selection for backup had dropped from a large size to 0 B.
I started with the LXC container in #3, and confirmed that the encrypted folders were no longer mounted.
So this tells me one of three things happened (some quick checks to separate these cases are sketched after this list):
1.) My updates inside my LXC container did something to break its ability to mount NFS shares;
2.) My updates inside my QEMU VM did something to break its ability to share the NFS folders; or
3.) My updates on the host somehow changed the VM or LXC configurations to stop this from working.
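The kind of checks that should separate these three cases look roughly like this (the IP and export path are masked placeholders):
Code:
# from inside the LXC container: is the VM still advertising the export?
$ showmount -e xxx.xxx.xxx.xxx

# on the QEMU VM: is the encfs view still mounted, and is it still exported?
$ mount | grep encfs
$ exportfs -v

# on the host: any AppArmor denials mentioning the container or mount.nfs?
$ dmesg | grep -i apparmor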
From inside the LXC container, trying to mount the share manually using the mount point defined in /etc/fstab gives me the following error:
Code:
$ mount /home/encrypted/data
mount.nfs: access denied by server while mounting xxx.xxx.xxx.xxx:/exports/data
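For reference, the corresponding /etc/fstab line in the container is along these lines (IP masked; the options mirror the flags shown in the dmesg output below):
Code:
xxx.xxx.xxx.xxx:/exports/data  /home/encrypted/data  nfs  rw,nosuid,nodev,noexec  0  0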
If I grep for the IP address in dmesg, I find the following:
Code:
$ dmesg |grep -i xxx.xxx.xxx.xxx
[437505.126241] audit: type=1400 audit(1483663289.018:180): apparmor="DENIED" operation="mount" info="failed type match" error=-13 profile="lxc-container-default-cgns" name="/home/encrypted/data" pid=11172 comm="mount.nfs" fstype="nfs" srcname="xxx.xxx.xxx.xxx:/exports/data" flags="rw, nosuid, nodev, noexec"
So this seems to give me conflicting data: the mount command reports that the server is rejecting the connection, but dmesg tells me that it is the LXC AppArmor configuration that is blocking the mount...
Can anyone suggest what might have changed in my recent host/VM/container updates that broke this previously working configuration?
My guess right now is that when I first set this up, I changed something in the host's AppArmor configuration to allow these shares to mount inside LXC, and that the recent host updates overwrote that config file, leading to the current situation. The problem is, I can't remember what I did the first time around.
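My best (unverified) recollection is that the original fix was an extra AppArmor profile on the host that explicitly allows NFS mounts inside the container, roughly along these lines; the profile name is just what I would have used, and the config key is lxc.aa_profile on older LXC releases or lxc.apparmor.profile on newer ones:
Code:
# /etc/apparmor.d/lxc/lxc-default-with-nfs  (new profile on the host)
profile lxc-container-default-with-nfs flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>
  mount fstype=nfs,
  mount fstype=nfs4,
  mount fstype=rpc_pipefs,
}

# reload the LXC AppArmor profiles on the host
$ sudo apparmor_parser -r /etc/apparmor.d/lxc-containers

# in the container's LXC config, then restart the container
lxc.aa_profile = lxc-container-default-with-nfs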
I'd appreciate any suggestions that might help me fix this.
Thanks,
Matt