Hi everyone,
So I've had the following setup working for quite some time.
1.) ZFS pool on the host, shared via NFS to a QEMU VM.
2.) The QEMU VM mounts the NFS share, reverse-encrypts the file system using encfs, and exports the encrypted view via NFS to an Ubuntu Server LXC container (rough sketch below).
3.) LXC container runs cloud backup...
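Roughly, the moving parts of steps 1 and 2 look like this (a sketch; the hostname, paths, and subnet are just placeholders):
# On the VM: mount the host's plaintext NFS export
mount -t nfs host:/tank/data /mnt/plain
# Create a reverse-encrypted (ciphertext) view of it with encfs
encfs --reverse /mnt/plain /mnt/cipher
# Re-export the encrypted view to the backup container, e.g. in /etc/exports
# (FUSE mounts need an explicit fsid, and may need FUSE's allow_other option so nfsd can read them):
# /mnt/cipher  10.0.0.0/24(ro,no_subtree_check,fsid=1)
exportfs -ra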
Hmm. After doing these steps, restarting the OpenVPN service fails.
# service openvpn start
[FAIL] Starting virtual private network daemon: server failed!
I wanted to look in /var/log/openvpn for the log files to see what went wrong, but the folder is empty.
I have tried rebooting (the...
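For reference, these are the sorts of commands I'd expect to surface the error (a sketch; the exact service and log names may differ on this template):
service openvpn status
# If /var/log/openvpn is empty, errors usually land in the general syslog
grep -i openvpn /var/log/syslog | tail -n 20
# On systemd-based containers the journal is the place to look:
# journalctl -u openvpn --no-pager | tail -n 20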
Much appreciated. I will try these now.
I'm curious though, since the tar.gz packages on turnkey linux are specifically for LXC use, is there a reason they don't create these on their own?
--Matt
If I may ask, how did you solve this? Was net.ipv4.ip_forward=1 in sysctl the only thing?
I currently have a turnkey linux template-based LXC container with openvpn on it, and it is exhibiting the same issues, not accepting connections.
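For anyone else following along, enabling that would look roughly like this (a sketch; I'm not certain it's the whole fix):
sysctl -w net.ipv4.ip_forward=1                     # enable immediately
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf    # make it persistent
sysctl -p                                           # reload sysctl.conf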
Much appreciated,
Matt
Hey all,
Can anyone help me troubleshoot this? I downloaded the turnkey linux openvpn template from the PVE web interface and installed it into a new LXC container.
I believe I set it up as a host correctly using the first-time configuration in the console, and my port forward rule for port...
Hey all,
I'm having a little bit of difficulty getting something to work the way I need it to.
I have two containers. Let's call them LXC1 and LXC2.
LXC1 runs software that creates files I need to access from LXC2.
LXC1 writes these files to a folder on the host.
The problem is that the...
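For context, what I have in mind is a plain bind mount of that host folder into both containers, something like this (container IDs and paths are just placeholders):
# On the PVE host: bind-mount the shared host folder into both containers
pct set 101 -mp0 /tank/shared,mp=/mnt/shared
pct set 102 -mp0 /tank/shared,mp=/mnt/shared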
Thanks for your help, some follow-up questions if you don't mind:
Do you know how to check this? I was looking for an "nfs -v" or similar command, but can't find one. If I look through my mounts, they only tell me the major NFS revision of the mount (2, 3, or 4), not the point release.
Understood...
Hey all,
I often use my workstation to transfer large amounts of data to and from my Proxmox server, and anything I can do to speed it up would greatly help me.
My workstation runs Linux Mint 18, with two Intel NICs bonded in 802.3ad mode to my ProCurve switch.
My Proxmox server has the...
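For reference, the workstation bond is set up along these lines (ifupdown syntax; interface names and addresses are placeholders):
# requires the ifenslave package
auto bond0
iface bond0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1
    bond-slaves enp3s0 enp4s0
    bond-mode 802.3ad
    bond-miimon 100
    bond-xmit-hash-policy layer3+4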
For Linux-based systems like Proxmox, I wouldn't even bother with the software any of the UPS makers provide, and instead jump straight to either NUT or apcupsd.
None of these companies supports Linux well, but I find that either NUT or apcupsd usually does the job.
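As a concrete example, a bare-bones apcupsd setup on a Debian-based box is roughly this (assuming a USB-connected APC unit):
apt-get install apcupsd
# In /etc/apcupsd/apcupsd.conf, the lines that matter for a USB unit:
#   UPSCABLE usb
#   UPSTYPE usb
#   DEVICE
# (older Debian releases also want ISCONFIGURED=yes in /etc/default/apcupsd)
service apcupsd restart
apcaccess status    # confirm the daemon can talk to the UPS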
So, yes, I just tested, and this is totally reproducible with the latest Enterprise packages in 4.2 and an Ubuntu 14.04 LXC container.
Steps to reproduce:
1.) Create an LXC container with the following lines in the config file:
lxc.network.type = phys
lxc.network.link = *an available host ethernet...
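i.e. a network stanza along these lines (eth1 is just a placeholder for whichever host NIC is spare):
lxc.network.type = phys
lxc.network.link = eth1
lxc.network.name = eth0
lxc.network.flags = up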
Yeah, the problem is not that I don't understand why the container won't start.
Let me clarify.
The container started just fine the first time and used the physical interface I configured as expected.
It failed to start on subsequent attempts because, once restarted, the physical NIC I told it to use...
Thanks for the help.
If anyone else is trying to solve the same problem, here is how I wound up doing it (from the console on the host):
Check the current limit:
cat /proc/sys/fs/inotify/max_user_watches
For me this was 8192
Temporarily test if increasing the value fixes things: (this will...
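For reference, a sketch of both the temporary and the persistent change (the value here is just the one I ended up using):
# Temporary (resets on reboot):
sysctl -w fs.inotify.max_user_watches=1048576
# Persistent:
echo "fs.inotify.max_user_watches=1048576" >> /etc/sysctl.conf
sysctl -p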
My apologies.
When I first posted both of these I saw them as different topics, one where I was chiming in specifically regarding phys interface mode, and the other where I was looking for more specific guidance on promiscuous mode and LXC containers. Re-reading my posts I realize they were a...
Hey all, I am trying to run ntopng in an LXC container, but no matter how I try
My setup
router -> Switch -> everything else.
Using the switch, I have mirrored the port connected to the router to another port. I have connected an Ethernet cable to this port and connected it to a dedicated...
Ditto regarding this.
I've been trying to run ntopng in a container (fed from a switch port mirrored from my WAN port) for the longest time, and I have not been able to get the physical NIC -> VMBR bridge -> LXC container chain to all be properly promiscuous, no matter what I do.
I read in...
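For reference, this is the kind of thing I keep trying on the host and inside the container (interface and bridge names are just examples):
# On the host: put the mirror-port NIC and its bridge into promiscuous mode
ip link set eth2 promisc on
ip link set vmbr2 promisc on
# Stop the bridge from learning MACs so it floods the mirrored traffic to its ports
brctl setageing vmbr2 0
# Inside the container, the interface handed to ntopng needs it too
ip link set eth1 promisc on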
Ahh, I think that is the cause.
I have CrashPlan running in an LXC container, and it seems to want to use A LOT of watches.
I'm probably going to boost it up to 1048576, and see if that helps. RAM is really not an issue for me, I have 192GB in my server. I upgraded it from 96GB before I...