You didn't post any of your NAT settings. First, check that all routing works before NAT is applied, including the traffic you say isn't working. Of course, check that ip_forwarding is enabled. And check traffic that should NOT be NATted, only forwarded.
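For reference, a minimal sketch of the forwarding setting mentioned above (a standard Linux sysctl; the file location is the usual convention, not taken from the post):

```
# /etc/sysctl.d/99-forwarding.conf
net.ipv4.ip_forward = 1
```

The running value can be checked with `cat /proc/sys/net/ipv4/ip_forward`, which should print 1 before any NAT rules are debugged.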
I have exactly the same case as you, and I get the same newuidmap error. I think /etc/subuid and /etc/subgid must need more than the root:33:1 line, which is all I have. Can you (or anybody else) confirm whether this is wrong and post the missing contents?
Thanks in advance.
No luck with that method.
I've tried something different. On the host I assigned user 100000 and group 102120 to the shared dir and set permissions accordingly. It was necessary to add both full uid and gid mappings in the config file, like this:
lxc.idmap: u 0 100000 65536...
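The mapping line above is truncated; the usual full pair for an unprivileged container maps both uids and gids over the same range (the 65536-wide range starting at 100000 is the common Proxmox default, my assumption here):

```
# container config, full-range sketch
lxc.idmap: u 0 100000 65536
lxc.idmap: g 0 100000 65536
```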
Ok. I think using 1005 is the same as using 2120, so I've done this:
# grep lxc 120.conf
lxc.idmap: u 2120 2120 1
lxc.idmap: g 2120 2120 1
# cat subuid
root:2120:1
# cat subgid
root:2120:1
But now I get this error:
# lxc-start -F -n 120
lxc-start: 120: conf.c: chown_mapped_root: 2902...
Surely there's some basic concept I'm misunderstanding, but I can't figure out what I'm doing wrong. In my specific case, I just need to preserve host gid 2120 as the same gid in the container. Do I need to remap *all* uids and gids? What is mandatory in /etc/subuid and /etc/subgid, and...
Yes, I noticed that difference about replacing "=" with ":"; I guess the Proxmox software itself makes that change automatically. I've tried rewriting with "=" but I still get the same result.
Also, I'd rather not have to do anything about uid remapping, since I don't find it necessary...
Thanks for your answer. Here is what I've tried:
I reserved a gid on the host (2120) for CTID 120 and chgrp'ed the folder. Then I set this in the config file:
lxc.idmap: u 0 100000 65535
lxc.idmap: g 0 100000 2120
lxc.idmap: g 2120 2120 1
lxc.idmap: g 2121 102121 63415...
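For mappings like these to be accepted, /etc/subuid and /etc/subgid have to delegate every range that lxc.idmap uses to root. A sketch of entries consistent with the config above (my assumption, not taken from the posts):

```
# /etc/subuid
root:100000:65536

# /etc/subgid
root:100000:65536
root:2120:1
```

Note that the three gid counts (2120 + 1 + 63415) sum to 65536, so the container still sees a contiguous 0-65535 gid range with only gid 2120 passed through unmapped.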
Hi. I read this and there's something I'm unable to understand. In an unprivileged container, I have a bind mount. On the host side I don't care about the uid/gid (it's a local btrfs filesystem); I just want the root user in the container to be able to manage the shared tree for storing...
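For context, a bind mount in a Proxmox container config typically looks like this (the paths are placeholders, not from the post):

```
# /etc/pve/lxc/120.conf
mp0: /srv/shared,mp=/mnt/shared
```

With the default unprivileged mapping, container root appears on the host as uid/gid 100000, so chown'ing the host tree to 100000:100000 is one common way to let container root manage it without any custom idmap.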
Ok, but there's at least one problem: I have to back up that data (to another encrypted disk, of course) and it's several hundred GB, so I don't think a VM provides a real working solution in this scenario.
And there also has to be a really simple solution for the current permissions issue...
I'm aware I can do that at the host level as root (or with sudo), but this is for a user who accesses the guest via Proxmox, and the goal is to prevent others from accessing the data if the VM isn't "properly booted". The reason why is beyond the scope here, but suppose someone steals the server from the...
Thanks for your answer.
It looks like it MUST be a Perl script, because a bash script returns error code 1. When I run it from the terminal it prompts for a password, but this doesn't happen on the console, which is what I need. I took the Perl example (I'm not good at Perl) and customized it...
Hi. I'm using Proxmox 6.1 with Debian buster as the host (and as guest, while that's still possible). I use several containers, and in one of them I need an encrypted mount point (it could even be the whole container). So I use an lvmthin pool for just the system and a non-thin volume for the data, and this is what...
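A hedged sketch of how such an encrypted data volume might be wired up on the host with LUKS (the LV, mapper, and mount point names are placeholders, not taken from the post):

```
# /etc/crypttab  -- prompts for the passphrase at boot
ctdata  /dev/pve/ctdata-crypt  none  luks

# /etc/fstab
/dev/mapper/ctdata  /srv/ctdata  ext4  defaults  0  2
```

The opened mount point can then be bind-mounted into the container, so the data is only readable while the host volume is unlocked.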
My two cents: I don't think there are any issues between major versions, at least for Debian. I say this because as long as a template exists (and there's one for 9.0), upgrading to 9.x shouldn't be a problem. Also, as long as there isn't any news about any kind of...
I've been playing for a while with the command line "pct restore ..." on a test container that has a root disk and just one mount point on different storages, and I haven't been able to restore the way I wanted, onto a specific storage different from local storage. A command line "pct restore...
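For reference, `pct restore` accepts per-volume storage overrides on the command line, same as `pct create`; a sketch (the CTID, backup filename, storage names, and sizes are placeholders):

```
pct restore 120 /var/lib/vz/dump/vzdump-lxc-120.tar.lzo \
    --rootfs local-lvm:8 \
    --mp0 otherstorage:100,mp=/data
```

Passing explicit `--rootfs`/`--mpN` options is one way to place each volume on a specific storage instead of letting everything default to a single one.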
Well, I didn't test the command line, only the Proxmox UI. If you mean there's a simple way to get a restore that takes the original storages into account, then it sounds like a feature/bug in the UI, which disallows that behavior and forces everything onto a specific...
Thanks for your reply. Anyway, given that the original mount point info is kept, I think it's perfectly possible to provide a way to "respect" those settings. I guess the current behaviour parses the settings to find the mount points and restores them all, but this parsing "forgets" to look at the...
Hi. I use Proxmox 4.4-13/7ea56165 (updated yesterday) on Debian Jessie. I have the pve VG defined as LVM storage and a thin pool storage for the root disks of containers. I have some mount points with big data separated from the root disk, all of them on pve. I chose to back up some of these...