There seems to be a problem with pg 1.0, and with my understanding of placement groups, pools, and OSDs.
Yesterday, I removed osd.0 in an attempt to get the contents of pg 1.0 moved to another OSD. But today it had been stuck inactive for 24 hours, so all my attempt achieved was resetting the inactive state...
2.5 years later, we are working on the same problem. At the moment, we have fallen back to auditing the host system and correlating audit messages to their respective containers. Auditing inside a container might be possible by granting CAP_AUDIT_READ or CAP_AUDIT_CONTROL to the container, thus giving them...
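For the host-side approach, a rough sketch of what I mean (the rootfs path, the VMID 101, and the key name are only examples):
auditctl -w /var/lib/lxc/101/rootfs/etc -p wa -k ct101-etc    # watch writes/attribute changes below that container's /etc
ausearch -k ct101-etc                                         # show only the records tagged for that container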
Oh, well. For what it's worth: the problem disappeared after rebooting. Maybe just restarting the lxc service would have sufficed, but I used the opportunity to update to 6.4 and then to 7.0. Mostly without problems (only a Debian 8 container requires cgroups v1 and didn't work after first...
I found out I had to change the config in /etc/pve/lxc/<vmid>.conf instead of /var/lib/lxc/<vmid>/config.
But while doing so, I seem to have broken something. First, I couldn't start my test container because LXC couldn't acquire access to my chosen idmap:
lxc.idmap = u 0 200000 65536
lxc.idmap = g 0...
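For reference, the full mapping I was aiming at looks roughly like this (a sketch; the g line mirroring the u line is an assumption, and the missing /etc/subuid and /etc/subgid delegation is the usual reason LXC cannot acquire a custom idmap):
In /etc/pve/lxc/<vmid>.conf:
lxc.idmap = u 0 200000 65536
lxc.idmap = g 0 200000 65536
In /etc/subuid and /etc/subgid on the PVE host:
root:200000:65536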
Hi,
for better auditing of what's happening in our containers, I'd like to change their idmaps to distinct values. Besides adjusting their filesystems, is there anything else I'd need to consider when doing so? Will it break PVE and result in a flaming pile of hardware?
Bests,
Masin
Hi, while checking my backup emails I noticed that the issued command varies the order of its arguments, although no changes were made to the backup job:
vzdump 500 --mailnotification always --mailto <e-mail recipient> --mode snapshot --node appserver3 --storage backupserver1 --quiet 1 --compress...
I scheduled a backup of most of our VMs and containers on Saturday at 00:15. I just read the result mail:
masin@SI-C001:/Users/masin/Projekte % grep -e "avgerage" -e "transferred" vzdump.log
100: 2021-01-16 00:16:42 INFO: root.pxar: had to upload 171.86 MiB of 3.36 GiB in 99.82s, avgerage speed...
More information:
PVE version 6.2-10 (yeah, I know, but I'd have to shut down all VMs to update to 6.3)
Backup Server version 1.0-6
The PBS host was originally a Debian Buster installation that I extended with your PBS repos. I followed the installation guide. The storage is an encrypted LVM on hardware RAID-5, IIRC. Because...
PBS is configured to use a local volume. PVE, of course, is configured to back up to PBS.
I have to add that vzdump is just as slow even when backing up locally.
For two days now we have been backing up to a Proxmox Backup Server. Setting it up really is a breeze, so kudos, Proxmox Team!
But when we received our first backup report e-mail (for an LXC container), we noticed that the average speed was only 27 MB/s. That's not really fast.
Of course, I first checked...
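For anyone else debugging this: the PBS client ships a benchmark command that reports TLS and compression throughput independent of the actual backup, which might help narrow things down (the repository string below is just a placeholder):
proxmox-backup-client benchmark --repository root@pam@pbs.example.com:datastore1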
As giving UID 100000 write permissions is the same problem I'm facing at the moment, I might add some thoughts here.
The easiest and by far the most insecure option would be to give 'others' rwx permissions, as in mode '777': chmod o+rwx dump/
Better would be to use filesystem ACLs and allow only UID 100000 to...
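For example (a sketch; dump/ is the directory from the chmod example above and 100000 is the unprivileged container root UID):
setfacl -m u:100000:rwx dump/        # grant only UID 100000 access
setfacl -d -m u:100000:rwx dump/     # default ACL so new entries inherit it
getfacl dump/                        # verify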
Exactly my problem at the moment! Great!
In this regard: do all LXC containers run under the same UID? That is, is it sufficient to add only one UID to the ACL?
Edit: I see ACL support in NFS seems unreliable.
How to configure ZFS over iSCSI using LIO and targetcli:
Requirements:
Initiator/client needs SSH access to target/server
ZFS pool and filesystem on target/server
As far as I understand it, PVE uses SSH access as a control channel to create ZFS datasets and share them as iSCSI LUNs.
First...
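To illustrate the SSH requirement from the list above: as far as I can tell from the docs, PVE expects the key under /etc/pve/priv/zfs/<portal-IP>_id_rsa, so the control channel can be prepared roughly like this (192.0.2.10 stands in for the target's portal IP):
mkdir -p /etc/pve/priv/zfs
ssh-keygen -f /etc/pve/priv/zfs/192.0.2.10_id_rsa
ssh-copy-id -i /etc/pve/priv/zfs/192.0.2.10_id_rsa.pub root@192.0.2.10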
Still working on it. As far as I understand, the documentation is lacking in this regard. It's late now, but I'll put some stuff together for other poor admins. Tomorrow.
Now that I have a working iSCSI target and PVE storage, I think I overcomplicated this. If I am right, I should simply have exported the LV I used as a PV for the nested VG; then PVE could manage the VG in there on its own.
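If that's right, exporting the LV directly with targetcli would look roughly like this (a sketch; the VG/LV names and both IQNs are placeholders):
targetcli /backstores/block create name=pve-lun dev=/dev/vg_storage/lv_pve
targetcli /iscsi create iqn.2021-02.com.example:pve-target
targetcli /iscsi/iqn.2021-02.com.example:pve-target/tpg1/luns create /backstores/block/pve-lun
targetcli /iscsi/iqn.2021-02.com.example:pve-target/tpg1/acls create iqn.1993-08.org.debian:01:pve-node1
targetcli saveconfig
PVE should then see that LUN as a plain iSCSI disk and could put its own VG on top of it.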
Hi,
After working all day on this matter and resorting to the forum for a solution, I made progress while writing this forum post. I guess it's still valuable for people searching for solutions to the problems I had :). That's why I moved the remaining relevant paragraphs to the top...
If this substitution were configurable, the respective admin would be responsible for it. In our case, we have been using spaces in AD group names for years. No application has had any problems with that until now. You might understand why I'm hesitant to change our naming and favor a...