Figured it out (easy to test - just run the script!). For a delay of 10 seconds, I added the following line to /etc/pve/nodes/<node>/config:
startall-onboot-delay: 10
I was trying to figure out how to delay the startup of VMs & containers and happened upon this little script (used by the pve-guests.service):
/usr/share/pve-manager/helpers/pve-startall-delay
#!/usr/bin/perl
use strict;
use warnings;
use PVE::INotify;
use PVE::NodeConfig;
my $local_node =...
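For anyone who prefers not to edit the node config file directly: I believe the same node option can be set through the pvenode CLI, something like the below (treat the exact flag as an assumption - check `pvenode config set --help` on your version first):

# Sketch, assuming startall-onboot-delay is exposed as a node config option:
pvenode config set --startall-onboot-delay 10
# Confirm the current node config:
pvenode config get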
Also interested in what @Lioh uncovers, as we have a similar situation at work with users sourced from Active Directory, and most day-to-day operations can be achieved via non-root users. Root via local ssh is always a fallback for any advanced configuration changes or diagnostics.
Seems like...
According to the APC manual, apcupsd is looking for nodes in /dev/usb/hiddev*. So, you may want to try passing that through instead of /dev/bus/usb/xxx/yyy. I finally got this working for my APC Back-UPS XS 1500G after doing the following:
On the host:
chown root:100000 /dev/usb/hiddev0
chmod...
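For reference, the container-side passthrough pattern I used looks roughly like this in /etc/pve/lxc/<ctid>.conf. The device major number 180 is what /dev/usb/hiddev0 showed on my system (check yours with `ls -l /dev/usb/hiddev0`), so treat it as an assumption:

# Allow the container to access USB hiddev character devices (major 180 here)
lxc.cgroup2.devices.allow: c 180:* rwm
# Bind-mount the host device node into the container
lxc.mount.entry: /dev/usb/hiddev0 dev/usb/hiddev0 none bind,optional,create=file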
Nevermind, I found an example of how to determine VM/LXC type in API2/BackupInfo.pm:
my $vmlist = PVE::Cluster::get_vmlist();
my $type = $vmlist->{ids}->{$vmid}->{type};
my $node = $vmlist->{ids}->{$vmid}->{node};
my $conf;
my $name = "";
if ($type eq 'qemu') {
$conf =...
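Putting it together, a fuller sketch of the dispatch looks like the below. `load_guest_conf` is a hypothetical helper name of mine; `PVE::QemuConfig->load_config` and `PVE::LXC::Config->load_config` are the class methods I've seen used in the PVE source, but verify them against your installed version:

use PVE::Cluster;
use PVE::QemuConfig;
use PVE::LXC::Config;

# Look up the guest type for $vmid in the cluster-wide VM list,
# then load the config via the matching config class.
sub load_guest_conf {
    my ($vmid) = @_;
    my $vmlist = PVE::Cluster::get_vmlist();
    my $type = $vmlist->{ids}->{$vmid}->{type}
        or die "unknown vmid $vmid\n";
    if ($type eq 'qemu') {
        return PVE::QemuConfig->load_config($vmid);
    } elsif ($type eq 'lxc') {
        return PVE::LXC::Config->load_config($vmid);
    }
    die "unsupported guest type '$type'\n";
}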
Thanks for that, works great! How would I adapt the script to handle both LXC and VMs? I'd like to use this script for both. I see the equivalent perl module under /usr/share/perl5/PVE/LXC/Config.pm, but not sure how to check which one to use based on $vmid.
I have a VM that has a snippet configured with pre-start and post-stop actions. The VM is currently stopped. When my backup job runs (which includes all VMs & containers), the pre-start action in the snippet is executed. Is this intentional? How do I distinguish between pre-start due to...
Did you edit /etc/subuid and /etc/subgid on the host to include the mapped user id you want to use? By default, only user ids 100000-165535 can be mapped. I think this is what the wiki indicates to do, but it wasn't clear to me at first. I thought it was saying to edit subuid & subgid...
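For reference, this is the pattern I ended up with for mapping host uid/gid 1000 into the container (adjust the ids to your own; this follows the standard wiki example):

# /etc/subuid and /etc/subgid on the host - allow root to map id 1000
root:100000:65536
root:1000:1

# /etc/pve/lxc/<ctid>.conf - map 0-999 high, 1000 to itself, 1001+ high again
lxc.idmap: u 0 100000 1000
lxc.idmap: g 0 100000 1000
lxc.idmap: u 1000 1000 1
lxc.idmap: g 1000 1000 1
lxc.idmap: u 1001 101001 64535
lxc.idmap: g 1001 101001 64535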
Good to know there are others doing PBS in a container. I arrived at the conclusion that I needed to do this after banging my head against how to get better throughput on a host-guest shared folder. 9p, NFS, and SMB all proved vastly inferior to bare-metal and LXC bind-mount performance.
I would hope that verifying multiple snapshots that have few incremental changes in succession would result in only marginal increases in verification time (we are verifying the chunks themselves, no?). If this is true, then it seems it would be better to run verifications in batches rather...
Thanks, this is the approach I took with my own 100% full backup drive. Seems to work just fine, although I'm still running the verify job after garbage collecting & moving chunks back.
No, I'm not using an encrypted ZFS dataset. I only have the same issue with regard to not starting up the container after reboot, with the same error message, "Permission denied - Failed to create '/dev' directory". I think the issue I had could also present itself with encrypted datasets.