Hey,
no, Kickstart is not possible. But you can take a look at [1]; it should be what you are looking for.
[1] https://pve.proxmox.com/wiki/Automated_Installation
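If it helps, a rough sketch of the workflow described there (the ISO filename and answer.toml are just placeholders here; the wiki lists the supported answer-file keys): you write an answer file that plays roughly the role a kickstart file does, validate it, and bake it into the installer ISO.
proxmox-auto-install-assistant validate-answer answer.toml
proxmox-auto-install-assistant prepare-iso proxmox-ve.iso --fetch-from iso --answer-file answer.toml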
Hey,
your data is fine. Migration doesn't move the data; it stays in the same place(s) and is simply accessed from somewhere else after the migration. If Ceph has its own network, you shouldn't really notice anything beyond what you currently see performance-wise.
Do you have physical access to the server? If yes, could you log in on the console and post the output of cat /etc/network/interfaces and ip a? Can you ping your router and other devices on the network from there? And what is the output of systemctl status pveproxy?
Inside .chunks/? The <DATASTORE>/.chunks/ directory is part of a datastore, and if the creation task finished successfully, it was created. Could you post the log of the task from when you created the datastore?
Hey,
why, and how, did you create .chunks manually? It is created and set up correctly on datastore creation. Is .chunks filled with directories? If yes, make sure everything is owned by the backup user. You can check that with ls -la .chunks.
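If the ownership turns out to be wrong, something along these lines should fix it; PBS expects the datastore contents to be owned by the backup user and group (the path here is just an example, adjust it to your datastore):
chown -R backup:backup /path/to/datastore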
Maybe, but you probably want the PARTUUID, not the UUID, since it is the partition you would be mounting.
But if you'll only need it in the VM, don't mount it on the host at all.
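Just as an illustration (device name, PARTUUID value and mount point are made up here), you can look the value up with blkid and reference it in fstab like this:
blkid /dev/sdb1
# in /etc/fstab:
PARTUUID=0a1b2c3d-01  /mnt/data  ext4  defaults  0  2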
Hey,
could you check with blkid whether the UUID you have set in the fstab file matches the one of the drive? Generally, you can either have the drive mounted on the host or have it assigned to the VM, not both.
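To compare the two, something like this is usually enough (the device name is just an example):
blkid /dev/sdb1
grep UUID /etc/fstab
If the values don't match, the fstab entry won't mount.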
Hey,
you can pass the device (/dev/ppp) through to the container ("Resources" > "Add" > "Device Passthrough") and set the owner and permissions it should have within the container. Is there a reason this is not an option for you?
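For reference, the CLI equivalent would look roughly like this on a recent PVE version, assuming container ID 101 and that the device should be owned by root inside the container (adjust uid/gid to your needs):
pct set 101 --dev0 /dev/ppp,uid=0,gid=0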
Hey,
for what you're trying to do, [1] should give you a good overview.
[1] https://pve.proxmox.com/wiki/Unprivileged_LXC_containers#Using_local_directory_bind_mount_points
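As a minimal sketch of what that boils down to (container ID and paths are just examples; the uid/gid mapping details for unprivileged containers are covered in [1]):
pct set 101 -mp0 /srv/host-data,mp=/mnt/data
This bind-mounts /srv/host-data on the host to /mnt/data inside the container; for an unprivileged container you'll additionally need the ownership/idmap adjustments described in the wiki article.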
The thing is, you have to tell PBS that it is not supposed to use the disk anymore; just pulling the disk out from under its feet can lead to problems. In your specific case it is only a file handle for the lock file that is kept open, but you'd have the same problem if a backup were running.
Hey,
this is likely due to PBS keeping the .lock file open. You can check with lsof /<DATASTORE>/.lock. If your installed version is at least 3.1.5-1, putting the datastore into "Offline" maintenance mode should clear this up. After setting it, you can check with lsof again.
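A sketch of the same check and switch done from the shell, with <DATASTORE> standing in for your datastore path/name:
lsof /<DATASTORE>/.lock
proxmox-backup-manager datastore update <DATASTORE> --maintenance-mode offline
lsof /<DATASTORE>/.lock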
Yes, that's possible with
proxmox-backup-debug api create admin/datastore/<store>/verify --ignore-verified=1 --outdated-after=30
where <store> still has to be adjusted accordingly. --ignore-verified=1 and --outdated-after=30 are just examples; the available options are listed at [1].
Jobs, however, are...
Hey,
you can log in directly on the console with root as the user and the password you've set up. After doing so, could you post the output of ip a and cat /etc/network/interfaces? Can you ping your router?
Hey,
you can just create a new Linux Bridge and attach it to all the VMs you want to connect; the PVE host doesn't need an IP on it. Then statically assign IP addresses to the VMs on the same network. The bridge also doesn't need any physical ports if you only want traffic between the VMs.
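For illustration, such a port-less bridge would look roughly like this in /etc/network/interfaces (vmbr1 is just an example name; creating a Linux Bridge in the node's Network panel with the ports and IP fields left empty gives the same result):
auto vmbr1
iface vmbr1 inet manual
        bridge-ports none
        bridge-stp off
        bridge-fd 0
Then select vmbr1 as the bridge for each VM's network device and give the guests static addresses from the same subnet.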