Yes. That's the only resource type you cannot simply over-commit and be fine.
The only way to have 16 GiB inside the VM while the host only needs to supply far less is to use a swap file inside the VM. Of course this is the slowest approach...
Hi, this depends on your DNS interface. In my case I just added a TXT record with the name *SelectorName*._domainkey.*Domain*. and copy/pasted the value beginning with "v=DKIM1;h=sha256;" into the DNS value, and everything worked right away.
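In BIND zone-file syntax that record would look roughly like this (selector, domain, and the key material are placeholders, not the real values):

```
; hypothetical selector "mail" for example.com
mail._domainkey.example.com. IN TXT "v=DKIM1;h=sha256;k=rsa;p=MIIB..."
```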
So...
You can’t live-migrate between AMD and Intel hosts unless you use a common CPU type.
According to the QEMU documentation, the only truly common CPU types are qemu64 and qemu32.
The x86-64-v2-AES CPU type you are pointing at will use more modern CPU instructions than...
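For a mixed cluster you would set that common type per VM; in the VM config (/etc/pve/qemu-server/&lt;vmid&gt;.conf) it is just one line, or the equivalent `qm set <vmid> --cpu qemu64` on the CLI:

```
cpu: qemu64
```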
I have a user with Audit permissions on both sides.
I don't use that on my test machine here, so I cannot tell.
Yeah, I've seen that list and use most of them in other languages.
Great!
I can run it, but I get zero output. Probably wrong permissions for my token on the PBS side. We'll see...
Should it work with the backups being in a sub-namespace? The root-namespace is empty...
I hacked something together that will just display the data. You'd need to write a bit around it to get an actual check:
#!/usr/bin/env python3
from proxmoxer import ProxmoxAPI
import time
from datetime import datetime, timedelta
pve = ProxmoxAPI(...
Great, it worked. The -f parameter was the key. After that I mounted it as a storage via Datacenter and everything is fine. Many thanks to you all.
Regards,
Christoph
So there are several approaches to check for backups on a specific node or a specific PBS. That's great, and @Impact's one-liner is a great start.
What I would really like to see is a cluster-wide check, telling me which VM is not backed up...
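As a rough sketch of such a cluster-wide check (untested; the host, user, and token names are placeholders, and it assumes an API token with audit rights plus the proxmoxer library already used above):

```python
#!/usr/bin/env python3
# Sketch of a cluster-wide "which VM has no backup" check via the Proxmox API.
# Connection details below are placeholders -- adjust to your cluster.

def vms_without_backup(all_vmids, backed_up_vmids):
    """Pure helper: VMIDs present in the cluster but absent from any backup."""
    return sorted(set(all_vmids) - set(backed_up_vmids))

def main():
    from proxmoxer import ProxmoxAPI  # pip install proxmoxer

    pve = ProxmoxAPI("pve.example.com", user="audit@pam",
                     token_name="check", token_value="...", verify_ssl=False)

    # every guest in the cluster, regardless of node
    vmids = [r["vmid"] for r in pve.cluster.resources.get(type="vm")]

    # collect VMIDs that show up in any backup-capable storage
    seen = set()
    for node in pve.nodes.get():
        name = node["node"]
        for st in pve.nodes(name).storage.get(content="backup"):
            for vol in pve.nodes(name).storage(st["storage"]).content.get(content="backup"):
                if "vmid" in vol:
                    seen.add(vol["vmid"])

    for vmid in vms_without_backup(vmids, seen):
        print(f"VM {vmid} has no backup")
```

The set difference in `vms_without_backup` is the whole check; everything else is just gathering the two lists from the API.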
The scheduler is running? Like so:
~# systemctl status pvescheduler
● pvescheduler.service - Proxmox VE scheduler
Loaded: loaded (/lib/systemd/system/pvescheduler.service; enabled; preset: enabled)
Active: active (running) since Fri...
This wish isn't new; there is already a quite long debate:
https://forum.proxmox.com/threads/docker-support-in-proxmox.27474
tl;dr: Use a VM for application containers like Podman or Docker, together with a management interface of your choice...
While native Docker support in the Proxmox web UI would be convenient for some use cases, Proxmox VE is designed for system-level virtualization rather than the application-level containerization Docker is typically used for. Running Docker...
If you have the ballooning device enabled and the guest reports detailed memory info back, then the memory shown as used should match very closely what the guest itself shows as used.
Splitting it up further could be done, but will always be...
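For reference, ballooning is controlled by two lines in the VM config (the values here are just examples): `memory` is the maximum, `balloon` the minimum the guest can be shrunk to.

```
memory: 4096
balloon: 2048
```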
Maybe this helps: https://pve.proxmox.com/wiki/PVE-zsync
It lets you specify an arbitrary destination pool/dataset:
root@zfs1:~# pve-zsync sync --source 100 --dest 192.168.1.2:tank/backup
Disclaimer: not tested (by me)...
Yes, of course :)
Thanks, I didn't know that article yet.
But for testing that surely doesn't matter; otherwise ZFS-on-ZFS is of course always a bad idea.
The standard method then looks like this: you create partitions of the same size on each of the four disks and have a ZFS pool built from those. A ZFS special device is a must-have for HDDs, otherwise you won't have any fun with...
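As a sketch (device names are placeholders, not tested here), creating such a pool with a mirrored special device could look like:

```
# raidz1 over the four equally sized HDD partitions (placeholder names),
# with metadata on a mirrored pair of SSD/NVMe partitions
zpool create tank raidz1 sda2 sdb2 sdc2 sdd2 \
      special mirror nvme0n1p2 nvme1n1p2
```

Mirroring the special device matters: losing it means losing the pool.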