Proxmox already has immutable backups - just run a remote sync PBS. The source PVE host and source PBS know nothing about the remote PBS, since it pulls from the primary; they have no ability to access the remote PBS.... Or rsync/rclone/restic your PBS datastore to rsync.net, who...
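For the rsync route, a minimal sketch (the datastore path and remote account here are placeholders):
rsync -aH --delete /mnt/datastore/backup1/ user@rsync.net:pbs-backup1/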
I already use that method, but a few people have said "just use PBS to back up the host" - which it does back up, but it does not restore. So if there were a one-stop option to always keep my data backed up on the PBS, including the host, with versioning, and quick disaster restore of everything...
I am backing up my PVE host successfully, but I wanted to test a restore to another machine:
proxmox-backup-client restore host/pve1/2024-01-03T04:53:23Z pve-etc.pxar /etc/
I get this error, it's a brand new machine I just added the PBS storage to:
Error: error extracting archive -...
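A hedged first check on the new machine before extracting: make sure the repository env vars are set there too, confirm the client can actually list the backups, and restore to a scratch target rather than straight over /etc (paths are illustrative):
export PBS_REPOSITORY=root@pbs@10.0.0.9:backup1
proxmox-backup-client list
proxmox-backup-client restore host/pve1/2024-01-03T04:53:23Z pve-etc.pxar /tmp/etc-restore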
Just wondering if there has been any progress on this; I am testing and not sure how to restore to a fresh host.
First I ran this on my production pve host:
export PBS_PASSWORD="4theB4ackups"
export PBS_LOG=warn
export PBS_REPOSITORY=root@pbs@10.0.0.9:backup1
proxmox-backup-client backup...
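For what it's worth, the truncated backup line presumably looked something like this (a sketch, with the archive name taken from the restore attempt above; --backup-type defaults to host and --backup-id defaults to the hostname):
proxmox-backup-client backup pve-etc.pxar:/etc --backup-type host --backup-id pve1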
This is pretty troubling. I have tried to install on an R720, and now an R730 also fails with the 7.1-1 ISO image. I am booting the ISO via the iDRAC IPMI; I click on Install Proxmox and on the next screen it fails:
waiting for /dev to be fully populated...| ACPI Error: No handler f0331/evregion-130)...
PVE 7.0-11
Just testing a Win10 VM with hotplug/NUMA enabled and the guest agent running. I added 4096 MB to the existing 4096 MB of RAM and it fails. Trying to add a CPU core, it just adds it and waits for a reboot to apply the new CPU.
Parameter verification failed. (400)
memory: hotplug problem - VM 110 qmp...
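For reference, a hedged sketch of the prerequisites - memory hotplug needs both the hotplug flag and NUMA enabled on the VM (using VMID 110 from the error above):
qm set 110 --hotplug disk,network,usb,memory,cpu --numa 1
qm set 110 --memory 8192   # raising memory on the running VM should then hot-add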
Strangely, this time, booting with the machine type set to 5.2, it worked without error.
Working 5.0:
/usr/bin/kvm \
-id 100 \
-name PLserver \
-no-shutdown \
-chardev 'socket,id=qmp,path=/var/run/qemu-server/100.qmp,server,nowait' \
-mon 'chardev=qmp,mode=control' \
-chardev...
This is a brand new installation, I did not change the machine type, and I made 3 VMs that all had the same problem.
proxmox-ve: 6.3-1 (running kernel: 5.4.103-1-pve)
pve-manager: 6.3-6 (running version: 6.3-6/2184247e)
pve-kernel-5.4: 6.3-7
pve-kernel-helper: 6.3-7
pve-kernel-5.4.103-1-pve...
I just loaded the latest 6.3-1 ISO and ran dist-upgrade to the very latest on a Dell R730. When I try to start a new Win2019 VM with the bare basic defaults selected, it fails to start with this error:
kvm: no-hpet: unsupported machine type
It seems the default machine type 5.2.0 is the issue...
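If the default 5.2.0 machine type really is the trigger, a hedged workaround is pinning the VM to an older machine version until the packages line up (the VMID is illustrative):
qm set 100 --machine pc-i440fx-5.1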
I have 3 pools:
VMdata: Main pool with zvols for VMs, daily snapshots running here
Backup: Backup pool where snapshots get replicated (Sanoid)
Pool3: Other empty pool
I want to clone a VM from snapshots on the Backup pool to Pool3, without affecting anything on the main pool. I want to spin up...
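Since zfs clone can only create clones inside the pool that holds the snapshot, getting an independent copy onto Pool3 means a send/receive; a minimal sketch with made-up dataset and snapshot names:
zfs send Backup/vm-100-disk-0@daily-2024-01-03 | zfs recv Pool3/vm-100-disk-0-test
# attach Pool3/vm-100-disk-0-test to a test VM; VMdata and Backup stay untouched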
Are they 2.5" laptop drives? Even if they were *higher end* desktop drives, there's a reason why most servers have SAS ports. Just skip right to a decent SSD; look for something that was common in Dells, and avoid "read optimized SSDs" if possible - there are many decent second-hand Dell Toshiba SAS SSD...
Actually, things have come a long way thanks to the PVE dev team; all of that can now be done in the GUI in just one step.
That is it, you are done....
If there is no available disk in the drop-down, you may need to wipe it first; verify the serial number before plugging in the disk, verify its device letter, and...
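A hedged CLI equivalent of the wipe step - list serials first so you hit the right device (sdX is a placeholder):
lsblk -o NAME,SIZE,MODEL,SERIAL
wipefs -a /dev/sdX   # destructive: clears all filesystem signatures on that disk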
That works - YEA!!! But that is contradictory: I am not allowed to use ZFS *directory* storage for a CT, but the mount *directory* matters??
Maybe consider this a low-priority adjustment in future code... something like a zfs get mountpoint $poolname check in the PVE code prior to sending a CT to...
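i.e. a pre-flight check roughly like this (a sketch only):
zfs get -H -o value mountpoint VMdata1
# returns a path, "none", or "legacy" - PVE could refuse or resolve the real path instead of assuming //$pool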
I tried that:
root@pve1:~# zfs set mountpoint=none VMdata1
root@pve1:~# zfs mount VMdata1
cannot mount 'VMdata1': no mountpoint set
So the pool no longer has a mountpoint and is not listed as a mounted fs, but I still get the same error when trying to send a CT to the pool, with the double // in front of...
I was reading this post (which I interpret to mean he is using directory storage), so for giggles I set:
zfs set mountpoint=/mnt/VMdata1 VMdata1
Same error with double //
"not work on zvol" - so it doesnt work on anything zfs, pool, nor dir. Moving to a zpool (storage type "ZFS", I get this error:
Task viewer: CT 361 - Move Volume
TASK ERROR: cannot open directory //VMdata1: No such file or directory
- VMdata1 is the destination (from LVM), why does it have...
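A hedged way to see what path PVE itself resolves for the target storage (the volume ID is illustrative):
pvesm path VMdata1:subvol-361-disk-0
zfs get -H -o value mountpoint VMdata1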