Just do a sync job from your main datastore to the USB drive. Doing two backup jobs to two different PBS storages invalidates the dirty bitmap every time, voiding one of the advantages of using PBS.
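For reference, the same thing can be set up from the CLI; a rough sketch with illustrative names (PBS added as a Remote pointing to itself, main datastore "main", USB datastore "usb-backup"; check proxmox-backup-manager sync-job create --help for the exact options on your version):

# Define the PBS server itself as a remote, then pull "main" into "usb-backup"
proxmox-backup-manager remote create local-self --host 127.0.0.1 --auth-id root@pam --password 'xxxxx' --fingerprint <pbs-fingerprint>
proxmox-backup-manager sync-job create usb-sync --store usb-backup --remote local-self --remote-store main --schedule daily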
The only sensible solution here is to simply disable HA (and wait for CRM and LRM to automatically stop after ~10 minutes) when you have to do maintenance in the cluster, so you don't risk node fencing due to lack of quorum, especially in small clusters. Any other option has drawbacks and it's...
Moving between namespaces is very easy and fast: it just moves a few files (the snapshot index, owner, etc.), since the chunks of the backup stay in the same datastore. You can even do it from the filesystem if you feel confident doing so, but I seldom recommend it, as doing it from the GUI avoids...
Yes, you can use the sync feature to copy a group between namespaces of the same datastore.
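From the CLI, a rough sketch of what that could look like, assuming the server is already added as a Remote pointing to itself (datastore, namespaces and group are illustrative, and option names may differ slightly between PBS versions; check proxmox-backup-manager sync-job create --help):

# Pull group vm/100 from namespace "old-ns" into namespace "new-ns" of the same datastore
proxmox-backup-manager sync-job create ns-copy --store tank --remote local-self --remote-store tank \
    --remote-ns old-ns --ns new-ns --group-filter group:vm/100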
To sync backups among different datastores you have to add the same server as a Remote (e.g. using 127.0.0.1 or any other local IP of the PBS server). Then you can use the sync feature. Unfortunately using...
It obviously doesn't :) There are too many moving parts here to pinpoint what could be the problem. I recommend that you find a provider that can rent a bare metal server, install your own PVE on it and run your VMs.
It's unclear to me: are you trying to run nested virtualization with a rented VPS?
VPS -> PVE -> VMs with issues?
If this is the case, whether the PVE VMs work or not depends entirely on how that VPS and the underlying hardware are configured.
What would you do if the NVMe drive that hosts both the VMs and their backups fails? What if the whole host fails? Backups must be kept outside the PVE host to be useful.
Step 1 must be moved to 4, so the order would be 2, 3, 4, 1. You can't sync from a deleted datastore :)
No, PBS does not auto...
Faced the same issue with a VM with 3 and 7 TB disks. I ended up using sdelete -z after making sure that the VM used a VirtIO SCSI single controller and that the Discard checkbox was ticked. Once sdelete finished, the space was thin provisioned again in the storage.
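As an illustration (VM ID, storage and drive letter are made up), the relevant bits were roughly:

# On the PVE host: VirtIO SCSI single controller and discard enabled on the disk
qm set 100 --scsihw virtio-scsi-single
qm set 100 --scsi0 local-zfs:vm-100-disk-0,discard=on

# Inside the Windows guest: zero the free space so the storage can reclaim it
sdelete.exe -z D: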
Try adding a directive to change the host header before forwarding the request from Nginx to PVE:
server {
    listen 80;
    server_name proxmox.internal www.proxmox.internal;
    rewrite ^(.*) https://$host$1 permanent;
}

upstream proxmox {
    server 10.0.1.1:8006;
}

server {
    listen...
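For reference, a minimal sketch of how the HTTPS server block with that directive could look (certificate paths are placeholders; the proxy_set_header Host line is the part that matters here):

server {
    listen 443 ssl;
    server_name proxmox.internal www.proxmox.internal;
    ssl_certificate /etc/nginx/ssl/proxmox.pem;      # placeholder
    ssl_certificate_key /etc/nginx/ssl/proxmox.key;  # placeholder

    location / {
        proxy_pass https://proxmox;
        proxy_http_version 1.1;
        proxy_set_header Host $host;              # rewrite the Host header before forwarding to PVE
        proxy_set_header Upgrade $http_upgrade;   # keep noVNC/websocket consoles working
        proxy_set_header Connection "upgrade";
        proxy_ssl_verify off;                     # PVE ships a self-signed certificate by default
    }
}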
For a 2x1G bond you don't need to change MTU at all and the possible CPU benefits will be negligible.
You can check if the MTU is working with ping -M do -s 2972, which should work among the Ceph IPs of your hosts. Run it on every host, as the MTU is applied by the sending server.
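For example (the -s payload is the target MTU minus 28 bytes of IP and ICMP headers; the address is just an illustrative Ceph IP):

# -M do forbids fragmentation, so the ping only succeeds if the full frame fits the MTU
ping -M do -s 2972 10.10.10.2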
You can keep the backup job in PVE, but you will have to edit it to point to the new PBS datastore that you create on the USB drive.
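To make that datastore selectable in the backup job, it first has to be added as a PBS storage on the PVE side; a rough sketch with illustrative names and addresses (see pvesm help add, or do it via Datacenter > Storage in the GUI):

# Run on the PVE node; the fingerprint is shown on the PBS dashboard
pvesm add pbs pbs-usb --server 192.168.1.10 --datastore usb-backup \
    --username backup@pbs --password 'xxxxx' --fingerprint <pbs-fingerprint>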
Absolutely: use a sync job to copy all your backups to the new drive.
Just tick the Remove vanished checkbox in the replication job.
No, it is not deleted from PBS. PBS will keep as many snapshots as set in its retention policy, no matter when they were taken or what the current date is. You will have to remove them manually from PBS.
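If you prefer the CLI over the PBS GUI for that, removing a snapshot looks roughly like this (repository and snapshot path are illustrative; check proxmox-backup-client help snapshot on your version):

proxmox-backup-client snapshot forget vm/100/2024-01-15T03:00:00Z \
    --repository root@pam@192.168.1.10:main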
It is not that...
This is a very bad idea on its own: if your PVE fails you will have to recover the TrueNAS VM first to be able to access the disks where your PBS backups are... from where? Where would you create your TrueNAS backups? Would they include all the drives, even those used for the PBS datastore...
ZFS RAIDZ's padding in action. There are dozens of forum posts about this [1]. You will have to find the right balance between volblocksize and write amplification (which depends on your applications).
For me RAIDZ is a no-go for VM storage, unless the performance requirement is low. AFAIK...
You simply should not have PVE exposed to the internet, and if you really have to, please use a firewall. PVE has a nice one integrated in the GUI that will help you protect your server.
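As an illustration, a datacenter-level rule set along these lines would restrict the web GUI and SSH to a management subnet (the subnet is a placeholder; the file is /etc/pve/firewall/cluster.fw and the same rules can be created from the GUI under Datacenter > Firewall; double-check you won't lock yourself out before enabling):

[OPTIONS]
# once enabled, the default policy for incoming traffic is DROP
enable: 1

[RULES]
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 8006
IN ACCEPT -source 192.168.1.0/24 -p tcp -dport 22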
When you deploy the QDevice from the PVE cluster, a public key is installed on the QDevice so it can talk with every node in the cluster. For some reason, something failed when you deployed the QDevice. Make sure apt install corosync-qdevice has been run on all PVE nodes, remove the QDevice with pvecm qdevice...
Please, add an option to removable datastores to automatically start a sync job + GC job + verify job when the disk is plugged in, so there's no need to add udev rules, write scripts or intervene manually just to click a few buttons. Also, an automatic disconnect of the drive once all the jobs are done...
Be very careful with this: VMs with IDs 11xx.conf will be using disks with ID 1xx, so when you delete VM 11xx it will delete the replicated disk and/or potentially leave a dangling zvol.
Don't reinvent the wheel: use HA and use your PBS host as the QDevice. You already need a cluster for replication, and it's...
Install mergeide.zip [1] on the VMware VM and reboot it. Then, in VMware, change the disks to be connected via IDE. If it boots ok, migrate to PVE and make sure it uses IDE on PVE too. If it doesn't boot, you may need to use a temporary IDE disk to make Windows load the driver. It should boot ok...