You asked for a solution without a script but you use a script... Of course, having both types of backups (PBS and vzdump) adds redundancy and data safety. If you prefer that, it's a perfectly fine choice. Mine would be to use a script that gets invoked...
With just 3 nodes, if one fails no rebalancing will happen. There will be some rebalancing if one OSD fails, though. Check this thread for some detailed explanations about this [1].
Then add it to the cluster with its OSDs and let Ceph deal with...
Sorry, still don't get the full picture... ;)
Ok, there's an external storage that you connect via iSCSI to... one of the 22 PVE nodes in your cluster? Another PVE that does not belong to the cluster?
You mention it is "shared to the rest of the...
Wait for all running tasks on that datastore to finish, set the datastore to offline maintenance mode from the webUI [1], then wait a minute or so for the proxmox-backup-server service to close all files in the datastore, unmount the drive and then you...
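A minimal sketch of that sequence, assuming the datastore is called "usbstore" and is mounted at /mnt/usbstore (both names are placeholders) and that your PBS version also allows setting the maintenance mode from the CLI instead of the webUI:

proxmox-backup-manager task list    # make sure no task is still using the datastore
proxmox-backup-manager datastore update usbstore --maintenance-mode offline    # same as setting it in the webUI
sleep 60    # give the proxmox-backup-server service time to close all open files
umount /mnt/usbstore    # once unmounted, the drive can be detached safely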
Sorry, I don't fully understand how your cluster is configured, which storage(s) you are using and what you are trying to accomplish. I would need details to give you accurate recommendations.
Yes, create a datastore on the USB drive and set up a local sync from the source datastore to the USB one. Remember to set the datastore to offline maintenance mode before unplugging the USB drive.
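Roughly like this, assuming a recent PBS that supports local sync jobs and using "usbstore", "/mnt/usbstore" and "mainstore" as placeholder names:

proxmox-backup-manager datastore create usbstore /mnt/usbstore    # datastore on the mounted USB drive
# Then create a sync job (Datastore -> Sync Jobs in the webUI) with "Local" as source,
# pulling from "mainstore" into "usbstore". Before unplugging the drive, set "usbstore"
# to offline maintenance mode as shown above.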
If all your storage is in a single iSCSI box, you have a single point of failure, which kinda defeats the whole purpose of a cluster.
You will need to plan for downtime: stop everything on PVE, log out of the iSCSI sessions from each PVE (not sure if needed...
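For reference, logging out of the iSCSI sessions on each node would look something like this (target IQN and portal below are made up, check yours with the first command):

iscsiadm -m session    # list the active iSCSI sessions on this node
iscsiadm -m node -T iqn.2005-10.org.example:target1 -p 192.168.10.50:3260 --logout    # log out of one target
iscsiadm -m node --logoutall=all    # or log out of all targets at once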
3 node Ceph works beautifully, but has very limited recovery options and requires a lot of spare space in the OSDs if you want it to self-heal. I will try to explain myself to complement @alexskysilk's replies:
It can't: the default CRUSH rule...
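If you want to check this on your own cluster, these read-only commands show the rule and the pool size (replace "mypool" with your pool name):

ceph osd crush rule dump replicated_rule    # default rule: chooseleaf firstn 0 type host, i.e. one replica per host
ceph osd pool get mypool size    # confirms the pool keeps 3 copies
# With size=3 and only 3 hosts there is no other host left to recover a lost replica to.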
If you mean the total Ceph usage, then yes, 10.5TiB * 3 = ~31.5TiB. The usable capacity is for the whole cluster: you store 3 copies of every bit, one copy in each of your 3 servers. That's why I divided by 3 the whole gross capacity of all...
How many OSDs do you have in each host?
If you are only using size=3 replicated pools, your net space would be ~41.92TiB / 3 = ~13.97TiB. Keeping in mind that in general Ceph will start complaining when an OSD is at 85% full, the net usable capacity...
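To double check those numbers on the cluster itself (read-only commands, the percentages are the usual defaults):

ceph df    # RAW USED and per-pool MAX AVAIL; MAX AVAIL already accounts for the pool's size=3
ceph osd df tree    # per-OSD utilization, to spot OSDs approaching the 85% nearfull warning
# Rough math for your cluster: 41.92 TiB raw / 3 replicas ≈ 13.97 TiB net,
# and ~85% of that (≈11.9 TiB) before Ceph starts warning about nearfull OSDs.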
Ah, I see... There have already been some users in the forum who performed backups using the PBS client inside WSL, but this comes with its own set of issues [0, 1].
Please note: neither do we test such approaches, nor is this expected to...
I know it won't run natively, but I'm hoping that the all-in-one version of proxmox-backup-client can run without much trouble inside Windows' WSL environment, which is essentially an Ubuntu system. That probably won't allow backups of Windows...
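Untested, but this is roughly how it would be invoked from inside WSL once the client binary is available there (the user, host and datastore names below are made up):

export PBS_REPOSITORY='backup@pbs@192.168.1.50:store1'    # user@realm@pbs-host:datastore
export PBS_PASSWORD='changeme'    # or use an API token instead
proxmox-backup-client backup cdrive.pxar:/mnt/c/Users    # /mnt/c is how WSL exposes the Windows C: drive
proxmox-backup-client list    # verify the backup group shows up in the datastore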
I know your pain... Try with dmesg -n 1. From man dmesg:
-n, --console-level level
Set the level at which printing of messages is done to the console. The level is a level number or abbreviation of the level name. For all...
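That setting only lasts until reboot. If you want it to persist, the same console loglevel can be set via sysctl; the "1 4 1 7" values below keep the other three printk fields at their usual defaults, adjust to taste:

sysctl -w kernel.printk="1 4 1 7"    # first field is the console loglevel, same effect as dmesg -n 1
echo 'kernel.printk = 1 4 1 7' > /etc/sysctl.d/10-console-loglevel.conf    # apply it on every boot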
Until the statically linked client is ready, take a look at this third-party implementation in Go [1]. I haven't used it, so no idea how/if it works.
[1] https://forum.proxmox.com/threads/proxmox-backup-client-for-windows-alpha.137547/
So this backup was done without dirty-bitmap because the VM was off before the backup. QEMU had to read the whole disk, compress it and checksum each block. If you check the log, your PVE took ~32 minutes to read just ~58 GiB (roughly 31 MiB/s), which is...
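If you want to rule out slow reads from the underlying storage, a quick sequential read test from the PVE host gives a baseline (the disk path below is just an example, point it at the actual VM disk and run it while the VM is off):

dd if=/dev/zvol/rpool/data/vm-100-disk-0 of=/dev/null bs=1M count=4096 iflag=direct status=progress    # reads 4 GiB and prints throughput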
Double check that your PVE hosts can reach your PBS server at 10.178.53.135 port 8007/tcp. That error simply means that your PVE can't reach that PBS IP:port. Check routes, firewall, etc.
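Something like this from a shell on the PVE node should tell you quickly where it breaks (nc may need the netcat package installed):

ping -c 3 10.178.53.135    # basic reachability
nc -zv 10.178.53.135 8007    # checks whether the PBS API port is open from this node
curl -k https://10.178.53.135:8007    # should return the PBS login page HTML if everything is fine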
I've had similar behavior when using NFS storage, either for VMs/CTs or for backups, when NFS didn't work properly on some node(s): pvestatd had to wait a long time for the storage to reply or time out, making the webUI show question marks. I would...
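These usually help narrow it down on the affected node(s) (the NFS server IP is a placeholder):

pvesm status    # hangs or times out if one of the configured storages is unresponsive
showmount -e 192.168.1.20    # check the NFS server is reachable and exporting the share
journalctl -u pvestatd --since "1 hour ago"    # look for storage timeouts in the pvestatd log
systemctl restart pvestatd    # usually brings the status icons back once the storage issue is fixed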