VictorSTS's latest activity

  • VictorSTS
    VictorSTS replied to the thread Sync.
Can't be done fully from the webUI; there's no support in the UI for such tasks, particularly the udev part.
  • VictorSTS
    VictorSTS replied to the thread Sync.
You asked for a solution without a script but you use a script... Of course, having both types of backups (PBS and vzdump) adds redundancy and improves data safety. If you prefer that, it's a perfect choice. Mine would be to use a script that gets invoked...
  • VictorSTS
    VictorSTS replied to the thread CEPH client only node.
With just 3 nodes, if one fails no rebalancing will happen. There will be some rebalancing if one OSD fails, though. Check this thread with some detailed explanations about this [1]. Add it to the cluster then with OSDs and let Ceph deal with...
  • VictorSTS
Hehe, no problemo. We all sometimes need a hand to realize that, from time to time, we become Jon Snow for a while and "know nothing".
  • VictorSTS
    Sorry, still don't get the full picture... ;) Ok, there's an external storage that you connect via iSCSI to... one PVE of the 22 in your cluster? another PVE that does not belong to the cluster? You mention it is "shared to the rest of the...
  • VictorSTS
    VictorSTS replied to the thread Sync.
Wait for all running tasks on that datastore to finish, set the datastore to offline maintenance mode from the webUI [1], then wait about a minute for the proxmox-backup-server service to close all files in the datastore, unmount the drive and then you...
  • VictorSTS
Sorry, I don't fully understand how your cluster is configured, which storage(s) you are using, and what you are trying to accomplish. I would need details to give you accurate recommendations.
  • VictorSTS
    VictorSTS replied to the thread Sync.
Yes, create a datastore on the USB drive and set up a local sync from the source DS to the USB drive. Remember to set the datastore offline before unplugging the USB drive.
  • VictorSTS
    If all your storage is in a single iSCSI box, you have a single point of failure, which kinda defeats the whole purpose of a cluster. You will need to plan for downtime, stop everything on PVE, log out iSCSI from each PVE (not sure if needed...
  • VictorSTS
3-node Ceph works beautifully, but has very limited recovery options and requires a lot of spare space in the OSDs if you want it to self-heal. I will try to expand on @alexskysilk's replies: It can't: the default CRUSH rule...
  • VictorSTS
If you mean the total Ceph usage, then yes, 10.5TiB * 3 = ~31.5TiB. The usable capacity is for the whole cluster: you store 3 copies of every bit, one copy in each of your 3 servers. That's why I divided by 3 the whole gross capacity of all...
  • VictorSTS
How many OSDs do you have in each host? If you are only using size=3 replicated pools, your net space would be ~41.92TiB/3 = 13.97TiB. Keeping in mind that in general Ceph will start complaining when an OSD is at 85% full, the net usable capacity...
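The capacity arithmetic from the two replies above can be sketched as a quick calculation; the raw capacity and the ~85% nearfull threshold are taken from the posts, and this is only a rough estimate, not an exact Ceph accounting:

```python
# Rough usable-capacity estimate for a 3-node Ceph cluster with
# size=3 replicated pools (numbers taken from the replies above).
raw_tib = 41.92            # total raw capacity across all OSDs
replicas = 3               # size=3: one full copy per node
nearfull_ratio = 0.85      # Ceph starts warning around 85% OSD usage

net_tib = raw_tib / replicas           # capacity after 3x replication
practical_tib = net_tib * nearfull_ratio  # before nearfull warnings

print(f"net: {net_tib:.2f} TiB, practical: {practical_tib:.2f} TiB")
# → net: 13.97 TiB, practical: 11.88 TiB
```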
  • VictorSTS
Ah, I see... There were already some users in the forum who performed backups using the PBS client inside WSL, but this comes with its own set of issues [0, 1]. Please note: neither do we test such approaches, nor is this expected to...
  • VictorSTS
I know it won't run natively, but I'm hoping that the all-in-one version of proxmox-backup-client can run without much trouble inside Windows' WSL environment, which is essentially an Ubuntu. That probably won't allow backups of Windows...
  • VictorSTS
    I know your pain... Try with dmesg -n 1. From man dmesg: -n, --console-level level Set the level at which printing of messages is done to the console. The level is a level number or abbreviation of the level name. For all...
  • VictorSTS
Until the statically linked client is ready, take a look at this third-party implementation in Go [1]. Haven't used it, so no idea how/if it works. [1] https://forum.proxmox.com/threads/proxmox-backup-client-for-windows-alpha.137547/
  • VictorSTS
    VictorSTS replied to the thread Slow backup with PBS.
So this backup was done without a dirty-bitmap because the VM was off before the backup. QEMU had to read the whole disk, compress it and get the checksum of each block. If you check the log, your PVE took ~32 minutes to read just ~58 GiB, which is...
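The throughput behind that observation is simple arithmetic; this sketch uses the ~58 GiB and ~32 minutes figures from the reply above (approximate values, so the result is only a ballpark):

```python
# Back-of-the-envelope read throughput for the backup above:
# ~58 GiB read in ~32 minutes, with no dirty-bitmap to skip clean blocks.
gib_read = 58
minutes = 32

mib_per_s = gib_read * 1024 / (minutes * 60)  # GiB -> MiB, min -> s
print(f"{mib_per_s:.1f} MiB/s")
# → 30.9 MiB/s, quite slow for reading a local disk sequentially
```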
  • VictorSTS
Double check that your PVE hosts can reach your PBS server at 10.178.53.135 port 8007/tcp. That error simply means that your PVE can't reach that PBS IP:port. Check routes, firewall, etc.
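A minimal way to test that reachability from a PVE host, assuming the IP and port from the post above (this only checks the TCP handshake, not PBS itself, and is no substitute for checking routes and firewall rules):

```python
import socket

def pbs_reachable(host: str, port: int = 8007, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the PBS API port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# IP taken from the post above; run this from the affected PVE host.
print(pbs_reachable("10.178.53.135"))
```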
  • VictorSTS
I've had similar behavior when using NFS storage, either for VMs/CTs or for backups, when NFS didn't work properly on some node(s): pvestatd had to wait a long time for the storage to reply or time out, making the webUI show question marks. I would...
  • VictorSTS
    So obvious that my brain completely forgot that's the way o_O