You can also get the disk using the CLI [1] to automate your DR pipeline:
proxmox-backup-client restore vm/127/2021-09-09T07:46:38Z drive-scsi0.img.fidx vm-127-disk-0.raw --repository 192.168.20.180:datastore
[1] https://forum.proxmox.com/threads/restore-single-virtual-disk-from-pbs.95868/post-415847
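If the goal is to attach the restored image back to a VM, something along these lines should work (the VM ID 127 and the storage name local-lvm are placeholders, adjust to your setup):
# import the restored raw image into a PVE storage as an unused disk of VM 127
qm importdisk 127 vm-127-disk-0.raw local-lvm
You can then attach the unused disk from the VM's Hardware tab.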
From the WebUI, in Ceph -> OSD, you can enable the noscrub and nodeepscrub flags so no scrub activity will be started. You can then monitor your latencies.
Every time I've seen latency spikes it has been because of dying disk(s) or a misbehaving OSD process, and it got solved by restarting them one by one and...
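If you prefer the CLI, the same flags can be set and cleared from any node with the standard Ceph commands:
# keep new (deep) scrubs from starting
ceph osd set noscrub
ceph osd set nodeep-scrub
# once you have monitored the latencies, re-enable scrubbing
ceph osd unset noscrub
ceph osd unset nodeep-scrub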
Immich alone recommends 6GB of RAM with a minimum of 4 [1], and Nextcloud is barely usable with 1GB. IMHO you need more RAM to properly run all those services.
[1] https://immich.app/docs/install/requirements/
Did you edit /etc/hosts on each node to reference the new IP of each one?
Did you reboot the node after the changes?
Also check /etc/pve/datacenter.cfg, maybe the old network is still set there as the migration network.
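As a rough sketch of what to look for (hostnames, IPs and the subnet are made up, adjust to your environment):
# /etc/hosts on each node should resolve every node name to its new IP, e.g.:
192.168.30.11 pve1.example.local pve1
# /etc/pve/datacenter.cfg may still carry the old subnet as the migration network, e.g.:
migration: secure,network=192.168.20.0/24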
GC will not remove chunks with a timestamp younger than 24h + 5 minutes [1]. You will have to wait at least that long to run GC again so it actually removes expired chunks. I don't recommend setting the server datetime to a future time, as that will affect the prune and GC behavior during that period...
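Once that window has passed, GC can be triggered again from the PBS CLI (the datastore name is a placeholder):
# check the status of the last/current garbage collection run
proxmox-backup-manager garbage-collection status mydatastore
# start a new garbage collection run
proxmox-backup-manager garbage-collection start mydatastore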
I forgot to suggest that you read about how QEMU backup works and use fleecing [1] as a good method to avoid problems with PBS disconnecting in the middle of a backup.
[1] https://pve.proxmox.com/pve-docs/chapter-vzdump.html#_vm_backup_fleecing
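As a quick sketch, fleecing can be enabled per backup run or per job on a recent PVE version (the storage names are placeholders):
# back up VM 100 to the PBS storage, writing the fleecing image to fast local storage
vzdump 100 --storage pbs-datastore --fleecing enabled=1,storage=local-lvm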
If the dirty map is lost for any reason, the PVE host has to read the whole VM disk, but it sends a chunk to PBS only if it doesn't already exist there. That is, losing the dirty map doesn't generate "a lot" more traffic than a backup with a dirty map.
All traffic between PVE and PBS is encrypted with TLS, it...
Simplifying, the Ceph performance is determined by the slowest node in the cluster. That very old node may limit your Ceph performance.
That said, simply add that third node to your cluster and use it just to run Ceph services. You have no obligation to run VMs/CTs on it and still get to manage...
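Roughly, on that third node (the IP of an existing cluster member is a placeholder):
# join the existing PVE cluster, pointing at a current member
pvecm add 192.168.1.10
# install the Ceph packages and add a monitor on this node
pveceph install
pveceph mon create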
Shouldn't that raise the steal time counter in top and similar commands in the VM? I mean, if the host doesn't let it run its processes because the host is either overloaded or artificially limiting the amount of CPU available to the VM (i.e. using CPU Limit), the steal time in the VM should...
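Inside the VM, something as simple as this shows it:
# the "st" column reports the percentage of CPU time stolen from the VM by the hypervisor
vmstat 1 5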
That's a problem with some app running in that VM (Firebird DB, by chance?) that makes the kernel punish such misbehaving apps by slowing them down artificially. That very page lists the possible options [1] to sort out the problem, but be aware of the implications.
Such apps will reduce the...
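If the page referred to is the one on split lock detection (an assumption on my part, Firebird is a well-known trigger for it), the kernel-side knob looks roughly like this, on the PVE host:
# check whether the kernel has been flagging split locks
dmesg | grep -i "split lock"
# to disable the mitigation, add split_lock_detect=off to GRUB_CMDLINE_LINUX_DEFAULT
# in /etc/default/grub, then regenerate the bootloader config and reboot
update-grub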
Check the CPU speed in the host:
cat /proc/cpuinfo | grep MHz
Some BIOS settings may force the CPU to run at very low speed, and processes then need much longer real time to execute. I would check the power/performance settings in the BIOS and set them to something similar to "performance" or "balanced...
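You can also check and change the governor from the OS side (the cpupower tool comes from the linux-cpupower package on Debian):
# show the current frequency scaling governor of each core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
# switch to the performance governor
cpupower frequency-set -g performance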
If you mean that VMs must keep running and be up even if the host running them goes down, then you are out of luck as there is no equivalent of VMWare's Fault Tolerance feature.
On PVE, HA will start the VMs again on a remaining node of the cluster if the host running them dies or gets isolated...
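For reference, a VM can be put under HA management from the GUI (Datacenter -> HA) or via the CLI (VM ID 100 is a placeholder):
# let the HA stack manage VM 100 and restart it on another node if its host fails
ha-manager add vm:100 --state started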
You could easily add one or two of the free ports of your quad 10G card in a layer3+4 LACP bond, move vmbr1 to the bond and get improved network capacity, which should yield increased Ceph speeds, as the drives seem to be capable of it.
As a side note, you should definitely use redundant...
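As a rough sketch of what that could look like in /etc/network/interfaces (interface names and the address are placeholders, and the switch ports must be configured for LACP as well):
auto bond0
iface bond0 inet manual
    bond-slaves enp65s0f2 enp65s0f3
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100

auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1/24
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0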
Currently, using Proxmox Offline Mirror (POM) to install Ceph is tricky due to this bug [1]: PVE insists on changing the repo file contents to use the internet repositories instead of POM. The bug report has a workaround that may work for you:
[1] https://bugzilla.proxmox.com/show_bug.cgi?id=5244
It's easier to simply change the Display device in the VM to "standard", so the default noVNC viewer is used. It makes no sense to me to use a SPICE display if you are not going to use the proper client to make use of it. And if you need to access the console of the VM from a PC that doesn't have...
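For example (VM ID 100 is a placeholder, the change needs a VM restart to take effect):
# switch the display device back to the default "standard" VGA
qm set 100 --vga std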
If your server breaks, what's your recovery plan? Hope you also have backups outside of that server.
Your backups depend on a PBS VM that depends on a TrueNAS VM, both hosted on the very same server. If you are lucky, if the server breaks you will still have PBS data on some disk(s) and maybe...
Although I speak Spanish, this is an English forum :)
Use the smallest stripe size possible in the RAID configuration to reduce read/write amplification for PBS workloads. Then install PVE using LVM (default) on that host and then PBS alongside it (add the repos and apt install...
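Roughly, on a PVE host based on Debian 12 / Bookworm (adjust the codename to your release):
# add the no-subscription PBS repository
echo "deb http://download.proxmox.com/debian/pbs bookworm pbs-no-subscription" > /etc/apt/sources.list.d/pbs.list
# install Proxmox Backup Server alongside PVE
apt update && apt install proxmox-backup-server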
Is that same server the PVE host that runs your production VMs, or do they run on different host(s)? I mean, is that server dedicated to PBS?
Which disk controller does it have (HBA, RAID)?