I use a PBS client, but mostly for file reference restores. My recovery plan is to reinstall and rejoin the cluster, but I do agree with your sentiments.
Curious as to what backup solution you would be considering for this configuration.
A test of iperf3 over Tailscale between two LXCs on the same machine shows great performance:
09:42 user@samba:~ > iperf3 -c ts.ip.same.machine1 -t 5
Connecting to host ts.ip.same.machine1, port 5201
[ 5] local ts.ip.same.machine2 port 59294...
Thanks for your help. None of that seemed to apply, so I deleted the pool/directories completely, installed Windows Server as a VM, and passed the SATA controller through to it directly... now I have the storage pool that I wanted lol
Well, I've checked the docs.
A temporary location is mentioned for backups of containers in suspend mode; see
https://pve.proxmox.com/pve-docs/chapter-vzdump.html
Also, the snapshot mode uses a temporary snapshot.
But have a look at...
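For anyone looking for the knob itself: the temporary directory for suspend-mode container backups can be set node-wide in /etc/vzdump.conf. A minimal sketch (the path is just an example, pick fast local storage):

```
# /etc/vzdump.conf -- node-wide vzdump defaults
# tmpdir is where suspend-mode container backups stage their data
tmpdir: /mnt/fast-local/vzdump-tmp
```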
Any time you build an environment with such nested dependencies, you're making an unsupportable (or at least difficult-to-support) solution. As a matter of design, your storage layer and your compute layer should not be interdependent.
Since you've decided that...
Thanks,
It finally works after months. Now I just need to open an SSH shell and type in my LUKS passphrase, and it automatically unlocks all the encrypted ZFS datasets, mounts the SMB shares, and starts the VMs and LXCs in the right order. :)
I used...
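For context, a rough sketch of what such an unlock-and-start sequence can look like (the pool name, share mount, and guest IDs below are made-up placeholders, not necessarily the exact setup described above):

```
#!/bin/bash
set -euo pipefail

# Prompt once for the passphrase and unlock all encrypted datasets
# (assumes keyformat=passphrase with prompt-based keylocation)
zfs load-key -r tank/encrypted
zfs mount -a

# Mount SMB shares pre-defined in /etc/fstab with the "noauto" option
mount /mnt/smb-media

# Start guests in dependency order: storage container first, then VMs
pct start 101
qm start 200
qm start 201
```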
Hi,
That was a beginner mistake.
Real root cause (found 2026-05-06): discard=on was missing from scsi1 in the QEMU config. For over a year, every ext4 TRIM was silently dropped by QEMU and never propagated down to Ceph → 2.7 TiB of orphaned RBD...
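For anyone hitting the same thing: enabling discard on the disk and re-running TRIM looks roughly like this (the VMID, bus, and volume name below are placeholders, match them to your own VM config):

```
# On the PVE host: re-specify the disk with discard enabled
# (the volume spec must match what's already in the VM config)
qm set 100 --scsi1 ceph-pool:vm-100-disk-1,discard=on

# Inside the guest, after the disk is reattached: reclaim the orphaned space
fstrim -av
```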
96 packets transmitted, 96 received, 0% packet loss, time 95145ms
rtt min/avg/max/mdev = 15.667/25.435/33.892/3.466 ms
Not a shared medium as far as I'm aware; it's wired directly into an Ethernet switch > UniFi router > ISP modem. It could definitely...
You are missing three things:
1. Hyper-V enlightenments enabled. Enable as many of the performance-driven ones as you can
2. MBEC support enabled. This is the most critical: your processor supports it, but you are not exposing it to the guest
3. A...
Hey Leesteken, sorry for the late reply. I have been busy decorating the house and postponed the Proxmox project.
I have moved the HBA to PCIe slot 1 and confirmed it works fine now.
As a test, I tried to plug in the main partition of the SSD...
I have already apologized, but German is still too difficult for me, even with Google Translate. I hope my case can help someone, since I have solved the problem. Could you please give me the URL of the English forum?
Welcome, @x1234
Try "fleecing". There is a thread with a similar case:
https://forum.proxmox.com/threads/failed-backup-to-pbs-damages-vm.183459/post-852260
That's an understatement.
The Crucial BX series is one of the worst-performing SSDs I have ever seen, even in client machines.
You may use it as cold storage, but anything warm or hot will perform terribly on it.
More so if it's used with ZFS/Ceph.
even...
Additionally, with just 3 nodes in a Ceph cluster, make sure you have at least 4 OSDs in each. With just 2 per node, you will likely have issues if one of the OSDs fails, as Ceph will then recover the lost replicas to the only node it can...
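To put rough numbers on why 2 OSDs per node is risky (a back-of-envelope sketch, assuming equal-size OSDs, size=3, and one replica per node, so recovery after an OSD failure stays on the same node):

```python
def fill_after_osd_failure(osds_per_node: int, fill_before: float) -> float:
    """Approximate fill level of the surviving OSDs on a node after one
    of its OSDs fails, assuming that node's data is rebalanced onto the
    remaining local OSDs (replica placement is per-node)."""
    surviving = osds_per_node - 1
    return fill_before * osds_per_node / surviving

# With 2 OSDs per node at 60% full, losing one would push the survivor
# past 100%, so recovery cannot complete:
print(fill_after_osd_failure(2, 0.60))  # -> 1.2

# With 4 OSDs per node, the same failure lands around 80% on the survivors:
print(fill_after_osd_failure(4, 0.60))  # -> ~0.8
```

This is exactly why the extra OSDs matter: they give the node enough headroom to self-heal instead of filling up.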
Those are on the cheaper and slower side of consumer SSDs. They will not perform well with sustained load and the primarily sync writes that Ceph does.
The recommendation for enterprise SSDs with power loss protection (PLP) is there for good...
Thank you so much!
I did a combination of the first two suggestions you made.
1. I cloned the container, which allowed me to make it CT 101; then, after creating it, I made a new mount point with the same drive storage (900GB in this...
AFAIK you have a few options here. First, attach that media drive and set its PVE storage settings EXACTLY as they were previously.
Then do ONE of the following:
1. Recreate an LXC with the same <ctid> as before (101). I believe you will have to...