Latest activity

  • B
    I use a PBS client, but mostly for file reference restores. My recovery path is to reinstall and rejoin the cluster, but I do agree with your sentiments. Curious as to what backup solution you would be considering for this configuration.
  • B
    A test of iperf3 over Tailscale between two LXCs on the same machine shows great performance: 09:42 user@samba:~ > iperf3 -c ts.ip.same.machine1 -t 5 Connecting to host ts.ip.same.machine1, port 5201 [ 5] local ts.ip.same.machine2 port 59294...
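    A minimal sketch of such a test, with placeholder Tailscale addresses standing in for the two LXCs' real IPs:
      # on the first LXC, start the iperf3 server
      iperf3 -s
      # on the second LXC, run a 5-second test against the first LXC's Tailscale IP
      iperf3 -c 100.64.0.1 -t 5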
  • P
    Thanks for your help. None of that seemed to apply, so I deleted the pool/directories completely, installed Windows Server as a VM, and passed the SATA controller to it directly... now I have the storage pool that I wanted lol
  • P
    ph4zed reacted to Onslow's post in the thread datacenter directory only has 100GB? with Like.
    Well, I've checked the docs. The temporary location is mentioned for backups of containers in suspend mode, see https://pve.proxmox.com/pve-docs/chapter-vzdump.html Also, the snapshot mode uses a temporary snapshot. But have a look at...
  • A
    Post the output of zfs list -t all; all your questions would be answered there.
  • A
    alexskysilk replied to the thread HBA passthough.
    Any time you build an environment with such nested dependencies, you're making an unsupportable/difficult-to-support solution. As a matter of design, your storage layer and your compute layer should not be interdependent. Since you've decided that...
  • A
    Thanks, it finally works after months. Now I just need to open an SSH shell and type in my LUKS passphrase, and it automatically unlocks all the encrypted ZFS datasets, mounts the SMB shares, and starts the VMs and LXCs in the right order. :) I used...
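    A rough sketch of the kind of unlock-and-start sequence described there, assuming ZFS native encryption with a prompted key, CIFS entries in /etc/fstab, and hypothetical guest IDs 100 and 101:
      # load keys for all encrypted datasets (prompts for the passphrase) and mount them
      zfs load-key -a
      zfs mount -a
      # mount the SMB shares defined in /etc/fstab
      mount -a -t cifs
      # start the guests in the desired order
      qm start 100
      pct start 101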
  • Moayad
    Hi, that was a beginner mistake. The real root cause (found 2026-05-06): discard=on was missing from scsi1 in the QEMU config. For over a year, every ext4 TRIM was silently dropped by QEMU and never propagated down to Ceph → 2.7 TiB of orphaned RBD...
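    For reference, a hedged example of what the corrected disk line might look like in /etc/pve/qemu-server/<vmid>.conf (the VMID, pool, and volume names below are placeholders), plus the CLI equivalent and an in-guest trim to reclaim the orphaned space:
      # VM config: discard=on lets guest TRIM/UNMAP propagate down to the RBD image
      scsi1: ceph-pool:vm-100-disk-1,discard=on,size=3000G
      # or set it via the CLI
      qm set 100 --scsi1 ceph-pool:vm-100-disk-1,discard=on
      # inside the guest, trim the filesystems once to release the already-orphaned space
      fstrim -av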
  • S
    96 packets transmitted, 96 received, 0% packet loss, time 95145ms rtt min/avg/max/mdev = 15.667/25.435/33.892/3.466 ms Not a shared medium as far as I'm aware, wired directly into ethernet switch > unifi router > ISP modem. It could definitely...
  • U
    You are missing three things: 1. Hyper-V enlightenments enabled; enable as many of the performance-driven ones as you can. 2. MBEC support enabled; this is the most critical. Your processor supports it, but you are not exposing it to the guest. 3. A...
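    A minimal sketch of exposing that on Proxmox, assuming a hypothetical VMID 100: marking the guest as Windows makes PVE apply most Hyper-V enlightenments, and host CPU passthrough is one way to make features such as MBEC visible to the guest.
      # treat the guest as Windows so the Hyper-V enlightenments are applied
      qm set 100 --ostype win11
      # pass the host CPU through so features like MBEC become visible in the guest
      qm set 100 --cpu host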
  • J
    Jimmy Brown replied to the thread HBA passthough.
    Hey Leesteken, sorry for the late reply. I have been busy decorating the house and postponed the Proxmox project. I have moved the HBA to PCIe slot 1 and confirmed it works fine now. As a test, I tried to plug in the main partition of the SSD...
  • M
    martusi61 replied to the thread Connection error 596.
    I have already apologized, but German is still too difficult for me, even with Google Translate. I hope my case can help someone, since I have solved the problem. Could you please give me the URL of the English forum?
  • gurubert
    What is the network latency between the nodes on the network used for corosync? Is this network on a shared medium? Can there be congestion?
  • O
    Welcome, @x1234. Try "fleecing". There is a thread with a similar case: https://forum.proxmox.com/threads/failed-backup-to-pbs-damages-vm.183459/post-852260
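    A hedged example of what enabling fleecing on a manual backup can look like on recent PVE releases (the storage names and VMID below are placeholders):
      # back up VM 100 to the PBS storage, buffering guest writes in a local fleecing image
      vzdump 100 --storage pbs-backup --fleecing enabled=1,storage=local-lvm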
  • gurubert
    gurubert reacted to beisser's post in the thread Ceph low performance with Like.
    That's an understatement. The Crucial BX series is one of the worst-performing SSDs I have ever seen, even on clients. You may use it as cold storage, but anything warm or hot will perform terribly on it, more so if it's used with ZFS/Ceph. Even...
  • gurubert
    gurubert reacted to aaron's post in the thread Ceph low performance with Like.
    Additionally, with just 3 nodes in a Ceph cluster, make sure you have at least 4 OSDs in each, because with just 2 per node you will likely have issues if one of the OSDs fails, as Ceph will then recover the lost replicas to the only node it can...
  • gurubert
    gurubert reacted to aaron's post in the thread Ceph low performance with Like.
    Those are on the cheaper and slower side of consumer SSDs. They will not perform well with sustained load and the primarily sync writes that Ceph does. The recommendation for enterprise SSDs with power loss protection (PLP) is there for good...
  • D
    qm rescan/pct rescan can help here with attaching existing disks to a guest with the same ID.
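    For example, a sketch using the CT/VM ID 101 from this thread (adjust to your setup):
      # scan storages for volumes belonging to VM 101 and register them as unused disks
      qm rescan --vmid 101
      # the container-side equivalent (without options it rescans all containers)
      pct rescan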
  • D
    Thank you so much! I did a combination of the first two suggestions you made. 1. I cloned the container, which allowed me to make it CT 101; then, after creating it, I made a new mount point with the same drive storage (900GB in this...
  • D
    AFAIK you have a few options here. Firstly, attach that media drive & set its PVE storage settings EXACTLY as they were previously. Then do ONE of the following: 1. Recreate an LXC with the same <ctid> as before (101). I believe you will have to...
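    A hedged sketch of re-attaching an existing container volume once the storage is configured again (the storage name, volume name, and mount path below are placeholders):
      # make the orphaned volume show up as an unused volume on CT 101
      pct rescan
      # attach it back as a mount point at the old path
      pct set 101 -mp0 media-store:subvol-101-disk-0,mp=/mnt/media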