VictorSTS's latest activity

  • VictorSTS
    I don't agree: you have 3 copies of your data, you have host HA, there is no SPOF, and you can easily grow if/when needed. With proper sizing you can even tolerate the loss of some OSDs in any host and still allow Ceph to self-heal. If you lose...
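    FWIW, you can check the replica settings of any pool from a node; the pool name "rbd" here is just an example:

        ceph osd pool get rbd size       # number of data copies (3 in this setup)
        ceph osd pool get rbd min_size   # I/O continues while at least this many copies are online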
  • VictorSTS
    Looks like I posted almost at the same time as @dcsapak. Maybe you or @mariol could take a look at the official documentation and mention how to nest pools and that they support permission inheritance. I've been unable to find that information in...
  • VictorSTS
    Since PVE 8.1 [1] (section "Access control") you can have 3 levels of nested resource pools and apply permissions with inheritance if you use "propagate". I think this is what you are looking for. Unfortunately, I haven't found that in the manual...
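    Something like this should do it (pool and user names are examples, and I haven't double-checked the exact flags, so see pveum(1)):

        pveum pool add customers
        pveum pool add customers/acme
        pveum pool add customers/acme/dev
        # grant a role at the top pool and let it propagate down the nested pools
        pveum acl modify /pool/customers --users alice@pve --roles PVEVMAdmin --propagate 1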
  • VictorSTS
    For me Option A: 2-node + QDevice with Ceph is the worst idea ever (as explained above), Option D: 3-node with ZFS replication makes no sense when other options are available, and Option E: 2-node + QDevice with another clustered iSCSI SAN is a no-go due to...
  • VictorSTS
    Placing them in RAID 0 disables passthrough and probably removes the partition headers of the drives and/or hides them from the host. Without a more precise answer to "What happens exactly?" it's all guessing...
  • VictorSTS
    It isn't supported. A RAID 0 disk isn't supported either. Your best bet is to change the controller personality to IT mode, if possible. What happens exactly? There has been no change regarding this. My bet is that your drives had some kind of...
  • VictorSTS
    VictorSTS replied to the thread Missing chunks again.
    Extending on my previous reply, what GC does is: Phase 1: read the index of each backup, which contains the list of chunks used by that backup snapshot, then update the access time on each chunk of that snapshot. It does so for each backup...
  • VictorSTS
    VictorSTS replied to the thread Missing chunks again.
    You won't be able to recover space from expired backups and your datastore will eventually become full. GC must work for PBS to behave as it is designed. @Chris, would it be possible to implement an alternate GC that uses modification or creation...
  • VictorSTS
    VictorSTS replied to the thread Missing chunks again.
    FWIW, a couple of weeks ago I tried to use a Dell DD3300, quite similar to OP's EMC Data Domain storage, and it refused to update access times via either NFS or CIFS. In my case, PBS 4 did show an error during datastore creation and refused to...
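    A quick manual way to check whether a mount honors atime updates (a rough test, not the exact check PBS runs; the path is an example):

        cd /mnt/datastore
        touch atime-test
        touch -a -d "2 days ago" atime-test   # push atime into the past so relatime will refresh it
        stat -c 'atime: %x' atime-test
        cat atime-test > /dev/null
        stat -c 'atime: %x' atime-test        # should now show a current timestamp
        rm atime-test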
  • VictorSTS
    VictorSTS replied to the thread Cluster Issues.
    Dunno, and it's impossible to guess without logs and full configs. Given that it works well for you with a different, UDP and unsigned, transport, it seems that your switch/network misbehaves with the standard kronosnet protocol (encrypted UDP)...
  • VictorSTS
    There is no such thing. The only supported version on PVE 9 is Ceph Squid. If this is a cluster upgraded from PVE 8, you should have updated the repos and Ceph to Squid [1]. Your package list shows version 19.2.* packages, not Quincy ones (17.2.*)...
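    You can verify what is actually running versus what the repos point at (the repo file location may differ on your install):

        ceph versions                            # running daemons; 19.2.x means Squid
        grep -r ceph /etc/apt/sources.list.d/    # repo entries should reference squid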
  • VictorSTS
    VictorSTS replied to the thread Cluster Issues.
    Again, Corosync does not use multicast with the default kronosnet transport; multicast isn't the issue. It does, and it also does not support native link redundancy IIRC.
  • VictorSTS
    VictorSTS replied to the thread Cluster Issues.
    Unless you set it manually or this is a cluster that has been upgraded since ancient times, your PVE is using unicast. PVE has not used multicast since PVE 6.x IIRC, when Corosync 3.x was introduced with the use of unicast kronosnet. Post your...
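    For reference, the totem section of a default /etc/pve/corosync.conf looks roughly like this (trimmed; values are examples):

        totem {
          cluster_name: mycluster
          config_version: 3
          ip_version: ipv4-6
          link_mode: passive
          secauth: on          # traffic is encrypted and authenticated
          version: 2
          # no "transport" line: corosync 3 defaults to unicast kronosnet
        }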
  • VictorSTS
    Sorry, but for me it's unclear what the problem is, how to reproduce it / when it happens, and what's different in your setup from a standard PVE installation, where there's no issue like those you seem to describe. IIUC, you use/need some custom...
  • VictorSTS
    VictorSTS replied to the thread Physical server full backup.
    If you don't mind powering off the source system, this [1] may help. I haven't tried it yet, so I can't really say how well it works for backup and, especially, for restore. [1] https://www.apalrd.net/posts/2024/pbs_image/
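    The gist of it, if I read the post right, is booting the physical machine from a live ISO and sending the whole disk to PBS as a block image with proxmox-backup-client (repository, datastore, and device names below are examples):

        proxmox-backup-client backup sda.img:/dev/sda \
          --repository backup@pbs@192.0.2.10:datastore1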
  • VictorSTS
    VictorSTS replied to the thread Snapshot deletion too slow.
    As much of the performance of your SAN/network as you want to devote to a saferemove operation without impacting other operations ;) I don't see that happening any time soon unless some sort of cluster-aware filesystem with thin provisioning gets...
  • VictorSTS
    Hello, PVE 9 added "Support for snapshots as volume chains on Directory/NFS/CIFS storages (technology preview)". On any file-level storage, like a directory or an NFS share, you can use the QCOW2 format for the disk(s) of your VMs, which already provides...
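    For example, a plain directory storage in /etc/pve/storage.cfg that can hold VM images (name and path are examples); pick QCOW2 as the format when creating a disk on it and you get snapshot support:

        dir: vmstore
          path /mnt/vmstore
          content images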
  • VictorSTS
    VictorSTS replied to the thread Snapshot deletion too slow.
    The problem is that saferemove is throttled by default to a whopping 10 MBytes/s. If you check with ps -ef while a snapshot is being removed you'll see a cstream process zeroing the snapshot volume. Meanwhile, the very welcome improvement of using...
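    You can raise the limit per storage with saferemove_throughput in /etc/pve/storage.cfg (it's passed to cstream as the -t parameter, in bytes/s; storage name and numbers are examples):

        lvm: san-lvm
          vgname vg_san
          saferemove 1
          saferemove_throughput 209715200   # ~200 MB/s instead of the ~10 MB/s default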
  • VictorSTS
    VictorSTS replied to the thread S3 compatible storage.
    Use PBS to back up your VMs and set a sync job to a datastore with an S3 backend [1] [1] https://pbs.proxmox.com/docs/storage.html#datastores-with-s3-backend
  • VictorSTS
    You can't expect an exact answer if you don't provide an exact question: you are giving zero information on how your server/cluster is/was configured, zero information on what you did / what happened to bork it, and zero information on the current...