Search results

  1. VictorSTS

    Windows Server 2025

    Windows does that (assign an APIPA address) when the IP is already in use somewhere on the network and some device replies to the ARP probe for the address you've entered in the configuration.
  2. VictorSTS

    Proxmox VE 9 existing ZFS vdev expansion?

    I may be missing something here, but keep in mind that files != zvols: you can use neither the script nor zfs-rewrite to make VM disk(s) "move" to use the newly added vdev. It would work for something like a PBS datastore. These options could work: Rewriting data inside the VMs VM...
  3. VictorSTS

    Suggestions for low cost HA production setup in small company

    I don't agree: you have 3 copies of your data, you have host HA, there is no SPOF, and you can easily grow if/when needed. With proper sizing you can even tolerate the loss of some OSDs in any host and still allow Ceph to self-heal. If you lose a host, everything will still work, albeit with...
  4. VictorSTS

    Resource pool inside resource pool

    Looks like I posted almost at the same time as @dcsapak. Maybe you or @mariol could take a look at the official documentation and mention how to nest pools and that they support permission inheritance. I've been unable to find that information in the docs.
  5. VictorSTS

    Resource pool inside resource pool

    Since PVE 8.1 [1] (section "Access control") you can have 3 levels of nested resource pools and apply permissions with inheritance if you use "propagate". I think this is what you are looking for. Unfortunately, I haven't found that in the manual [2]. The syntax uses slash to create the nested...
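As a rough sketch of the slash syntax described in that post (pool, user, and role names here are made up, and the exact commands should be double-checked against your PVE version):

```shell
# Create a parent pool, then a nested child pool using the slash syntax (PVE 8.1+).
pvesh create /pools --poolid ops
pvesh create /pools --poolid ops/web

# Grant a role on the parent pool; with --propagate the permission
# is inherited by the nested ops/web pool as well.
pveum acl modify /pool/ops --users alice@pve --roles PVEVMAdmin --propagate 1
```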
  6. VictorSTS

    Suggestions for low cost HA production setup in small company

    For me, Option A (2-node + QDevice with Ceph) is the worst idea ever (as explained above), Option D (3-node with ZFS replication) makes no sense when other options are available, and Option E (2-node + QDevice with another clustered iSCSI SAN) is a no-go due to the SAN becoming a SPOF, the lack of a supported...
  7. VictorSTS

    Proxmox 9, Ceph Squid, and PERC H755

    Placing them in RAID 0 disables passthrough and probably removes the partition headers of the drives and/or hides them from the host. Without a more precise answer to "What happens exactly?" it's all guessing...
  8. VictorSTS

    Proxmox 9, Ceph Squid, and PERC H755

    It isn't supported. A RAID 0 disk isn't supported either. Your best bet is to change the controller personality to IT mode, if possible. What happens exactly? There has been no change regarding this. My bet is that your drives had some kind of partition signature and they simply didn't show in...
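To check for the kind of leftover partition signatures mentioned above, something along these lines may help (the device name is a placeholder; the last two commands are destructive, so verify the device first):

```shell
# Non-destructive: list any partition/RAID signatures left on the drive.
wipefs --no-act /dev/sdX

# Destructive: clear all signatures so the disk is seen as unused.
wipefs --all /dev/sdX

# Alternatively, let Ceph's own tooling zap the device.
ceph-volume lvm zap /dev/sdX
```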
  9. VictorSTS

    Missing chunks again

    Extending on the previous reply, what GC does is: Phase 1: read the index of each backup, which contains the list of chunks used by that backup snapshot, then update the access time on each chunk of that backup snapshot. It does so for each backup snapshot in that datastore (there's only one GC job...
  10. VictorSTS

    Missing chunks again

    You won't be able to recover space from expired backups and your datastore will eventually become full. GC must work for PBS to behave as it is designed. @Chris, would it be possible to implement an alternate GC that uses the modify or creation timestamp instead of the access timestamp? In my tests...
  11. VictorSTS

    Missing chunks again

    FWIW, a couple of weeks ago I tried to use a Dell DD3300, quite similar to the OP's EMC Data Domain storage, and it refused to update access times via either NFS or CIFS. In my case, PBS 4 did show an error during datastore creation and refused to create the datastore. When testing with stat + touch...
  12. VictorSTS

    Cluster Issues

    Dunno, and it's impossible to guess without logs and full configs. Given that a different transport (UDP, unsigned) works well for you, it seems that your switch/network misbehaves with the standard kronosnet protocol (encrypted). Sorry for not being of better help this time
  13. VictorSTS

    CEPH installation is failing due to a version mismatch

    There is no such thing. The only supported version on PVE 9 is Ceph Squid. If this is a cluster upgraded from PVE 8, you should have updated the repos and Ceph to Squid [1]. Your package list shows packages at version 19.2.*, not Quincy ones (17.2.*). Maybe you have both repositories configured? I've...
  14. VictorSTS

    Cluster Issues

    Again, Corosync does not use multicast with the default kronosnet transport, so multicast isn't the issue. It does, and it also does not support native link redundancy, IIRC.
  15. VictorSTS

    Cluster Issues

    Unless you set it manually, or this is a cluster that has been upgraded since ancient times, your PVE is using unicast. PVE has not used multicast since PVE 4.x IIRC, when Corosync 3.x was introduced with unicast kronosnet. Post your /etc/pve/corosync.conf from each node and make 100%...
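For reference, a corosync.conf generated by a current PVE looks roughly like the fragment below (cluster name, node names, and addresses are made up). The relevant details are `transport: knet` and the per-node unicast `ring0_addr` entries, with no multicast configuration anywhere:

```
totem {
  cluster_name: demo-cluster
  config_version: 3
  interface {
    linknumber: 0
  }
  ip_version: ipv4-6
  secauth: on
  transport: knet
  version: 2
}

nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 192.0.2.11
  }
  # one node { } block per cluster member follows
}
```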
  16. VictorSTS

    2025 / PVE9.x / WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! [analysis, resolution]

    Sorry, but to me it's unclear what the problem is, how to reproduce it / when it happens, and what's different in your setup from a standard PVE installation, where there's no issue like those you seem to describe. IIUC, you use/need some custom SSH settings that do not seem compatible with PVE...
  17. VictorSTS

    Physical server full backup

    If you don't mind powering off the source system, this [1] may help. I haven't tried it yet, so I can't really say how well it works for backup and, especially, for restore. [1] https://www.apalrd.net/posts/2024/pbs_image/
  18. VictorSTS

    Snapshot deletion too slow

    As much of the performance of your SAN/network as you want to devote to a saferemove operation without impacting other operations ;) I don't see that happening any time soon unless some sort of cluster-aware filesystem with thin provisioning gets properly implemented on Linux. I.e. a VMware VMFS...
  19. VictorSTS

    Snapshot as volume chain for file level storage, use cases?

    Hello, PVE 9 added "Support for snapshots as volume chains on Directory/NFS/CIFS storages (technology preview)". On any file level storage, like a directory or an NFS, you can use QCOW2 format for the disk(s) of your VMs, which already provides snapshot functionality and in some aspects in a...
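Independently of PVE's volume-chain preview, plain qcow2 already offers the snapshot functionality mentioned above, both as internal snapshots and as backing-file chains. An illustrative sketch with qemu-img (file names, sizes, and snapshot names are made up):

```shell
# Internal snapshots live inside the qcow2 file itself:
qemu-img create -f qcow2 disk.qcow2 32G
qemu-img snapshot -c before-upgrade disk.qcow2   # create a snapshot
qemu-img snapshot -l disk.qcow2                  # list snapshots

# A volume chain instead stacks a writable overlay on a read-only base:
qemu-img create -f qcow2 -b disk.qcow2 -F qcow2 snap1.qcow2
```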
  20. VictorSTS

    Snapshot deletion too slow

    The problem is that saferemove is throttled by default to a whopping 10 MBytes/s. If you check with ps -ef while a snapshot is being removed, you'll see a cstream process zeroing the snapshot volume. Until the very welcome improvement of using discard gets published, use this to increase the...
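As a sketch of where that throttle can be raised (storage name, VG name, and the value are placeholders; `saferemove_throughput` is passed to cstream, so check the expected units against the pvesm docs for your version), an LVM entry in /etc/pve/storage.cfg might look like:

```
lvm: san-lvm
        vgname vg_san
        content images
        shared 1
        saferemove 1
        saferemove_throughput 209715200
```

Here 209715200 bytes/s would be roughly 200 MB/s instead of the default ~10 MB/s.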