Search results

  1. VictorSTS

    Suggestions for low cost HA production setup in small company

    There is RSTP [1] Maybe, but it does allow using both links simultaneously, while with RSTP only one link is in use and the other is fallback only. Which you should have anyway, connected to two switches with MLAG/stacking to avoid the network being an SPOF. But yes, you would need 4 NICs per host...
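
    As an illustration of that dual-link setup, a minimal sketch of a bond in a Debian/PVE /etc/network/interfaces; interface names, addresses and the bridge are assumptions, not taken from the post:

        auto bond0
        iface bond0 inet manual
            bond-slaves eno1 eno2
            bond-mode active-backup
            bond-miimon 100
            # with MLAG-capable switches, 802.3ad (LACP) can use both links at once

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10/24
            gateway 192.168.1.1
            bridge-ports bond0
            bridge-stp off
            bridge-fd 0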
  2. VictorSTS

    Suggestions for low cost HA production setup in small company

    If Ceph doesn't let you write, it is because some PG(s) don't have enough OSDs to fulfill the size/min_size set on a pool. In a 3-host Ceph cluster, for that to happen you either have to: Lose 2 hosts: you won't have quorum on either Ceph or PVE and your VMs won't work until at least one host...
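
    For reference, pool replication settings can be checked and adjusted with commands like these (the pool name vm-pool is only an example):

        ceph osd pool get vm-pool size
        ceph osd pool get vm-pool min_size
        ceph osd pool set vm-pool min_size 2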
  3. VictorSTS

    Proxmox Ceph Performance

    That data means little if you don't post the exact fio test you ran. AFAIR, the benchmark that Ceph runs is a 4k write bench to find out the IOPS capacity of the drive. You should bench that with fio. Also, I would run the same bench on a host/disk that seems to provide proper performance and...
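
    A hedged sketch of the kind of 4k sync-write fio run meant here; the target path and runtime are placeholders, and pointing it at a raw device destroys its data:

        fio --name=4k-syncwrite --filename=/dev/sdX --direct=1 --sync=1 \
            --ioengine=libaio --rw=write --bs=4k --iodepth=1 --numjobs=1 \
            --runtime=60 --time_based --group_reporting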
  4. VictorSTS

    Proxmox Ceph Performance

    Tell Ceph to benchmark those drives again on OSD start and restart the service when appropriate: ceph config set osd osd_mclock_force_run_benchmark_on_init true There's also another ceph tell-like command to run a benchmark right now, but I don't remember it and it may also be affected by real...
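
    The on-demand benchmark referred to is presumably ceph tell osd.<id> bench (treat the exact command as an assumption); combined with the quoted setting:

        ceph config set osd osd_mclock_force_run_benchmark_on_init true
        systemctl restart ceph-osd@<id>.service
        # or benchmark a running OSD right away (assumed command):
        ceph tell osd.<id> bench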
  5. VictorSTS

    PVE is killing my WinServer2025 VMs

    Those PVE logs only show that PVE is removing the network interfaces related to VMIDs 101 and 106. Check the event log inside the VM. I have some Win2025 test VMs running 24x7 both on PVE8.4 and PVE9 without such an issue. Although it doesn't seem to be your case, triple-check there are no OOM events with...
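
    A quick way to check the host for OOM kills, as suggested (plain kernel log queries, nothing PVE-specific):

        journalctl -k | grep -i -E 'out of memory|oom-kill'
        dmesg | grep -i oom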
  6. VictorSTS

    Windows Server 2025

    Windows does that (assigns an APIPA address) when the IP is already in use somewhere in the network and some device replies to the ARP probe for the address you've entered in the configuration.
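
    The duplicate-address theory can be confirmed from a Linux host on the same segment with arping; the interface and address below are placeholders:

        arping -D -I vmbr0 -c 3 192.168.1.50
        # any reply means another device already answers for that address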
  7. VictorSTS

    Proxmox VE 9 existing ZFS vdev expansion?

    I may be missing something here, but keep in mind that files != zvol: you can use neither the script nor zfs-rewrite to make VM disk(s) "move" to use the newly added vdev. It would work for something like a PBS datastore. These options could work: Rewriting data inside the VMs VM...
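
    One hedged illustration of the "move the disk" approach (command form, VMID, disk and storage names are assumptions): moving a disk to another storage and back rewrites the zvol and lets ZFS spread it over all vdevs:

        qm disk move 100 scsi0 other-storage --delete 1
        qm disk move 100 scsi0 local-zfs --delete 1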
  8. VictorSTS

    Suggestions for low cost HA production setup in small company

    I don't agree: you have 3 copies of your data, you have host HA, there is no SPOF and you can easily grow if/when needed. With proper sizing you can even tolerate the loss of some OSDs in any host and still allow Ceph to self-heal. If you lose a host, everything will still work, albeit with...
  9. VictorSTS

    Resource pool inside resource pool

    Looks like I posted almost at the same time as @dcsapak. Maybe you or @mariol could take a look at the official documentation and mention how to nest pools and that they support permission inheritance. I've been unable to find that information in the docs.
  10. VictorSTS

    Resource pool inside resource pool

    Since PVE 8.1 [1] (section "Access control") you can have 3 levels of nested resource pools and apply permissions with inheritance if you use "propagate". I think this is what you are looking for. Unfortunately, I haven't found that in the manual [2]. The syntax uses a slash to create the nested...
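
    If it is indeed the slash syntax, a hedged sketch using pveum (the exact subcommand form and pool names are assumptions):

        pveum pool add customerA
        pveum pool add customerA/projectX
        pveum acl modify /pool/customerA --roles PVEVMUser --users alice@pve --propagate 1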
  11. VictorSTS

    Suggestions for low cost HA production setup in small company

    For me, Option A: 2-node + QDevice with Ceph is the worst idea ever (as explained above), Option D: 3-node with ZFS replication makes no sense given the other options, and Option E: 2-node + QDevice with some other clustered iSCSI SAN is a no-go due to the SAN becoming an SPOF, the lack of a supported...
  12. VictorSTS

    Proxmox 9, Ceph Squid, and PERC H755

    Placing them in RAID 0 disables passthrough and probably removes the partition headers of the drives and/or hides them from the host. Without a more precise answer to "What happens exactly?" it's all guessing...
  13. VictorSTS

    Proxmox 9, Ceph Squid, and PERC H755

    It isn't supported. A RAID 0 disk isn't supported either. Your best bet is to change the controller personality to IT mode, if possible. What happens exactly? There has been no change regarding this. My bet is that your drives had some kind of partition signature and they simply didn't show in...
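
    One way to check for and clear such leftover signatures, assuming the drives hold no data you need (both wipe commands below are destructive):

        lsblk -o NAME,SIZE,FSTYPE,MODEL
        wipefs --all /dev/sdX
        ceph-volume lvm zap /dev/sdX --destroy   # also clears LVM/Ceph metadata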
  14. VictorSTS

    Missing chunks again

    Extending on the previous reply, what GC does is: Phase 1: read the index of each backup, which contains the list of chunks used by that backup snapshot. Then it updates the access time on each chunk of that backup snapshot. It does so with each backup snapshot in that datastore (there's only one GC job...
  15. VictorSTS

    Missing chunks again

    You won't be able to recover space from expired backups and your datastore will eventually become full. GC must work for PBS to behave as it is designed. @Chris, would it be possible to implement an alternate GC that uses the modification or creation timestamp instead of the access timestamp? In my tests...
  16. VictorSTS

    Missing chunks again

    FWIW, a couple of weeks ago I tried to use a Dell DD3300, quite similar to the OP's EMC Data Domain storage, and it refused to update access times via either NFS or CIFS. In my case, PBS 4 did show an error during datastore creation and refused to create the datastore. When testing with stat + touch...
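
    A minimal form of that stat + touch check on the mounted share (the path is a placeholder); if the access time does not change after touch -a, the storage is ignoring atime updates:

        stat -c 'atime: %x' /mnt/datastore/testfile
        touch -a /mnt/datastore/testfile
        stat -c 'atime: %x' /mnt/datastore/testfile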
  17. VictorSTS

    Cluster Issues

    Dunno, and it's impossible to guess without logs and full configs. Given that it works well for you with a different, UDP and unsigned, transport, it seems that your switch/network misbehaves with the standard kronosnet transport (encrypted). Sorry for not being of better help this time.
  18. VictorSTS

    CEPH installation is failing due to a version mismatch

    There is no such thing. The only supported version on PVE 9 is Ceph Squid. If this is a cluster upgraded from PVE8, you should have updated the repos and Ceph to Squid [1]. Your package list shows packages at version 19.2.*, not Quincy ones (17.2.*). Maybe you have both repositories configured? I've...
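
    To see which Ceph repository and package versions are actually configured, something along these lines; the repo line shown is the no-subscription variant and only an example:

        grep -r ceph /etc/apt/sources.list /etc/apt/sources.list.d/
        apt policy ceph-common
        ceph versions
        # a PVE 9 Squid repo entry is expected to look roughly like:
        # deb http://download.proxmox.com/debian/ceph-squid trixie no-subscription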
  19. VictorSTS

    Cluster Issues

    Again, Corosync does not use multicast with the default kronosnet transport; multicast isn't the issue. It does, and it also does not support native link redundancy IIRC.
  20. VictorSTS

    Cluster Issues

    Unless you set it manually, or this is a cluster that has been upgraded since ancient times, your PVE is using unicast. PVE has not used multicast since PVE 6.x IIRC, when Corosync 3.x was introduced with the use of unicast kronosnet. Post your /etc/pve/corosync.conf from each node and make 100%...
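
    For reference, the totem section of a current /etc/pve/corosync.conf looks roughly like this (cluster name and values are placeholders); with no explicit transport line, Corosync 3 defaults to kronosnet over unicast UDP:

        totem {
          cluster_name: mycluster
          config_version: 4
          interface {
            linknumber: 0
          }
          ip_version: ipv4-6
          link_mode: passive
          secauth: on
          version: 2
        }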