Recent content by VictorSTS

  1. VictorSTS

    Ceph squid OSD crash related to RocksDB ceph_assert(cut_off == p->length)

    Thanks for the heads up. Pretty sure most were created with Ceph Reef, except a few that were recreated recently with Squid 19.2.3. I'm aware of that bug, but given that I don't use EC pools (the Ceph bug report mentions it seems to only happen on OSDs that hold EC pools) I never really paid attention...
  2. VictorSTS

    Ceph squid OSD crash related to RocksDB ceph_assert(cut_off == p->length)

    @fstrankowski I'm fully aware of the risks of an OSD being full and know how to deal with that, but in no case should an OSD break because of that ;) Fragmentation definitely has an impact on this, and I will watch it more closely from now on. Anyway, I'm expecting new servers for this cluster...
  3. VictorSTS

    Ceph squid OSD crash related to RocksDB ceph_assert(cut_off == p->length)

    In short: if Ceph warns you about something, do something about it. I read the full bug report and found this comment [1]: "This issue seems to mostly affect disks which were heavily fragmented.". Mine are, and in fact I have some warnings related to this, although the web UI doesn't show them...
  4. VictorSTS

    Ceph squid OSD crash related to RocksDB ceph_assert(cut_off == p->length)

    PVE8.4.14 + Ceph 19.2.3, 3-node cluster. All disks are PCIe NVMe. Different pools, some with zstd compression enabled. I'm seeing OSDs crashing lately with the same failure. The journal shows that it is unable to properly run RocksDB, with an assert message. There are a few entries like these every...
  5. VictorSTS

    I want nothing more than encrypted push sync

    @Zappes please link the Bugzilla you mention here [1]. I would love to be able to pull (not push) encrypted syncs where the source is unencrypted for whatever reason and the destination PBS must store encrypted backups. [1]...
  6. VictorSTS

    Questions about PBS local sync

    Say I have a namespace called "PVE" where the PVE cluster stores its backups. In the same datastore, I have another namespace called "DELETED". When a VM is deleted from the PVE cluster, I move its backups from namespace "PVE" to "DELETED" in order to keep them there for some time. To make that...
  7. VictorSTS

    Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    Just follow the docs: https://pve.proxmox.com/wiki/Ceph_Squid_to_Tentacle
  8. VictorSTS

    Nested PVE (on PVE host) Kernel panic Host injected async #PF in kernel mode

    Seeing this issue with nested PVE on PVE on an EPYC 9124 CPU. The host has swap and KSM enabled, although it has plenty of free memory (100GB of 512GB). This cluster currently runs PVE8.4.14 and runs other workloads too besides nested PVE VMs, with Debian, Ubuntu and Windows Server guest OSes. Only the...
  9. VictorSTS

    Backup taken on a last day of a month as monthly backup

    I've already discussed this here [1]. When I really need to keep the last backup of the last day of the month, I use this on PBS: create a namespace "MONTHLY", then create a daily sync job that runs "if day is 28 to 31 of each month at 15:00": *-28..31 15:00. That will store a few of the...
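    The schedule above works because the last day of any month always falls between the 28th and the 31st, so a job matching days 28..31 is guaranteed to fire on the month's last day. A minimal Python sketch of that reasoning (the year is chosen arbitrarily for illustration):

    ```python
    import calendar

    # The systemd calendar spec '*-28..31 15:00' fires on days 28..31 of
    # every month. Every month's last day falls in 28..31, so the final
    # run in that window always lands on the last day of the month.
    last_days = {m: calendar.monthrange(2024, m)[1] for m in range(1, 13)}
    assert all(28 <= d <= 31 for d in last_days.values())
    print(sorted(set(last_days.values())))  # 2024 is a leap year: [29, 30, 31]
    ```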
  10. VictorSTS

    [noVNC] Endless loading spinner since Firefox 147.0.3

    It works OK for me on v147.0.2 and v147.0.3, both on Linux, accessing noVNC from PVE9.0.x, PVE9.1.x and PVE8.x. Try it from a VM to rule out some issue with the cache or whatever on your current PC.
  11. VictorSTS

    Backup migration between different namespaces

    As stated above, it can be done with sync jobs plus manual deletion from the source namespace. Currently, local sync jobs only allow syncing between different datastores (I'm still wondering why). You will have to add that PBS itself as a remote so you can copy snapshots between namespaces of the...
  12. VictorSTS

    Network interface pinning inconsistencies: ISO installer vs pve-network-interface-pinning generate

    Thanks, but that's unrelated to the issue I described. The problem is that I end up with two .link files for nic0, because pve-network-interface-pinning doesn't recognize that there's already a pinned name, due to a different .link file naming scheme. Both /usr/local/lib/systemd/network/50-pmx-nic0.link and...
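    For context, a NIC pin is just a systemd .link file that matches the interface (typically by MAC address) and forces a stable name. A minimal sketch of what such a file contains (the MAC address is illustrative, not taken from the post):

    ```ini
    # e.g. /usr/local/lib/systemd/network/50-pmx-nic0.link (illustrative path)
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff

    [Link]
    Name=nic0
    ```

    Two such files matching the same NIC but written under different file-naming schemes is exactly the duplicate situation described above.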
  13. VictorSTS

    proxmox ceph performance with consumer grade samsung ssd

    Good catch!! My brain stopped processing as soon as my eyes spotted the "870 QVO SSDs", which happened before reading about the 2x 4TB per node o_O
  14. VictorSTS

    proxmox ceph performance with consumer grade samsung ssd

    Yes, they cost more and will get really expensive in the coming months, but second-hand SATA/SAS drives are easy to find and not that costly. In the long run they end up being cheaper, as they don't degrade as fast as consumer ones, so you won't need to replace them as often. That depends on your workload...
  15. VictorSTS

    Is a 3-node Full Mesh Setup For Ceph and Corosync Good or Bad

    Don't want to start an argument here, but whoever told you that has little idea of what a PVE Ceph mesh cluster is. Linux kernel routing may use something like 0.1% of CPU, and FRR may use around 3% CPU while converging or during node boot, for a few seconds. If we follow the same reasoning, hyper-converged...