Recent content by VictorSTS

  1. VictorSTS

    Backup migration between different namespaces

As stated above, it can be done with sync jobs + manual deletion from the source namespace. Currently, local sync jobs only allow syncing between different datastores (I'm still wondering why). You will have to add the PBS itself as a remote so you can copy snapshots between namespaces of the...
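A rough sketch of that workaround, assuming the PBS is added as a remote pointing at itself; the remote name, credentials, fingerprint, datastore and namespace names below are all placeholders, and option names may vary between PBS versions:

```shell
# Add this PBS itself as a remote (host, auth-id, password and
# fingerprint are placeholders for your own values).
proxmox-backup-manager remote create self \
    --host 127.0.0.1 \
    --auth-id 'sync@pbs' \
    --password 'REDACTED' \
    --fingerprint 'AA:BB:CC:...'

# Create a sync job that pulls snapshots from one namespace of the
# "remote" (this same PBS) into another namespace of the same datastore.
proxmox-backup-manager sync-job create ns-copy \
    --store mystore --ns target-ns \
    --remote self --remote-store mystore --remote-ns source-ns
```

After the job has run (from the GUI or on its schedule), the snapshots still have to be deleted from the source namespace manually, as noted above.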
  2. VictorSTS

    Network interface pinning inconsistencies: ISO installer vs pve-network-interface-pinning generate

Thanks, but that's unrelated to the issue I described. The problem is that I end up with two .link files for nic0, because pve-network-interface-pinning doesn't recognize that there's already a pinned name due to a different .link file naming scheme. Both /usr/local/lib/systemd/network/50-pmx-nic0.link and...
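For context, a systemd .link pin file such as the 50-pmx-nic0.link mentioned above typically looks like this (the MAC address is a placeholder); two such files matching the same NIC under different file names is what produces the duplicate-pin situation described:

```
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=nic0
```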
  3. VictorSTS

    proxmox ceph performance with consumer grade samsung ssd

Good catch!! My brain stopped processing as soon as my eyes spotted the "870 QVO SSDs", which happened before reading about the 2x4TB per node o_O
  4. VictorSTS

    proxmox ceph performance with consumer grade samsung ssd

Yes, they cost more and will get really expensive in the coming months, but second-hand SATA/SAS drives are easy to find and not that costly. In the long run they end up being cheaper, as they don't degrade as fast as consumer ones, so you won't need to replace them as often. That depends on your workload...
  5. VictorSTS

    Is a 3-node Full Mesh Setup For Ceph and Corosync Good or Bad

Don't want to start an argument here, but whoever told you that has little idea of what a PVE Ceph mesh cluster is. Linux kernel routing may use around 0.1% of CPU, and FRR may use around 3% CPU while converging or during node boot for a few seconds. If we follow the same reasoning, hyper converged...
  6. VictorSTS

    Using LDAP with RBAC groups ( nested Ad groups )

When I need to, I use an AD backend and filter by groups that I create specifically to manage PVE privileges. I've never had the need to use nested groups, as the environments where I've used this were not big enough to justify nesting groups or not creating groups for PVE. I understand that nesting groups...
  7. VictorSTS

    proxmox ceph performance with consumer grade samsung ssd

For once, AI is right :) Any consumer drive will have low Ceph performance due to RocksDB and sync writes, but those drives in particular are terrible for anything but PC archiving purposes due to their small SLC cache and very slow QLC NAND chips. It's hard to get more than ~40 MBytes/s from each...
  8. VictorSTS

    Network interface pinning inconsistencies: ISO installer vs pve-network-interface-pinning generate

This lab is using PVE 9.1.4, no subscription (although I don't think there's anything different in the Enterprise repo in this regard). Using nested PVE to test Ceph configs, etc. When installing the nested PVE with the ISO, I chose to pin the network interfaces, so a config file is created at...
  9. VictorSTS

    Using LDAP with RBAC groups ( nested Ad groups )

Try to post in the bug report itself. It shows as "ASSIGNED", but the PVE version is still "6", so it may get lower attention.
  10. VictorSTS

    Live migration failure: Unknown savevm section or instance 'dbus-vmstate/dbus-vmstate' 0

Hello, I've had an issue with one, single, live migration of a VM. This VM has been live migrated a few times before without issues, both from and to this same host. Many other VMs live migrate without issues (we've done 1000+ live migrations in this cluster already). My googling hasn't...
  11. VictorSTS

    Update ceph Quincy

Never ever use apt upgrade on PVE: always use apt dist-upgrade or its synonym apt full-upgrade, as detailed in the docs you linked. That said, if you follow those steps apt will update all packages, not just the Ceph ones, which isn't what OP asked for. An apt install ceph-common should be enough...
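A sketch of the difference, assuming the goal is updating only the Ceph packages:

```shell
# Full PVE upgrade: updates ALL packages and honours changed dependencies.
apt update
apt full-upgrade        # synonym of: apt dist-upgrade

# Updating only Ceph: installing the Ceph packages again pulls in their
# new versions from the repo without touching unrelated packages.
apt update
apt install ceph-common
```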
  12. VictorSTS

    Cluster with different number of links

Although I would set up two clusters, if you really want one cluster, just set up corosync links in VLANs and place said VLANs on the available physical links on each host. It doesn't make sense for those "remote" nodes, as it won't provide any real benefit, but it won't hurt, and at least the local nodes...
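As an illustration (node names, VLAN assignments and subnets are made up), the corosync.conf nodelist for such a setup carries one ring per VLAN, with each VLAN mapped onto whatever physical link each host actually has:

```
nodelist {
  node {
    name: pve-local-1
    nodeid: 1
    ring0_addr: 10.10.0.1   # VLAN for link0, on the fast local NIC
    ring1_addr: 10.20.0.1   # VLAN for link1, on a second local NIC
  }
  node {
    name: pve-remote-1
    nodeid: 2
    ring0_addr: 10.10.0.2   # same VLANs, both carried on the single
    ring1_addr: 10.20.0.2   # physical link the remote node has
  }
}
```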
  13. VictorSTS

    Slow ceph operation

Nice to see this reaching the official documentation! Maybe OP did set up a VLAN for the Ceph Public network with a different IP network from that of other cluster services and can just move the VLAN to a different physical NIC/bond. Did you @kacper.adrianowicz ? If you didn't, you could do it now ;)
  14. VictorSTS

    Slow ceph operation

Ceph Public is the network used to read/write from/to your Ceph OSDs from each PVE host, so you are limited to 1Gb/s. The Ceph Cluster network is used for OSD replication traffic only. Move Ceph Public to your 10Gb NIC and there should be an improvement. You can share the same NIC for both Ceph...
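A sketch of the relevant ceph.conf fragment, with placeholder subnets: putting public_network on the subnet that rides the 10Gb NIC moves the client/OSD traffic off the 1Gb link.

```
[global]
    # Placeholder subnets: 10.10.10.0/24 is on the 10Gb NIC.
    public_network  = 10.10.10.0/24   # PVE host <-> OSD traffic
    cluster_network = 10.10.20.0/24   # OSD <-> OSD replication only
```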
  15. VictorSTS

    [SOLVED] Weird RAID configurations for redundancy

You can do this with ZFS, albeit manually (not from the webUI). You could also create two 5-way mirrors, then create a RAID0 with those two vdevs. Something like choosing striping "vertically" or "horizontally". No idea how it would perform speed-wise. I'll let you decide if...
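A minimal sketch of the manual approach, assuming ten disks /dev/sda through /dev/sdj and a pool named tank; ZFS stripes across top-level vdevs automatically, so listing two mirror vdevs gives exactly the mirror+stripe layout described:

```shell
# Two 5-way mirror vdevs; the pool stripes writes across both vdevs.
zpool create tank \
    mirror /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde \
    mirror /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj

# Verify the resulting layout.
zpool status tank
```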