VictorSTS's latest activity

  • VictorSTS
    IIUC that patch seems to apply to the verify tasks to improve their performance. If that's the case, words can't express how eager I am to test it once the patch gets packaged!
  • VictorSTS
    VictorSTS replied to the thread redundant qdevice network.
    This is wrong by design: your infrastructure devices must be 100% independent from your VMs. If your PVE hosts need to reach a remote QDevice, they must reach it on their own (i.e. run a wireguard tunnel on each PVE host). From the point of view...
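    As a rough sketch of that per-host tunnel idea (all names, keys, addresses and the 10.9.9.0/24 range are placeholders, not taken from the thread), each PVE host would carry its own WireGuard interface pointing at the remote QDevice:

        # /etc/wireguard/wg-qdevice.conf on each PVE host (example values only)
        [Interface]
        PrivateKey = <host-private-key>
        Address = 10.9.9.11/24

        [Peer]
        PublicKey = <qdevice-public-key>
        Endpoint = qdevice.example.net:51820
        AllowedIPs = 10.9.9.1/32
        PersistentKeepalive = 25

        # bring the tunnel up and enable it at boot
        wg-quick up wg-qdevice
        systemctl enable wg-quick@wg-qdevice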
  • VictorSTS
    IMHO, a final delete/discard should be done, too. If no discard is sent, it's left to the SAN to decide what to do with those zeroes, and depending on SAN capabilities (mainly thin provisioning, but also compression and deduplication) it may not free the...
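    For reference, issuing that final discard by hand looks roughly like this (the device and mountpoint below are placeholders; blkdiscard is destructive, so double-check the target first):

        # discard the whole block device so a thin-provisioned SAN can actually release the space
        blkdiscard /dev/mapper/my-san-lun
        # or, for a mounted filesystem, trim the free space instead
        fstrim -v /mnt/my-datastore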
  • VictorSTS
    It's explicitly explained in the PVE 8 to 9 documentation [1]: that package has been split in two on Trixie, hence systemd-boot isn't needed. I've upgraded some clusters already, removed that package when pve8to9 suggested it, and the boot up...
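    A minimal sketch of that sequence, assuming a host managed by proxmox-boot-tool (adjust to your own boot setup):

        # re-run the checker and confirm the warning about systemd-boot
        pve8to9 --full
        # remove the meta-package the checker flags; the split-out components stay installed
        apt remove systemd-boot
        # verify the bootloader entries are still in place before rebooting
        proxmox-boot-tool status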
  • VictorSTS
    VictorSTS replied to the thread Bug in wipe volume operation.
    I reported this already [1] and it is claimed to be fixed in PVE9.1 released today, although I haven't tested it yet. [1] https://bugzilla.proxmox.com/show_bug.cgi?id=6941
  • VictorSTS
    We're proud to present the next iteration of our Proxmox Virtual Environment platform. This new version 9.1 is the first point release since our major update and is dedicated to refinement. This release is based on Debian 13.2 "Trixie" but we're...
  • VictorSTS
    We recently uploaded the 6.17 kernel to our repositories. The current default kernel for the Proxmox VE 9 series is still 6.14, but 6.17 is now an option. We plan to use the 6.17 kernel as the new default for the Proxmox VE 9.1 release later in...
  • VictorSTS
    VictorSTS reacted to alexskysilk's post in the thread Proxmox with 48 nodes with Like.
    The number of cluster members is an inexact limit. The ACTUAL limit has to do with how much data the cluster members have to keep synchronized: if each of your cluster members had 400 VMs with continuous API traffic, your cluster would probably...
  • VictorSTS
    VictorSTS replied to the thread Proxmox with 48 nodes.
    You need two corosync links. For 12 nodes on gigabit I would use dedicated links for both, just in case, even if having it just for Link0 would be enough. Max I've got in production with gigabit corosync is 8 hosts, no problems at all.
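    As a sketch of what the two dedicated links look like when creating and joining the cluster (the cluster name, node roles and the 10.10.x.x addresses are placeholders):

        # create the cluster with two corosync links on separate NICs/switches
        pvecm create my-cluster --link0 10.10.0.1 --link1 10.10.1.1
        # each joining node passes its own address for both links
        pvecm add 10.10.0.1 --link0 10.10.0.2 --link1 10.10.1.2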
  • VictorSTS
    VictorSTS reacted to alexskysilk's post in the thread PVE 8.4.12 Data Hunt for Recovery with Like.
    Yes. You've already been given answers, you just don't like them. Reinstall and restore from backup. Fixing your install is more complicated and will require you to read documentation instead of just posting questions that are covered there.
  • VictorSTS
    Given the logs you posted, I would start by removing docker from that host (it's not officially supported) and not exposing critical services like ssh to the internet. You also mention "VNC", which makes me think maybe you installed PVE on top of...
  • VictorSTS
    There is RSTP [1] Maybe, but it does allow using both links simultaneously, while on RSTP only one is in use and the other is fallback only. Which you should have anyway, connected to two switches with MLAG/stacking to avoid the network being...
  • VictorSTS
    If Ceph doesn't let you write, it's because some PG(s) don't have enough OSDs to fulfill the size/min_size set on a pool. In a 3-host Ceph cluster, for that to happen you either have to: Lose 2 hosts: you won't have quorum on either Ceph or PVE...
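    To check what a given pool requires and which PGs are affected, something along these lines (the pool name is a placeholder):

        # show the replica count and the minimum replicas needed for I/O
        ceph osd pool get my-pool size
        ceph osd pool get my-pool min_size
        # list PGs that are currently stuck undersized
        ceph pg dump_stuck undersized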
  • VictorSTS
    VictorSTS replied to the thread Proxmox Ceph Performance.
    That data means little if you don't post the exact fio test you ran. AFAIR, the benchmark that Ceph does is a 4k write bench to find out the IOPS capacity of the drive. You should bench that with fio. Also, I would run the same bench on a...
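    A hedged example of a comparable fio run (the device path is a placeholder and the test writes to it directly, so only point it at a disk you can wipe):

        # 4k random-write test at queue depth 1, similar in spirit to the OSD bench
        fio --name=osd-bench --filename=/dev/sdX --rw=randwrite --bs=4k \
            --ioengine=libaio --direct=1 --iodepth=1 --numjobs=1 \
            --runtime=60 --time_based --group_reporting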
  • VictorSTS
    VictorSTS replied to the thread Proxmox Ceph Performance.
    Tell Ceph to benchmark those drives again on OSD start and restart the service when appropriate: ceph config set osd osd_mclock_force_run_benchmark_on_init true There's also another ceph tell-style command to run a benchmark right now, but I...
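    For completeness, the on-demand variant alluded to above is probably along these lines (the OSD id and option name are assumptions, treat this as a sketch rather than a verified recipe):

        # re-run the benchmark immediately on a single OSD
        ceph tell osd.0 bench
        # check the capacity value Ceph currently holds for that OSD
        ceph config show osd.0 osd_mclock_max_capacity_iops_ssd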
  • VictorSTS
    There is a new QEMU 10.1 package available in the pve-test and pve-no-subscription repositories for Proxmox VE 9. After internally testing QEMU 10.1 for over a month and having this version available on the pve-test repository almost as long, we...
  • VictorSTS
    Those PVE logs only show that PVE is removing the network interfaces related to VMIDs 101 and 106. Check the event log inside the VM. I have some Win2025 test VMs running 24x7 both on PVE8.4 and PVE9 without such an issue. Although it doesn't seem to be...
  • VictorSTS
    VictorSTS replied to the thread Windows Server 2025.
    Windows does that (assigns an APIPA address) when the IP is already in use somewhere in the network and some device replies to the ARP probe for the address you've entered in the configuration.
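    One way to confirm the conflict from another Linux box on the same segment (the interface and address are placeholders):

        # duplicate-address detection: reports a reply if some device already claims the IP
        arping -D -I eth0 -c 3 192.168.1.50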
  • VictorSTS
    I may be missing something here, but keep in mind that files != zvol: you can use neither the script nor zfs-rewrite to make VM disk(s) "move" to use the newly added vdev. It would work for something like a PBS datastore. These options could...
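    A quick way to see which of your datasets are plain filesystems (file-based, like a PBS datastore) versus zvols backing VM disks, assuming rpool is the pool name:

        # filesystems hold files; volumes are the zvols behind VM disks
        zfs list -r -t filesystem,volume -o name,type,used rpool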