VictorSTS's latest activity

  • VictorSTS
    Do you have numbers on what the performance should be? Without them, you can't decide which VMs are "non latency intensive". Ceph isn't slow by any means, but of course you have the added latency and capacity limit of the network. How much that...
  • VictorSTS
    VictorSTS replied to the thread nofsfreez: 1.
    The original issue with QEMU Agent fsfreeze was that it notified VSS about the backup and all applications subscribed to VSS would prepare for it. In the case of SQL Server, it wrongly understood that it had to trim the log and thus broke the...
  • VictorSTS
    Space will be preallocated (that is, thin provisioning will be lost) on any non-shared storage if you live migrate the VM due to the fact that QEMU needs to set the source disk in "mirror" state so every write done to the source disk is written...
  • VictorSTS
    Ceph docs recommendations are based on simplicity of deployment and the fact that in a pure Ceph cluster you will have dozens or more servers contributing to the overall cluster network capacity. In a typical PVE+Ceph cluster you usually have a...
  • VictorSTS
VictorSTS reacted to Johannes S's post in the thread Ceph/RAID 5 in a small homelab with Like.
    Hello, as you already noted Ceph in a small homelab opens a whole can of worms: https://forum.proxmox.com/threads/fabu-can-i-use-ceph-in-a-_very_-small-cluster.159671/ With your current hardware you have basically following options: - Build a...
  • VictorSTS
I suggest you open a new thread and provide as much information as possible (pveversion -v, qm config VMID, etc). Even if your problem shows similar symptoms, it probably isn't related, as this one got solved in a 6.2 kernel released long ago. The...
  • VictorSTS
VictorSTS reacted to martin's post in the thread New Proxmox Hosting Partner - partimus with Like.
    We're excited to welcome partimus, a hosting provider from Germany, as our newest official Proxmox Hosting Partner. Partimus is part of the primeline group, together with the primeLine Solutions GmbH, a longstanding Proxmox Gold Partner. Proxmox...
  • VictorSTS
    If you want a supported configuration, use /etc/network/interfaces as currently it's the only supported way to configure the network, not just for the GUI but for other functionalities like Cluster deployment. IMHO you should adapt the tool...
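A minimal sketch of what such a supported /etc/network/interfaces configuration could look like (interface names and addresses below are placeholders, not values from the thread):

```
# /etc/network/interfaces -- hypothetical minimal PVE layout
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```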
  • VictorSTS
Appreciate the effort, but sharing this kind of script has its risks. You can get similar apt errors ("attempting to remove proxmox-ve package") for many different reasons. As mentioned, this will never happen if you use the correct PVE...
  • VictorSTS
VictorSTS reacted to davids01's post in the thread Ceph pve hyperconverged networking with Like.
    There are single points of failure in this setup and at the moment it is an accepted risk
  • VictorSTS
Slight offtopic (and I might be missing something): if you use a single switch, your cluster will have very reduced availability because the switch is a SPOF. Same with that breakout cable (SFP cables can fail too).
  • VictorSTS
    That drive is dying in a quite peculiar way, although I've seen other weird behaviors like that. Simply backup all data, buy a new drive and ditch the old one. I wouldn't use it for anything besides practicing with broken drives in a lab. At the...
  • VictorSTS
Keep in mind that in a 2-node cluster, if one node loses quorum, the other one will lose it too, as it won't have a majority of votes (it will have just 1 vote, which is exactly 50% of the 2 votes total). A 2 node cluster + HA will not provide any...
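The quorum arithmetic can be sketched as follows; `has_quorum` is a hypothetical helper illustrating the strict-majority rule corosync applies, not an actual API:

```python
def has_quorum(votes_present: int, votes_total: int) -> bool:
    """A partition is quorate only with a strict majority of votes,
    i.e. strictly more than half -- exactly half is not enough."""
    return votes_present > votes_total // 2

# 2-node cluster: the surviving node holds 1 of 2 votes (exactly 50%).
print(has_quorum(1, 2))  # False: no majority, so no quorum
# 3-node cluster: two survivors hold 2 of 3 votes.
print(has_quorum(2, 3))  # True
```

This is why a third vote (another node or a QDevice) is needed before HA makes sense.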
  • VictorSTS
To me it seems that the drive that ends up in DEGRADED state is dying in some funky way that causes the behavior you see. I would make sure you have a backup, remove the failing drive, connect a new one and use zpool replace to resilver it. You could...
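The replacement procedure might look like this; "tank" and the device paths are placeholders for your pool and disks:

```shell
# 1. Confirm which device is degraded
zpool status tank

# 2. Replace the failing disk with the new one; resilvering starts automatically
zpool replace tank /dev/disk/by-id/OLD-FAILING-DISK /dev/disk/by-id/NEW-DISK

# 3. Watch resilver progress until the pool returns to ONLINE
zpool status -v tank
```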
  • VictorSTS
    Maybe same symptoms, but certainly a different root cause as this got sorted out in kernel 6.8, which is the default in PVE 8.4.1
  • VictorSTS
    VictorSTS replied to the thread Problem with time within vms.
I would use QEMU Agent hook scripts instead, so you can run inside the VM whichever time sync command you need when the filesystem is thawed. Some details on [1] and [2]. Out of curiosity: which DB is it? Using Percona, MySQL GTID replication...
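A sketch of such a guest-side hook, assuming the common fsfreeze-hook mechanism (often /etc/qemu/fsfreeze-hook inside the VM; the path and the chronyc call are assumptions, adapt to your distro and time daemon):

```shell
#!/bin/sh
# Called by the QEMU guest agent with "freeze" before the backup
# snapshot and "thaw" after it.
case "$1" in
    freeze)
        # nothing needed before the filesystems are frozen
        ;;
    thaw)
        # step the clock once the snapshot is done; chronyc is an
        # assumption -- use whichever time sync command you rely on
        chronyc makestep || true
        ;;
esac
```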
  • VictorSTS
    https://pve.proxmox.com/wiki/Proxmox_Datacenter_Manager_Roadmap You'll run into issues eventually
  • VictorSTS
VictorSTS reacted to shanreich's post in the thread Load balancing and redundancy with Like.
    If you're going with a routed setup via Openfabric / OSPF, then no bonds should be required - they're probably even detrimental to the whole setup. FRR supports ECMP, so just adding multiple interfaces to the same Openfabric router should already...
  • VictorSTS
    VictorSTS replied to the thread Memory usage graphic.
Feeling that I'm going to repeat myself a bit too much :), but... that is showing the configured RAM of the VM, not the used RAM. The green area will be drawn regardless of the power state of the VM or whether it has ever been powered on. If you power...
  • VictorSTS
HA acts locally on each host and will fence a host if the host loses quorum. To lose quorum, corosync on that host has to decide that neither link0 nor link1 is operating properly (NIC link down, switch down, too much jitter, too much...
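A sketch of how the two redundant links appear in corosync.conf (node name and addresses are placeholders); corosync monitors both and only declares quorum lost on a host when neither link works:

```
nodelist {
  node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.0.1   # link0 network
    ring1_addr: 10.20.0.1   # link1 network
  }
}
```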