Search results

  1. VictorSTS

    Constant CPU usage

    Unrelated to the CPU usage, which is indeed caused by proxmox-backup-api serving PVE storage status requests plus your little CPU, just a heads up: using RAID6 on BTRFS isn't a good idea as it is not stable [1] and even a badly timed power outage can corrupt metadata and make you lose data. [1]...
  2. VictorSTS

    iothread-vq-mapping support

    That same post, at the third comment, has a link to the Proxmox bugzilla where they are discussing the matter... [1] [1] https://bugzilla.proxmox.com/show_bug.cgi?id=6350
  3. VictorSTS

    iothread-vq-mapping support

    AFAIK it's in the works, as a forum search shows [1] [1] https://forum.proxmox.com/threads/feature-request-proxmox-9-0-iothread-vq-mapping.166919/
  4. VictorSTS

    Many Errors on Proxmox Hypervisor

    If you want to hide such messages, disable PCIe Advanced Error Reporting (PCIe AER) in your BIOS. Whatever hardware causes them will still cause them, but you won't see them in your logs. The downside is that uncorrectable errors, the bad ones, won't show up in your logs either... If you really...
  5. VictorSTS

    ProLiant DL360 Gen11 sas

    That makes no sense: 2 drives are the perfect RAID1 setup to install any OS. That would be the first mobo/controller in history that doesn't allow a RAID1 with two drives :)
  6. VictorSTS

    Shut down a VM when a different hosts shuts down

    This doesn't feel logical IMHO. If you end up starting the VM again on any of your surviving hosts, it will eventually use the same amount of memory it had before the shutdown, risking the OOM killer on the host it runs on. Maybe a simpler option could be to use memory ballooning for that...
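
    A minimal sketch of what enabling ballooning could look like for a single VM (the VMID 100 and the memory values are illustrative, not taken from the thread; the balloon driver must be present inside the guest for this to have any effect):

        qm set 100 --memory 8192 --balloon 4096   # guest can be shrunk down towards 4 GiB under host memory pressure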
  7. VictorSTS

    Ceph does not recover on second node failure after 10 minutes

    TLDR mon_osd_min_in_ratio is your friend [1] Long story By default it is 0.75, meaning that Ceph will not mark out a down OSD if ~25% of the OSDs are already marked out. That is, a minimum of 75% of the OSDs will remain in even if they are down, hence no recovery will happen. In your...
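
    A hedged sketch of how that setting could be inspected and lowered at runtime (the 0.5 value is only illustrative, pick one that matches your cluster size and failure domain):

        ceph config get mon mon_osd_min_in_ratio        # defaults to 0.75
        ceph config set mon mon_osd_min_in_ratio 0.5    # allow up to 50% of OSDs to be marked out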
  8. VictorSTS

    Disable fs-freeze on snapshot backups

    Just a note: that option has been exposed in the webUI since at least March 2023, with the release of PVE 7.4 (check the release notes [1]): [1] https://pve.proxmox.com/wiki/Roadmap#Proxmox_VE_7.4
  9. VictorSTS

    Symlink /etc/ceph/ceph.conf -> /etc/pve/ceph.conf not created automatically

    Hello, PVE8.4.1 cluster + Ceph Squid 19.2.1. I'm doing some hardware replacement to remove old nodes and adding new ones, so I added some of the new nodes to the cluster. Once in the cluster, from the webUI I deployed Ceph packages on 4 of the new nodes. Everything seems ok, but the symlink at...
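
    A possible manual workaround, assuming the usual PVE layout where the cluster-wide config lives in /etc/pve (run on each affected node and only if the symlink is really missing):

        ln -s /etc/pve/ceph.conf /etc/ceph/ceph.conf
        ls -l /etc/ceph/ceph.conf    # verify the symlink points at /etc/pve/ceph.conf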
  10. VictorSTS

    Proxmox always pre-allocates when migrating? (LVM to LVM)

    Do you have numbers on what the performance should be? Without them, you can't decide which VMs are "non latency intensive". Ceph isn't slow by any means, but of course you have the added latency and capacity limit of the network. How much that may affect real usage performance depends on many...
  11. VictorSTS

    nofsfreez: 1

    The original issue with QEMU Agent fsfreeze was that it notified VSS about the backup and all applications subscribed to VSS would prepare for it. In the case of SQL Server, it wrongly understood that it had to trim the log and thus broke the total/incremental/differential backup chain of SQL...
  12. VictorSTS

    Proxmox always pre-allocates when migrating? (LVM to LVM)

    Space will be preallocated (that is, thin provisioning will be lost) on any non-shared storage if you live migrate the VM due to the fact that QEMU needs to set the source disk in "mirror" state so every write done to the source disk is written synchronously to the destination disk too. That...
  13. VictorSTS

    Ceph pve hyperconverged networking

    Ceph docs recommendations are based on simplicity of deployment and the fact that in a pure Ceph cluster you will have dozens or more servers contributing to the overall cluster network capacity. In a typical PVE+Ceph cluster you usually have a few nodes, so less overall network cluster...
  14. VictorSTS

    [SOLVED] VMs freeze with 100% CPU

    I suggest you open a new thread and provide as much information as possible (pveversion -v, qm config VMID, etc). Even if your problem shows similar symptoms, it probably isn't related, as this one got solved in a 6.2 kernel released long ago. The current kernel is 6.8, with even newer versions...
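
    The information usually asked for in a new thread can be gathered with something like the following (VMID is a placeholder for the affected VM):

        pveversion -v        # versions of the whole PVE package stack
        qm config <VMID>     # configuration of the affected VM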
  15. VictorSTS

    Configuring Proxmox VE using only netplan

    If you want a supported configuration, use /etc/network/interfaces as currently it's the only supported way to configure the network, not just for the GUI but for other functionalities like Cluster deployment. IMHO you should adapt the tool (ansible) to the application (pve) and not the other...
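
    For reference, a minimal /etc/network/interfaces sketch for a single-NIC PVE node (interface name, address and gateway are illustrative, not from the thread):

        auto lo
        iface lo inet loopback

        iface eno1 inet manual

        auto vmbr0
        iface vmbr0 inet static
            address 192.168.1.10/24
            gateway 192.168.1.1
            bridge-ports eno1
            bridge-stp off
            bridge-fd 0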
  16. VictorSTS

    Upgrade Warning: Prevent proxmox-ve Removal, Firmware Conflicts, and Broken Kernels — Full Explanation and Safe Script

    Appreciate the effort, but giving this kind of script has its risks. You can receive similar apt errors ("attempting to remove proxmox-ve package") for many different reasons. As mentioned, this will never happen if you use the correct PVE repositories and follow the installation instructions...
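
    For context, the supported no-subscription repository entry looks roughly like this (assuming PVE 8 on Debian Bookworm; the file path is just a common choice and the suite must match your release):

        # /etc/apt/sources.list.d/pve-no-subscription.list
        deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription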
  17. VictorSTS

    Ceph pve hyperconverged networking

    Slight offtopic (and I might be missing something): if you use a single switch, your cluster will have very reduced availability due to the switch being a SPOF. Same with that breakout cable (SFP cables can fail too).
  18. VictorSTS

    Scrub won't complete on degraded ZFS pool

    That drive is dying in a quite peculiar way, although I've seen other weird behaviors like that. Simply backup all data, buy a new drive and ditch the old one. I wouldn't use it for anything besides practicing with broken drives in a lab. At the very least, use a mirror of two drives (RAID1)...
  19. VictorSTS

    [TOTEM ] Retransmit List ... causing entire HA cluster to reboot unexpectedly.

    Keep in mind that in a 2-node cluster, if one node loses quorum, the other one will lose it too, as it won't have a majority of votes (it will have just 1 vote, which is exactly 50% of the 2 total votes). A 2-node cluster + HA will not provide any redundancy/resiliency at all. At the very least, add a QDevice...
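
    A rough sketch of adding a QDevice to a 2-node cluster (the external host's IP is a placeholder; that host only needs to run corosync-qnetd and does not have to be a PVE node):

        apt install corosync-qnetd            # on the external QDevice host
        apt install corosync-qdevice          # on every cluster node
        pvecm qdevice setup <QDEVICE-IP>      # run once from a cluster node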
  20. VictorSTS

    Scrub won't complete on degraded ZFS pool

    To me it seems that the drive that ends up in DEGRADED state is dying in some funky way that causes the behavior you see. I would make sure you have a backup, remove the failing drive, connect a new one and use zpool replace to resilver it. You could even add a third drive if it is a mirror, but...
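
    A hedged sketch of that replacement (pool and device names are placeholders; double-check them with zpool status before running anything):

        zpool status <pool>                                     # identify the failing device
        zpool replace <pool> <old-device> <new-device>          # resilver onto the new drive
        zpool attach <pool> <existing-device> <third-device>    # optional: grow a 2-way mirror to 3-way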