Search results

  1. Iscsi for Virtual Machine Storage in Proxmox

    Post logs from the storage and the host for the time period the drop occurred. It would also be good to look at your /etc/network/interfaces file, as well as your network settings from the storage side (not just the IP, but connection speed and MTU). I'm guessing you have network issues.
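    A quick sketch of the kind of checks being asked for here (the NIC name enp1s0 is a placeholder; adjust for your hardware):

    ```bash
    # Show MTU and link state for every interface
    ip -br link

    # Show the negotiated speed/duplex of the storage-facing NIC (placeholder name)
    ethtool enp1s0 | grep -E 'Speed|Duplex'

    # Dump the interface configuration PVE is actually using
    cat /etc/network/interfaces
    ```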
  2. Proxmox QDevice with a 16 node cluster

    Nodes failing isn't the issue. The quorum device is meant as a defense against a silo-connectivity "tie". Anything short of a room losing connectivity would be handled normally without it.
  3. Skip backup of VM if it hasn't run nor changed

    There are MANY. A simple script of freeze, rdiff signature, rdiff delta, thaw (sketched below) would get you where you want to be. It would be harmless to run it nightly; hell, even every hour, but you can also condition the run on a modified-date check if that really bothers you. PBS is simply the most...
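    A minimal sketch of the freeze / rdiff signature / rdiff delta / thaw idea; the VM ID 100 and all paths are placeholders, and error handling is omitted:

    ```bash
    #!/bin/bash
    # Quiesce the guest filesystem through the QEMU guest agent
    qm guest cmd 100 fsfreeze-freeze

    # Signature of the previous copy, then a delta containing only the changed blocks
    rdiff signature /backup/vm100-disk.raw.old /tmp/vm100.sig
    rdiff delta /tmp/vm100.sig /mnt/vmstore/vm100-disk.raw /backup/vm100-$(date +%F).delta

    # Resume I/O in the guest
    qm guest cmd 100 fsfreeze-thaw
    ```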
  4. yawgpp (Yet Another Windows Guest Performance Post)

    More modern than what? My hardware is usually 2-5 years old before replacement. I wouldn't know; I have no use case for this. Passing cpu=host to the guest allows Windows to enable vulnerability mitigation code (Spectre, Meltdown, etc.), which is the reason it's slower. Outside of a homelab this...
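    For reference, "passing cpu=host" means setting the VM's CPU type to host, e.g. (VM ID 100 is a placeholder):

    ```bash
    # Expose the host CPU model and all of its feature flags to the guest
    qm set 100 --cpu host
    ```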
  5. yawgpp (Yet Another Windows Guest Performance Post)

    https://learn.microsoft.com/en-us/windows-hardware/design/minimum/windows-processor-requirements You CAN make Windows 11/2025 work with an older CPU, but the consequence is slow performance. No amount of hypervisor tweaking is going to fix that.
  6. Subscriptions and new hardware

    I don't see where anyone suggested that. You can either run your cluster using the subscription repo (slower, stable) or the no-sub repo (quicker, less, umm, stable). In practice, the no-sub repo is stable enough for production, certainly in mine. I think you're using the word clear in a way I'm...
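    For context, the two repos being compared are selected via APT sources; a sketch for a Debian 12 ("bookworm") based release:

    ```bash
    # Enterprise (subscription) repo, shipped by default:
    #   deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise

    # Switch a node to the no-subscription repo instead
    echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
      > /etc/apt/sources.list.d/pve-no-subscription.list
    apt update
    ```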
  7. Poor user experience with Windows Server 2025 on Proxmox

    Modern CPUs use NUMA inside the socket as well as across sockets.
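    One way to see this on a given box (the output depends on the CPU and the BIOS NUMA-per-socket setting):

    ```bash
    # More NUMA nodes than sockets means NUMA boundaries inside the socket
    lscpu | grep -E 'Socket\(s\)|NUMA node\(s\)'
    ```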
  8. Repurposing vxrails hardware with Ceph

    OK, in that case you need to pay special attention to your network design. You have, at MINIMUM, the following disparate network functions: 1. corosync, 2. Ceph public, 3. Ceph private, 4. NFS payload, 5. Internet/service network, 6. BMC. Commingling any combination of physical interfaces for 1-4...
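    A rough /etc/network/interfaces sketch of the kind of separation implied above; NIC names, addresses, and the exact split are illustrative assumptions, not a drop-in config:

    ```
    # corosync - latency sensitive, keep isolated
    auto eno1
    iface eno1 inet static
        address 10.10.0.11/24

    # ceph public network
    auto eno2
    iface eno2 inet static
        address 10.20.0.11/24
        mtu 9000

    # ceph cluster (private) network
    auto eno3
    iface eno3 inet static
        address 10.30.0.11/24
        mtu 9000

    # Internet/service bridge; the NFS payload could ride a VLAN here
    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.11/24
        gateway 192.168.1.1
        bridge-ports eno4
        bridge-stp off
        bridge-fd 0
    ```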
  9. Repurposing vxrails hardware with Ceph

    Ahh, makes sense. Will the NFS boot apply to the workloads deployed on this hypervisor? If so, don't bother with Ceph at all at this stage, since you already have storage. Your hardware is perfectly adequate for workload performance, but Ceph on hard drives will give everyone a bad taste, which is...
  10. Repurposing vxrails hardware with Ceph

    There is nothing "special" about the vxrail hardware; if the purpose of the exercise is to prove it "works", I can save you the trouble: it works. The better question is, do you have a better description of the "concept" here? As others noted, the solution would be very slow, but that's only part...
  11. Skip backup of VM if it hasn't run nor changed

    I suppose maybe I didn't understand what the problem was. I had understood you to not want to transfer backups when no change was present. You could have started here and made the whole thread unnecessary. You don't actually NEED vzdump at all to do this.
  12. CPU Type Benchmark comparison - 'host' performance noticeably worse

    The issue isn't EXACTLY the baseline virtual CPU model (although this comes into play too) but rather the presence/absence of specific feature flags and/or hardware vulnerability mitigations. The x86-64-vX models essentially are presets for flags, and attempt to mimic a specific "age" of underlying...
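    One way to see the difference being described, assuming you can run commands both on the PVE host and inside the guest:

    ```bash
    # Compare the advertised feature flags on the host vs. in the guest; with a
    # generic preset (e.g. x86-64-v2-AES) the guest's list will be much shorter
    grep -m1 -o 'flags.*' /proc/cpuinfo | tr ' ' '\n' | sort > cpu-flags.txt
    ```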
  13. Skip backup of VM if it hasn't run nor changed

    You're trying to reinvent the wheel. Modern backup strategies are differential, which means they are content aware (via CBT) and only transfer the changes. The "simple" vzdump process is not content aware, so you would have to resort to these self-made hacks to get what you're after, but if...
  14. Subscriptions and new hardware

    Unless there was a policy shift, a subscription is only valid if all nodes in the cluster are licensed.
  15. Best Practices: Open Claw & Ollama on Fanless Proxmox (i7-1355U / Intel Iris Xe)

    1. LXC. 2. "Decent" speeds are very relative; your TPS on this system will be abysmal in the best of cases, and will drop as soon as you start hammering the system. 3. Running the container in unprivileged mode. 4. Since all you need is connectivity via port 11434, this is trivial.
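    A sketch of points 1, 3, and 4: an unprivileged LXC container reachable on port 11434 (container ID, template, and bridge are placeholders):

    ```bash
    # Create an unprivileged container for Ollama (placeholder ID/template/bridge)
    pct create 201 local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst \
        --unprivileged 1 \
        --cores 4 --memory 8192 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp
    pct start 201

    # Ollama listens on 11434 by default; point clients at http://<container-ip>:11434
    ```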
  16. ZFS: Need help with vdevs & pools across multiple HDDs with different capacity

    Your use case is better suited to SnapRAID or Unraid. These don't integrate with PVE in any way, but they will work normally like any directory-type store.
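    For example, once a SnapRAID/Unraid-backed filesystem is mounted on the PVE host, it can be registered as a plain directory store (ID and path are placeholders):

    ```bash
    # Add an existing mount point as a directory-type storage
    pvesm add dir bulkstore --path /mnt/bulk --content images,iso,backup
    ```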
  17. Make a Proxmox NTFS HDD accessible under Windows network (Windows Explorer)

    I misspoke. PVE does not have any such functionality. What I MEANT was to do it at the local Debian level (and since that's where the entirety of the PVE stack resides, it was convenient shorthand). There are a bunch of tutorials for "setting up samba on debian", but in broad strokes: 1. apt...
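    In broad strokes, the Debian-level Samba setup being referred to looks something like this (share name, path, and user are placeholders):

    ```bash
    apt update && apt install -y samba

    # Then add a minimal share to /etc/samba/smb.conf (placeholder path and user):
    #   [ntfsdisk]
    #      path = /mnt/ntfsdisk
    #      browseable = yes
    #      read only = no
    #      valid users = someuser

    systemctl restart smbd
    ```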
  18. Make a Proxmox NTFS HDD accessible under Windows network (Windows Explorer)

    Yes. In your use case, you have the option to have Samba served either at the PVE level or in a container/VM. Having PVE serve Samba works well in a homelab environment, BUT you will need to manage users and access by hand. Having a "NAS" distro like OMV makes the user management convenient, but...
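    "Manage users and access by hand" at the PVE/Debian level means something like (username is a placeholder):

    ```bash
    # Create a local Unix user and give it a separate Samba password
    adduser --no-create-home --disabled-login someuser
    smbpasswd -a someuser
    ```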
  19. tips for shared storage that 'has it all' :-)

    Ceph is the only "all features" supported shared solution easily available for PVE. It's also the most heavily worked on for other virtualization platforms such as XCP-NG and various flavors of KVM. Your decision tree going forward heavily depends on WHY Ceph was rejected. If it's a matter of available...
  20. Problem with SANs on a three node cluster

    As I mentioned before, options 1 and 2 are available to you. Option 3 is not, at least not in a non-hacky way. Options 1 and 2 work the same no matter whether you choose to use one, the other, or both, and they don't interact/interfere with each other unless you gave them identical IQNs (which must...