Search results

  1. Best practice to use a Synology NAS as storage for Plex running on Proxmox?

    This might be a question for reddit /r/plex. Not seeing how Proxmox is involved. Wherever Plex is running, virtual or not, you will have to mount your network storage to the local system, expressed as some local path, and configure your Plex libraries on those path(s). If its...
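
    For example, a minimal sketch assuming an NFS export on the Synology (the IP, export path, and mount point are placeholders to adjust):

      # create a local mount point and mount the NAS export there (NFS shown; SMB/CIFS works too)
      mkdir -p /mnt/media
      echo '192.168.1.10:/volume1/media /mnt/media nfs defaults,_netdev 0 0' >> /etc/fstab
      mount /mnt/media
      # then add /mnt/media (or folders under it) as the Plex library path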
  2. [SOLVED] high latency clusters

    This is more interesting not because of OP's original question but because of how it is related to the practical max cluster size. pmxcfs works great as-is and the max cluster size, for 99% of people's use, is high enough. Far into the future, what are everyone's ideas for how this could be...
  3. Proxmox installation - hard disks and PowerEdge R620

    Do you use all the internal PCIe slots on your R620? You could add a small mirrored M.2 device there to install PVE. The 12th-gen servers don't have bifurcation, but there is some limited NVMe support. An 8-bay (as opposed to 10-bay) R620 also implies there is an optical bay? You could just tape up a small SATA...
  4. [SOLVED] Install Proxmox 8.1 on Boss-N1 and using Dell PERC H965i Controller

    Passthrough or not, a PERC card will bottleneck the individual drives. 8 NVMe drives at x4 each means you need 32 PCIe lanes to hook all these up to your system at their full width... but the PERC card only has x8. To use the drives properly as a JBOD or ZFS, you would need to give the PERC...
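
    Rough numbers, assuming PCIe Gen4 at about 2 GB/s per lane per direction (the exact generation changes the totals, not the ratio):

      echo $((8 * 4))       # 32 lanes' worth of drives sitting behind the controller
      echo $((8 * 4 * 2))   # ~64 GB/s the drives could move in aggregate
      echo $((8 * 2))       # ~16 GB/s ceiling through the card's x8 host link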
  5. Feature Request - Cinder

    It's just weird to get newcomers into a community who, rather than trying to learn our way of doing things, want to force an unnatural marriage between two otherwise incompatible systems.... Yes, we get that you have invested tons of cash in hardware and then the rules were changed on you. Still, it...
  6. [SOLVED] Install Proxmox 8.1 on Boss-N1 and using Dell PERC H965i Controller

    The "no parity raid for VM hosting" is a very old rule, I have been breaking it for over 10 years. If you have HDDs and a cacheless controller, don't break the rule. If you have high performance SSDs and a 8 GB DDR4 write-back cache on your controller, there is no rule. The drives won't perform...
  7. Feature Request - Cinder

    You will be absolutely fine staying on VMware for the remainder of this cycle; plan better for the next. You might not get support and software updates, but they are not going to turn off your stuff.
  8. Feature Request - Cinder

    Proxmox is already well known in the main enterprise space. When you talk to a Dell sales rep and wind up going with whatever host/hypervisor/SAN combo they recommend, that is more like SMB with a members-only jacket thrown in as a gift. Not enterprise. Cinder MAY eventually come from a paid...
  9. Feature Request - Cinder

    Everyone knows what's going on with VMware, and everyone is looking for Proxmox to jump through hoops to make itself a frictionless drop-in replacement for VMware; that's just not how it's going to work in the beginning. I hope they get a ton more interest and investment, but man...
  10. Minimal Ceph Cluster (2x Compute Nodes and 1x Witness -- possible?)

    Tiny Ceph clusters are going to be slow to begin with, and if you are power-constrained and trying to operate permanently on such a cluster, I would reconsider Ceph entirely. Replicated ZFS may be better for a 2-server solution. Too many entry-level Ceph users are drawn to the idea of running...
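
    A sketch of the replicated-ZFS route using PVE's built-in storage replication (pvesr), which needs ZFS-backed storage on both nodes; VM ID 100, job ID 100-0, and node name pve2 are placeholders:

      # replicate VM 100's disks to the second node every 15 minutes
      pvesr create-local-job 100-0 pve2 --schedule "*/15"
      pvesr status    # list replication jobs and their last sync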
  11. MAKOP Ransomware attack on PVE 6.4.15 from within a VM guest because it had full access to my vdisks (qcow), allowed ransomware to encrypt my vdisks

    A public RDP host can receive over 40,000 password guesses per day. Even with a complex login, you should not be hosting ANY administrative services directly on the internet without 2FA, an IP ACL, or a tool like RDPGuard or fail2ban, etc. Your guest VMs should also never have direct access to...
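
    To illustrate the fail2ban part, here is a sketch of the commonly shared community jail for the PVE web UI itself (the regex, paths, and limits are assumptions to adapt; it does nothing for RDP inside a Windows guest, which is where RDPGuard/2FA/ACLs come in):

      # /etc/fail2ban/filter.d/proxmox.conf
      [Definition]
      failregex = pvedaemon\[.*authentication failure; rhost=<HOST> user=.* msg=.*

      # /etc/fail2ban/jail.d/proxmox.conf
      [proxmox]
      enabled  = true
      port     = https,8006
      filter   = proxmox
      logpath  = /var/log/daemon.log
      maxretry = 3
      bantime  = 3600

    With fail2ban installed, a systemctl restart fail2ban picks the jail up.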
  12. Minimal Ceph Cluster (2x Compute Nodes and 1x Witness -- possible?)

    If you don't host VMs on it, then you don't need as much CPU for just Ceph. But I think most people agree clusters are better when the nodes are all identical.
  13. [SOLVED] VMs freeze with 100% CPU

    Thanks everyone, updating my 7.4 nodes today.
  14. [SOLVED] VMs freeze with 100% CPU

    I can't seem to upgrade past 6.2.16-4-bpo11. How do you manually install 6.2.16-11-bpo11-pve on 7.4?
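
    For what it's worth, a sketch of how a specific opt-in kernel build is normally pulled in on 7.4 (the package name is derived from the version above; verify it actually exists in your configured repositories first):

      apt update
      apt search pve-kernel-6.2.16                        # see which 6.2.16 builds your repos offer
      apt install pve-kernel-6.2.16-11-bpo11-pve          # install the exact build
      proxmox-boot-tool kernel pin 6.2.16-11-bpo11-pve    # optional: boot this build by default
      reboot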
  15. [SOLVED] VMs freeze with 100% CPU

    We have disabled mitigations and KSM but will remain on 7.4-16 and 6.2.16-4-bpo11-pve for now.
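
    For anyone doing the same, a sketch of the usual way to make those two changes (GRUB boot assumed; on a ZFS root with systemd-boot you would edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead):

      # stop KSM and unmerge pages that were already shared
      systemctl disable --now ksmtuned
      echo 2 > /sys/kernel/mm/ksm/run

      # disable CPU vulnerability mitigations from the next boot onward
      sed -i 's/GRUB_CMDLINE_LINUX_DEFAULT="/&mitigations=off /' /etc/default/grub
      update-grub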
  16. PVE and Ceph on Mellanox infiniband parts, what is the current state of support?

    RDMA does not exist in Ceph; it is Ethernet only. There was an effort for it that was abandoned 6-7 years ago. https://github.com/Mellanox/ceph/tree/luminous-12.1.0-rdma
  17. [SOLVED] VMs freeze with 100% CPU

    You realize the PVE devs do not work on QEMU or the Linux kernel, right?
  18. Restrict pvemanager default_views per-user

    I don't recommend maintaining your own pvemanager.lib. Maybe PVE 8 will bring it; it's a pretty basic thing.
  19. VMs crash on migration to host with downgraded CPU

    After apt update; apt install pve-kernel-6.2 and rebooting, I have 2 test nodes (of different architectures) running this kernel: Linux 6.2.16-4-bpo11-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-4~bpo11+1. With the virtual CPU pinned to "IvyBridge", they are able to migrate VMs successfully back and...
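
    For reference, the CPU-model pinning itself is a one-line, per-VM change (VM ID 100 is a placeholder):

      qm set 100 --cpu IvyBridge     # expose a common baseline CPU model to the guest
      qm config 100 | grep '^cpu:'   # verify the setting took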
  20. VMs crash on migration to host with downgraded CPU

    We are introducing newer servers with Xeon Gold 6148 into our environment, alongside our E5-2697v2 hosts, so now we have a mix of Ivy Bridge-EP and Skylake-SP. I expected the requirement was to configure the VM's CPU type to "IvyBridge" to fix the CPU flags, instruction set extensions, etc., to...