Search results

  1. New Cluster for VMware migration

    I don't believe NFS is an option on an IBM FlashSystem 5200 - maybe yours is a different product?
  2. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    That makes a lot more sense - but it still doesn't answer why you want the VPSes to run Proxmox. Also, in this configuration you really want to use shared storage and not local drives. You can still use NVMes with Ceph; a rough sketch follows.
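
    A minimal sketch, assuming a PVE cluster of 3+ nodes; the network range, device name, and pool name are placeholders:

      pveceph install --repository no-subscription   # on every node
      pveceph init --network 10.10.10.0/24           # once, on the first node
      pveceph mon create                             # on at least 3 nodes
      pveceph osd create /dev/nvme0n1                # one per NVMe, per node
      pveceph pool create vm-pool --add_storages     # RBD pool usable as VM storage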
  3. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    Because it's dumb. YOU might convolute a way to make it make sense; in my world, customers pay me to AVOID complexity. Again, the product is the VM.
  4. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    That's... absurd. You're wasting all those resources without any benefit; a cluster is of no value if all of its resources are HOSTED BY THE SAME DEVICE.
  5. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    This is how I look at it: you are offering to operate a bus for people, but instead of selling seats, you are putting buses inside your bus. What's the use case?! If you're trying to offer a customer resources that they can distribute between disparate VMs, that's simple enough to do WITHOUT...
  6. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    You use the word "need" in a context I don't understand. This isn't a rational product offering as far as I can tell.
  7. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    You don't. I don't understand the use case enough to comment on the wisdom of the solution; please explain what you mean by VDS, and why you want Proxmox inside them.
  8. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    No. There is almost NEVER a use case for nested hypervisors except for development/lab use. Even if we assume there is no CPU/RAM performance degradation with modern VT extensions (hint: there is), the consequences of cascading memory space governors and another level of write...
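
    For lab use only, a minimal sketch of enabling nested virtualization on an Intel host; VMID 100 is a placeholder:

      # check whether the kvm_intel module allows nesting ("Y" or "1" = enabled)
      cat /sys/module/kvm_intel/parameters/nested
      # enable it persistently (reload the module or reboot afterwards)
      echo "options kvm_intel nested=1" > /etc/modprobe.d/kvm-nested.conf
      # expose the host CPU flags so the guest sees VT-x and can run its own hypervisor
      qm set 100 --cpu host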
  9. Best architecture for running 50+ VDSs each with its own Proxmox VE on a large Dedicated Server (2 TB RAM)

    Mapping individual disks to VMs is almost never the correct approach. You can and should present a dedicated vdisk on a highly performant, highly available storage option instead of bifurcating it and offering neither. Beyond performance, having a modern CoW FS under the vdisks provides fault...
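
    A rough sketch of the vdisk approach, assuming a ZFS-backed storage named local-zfs and VMID 100 (both placeholders):

      # allocate a 64 GiB vdisk on the ZFS storage instead of passing /dev/sdX through
      qm set 100 --scsi1 local-zfs:64,discard=on,ssd=1
      # snapshots, checksumming, and replication then come from ZFS underneath,
      # all of which raw disk mapping gives up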
  10. [SOLVED] Stretched unbalanced cluster

    The number of nodes isn't the issue; understand that since all data is replicated between site A and site B, you can only use the capacity of the smaller side no matter what you do - there is no benefit to having more OSD space on one side. For example, with 100 TB of OSD capacity at site A and 60 TB at site B, usable capacity is bounded by site B's 60 TB.
  11. ZFS storage is very full, we would like to increase the space but...

    Your RAID controller is most likely an LSI 9x00-based card. You can import a RAID volume from virtually any LSI RAID controller to any other LSI RAID controller, and they are cheap and plentiful. This isn't much of a concern and can be treated as any consumable. If you want certainty, post the model of your RAID...
  12. SSD wear with ZFS compared to BTRFS

    This isn't actually true. ZFS is quite good at managing its metaslabs, and it allows you to set the record size per zvol/child filesystem. Where write amplification is a problem is when using parity RAID (raidz), because trying to align the written block size to the data block size is difficult. btrfs isn't subject to this...
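
    A minimal sketch of that per-dataset tuning; the pool and dataset names are placeholders:

      # per-filesystem record size (affects newly written blocks only)
      zfs set recordsize=16K rpool/data/db
      # per-zvol block size must be chosen at creation time
      zfs create -V 32G -o volblocksize=16K rpool/data/vm-100-disk-1
      # inspect the current values
      zfs get recordsize,volblocksize rpool/data/db rpool/data/vm-100-disk-1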
  13. Installing Proxmox VE 9.x on Debian with full disk LUKS (manual install)

    Manually unlocking is what keeps this a non-production-ready setup. If you're serious, consider tang/clevis or other auto-unlocking mechanisms; manual intervention should only be necessary for disaster recovery.
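
    A rough sketch of the clevis/tang route on Debian, assuming a LUKS volume on /dev/sda3 and a Tang server at http://tang.example.com (both placeholders):

      apt install clevis clevis-luks clevis-initramfs
      # bind the volume to the Tang server (prompts for an existing passphrase)
      clevis luks bind -d /dev/sda3 tang '{"url": "http://tang.example.com"}'
      # rebuild the initramfs so the volume auto-unlocks at boot while Tang is reachable
      update-initramfs -u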
  14. VMWare to Proxmox Conversion Help - Dell PowerVault ME5224 Integration / Shared Storage Questions

    Veeam doesn't work the same way on PVE as it does on VMware, so you DON'T actually need snapshot support for functionality - nor does Veeam even use hardware snapshots if available. I was really excited when Veeam started quietly testing PVE support, but was underwhelmed by the actual...
  15. Maximum CPU sockets

    Please share benchmarks. Until then, I am guessing that you eventually got the VM to address the socket housing the PCI link to the NIC. Generally speaking, adding sockets does no harm AT BEST - and usually slows the machine down by introducing hundreds to thousands of unnecessary cycles for DMA...
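
    A minimal sketch of the single-socket alternative; VMID 100, the core count, and the NIC name are placeholders:

      # prefer one virtual socket with NUMA awareness instead of adding sockets
      qm set 100 --sockets 1 --cores 16 --numa 1
      # check which host NUMA node the NIC hangs off, to keep the VM local to it
      cat /sys/class/net/eno1/device/numa_node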
  16. how P2V in Proxmox ?

    And my reply was to @AceBandge, who asked; OP's question has already been answered AFAICT. If it's not clear: use Clonezilla. In the list of priorities for the devs to follow, I'd much rather have them add useful features or squash bugs than add a tool for a solution that already exists.
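
    A rough sketch of the import side after Clonezilla, assuming the image was restored to a raw file disk.raw on the PVE host; VMID 100 and the local-lvm storage are placeholders:

      qm create 100 --name p2v-guest --memory 4096 --net0 virtio,bridge=vmbr0
      # import the cloned disk into the storage, then attach it as the boot disk
      qm importdisk 100 disk.raw local-lvm
      qm set 100 --scsi0 local-lvm:vm-100-disk-0 --boot order=scsi0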
  17. Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    Hmm. In that case, have you tried following the suggestion (using --force)?