Search results

  1. [SOLVED] Stretched unbalanced cluster

    The number of nodes isn't the issue; understand that since all data is replicated between site A and site B, you can only use the capacity of the smaller side no matter what you do. There is no benefit to having more OSD space on one side.
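The capacity constraint above reduces to a `min()`: a toy illustration with made-up per-site numbers.

```python
# Hypothetical raw OSD capacity per site, in TiB (numbers are made up)
site_a_tib = 100
site_b_tib = 60

# With all data replicated between the two sites, every object must fit
# on both sides, so usable capacity is capped by the smaller site.
usable_tib = min(site_a_tib, site_b_tib)
print(usable_tib)  # 60 -- the extra 40 TiB on site A buys nothing
```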
  2. ZFS storage is very full, we would like to increase the space but...

    Your RAID controller is most likely an LSI 9x00-based card. You can import a RAID volume from virtually any LSI RAID controller to any other, and they are cheap and plentiful. This isn't much of a concern and can be treated as any consumable. If you want certainty, post the model of your raid...
  3. SSD wear with ZFS compared to BTRFS

    This isn't actually true. ZFS is quite good at managing its metaslabs, and allows you to set record size per zvol/child filesystem. Where write amplification is a problem is when using parity RAID (raidz), because trying to align the written block size to the data block size is difficult. btrfs isnt subject to this...
  4. Installing Proxmox VE 9.x on Debian with full disk LUKS (manual install)

    Manually unlocking is what keeps this a non-production-ready setup. If you're serious, consider Tang/Clevis or other auto-unlocking mechanisms. Manual intervention should only be necessary for disaster recovery.
  5. VMWare to Proxmox Conversion Help - Dell PowerVault ME5224 Integration / Shared Storage Questions

    Veeam doesn't work the same way on PVE as it does on VMware, so you DON'T actually need snapshot support for functionality; nor does Veeam even use hardware snapshots if available. I was really excited when Veeam started quietly testing PVE support, but was underwhelmed by the actual...
  6. Maximum CPU sockets

    Please share benchmarks. Until then, I am guessing that you eventually got the VM to address the socket housing the PCI link to the NIC. Generally speaking, adding sockets does no harm AT BEST, and usually slows down the machine by introducing 100s to 1000s of unnecessary cycles for DMA...
  7. how P2V in Proxmox ?

    And my reply was to @AceBandge, who asked. OP's question has already been answered, AFAICT. If it's not clear: use Clonezilla. In the list of priorities for the devs to follow, I'd much rather they add useful features or squash bugs than add a tool for a solution that already exists.
  8. Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    Hmm. In that case, have you tried following the suggestion (using --force)?
  9. Backup: error fetching datastores - 500 Can't connect to 192.29.72.4:8007 (Connection timed out) (500)

    Since you're able to access the web UI, the issue is almost certainly an incorrect fingerprint. Delete and recreate the PBS datastore entry. One other thing: 192.29.72.4 is a real, publicly routable IP address. If you are using this subnet privately, don't. The subnets reserved for private use (RFC 1918) are 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
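To double-check whether an address falls in a reserved private range, Python's standard ipaddress module can do it (a quick sketch, not Proxmox-specific):

```python
import ipaddress

# is_private covers the RFC 1918 blocks (10/8, 172.16/12, 192.168/16)
# along with other IANA-reserved ranges such as loopback and link-local.
print(ipaddress.ip_address("192.29.72.4").is_private)   # False: publicly routable
print(ipaddress.ip_address("192.168.1.10").is_private)  # True
```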
  10. how P2V in Proxmox ?

    Way ahead of you: https://www.proxmox.com/en/services/training-courses/videos/proxmox-virtual-environment/proxmox-ve-import-wizard-for-vmware
  11. Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    How many mgr daemons do you have? Are they running different versions of Ceph? You only need one mgr daemon, and make sure it's on the same version as your monitors.
  12. proxmox 8 - how migrate vm without shared storage ?

    I suppose the real question is: why are you migrating a VM from your "production" group members to the "backup" group? If it's for backup, you can and should use PBS for the purpose. If you were interested in actually running that workload on a member of the backup group, there is nothing...
  13. proxmox 8 - how migrate vm without shared storage ?

    Yeah, I get you, but he has "MD3200" LUNs mapped to all 5 nodes. If he's not mapping some to all WWNs, that's by choice, not by limitation. I suppose those could be different physical devices.
  14. proxmox 8 - how migrate vm without shared storage ?

    His provided storage.cfg says otherwise :)
  15. proxmox 8 - how migrate vm without shared storage ?

    When you migrate from pve2 to pve6, you will need to specify the storage on the destination. But why do you limit node access when all 5 nodes can see the storage?
  16. proxmox 8 - how migrate vm without shared storage ?

    If you mark a storage pool as shared, PVE will assume it's available on all nodes by default (there is an option to limit it to specific nodes, but you need to set those). If a "shared" pool doesn't exist on the destination, the migration will naturally fail. Mark the shared LUN for access by...
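As a sketch, an /etc/pve/storage.cfg entry for an LVM pool on a shared LUN might look like the following (the storage ID, volume group, and node names are all hypothetical):

```
lvm: shared-lun
        vgname vg_shared
        content images
        shared 1
        nodes pve2,pve3,pve4,pve5,pve6
```

With shared 1 set, PVE expects the same volume group to be visible on every listed node; drop the nodes line entirely to allow access from all cluster nodes.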
  17. [SOLVED] AMD RX 7800/9070 XT vendor-reset?

    You're clearly upset, but I don't think your understanding of the situation warrants that conclusion. The PVE OFFICIAL DOCUMENTATION is in one place: https://pve.proxmox.com/pve-docs/ but that documentation doesn't cover everything applicable to your hardware or use case. The reason that Proxmox...
  18. ESXi → Proxmox migration: Host CPU vs x86-64-v4-AES performance & 10 Gbit VirtIO speeds

    Transfer is limited to the slowest link in the chain: your NIC chip, its driver, the destination NIC/driver, the source disk, the destination disk, etc. Sounds like you have some tracing to do. Change your SCSI controller to VirtIO SCSI single, with "IO thread" checked for the disks.
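For reference, the relevant lines in the VM's config (/etc/pve/qemu-server/<vmid>.conf) after making that change might look like this (the storage name, VMID, and size are illustrative):

```
scsihw: virtio-scsi-single
scsi0: local-lvm:vm-100-disk-0,iothread=1,size=32G
```

With virtio-scsi-single, each disk gets its own controller, which is what lets the iothread=1 flag give each disk a dedicated I/O thread.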
  19. Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    Sure, although probably NOT using pveceph. Not that you should; SeaStore is not considered production quality at this point.