Recent content by alexskysilk

  1.

    proxmox 8 - how migrate vm without shared storage ?

    I suppose the real question is: why are you migrating a VM from your "production" group members to the "backup" group? If it's for backup, you can and should use PBS for that purpose. If you were interested in actually running that workload on a member of the backup group, there is nothing...
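    As a minimal sketch, a one-off backup to a PBS-backed storage from the CLI might look like this (VMID 100 and the storage name "pbs-store" are placeholders):

        # back up VM 100 to a PBS-backed storage while the guest keeps running
        vzdump 100 --storage pbs-store --mode snapshot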
  2.

    proxmox 8 - how migrate vm without shared storage ?

    Yeah, I get you, but he has "MD3200" LUNs presented to all 5 nodes. If he's not mapping some to all WWNs, that's by choice, not by limitation. I suppose those could be different physical devices.
  3.

    proxmox 8 - how migrate vm without shared storage ?

    His provided storage.cfg says otherwise :)
  4.

    proxmox 8 - how migrate vm without shared storage ?

    When you migrate from pve2 to 6, you will need to specify a target storage on the destination, but why do you limit node access when all 5 nodes can see the storage?
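    A minimal sketch of that migration, assuming VMID 100 and a destination storage named "local-lvm" (both placeholders):

        # live-migrate VM 100 to pve6, mapping its disks onto the target's storage
        qm migrate 100 pve6 --online --targetstorage local-lvm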
  5.

    proxmox 8 - how migrate vm without shared storage ?

    If you mark a storage pool as shared, PVE will assume it's available on all nodes by default (there is an option to limit it to specific nodes, but you need to set those). If a "shared" pool doesn't exist on the destination, the migration will naturally fail. Mark the shared LUN for access by...
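    As a sketch, opening a shared LVM pool to every node could look like this (the storage name "shared-lvm" and the node names are placeholders):

        # flag the pool as shared and visible on all five nodes
        # (removing the nodes restriction entirely has the same effect)
        pvesm set shared-lvm --shared 1 --nodes pve2,pve3,pve4,pve5,pve6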
  6.

    [SOLVED] AMD RX 7800/9070 XT vendor-reset?

    You're clearly upset, but I don't think your understanding of the situation warrants that conclusion. The PVE OFFICIAL DOCUMENTATION is in one place: https://pve.proxmox.com/pve-docs/ but that documentation doesn't cover everything applicable to your hardware or use case. The reason that Proxmox...
  7.

    ESXi → Proxmox migration: Host CPU vs x86-64-v4-AES performance & 10 Gbit VirtIO speeds

    Transfer is limited to the slowest link in the chain: your NIC chip, driver, destination NIC/driver, source disk, destination disk, etc. Sounds like you have some tracing to do. Change your HBA to virtio-scsi-single, with iothread checked for the disks.
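    A minimal sketch of that change from the CLI (VMID 100 and the volume name are placeholders; the disk line must repeat the VM's existing volume):

        # switch the SCSI controller and enable a dedicated I/O thread for the disk
        qm set 100 --scsihw virtio-scsi-single
        qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1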
  8.

    Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    Sure, although probably NOT using pveceph. Not that you should; SeaStore is not considered production quality at this point.
  9.

    Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    The most interesting feature in Tentacle with direct relevance to PVE is the instant RBD live migration. @t.lamprecht, have you guys discussed implementing it within PVE, and if so, can you share your thoughts?
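    For context, the RBD live-migration workflow that already exists upstream follows a prepare/execute/commit pattern; the Tentacle feature presumably makes the copy phase near-instant. A sketch with made-up pool/image names:

        # stage the move, copy blocks in the background, then finalize
        rbd migration prepare rbd-slow/vm-100-disk-0 rbd-fast/vm-100-disk-0
        rbd migration execute rbd-fast/vm-100-disk-0
        rbd migration commit rbd-fast/vm-100-disk-0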
  10.

    storage config

    The issue is not the transport, it's the bridge. I imagine Thunderbolt is less susceptible (if for no other reason than that the bridge chips are more expensive), but I've never used Thunderbolt for server use, so I can't say. So use larger drives. My advice was arbitrary, as you never mentioned what your...
  11.

    storage config

    I tried to extract your workload from your description: 2 Docker VMs, 1 Windows, 1 "hungry". Don't do that. Disk response times are unpredictable behind a USB bridge and have a tendency to time out, the consequence of which is locking up your storage at best and your whole server at worst...
  12.

    Error with adding a new node in the cluster.

    Yeah, that was pretty much a given with your problem description :) They probably aren't; you're just not aware of the problem because their active bond interfaces all connect to the same switch. First order of business: get your switches to talk to each other. Second: don't use bonds for corosync...
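    A sketch of the alternative: give corosync two independent links when joining and let knet handle redundancy itself (all addresses are placeholders):

        # join via an existing member, with two unbonded corosync links
        pvecm add 10.0.0.11 --link0 10.0.0.21 --link1 10.1.0.21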
  13.

    Error with adding a new node in the cluster.

    Your network is, for lack of a better word, broken. Are you using bonds for your corosync interface(s)? Do the individual interfaces have a path to ALL members of bonds on the OTHER nodes?