Search results

  1.

    proxmox 8 - how migrate vm without shared storage ?

    Yeah I get you, but he has "MD3200" LUNs mapped to all 5 nodes. If he's not mapping some to all WWNs, that's by choice, not by limitation. I suppose those could be different physical devices.
  2.

    proxmox 8 - how migrate vm without shared storage ?

    His provided storage.cfg says otherwise :)
  3.

    proxmox 8 - how migrate vm without shared storage ?

    When you migrate from pve2 to 6, you will need to specify the target storage on the destination. But why do you limit node access when all 5 nodes can see the storage?
  4.

    proxmox 8 - how migrate vm without shared storage ?

    If you mark a storage pool as shared, PVE will assume it's available on all nodes by default (there is an option to limit it to specific nodes, but you need to set those). If a "shared" pool doesn't exist on the destination, the migration will naturally fail. Mark the shared LUN for access by...
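As a concrete illustration of the `shared` flag and the node restriction described above, a minimal `/etc/pve/storage.cfg` entry might look like this (the pool, volume group, and node names are hypothetical):

```
lvm: shared-lun
        vgname vg_md3200
        shared 1
        content images
        nodes pve2,pve6
```

Without the `nodes` line, PVE assumes every cluster member can reach the pool; with it, migrations to nodes outside the list will fail.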
  5.

    [SOLVED] AMD RX 7800/9070 XT vendor-reset?

    You're clearly upset, but I don't think your understanding of the situation warrants that conclusion. The PVE OFFICIAL DOCUMENTATION is in one place: https://pve.proxmox.com/pve-docs/ but that documentation doesn't cover everything applicable to your hardware or use case. The reason that Proxmox...
  6.

    ESXi → Proxmox migration: Host CPU vs x86-64-v4-AES performance & 10 Gbit VirtIO speeds

    Transfer is limited to the slowest link in the chain: your NIC chip, driver, destination NIC/driver, source disk, destination disk, etc. Sounds like you have some tracing to do. Change your HBA to virtio-scsi-single, with IO thread checked for the disks.
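The HBA change suggested above can be sketched from the CLI as well as the GUI; the VMID and disk volume used here are hypothetical:

```
# switch the VM's SCSI controller type to virtio-scsi-single
qm set 100 --scsihw virtio-scsi-single

# re-attach the disk with an IO thread enabled
qm set 100 --scsi0 local-lvm:vm-100-disk-0,iothread=1
```

With virtio-scsi-single, each disk gets its own controller, so the per-disk `iothread=1` flag actually buys concurrency.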
  7.

    Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    Sure, although probably NOT using pveceph. Not that you should; SeaStore is not considered production quality at this point.
  8.

    Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    The most interesting feature in Tentacle with direct relevance to PVE is the instant RBD live migration. @t.lamprecht have you guys discussed implementation within PVE, and if so, can you share your thoughts?
  9.

    storage config

    The issue is not the transport, it's the bridge. I imagine Thunderbolt is less susceptible (if for no other reason than that the bridge chips are more expensive), but I never used Thunderbolt for server use, so I can't say. So use larger drives. My advice was arbitrary, as you never mentioned what your...
  10.

    storage config

    I tried to extract your workload from your description: 2 Docker VMs, 1 Windows, 1 "hungry". Don't do that. Disk response times are unpredictable behind a USB bridge and have a tendency to time out, the consequence of which is to lock up your storage at best and your whole server at worst...
  11.

    Error with adding a new node in the cluster.

    Yeah, that was pretty much a given with your problem description :) They probably aren't; you're just not aware of the problem because their active bond interfaces all connect to the same switch. 1st order of business: get your switches to talk to each other. 2nd: don't use bonds for corosync...
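The "don't use bonds for corosync" advice above usually means giving corosync its own plain interfaces and letting it handle failover itself. A hypothetical `/etc/pve/corosync.conf` nodelist fragment (node name and addresses invented):

```
node {
    name: pve1
    nodeid: 1
    quorum_votes: 1
    ring0_addr: 10.10.0.1
    ring1_addr: 10.20.0.1
}
```

Here `ring0_addr` and `ring1_addr` sit on two separate physical NICs on two separate switches; corosync moves between links on its own if one path dies, with no bond involved.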
  12.

    Error with adding a new node in the cluster.

    Your network is, for lack of a better word, broken. Are you using bonds for your corosync interface(s)? Do the individual interfaces have a path to ALL members of the bonds on the OTHER nodes?
  13.

    Use Cases for 3-Way ZFS Mirror for VM/LXC Storage vs. 2-Way Mirror?

    This isn't valid for ZFS. ZFS will simply repair any read with a failed checksum and rewrite it on the affected vdev. EXCEPT this didn't actually work, which is why you don't see these anymore. Abstraction doesn't change the underlying device performance, latency, or lifecycle characteristics...
  14.

    [SOLVED] External Storage Server for 2 Node Cluster

    There are two ways to accomplish what you're after (three if you include ZFS replication, but that doesn't use the external storage device): 1. ZFS over iSCSI, as @bbgeek17 explained. 2. qcow2 over NFS: install nfsd, map the dataset into exports, mount it on the guests. Option 1 will perform better...
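For option 2 above (qcow2 over NFS), the moving parts are roughly these; the dataset path, subnet, server address, and storage ID are all hypothetical:

```
# on the storage server: /etc/exports
/tank/vmstore 10.0.0.0/24(rw,no_root_squash,sync)

# on a PVE node: register the export as shared storage
pvesm add nfs vmstore --server 10.0.0.5 --export /tank/vmstore --content images
```

Once registered, the NFS storage is visible cluster-wide and qcow2 disks placed on it can live-migrate between nodes.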
  15.

    Ceph Squid (19.2.3) Cluster Hangs on Node Reboot - 56 NVMe OSDs - PVE 9.1.1

    This isn't actually so. You can think of the monitor quorum rule as 3:1. Fun fact: a cluster with 2 monitors is more prone to PG errors (monitor disagreement) than one with a single monitor. Feel free to try it yourself: shut down all but one of your monitors and see what happens. This has happened to me on numerous...
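The quorum point above is just majority arithmetic: with n monitors, floor(n/2) + 1 must agree, so two monitors tolerate exactly as many failures as one (zero) while adding a second opinion that can disagree. A quick sketch of that arithmetic (my own illustration, not Ceph code):

```python
# Majority quorum: a monitor cluster stays usable only while more
# than half of its monitors are up. Plain arithmetic, not Ceph code.

def quorum_size(monitors: int) -> int:
    """Smallest number of monitors that still forms a majority."""
    return monitors // 2 + 1

def tolerated_failures(monitors: int) -> int:
    """How many monitors can be lost before quorum is gone."""
    return monitors - quorum_size(monitors)

for n in (1, 2, 3, 5):
    print(f"{n} monitors: quorum {quorum_size(n)}, "
          f"tolerates {tolerated_failures(n)} failure(s)")
```

Note that 2 monitors tolerate 0 failures, same as 1, which is why odd monitor counts are the norm.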
  16.

    Storage/Filesystem recommendations for new Proxmox user

    That's true for any virtualized environment. A nested NAS will always incur a penalty; as you mentioned, CoW on CoW kills performance and destroys space efficiency. To avoid this, don't run a nested NAS on your hypervisor; install your NAS on the metal. Both OMV and TrueNAS have some...
  17.

    Storage/Filesystem recommendations for new Proxmox user

    That's not what it means. It means that the PVE devs say "we haven't tested this as completely as other options, and we haven't included controls for all its functionality." BTRFS is fully supported, just that you'd need to go to the CLI for some/much of the functionality, which is to say, outside the...
  18.

    Bond & Bridge Interfaces - Undesired Behavior

    Because I would not expect that VLAN to be accessible to virtual machines. Adjust that as appropriate. Far be it from me to dissuade you from pursuing NIC-level fault tolerance. Suffice it to say I don't; I care about path redundancy: a switch will be rebooted far more often than a NIC will fail...
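For the VLAN-to-VM accessibility point above, the usual PVE pattern is a VLAN-aware bridge in `/etc/network/interfaces`; the interface names and VID range here are only an example:

```
auto vmbr0
iface vmbr0 inet manual
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094
```

Trimming `bridge-vids` is one way to control which VLAN tags guests on the bridge can actually use.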