Search results

  1. proxmox 8 - how migrate vm without shared storage ?

    If you mark a storage pool as shared, PVE will assume it's available on all nodes by default (there is an option to limit it to specific nodes, but you need to set those). If a "shared" pool doesn't exist on the destination, the migration will naturally fail. Mark the shared LUN for access by...
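A minimal sketch of the `/etc/pve/storage.cfg` stanza this advice describes, assuming a hypothetical LVM volume group `san-vg` on a LUN visible to nodes `pve1` and `pve2` (all names are placeholders):

```
lvm: san-lvm
        vgname san-vg
        content images
        shared 1
        nodes pve1,pve2
```

The `shared 1` flag only tells PVE the pool is identical everywhere; the `nodes` list restricts which nodes are assumed to see it.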
  2. [SOLVED] AMD RX 7800/9070 XT vendor-reset?

    You're clearly upset, but I don't think your understanding of the situation warrants that conclusion. The PVE OFFICIAL DOCUMENTATION is in one place: https://pve.proxmox.com/pve-docs/ but that documentation doesn't cover everything applicable to your hardware or use case. The reason that Proxmox...
  3. ESXi → Proxmox migration: Host CPU vs x86-64-v4-AES performance & 10 Gbit VirtIO speeds

    Transfer is limited to the slowest link in the chain: your NIC chip, driver, destination NIC/driver, source disk, destination disk, etc. Sounds like you have some tracing to do. Change your HBA to virtio-scsi-single, with iothread checked for disks.
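For reference, a hedged sketch of what those two settings look like in a guest's `/etc/pve/qemu-server/<vmid>.conf` (VMID, storage name, and disk size are placeholders):

```
scsihw: virtio-scsi-single
scsi0: local-zfs:vm-100-disk-0,iothread=1,size=32G
```

`virtio-scsi-single` gives each disk its own controller, so the per-disk `iothread=1` flag can actually run I/O on a dedicated thread.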
  4. Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    Sure, although probably NOT using pveceph. Not that you should; SeaStore is not considered production quality at this point.
  5. Ceph 20.2 Tentacle Release Available as test preview and Ceph 18.2 Reef soon to be fully EOL

    The most interesting feature in Tentacle with direct relevance to PVE is the instant RBD live migration. @t.lamprecht have you guys discussed implementation within PVE, and if so can you share your thoughts?
  6. storage config

    The issue is not the transport, it's the bridge. I imagine Thunderbolt is less susceptible (if for no other reason than the bridge chips are more expensive), but I never used Thunderbolt for server use, so can't say. So use larger drives. My advice was arbitrary, as you never mentioned what your...
  7. storage config

    I tried to extract your workload from your description: 2 docker VMs, 1 Windows, 1 "hungry". Don't do that. Disk response times are unpredictable behind a USB bridge and have a tendency of timing out, the consequence of which is to lock up your storage at best and your whole server at worst-...
  8. Error with adding a new node in the cluster.

    Yeah, that was pretty much a given with your problem description :) They probably aren't; you're just not aware of the problem because their active bond interfaces all connect to the same switch. 1st order of business: get your switches to talk to each other. 2nd: don't use bonds for corosync...
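As a sketch of that 2nd point: corosync supports redundant links natively instead of bonds. Assuming two separate corosync subnets (all names and addresses below are hypothetical), each node's entry in `/etc/pve/corosync.conf` would carry two rings:

```
node {
  name pve1
  nodeid 1
  quorum_votes 1
  ring0_addr 10.10.0.1
  ring1_addr 10.20.0.1
}
```

Corosync then fails over between the links itself, without depending on bond/switch behavior.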
  9. Error with adding a new node in the cluster.

    Your network is, for lack of a better word, broken. Are you using bonds for your corosync interface(s)? Do the individual interfaces have a path to ALL members of bonds on the OTHER nodes?
  10. Use Cases for 3-Way ZFS Mirror for VM/LXC Storage vs. 2-Way Mirror?

    This isn't valid for ZFS. ZFS will simply re-serve any read with a failed checksum from a good copy and repair it on the affected vdev. EXCEPT this didn't actually work, which is why you don't see these anymore. Abstraction doesn't change the underlying device performance, latency, or lifecycle characteristics...
  11. [SOLVED] External Storage Server for 2 Node Cluster

    There are two ways to accomplish what you're after (three if you include ZFS replication, but that doesn't use the external storage device): 1. ZFS over iSCSI, as @bbgeek17 explained. 2. qcow2 over NFS: install nfsd, map the dataset into exports, mount on the guests. Option 1 will perform better...
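A minimal sketch of option 2, assuming a dataset `tank/vmstore` exported from a storage server at `192.168.1.50` (names and addresses are placeholders):

```
# on the storage server, /etc/exports:
/tank/vmstore 192.168.1.0/24(rw,no_root_squash,sync)

# on the PVE nodes, /etc/pve/storage.cfg:
nfs: vmstore
        server 192.168.1.50
        export /tank/vmstore
        content images
```

With `content images`, PVE will create qcow2 disks on the NFS mount; both nodes see the same storage, so live migration works without local replication.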
  12. Ceph Squid (19.2.3) Cluster Hangs on Node Reboot - 56 NVMe OSDs - PVE 9.1.1

    This isn't actually so. You can think of the monitor quorum rule as 3:1. Fun fact: a cluster with 2 monitors is more prone to PG errors (monitor disagreement) than with one. Feel free to try it yourself: shut down all but one of your monitors and see what happens. This has happened to me on numerous...
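The quorum arithmetic behind that claim can be sketched in a few lines (a toy helper, not Ceph code): a monitor quorum needs a strict majority of the monmap, so two monitors tolerate exactly as many failures as one, i.e. none.

```python
def mon_quorum(total_mons: int) -> int:
    # Monitors needed for quorum: a strict majority of the monmap.
    return total_mons // 2 + 1

def tolerable_failures(total_mons: int) -> int:
    # Monitor losses the cluster can absorb while keeping quorum.
    return total_mons - mon_quorum(total_mons)

for n in (1, 2, 3, 5):
    print(f"{n} mons: quorum={mon_quorum(n)}, can lose {tolerable_failures(n)}")
```

With 2 monitors any single loss breaks quorum, which is why an even monitor count adds failure surface without adding resilience.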
  13. Storage/Filesystem recommendations for new Proxmox user

    That's true for any virtualized environment. A nested NAS will always incur a penalty; as you mentioned, CoW on CoW kills performance and destroys space efficiency. To avoid this, don't have a nested NAS on your hypervisor; install your NAS on the metal. Both OMV and TrueNAS have some...
  14. Storage/Filesystem recommendations for new Proxmox user

    That's not what it means. It means that the PVE devs say "we haven't tested this as completely as other options, and we haven't included controls for all its functionality." BTRFS is fully supported, just that you'd need to go to the CLI for some/much of the functionality, which is to say, outside the...
  15. Bond & Bridge Interfaces - Undesired Behavior

    Because I would not expect that VLAN to be accessible to virtual machines; adjust that as appropriate. Far be it from me to dissuade you from pursuing NIC-level fault tolerance. Suffice it to say I don't; I care about path redundancy: a switch will be rebooted far more often than a NIC will fail...
  16. Proxmox VE 9.1.1 – Windows Server / SQL Server issues

    I'm confused. Were you not asking for help troubleshooting this? SQL performance is a function of two things: query efficiency, and disk I/O latency and IOPS. Since we know your queries are the same, what remains is the storage. How did you have the storage configured on your ESX environment...
  17. Ceph Squid (19.2.3) Cluster Hangs on Node Reboot - 56 NVMe OSDs - PVE 9.1.1

    Cite your sources, please. 5 monitors are "suggested" with a high number of OSD nodes. With a typical CRUSH rule of 3:2, this only makes sense IF you have dedicated monitor nodes (e.g., no OSDs) AND you have environmental issues that take your nodes down routinely. Otherwise, the risk is minuscule...
  18. Bond & Bridge Interfaces - Undesired Behavior

    Not at all. My (and everyone else's) participation in this forum is voluntary. Nothing you provide (or not) is necessary as long as you don't expect anything in return. You have 4 ports; how are you attaching them to 7 different devices? More importantly, are they all connected to each other...