bbgeek17's latest activity

  • bbgeek17
    You can't. I suspect that when it's business hours in Europe, @Thomas Lamprecht might notice and take action. If no one from the staff replies here, you can always submit a report at https://bugzilla.proxmox.com/
  • bbgeek17
    Hi @jbanham, welcome to the forum. While this is outside our area of expertise (storage), diagnosing strange SMB/CIFS/AD behavior definitely scratches a familiar itch. You’re correct that there are multiple network layers which the packet...
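    For illustration only (the bridge name vmbr0, the server address 192.0.2.10 and the output file are placeholders), a capture of the SMB traffic on the PVE bridge for later analysis might look like:

      # capture SMB/CIFS traffic (TCP 445) between the client and the file server
      tcpdump -ni vmbr0 host 192.0.2.10 and port 445 -w smb.pcap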
  • bbgeek17
    More likely than not it is a hardware issue: anywhere from the DAS, to the USB cable, to the USB port, to the USB card, and anything in the middle and at both ends. PVE is not a special-purpose black-box OS. It is based on Debian Linux with an Ubuntu-derived kernel...
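    Since PVE is plain Debian under the hood, the standard kernel-log tools are the first stop; as a sketch (the grep patterns are just examples), checking for USB resets or I/O errors might look like:

      # follow the kernel log with readable timestamps and watch for USB trouble
      dmesg -wT | grep -iE 'usb|reset|i/o error'
      # or review kernel messages from the previous boot if the box went down
      journalctl -k -b -1 | grep -iE 'usb|reset'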
  • bbgeek17
    The code mostly references root_password; give it a try: https://github.com/proxmox/pve-installer
  • bbgeek17
    There seems to be an inconsistency in the documentation where both root-password and root_password are mentioned: https://pve.proxmox.com/wiki/Automated_Installation
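    A rough sketch only (all values are placeholders, and the exact key names may differ by pve-installer version, which is exactly the inconsistency above): an answer file along these lines can be checked with the validation tool before committing to it.

      # answer.toml (hypothetical minimal example, underscore form)
      [global]
      keyboard = "en-us"
      country = "us"
      fqdn = "pve-test.example.com"
      mailto = "root@example.com"
      timezone = "UTC"
      root_password = "changeme"

      [network]
      source = "from-dhcp"

      [disk-setup]
      filesystem = "ext4"
      disk_list = ["sda"]

      # validate before building the ISO; it should complain if the key name is wrong for your version
      proxmox-auto-install-assistant validate-answer answer.toml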
  • bbgeek17
    This article/chapter may be helpful: https://kb.blockbridge.com/technote...rver/part-1.html#what-is-a-storage-controller
  • bbgeek17
    bbgeek17 replied to the thread How to Deploy ISO.
    Windows has a very specific driver support requirement. You either need to provide the VirtIO drivers during the install phase, or install on a supported controller type. This article/chapter may be helpful...
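    For illustration (the VM ID 100, storage names, and ISO path are hypothetical), providing the drivers at install time could look like this, with the driver loaded from the second CD-ROM during Windows setup:

      # use the VirtIO SCSI controller for the VM's disks
      qm set 100 --scsihw virtio-scsi-single
      # attach the VirtIO driver ISO as an extra CD-ROM drive
      qm set 100 --ide3 local:iso/virtio-win.iso,media=cdrom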
  • bbgeek17
    bbgeek17 replied to the thread Shared LVM over FC SAN.
    I’m not aware of any major SAN vendors offering solutions that can or want to expose individual disks directly as part of their standard SAN platforms. These systems are typically designed around aggregated, abstracted storage pools, not raw...
  • bbgeek17
    bbgeek17 replied to the thread Shared LVM over FC SAN.
    Ceph is an example of built-in PVE storage; its capabilities as the default go-to storage in PVE infrastructure are a given. No, I was not referring to Ceph. It is not suitable for your FC infrastructure re-use. For the solution I am most...
  • bbgeek17
    bbgeek17 replied to the thread Shared LVM over FC SAN.
    Hi @whiney, welcome to the forum. At this point in time you are using the primary supported option. Today, your only option to have all of the above is to use a 3rd-party cluster-aware filesystem, e.g. OCFS2. I'd recommend researching its...
  • bbgeek17
    You have done a generic login. There is nothing wrong with it, but it's probably not what you need to do in the context of PVE storage connectivity. Take a look at this article, and the subsection on connectivity in particular...
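    As a sketch (storage ID, portal address and IQN below are placeholders), letting PVE manage the iSCSI connectivity instead of a manual login looks roughly like:

      # define the target as a PVE storage; PVE then handles discovery and sessions
      pvesm add iscsi san1 --portal 192.0.2.20 --target iqn.2003-01.com.example:storage.target01
      # confirm the storage is active on the node
      pvesm status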
  • bbgeek17
    Hi @Christopheric1, it’s been a while, and I have to admit that your original request has long since been paged out to an “offsite” memory warehouse. Now that you’ve made a major change, it would be best to restate your current issue as it relates...
  • bbgeek17
    At this point in time these two goals are incompatible when only native built-in PVE technologies are used. You need to pick either future cluster expandability or snapshots. @VictorSTS covered pretty much all sides of your question. You may...
  • bbgeek17
    bbgeek17 replied to the thread Help with FC Multipath HPE.
    @Miyagi007, one other resource that may be helpful for you: https://kb.blockbridge.com/technote/proxmox-lvm-shared-storage/
  • bbgeek17
    I found this thread in a search for the same problem - I was using an outdated bookmark or search results for the helper scripts. For the sake of future travelers, I will do the opposite of what bbgeek17 did in two whole posts of text: actually...
  • bbgeek17
    This is not a correct expectation. Whether you create the iSCSI session via the DC menu or manually, the higher layers will still take care of exclusivity, as long as you don't go out of your way to trip it up. Correct. There are very few Open Source...
  • bbgeek17
    That's a very open-ended statement that can't be answered with "yes" or "no". There are many "it depends" here. PVE can arbitrate exclusive access by multiple hosts. It may matter for your business needs, but it does not matter to PVE whether...
  • bbgeek17
    Whether you manage your backend storage via SSH, API, or GUI does not change the end result: the LUNs presented to PVE are iSCSI (raw block). Since you tasked PVE with managing your backend storage via the appropriate plugin, it will use...
  • bbgeek17
    Hi @Specimen, welcome to the forum. The primary type of file that multiple PVE nodes access in shared storage is the disk image (similar to a VMDK in ESXi). When using file-based shared storage like NFS, these are typically in QCOW2 format. You...
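    As an example of what that looks like in practice (server address, export path and storage ID are placeholders), a file-based shared storage is defined once and the QCOW2 images then live under its mount point on every node:

      # add an NFS storage for disk images; PVE mounts it at /mnt/pve/nfs-shared on each node
      pvesm add nfs nfs-shared --server 192.0.2.30 --export /export/pve --content images
      # disk images end up as e.g. /mnt/pve/nfs-shared/images/<vmid>/vm-<vmid>-disk-0.qcow2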
  • bbgeek17
    You can place the QCOW2 file in the appropriate directory (/var/lib/vz/images/101, see https://pve.proxmox.com/wiki/Storage:_Directory). Name it appropriately, e.g. vm-101-disk-10.qcow2. Then you can run "qm disk rescan --vmid 101". This should pick the disk up...
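    A minimal sketch of that flow, with a hypothetical source image name:

      # copy the image into the VM's directory on the "local" storage, using the PVE naming scheme
      cp imported.qcow2 /var/lib/vz/images/101/vm-101-disk-10.qcow2
      # rescan so the image shows up as an "unused disk" in the VM's hardware list
      qm disk rescan --vmid 101
      # attach it to the VM, here as scsi1
      qm set 101 --scsi1 local:101/vm-101-disk-10.qcow2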