Search results

  1. bbgeek17

    No Webgui after updating 8.4x to 9.1x

    From the console of the PVE after you logged in via SSH? Perhaps you can share your configuration, e.g. "ip a", etc. The standard list of questions: What is the IP of the PVE server? What is the IP of your workstation? What is the IP of your gateway? Can you ping the PVE server? Can you ssh into...
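    The standard questions above can be answered in one shot from the console; a minimal sketch, assuming a Linux host with iproute2 and ss available (nothing here is PVE-specific except port 8006, where pveproxy listens):

```shell
# Collect the usual network facts; run this on the PVE host itself.
ADDRS=$(ip -4 -brief addr show 2>/dev/null)    # IPs of the PVE server
GATEWAY=$(ip route show default 2>/dev/null)   # default gateway in use
LISTEN=$(ss -tln 2>/dev/null | grep -w 8006)   # is the web UI listening?

echo "Addresses:       ${ADDRS:-unknown}"
echo "Default gateway: ${GATEWAY:-unknown}"
echo "Port 8006:       ${LISTEN:-nothing listening (is pveproxy running?)}"
```

    Comparing these answers against the workstation's own IP and gateway usually narrows the problem to routing, firewalling, or a stopped pveproxy.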
  2. bbgeek17

    No Webgui after updating 8.4x to 9.1x

    Check your router/firewall. Did you update anything in your network? Try "curl" against your LAN IP as a confirmation. Try to direct-connect a PC to your PVE to bypass your router. Users often report a case where they use an IP for PVE that is located within the DHCP range of their smart router...
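    The curl confirmation mentioned above can look like this; a minimal sketch, where 192.168.1.50 is a placeholder for your PVE host's LAN IP:

```shell
# Probe the PVE web UI from a workstation on the same LAN.
PVE_IP=192.168.1.50                 # placeholder -- use your PVE address
URL="https://${PVE_IP}:8006"

# -k skips certificate checks (PVE ships a self-signed cert by default).
if curl -sk --connect-timeout 3 -o /dev/null "$URL"; then
    echo "web UI reachable at $URL"
else
    echo "web UI NOT reachable at $URL -- check firewall, routing, pveproxy"
fi
```

    If this succeeds from a direct-connected PC but fails through the router, the router (or an overlapping DHCP range) is the likely culprit.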
  3. bbgeek17

    Proxmox Unresponsive due to Full Storage Space

    This means that you are trying to delete a Virtual Disk/Image that matches the index ID of an existing VM. If you cannot see this disk in the VM hardware panel, then you need to scan the disk in: qm disk rescan (man qm). Or, if you insist on doing it manually, rename the disk to any other...
  4. bbgeek17

    iSCSI interfaces

    It is the iSCSI target that advertises the portals (where IPs are assigned). The iSCSI Target is the TrueNAS, so it's the TrueNAS that informs the Initiator (PVE) about possible connection options (Portals). Your situation is described as one of the options here...
  5. bbgeek17

    tips for shared storage that 'has it all' :-)

    Hi @mouk , It's not just that it isn't cluster-safe; it's not supported by PVE, and you have to go out of your way to work around all the bumpers put in place to stop you. There are a number of commercial vendors in this space. It all depends on your requirements, budget, and supportability needs...
  6. bbgeek17

    Problem with SANs on a three node cluster

    You keep missing the point that the ZFS pool exists on the storage-device side. The storage device, in your case running Ubuntu, has a ZFS pool that you implemented manually. You specify the name of that pool in the PVE config to let PVE know what you named it. There is no formatting with ZFS happening...
  7. bbgeek17

    Problem with SANs on a three node cluster

    ZFS over iSCSI in PVE is a specific scheme that implies: a) There is an external storage device b) You can login to this server via SSH as root (this excludes Dell, Netapp, Hitachi, etc) c) There is internal storage in that server (HDD, SAS, NVMe, etc) d) The raw devices inside this storage are...
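    When all of those conditions hold, the arrangement reduces to a single entry in /etc/pve/storage.cfg; a hypothetical sketch (pool, portal, target, and the iscsiprovider value are placeholders -- the provider must match the iSCSI target software the storage server actually runs):

```
zfs: san-zfs
        pool tank/pve
        portal 192.168.10.20
        target iqn.2001-04.org.example:storage
        iscsiprovider iet
        blocksize 8k
        sparse 1
        content images
```

    With an entry like this, PVE logs into the storage server over SSH, creates one ZFS volume per virtual disk, and attaches it over iSCSI.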
  8. bbgeek17

    iSCSI interfaces

    PVE is trying to connect to all Portals that are advertised by the iSCSI Target. There is no mechanism in PVE today that allows for filtering of the Portal connectivity. You may want to look at the TrueNAS side to see if there is anything you can do there. Blockbridge : Ultra low latency all-NVME...
  9. bbgeek17

    Problem with SANs on a three node cluster

    Ok, you can ignore everything I said. @alexskysilk is correct in the direction you should be troubleshooting. You used a word combination that has a very specific meaning in the PVE context: "ZFS over iSCSI". However, you are just using straight iSCSI and trying to lay client-side ZFS on top of...
  10. bbgeek17

    Problem with SANs on a three node cluster

    Keep in mind that the OP is using the ZFS/iSCSI scheme. The ZFS volumes on the storage server, their export over iSCSI, and the PVE iSCSI connection are all managed by the native storage plugin. In fact, if I remember correctly, iscsiadm is not used. This approach uses the built-in QEMU iSCSI interface. The fact that there are two...
  11. bbgeek17

    Problem with SANs on a three node cluster

    Thank you for clarifying. There are many ZFS experts in this forum; I am not one of them. That said, I suspect that the ZFS/iSCSI plugin is sufficiently different from the local ZFS plugin, where ZFS replication is primarily integrated and tested. My guess is that in the current state the...
  12. bbgeek17

    nodename showing up in front of everything

    Thank you for sharing, @Hank42. In your post #25, it stood out to me that you mentioned holding some packages while upgrading others. In general, this is not a good idea. The various components of the system are interconnected, and partial upgrades can easily lead to inconsistencies. Unless...
  13. bbgeek17

    Problem with SANs on a three node cluster

    Can you clarify: did you build DIY-type storage servers that run some sort of Linux and organize the storage with ZFS internally? This is not an appliance-type SAN? And these two storage servers are independent of each other?
  14. bbgeek17

    Proxmox + NVMe/TCP : Is LVM-thin supported and suitable for fast VM provisioning?

    You may have to live with the current deployment time until a technical solution is implemented to address this limitation. There may be something on the horizon, but I do not have visibility into the development priorities of Proxmox Server Solutions GmbH. One option is to purchase a support...
  15. bbgeek17

    Proxmox + NVMe/TCP : Is LVM-thin supported and suitable for fast VM provisioning?

    Hi @lxiosjao , No. I am not sure if the new tech-preview PVE9 LVM snapshot-as-volume-chain may optimize anything for you. If I recall correctly, it still requires 100% space overhead reservation for snapshots/clones.
  16. bbgeek17

    nodename showing up in front of everything

    @Hank42, have you installed helper scripts on your system at any point?
  17. bbgeek17

    Problem with SANs on a three node cluster

    Hi @Ghosthawk , welcome to the forum. Can you please specify what SAN type/model you are using?
  18. bbgeek17

    FC SAN cluster in proxmox

    Hi @nbani, @floh8 is correct. You can present the FC storage to all nodes in the cluster, configure multipath, and then layer LVM on top, which PVE can operate on as a Storage Pool. The reason FC is not called out in those documents is that PVE can create storage pools for the other...
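    The layering described above ends up as a one-time VG creation plus a shared LVM entry; a sketch with placeholder names (the WWID, VG name, and storage id are all hypothetical):

```
# One-time, on a single node, against the multipath device of the FC LUN:
#   vgcreate vg_fc /dev/mapper/<wwid-of-the-FC-LUN>
#
# /etc/pve/storage.cfg -- the VG then becomes a shared storage pool:
lvm: fc-lvm
        vgname vg_fc
        shared 1
        content images
```

    Because every node sees the same multipath device, marking the pool "shared 1" is what lets PVE live-migrate VMs between nodes without moving disk data.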
  19. bbgeek17

    nodename showing up in front of everything

    https://www.reddit.com/r/Proxmox/s/KrmvrSdtmR
  20. bbgeek17

    Shared ISCSI Storage VG's?

    There may be a slight advantage to having multiple LUNs and multiple Targets from a queue-depth perspective. However, that generally applies to very high-performance storage with fast clients. Without knowing anything about your performance profile, and based solely on the fact that you have a...
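    For reference, the simple single-LUN layout this advice points toward reduces to an iSCSI entry plus a shared LVM entry in /etc/pve/storage.cfg; a sketch with placeholder ids, portal, and target (the VG is created once on the LUN beforehand):

```
iscsi: san
        portal 10.0.0.10
        target iqn.2005-10.org.example:target0
        content none

lvm: san-lvm
        vgname vg_san
        shared 1
        content images
```

    Splitting across multiple LUNs only pays off once a single LUN's queue depth becomes the bottleneck, which ordinary homelab and small-office workloads rarely reach.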