bbgeek17's latest activity

  • bbgeek17
    The scripts put a "helpful" service that runs in the background and "fixes" your PVE installation every time it gets "unfixed". There are multiple threads about this on the forum, I don't have them saved - if I come across one I will place it...
  • bbgeek17
    Did you previously install "PVE helper scripts" ? Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
  • bbgeek17
    From the console of the PVE after you logged in via SSH? Perhaps you can share your configuration, i.e. "ip a", etc. standard list of questions: What is the IP of the PVE server? What is the IP of your workstation? What is the IP of your...
  • bbgeek17
    Check your router/firewall. Did you update anything in your network? Try "curl" on your LAN IP as a confirmation. Try directly connecting a PC to your PVE to bypass your router. Users often report a case where they use an IP for PVE that is...
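    For illustration, a minimal sketch of the kind of connectivity check suggested above (the IP is a placeholder for your PVE host's LAN address):

```shell
PVE_IP=192.168.1.10   # placeholder -- substitute your PVE host's LAN IP

# The PVE web UI listens on TCP 8006 behind a self-signed certificate,
# so -k skips verification; any HTTP response at all proves the service
# is reachable from this workstation.
curl -vk --max-time 5 "https://${PVE_IP}:8006/"
```

    If this succeeds from a directly connected PC but not through the router, the problem is in the network path, not in PVE.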
  • bbgeek17
    This means that you are trying to delete a Virtual Disk/Image that matches the index ID of an existing VM. If you cannot see this disk in the VM's Hardware tab, then you need to rescan the disks: qm disk rescan (man qm). Or, if you insist doing...
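    A hedged sketch of the rescan-then-detach sequence on the PVE node (VM ID 100 and the unused-disk slot are placeholders for your environment):

```shell
# Rescan storages so volumes not referenced in the VM config show up
# as "Unused Disk" entries in that VM's Hardware tab.
qm disk rescan --vmid 100

# If the disk then appears as unused0 and you really want it gone,
# deleting the unused entry removes the underlying volume as well.
qm set 100 --delete unused0
```

    Detaching and removing the disk via the GUI's Hardware tab is equivalent.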
  • bbgeek17
    bbgeek17 replied to the thread iSCSI interfaces.
    It is the iSCSI target that advertises the portals (where IPs are assigned). The iSCSI Target is the TrueNAS, so it's the TrueNAS that informs the Initiator (PVE) about possible connection options (Portals). Your situation is described as one of...
  • bbgeek17
    Hi @mouk , It's not just not cluster-safe, it's not supported by PVE and you have to go out of your way to work around all the bumpers put in place to stop you. There are a number of commercial vendors in this space. It all depends on your...
  • bbgeek17
    You keep missing the point that the ZFS exists on the storage device side. The storage device, in your case Ubuntu, has a ZFS pool implemented by you, manually. You specify the name of that pool in PVE config to let PVE know what you named that...
  • bbgeek17
    ZFS over iSCSI in PVE is a specific scheme that implies: a) There is an external storage device b) You can log in to this server via SSH as root (this excludes Dell, Netapp, Hitachi, etc) c) There is internal storage in that server (HDD, SAS...
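    As an illustrative sketch only, a ZFS-over-iSCSI entry in /etc/pve/storage.cfg might look like the fragment below; the storage ID, pool name, portal IP, IQN, and LIO target portal group are all made-up placeholders, and the iscsiprovider value depends on the target software running on the storage server:

```
zfs: zfs-over-iscsi-example
        pool tank
        portal 192.168.10.5
        target iqn.2003-01.org.example:pve
        iscsiprovider LIO
        lio_tpg tpg1
        content images
        sparse 1
```

    The key point is that PVE manages the ZFS volumes and their iSCSI export on the remote server itself, over the root SSH connection, which is why requirements a) through c) above apply.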
  • bbgeek17
    bbgeek17 replied to the thread iSCSI interfaces.
    PVE is trying to connect to all Portals that are advertised by the iSCSI Target. There is no mechanism in PVE today that allows for filtering of the Portal connectivity. You may want to look at the TrueNAS side to see if there is anything you can do...
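    To see the behavior described above for yourself, a sketch of the two relevant open-iscsi queries (the portal IP is a placeholder):

```shell
# Ask the target which portals it advertises via SendTargets discovery:
iscsiadm -m discovery -t sendtargets -p 192.168.10.5

# List the sessions the initiator (PVE) actually established:
iscsiadm -m session
```

    Comparing the two outputs shows which advertised portals PVE is attempting to use; trimming the list has to happen on the target side.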
  • bbgeek17
    Ok, you can ignore everything I said. @alexskysilk is correct in the direction you should be troubleshooting. You used a word combination that has very specific meaning in the PVE context "zfs over iscsi". However, you are just using straight...
  • bbgeek17
    Keep in mind that OP is using the ZFS/iSCSI scheme. ZFS volumes on the storage server, their export over iSCSI, and the PVE iSCSI connection are all managed by the native storage plugin. In fact, if I remember correctly, iscsiadm is not used. This approach uses built-in...
  • bbgeek17
    Thank you for clarifying. There are many ZFS experts in this forum, I am not one of them. That said, I suspect that the ZFS/iSCSI plugin is sufficiently different from the Local ZFS plugin, where the ZFS replication is primarily integrated and...
  • bbgeek17
    Thank you for sharing, @Hank42. In your post #25, it stood out to me that you mentioned holding some packages while upgrading others. In general, this is not a good idea. The various components of the system are interconnected, and partial...
  • bbgeek17
    Can you clarify: did you build DIY-type storage servers that run some sort of Linux and organize the storage with ZFS internally? This is not an appliance-type SAN? And these two storage servers are independent of each other...
  • bbgeek17
    You may have to live with the current deployment time until a technical solution is implemented to address this limitation. There may be something on the horizon, but I do not have visibility into the development priorities of Proxmox Server...
  • bbgeek17
    Hi @lxiosjao , No. I am not sure if the new tech-preview PVE 9 LVM snapshot-as-volume-chain may optimize anything for you. If I recall correctly, it still requires 100% space overhead reservation for snapshots/clones.
  • bbgeek17
    @Hank42, have you installed helper scripts on your system at any point?
  • bbgeek17
    Hi @Ghosthawk , welcome to the forum. Can you please specify what SAN type/model you are using?
  • bbgeek17
    bbgeek17 reacted to Johannes S's post in the thread FC SAN cluster in proxmox with Like.
    The official docs: And the specific documentation for LVM: https://pve.proxmox.com/wiki/Storage:_LVM Please note LVM, NOT LVM-thin; they are not the same. LVM-thin doesn't work with shared storage like an FC SAN; LVM does. Following writeups...