bbgeek17's latest activity

  • bbgeek17
    Excellent, thank you for the update. Continue your configuration with Multipath, using your storage vendor's recommendation as a guide. Once done, you can configure LVM, or find a guide on configuring a 3rd-party clustered FS. For LVM, you can use...
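    The LVM-on-multipath step might be sketched as follows; every name here (/dev/mapper/mpatha, san-vg, san-lvm) is a placeholder, and the real device name comes from your multipath configuration:

    ```shell
    # Initialize the multipath device as an LVM physical volume and build a
    # volume group on it (example device and VG names only).
    pvcreate /dev/mapper/mpatha
    vgcreate san-vg /dev/mapper/mpatha
    # Register the VG in PVE as shared LVM storage for VM disks:
    pvesm add lvm san-lvm --vgname san-vg --shared 1 --content images
    ```

    The --shared 1 flag tells PVE the volume group is visible to all cluster nodes, which is the point of a SAN-backed LUN.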
  • bbgeek17
    OK, I rebooted the PVE host and now I see the storage; it shows up as 4 devices.
  • bbgeek17
    Thank you for the update, @tdemetriou. There could be many reasons for this behavior:
      - Wrong cable
      - Wrong transceiver
      - Faulty port on the FC switch
      - Incorrect BIOS settings on the FC cards
      - Firmware issues requiring an update
      - Missing or incorrect...
  • bbgeek17
    @oldtimer, as an elite member of the PVE subscription-owning class, you have access to the support channel. If I were you, I’d compile and condense the issue description, include the package information (along with the full upgrade output), and...
  • bbgeek17
    In PVE this service updates the text file that displays the login banner:

        systemctl | grep bann
        pvebanner.service   loaded active exited   Proxmox VE Login Banner

    Reboot...
  • bbgeek17
    Hi @Reartu24, If the data is that valuable, I strongly recommend proceeding with extreme caution. Here’s an example of what can happen when a QEMU block is not properly released: Data lost for time window My recommendation is to address this...
  • bbgeek17
    Hi @SaltyMcBitters, welcome to the forum. Cross-cluster migration is possible via the "qm remote-migrate" CLI, or by using PDM (https://pve.proxmox.com/wiki/Proxmox_Datacenter_Manager_Roadmap). PDM uses "qm remote-migrate" in the background. Good...
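    For reference, a "qm remote-migrate" invocation might look like the sketch below; the VMIDs, hostname, API token, fingerprint, and storage/bridge names are all placeholders to be replaced with your own values:

    ```shell
    # Migrate VM 100 to a node in another cluster, keeping VMID 100 there.
    # The endpoint string carries the remote host, an API token, and the
    # certificate fingerprint (all example values).
    qm remote-migrate 100 100 \
      'host=pve-remote.example.com,apitoken=PVEAPIToken=root@pam!migrate=<secret>,fingerprint=<cert-fp>' \
      --target-storage local-lvm \
      --target-bridge vmbr0 \
      --online
    ```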
  • bbgeek17
    I think something like this would be better:

        response = requests.post(
            f"{PROXMOX_HOST}/api2/json/nodes/{NODE}/qemu/{VMID}/config",
            headers=headers,
            data={"virtio5": "Storage1-ext4:50,format=qcow2"},
            verify=False,
        )
    ...
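    As a sketch only: separating URL/payload construction from the HTTP call makes the snippet easier to check in isolation. PROXMOX_HOST, NODE, VMID, and headers are placeholders from the original post, and the helper name is hypothetical:

    ```python
    def build_config_request(host: str, node: str, vmid: int,
                             slot: str, disk_spec: str):
        """Build the URL and form payload for a PVE VM-config API call."""
        url = f"{host}/api2/json/nodes/{node}/qemu/{vmid}/config"
        payload = {slot: disk_spec}
        return url, payload

    # The actual call would then be:
    #   requests.post(url, headers=headers, data=payload, verify=False)
    # (verify=False skips TLS verification -- acceptable in a lab,
    #  not in production).
    ```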
  • bbgeek17
    Hi @BD-Nets, you were on point up until this sentence. There are no built-in, or even recommended, cluster-aware filesystems for PVE. Based on all the information OP has provided so far, they should not attempt to configure a CAF. Using LVM is...
  • bbgeek17
    You do not want PVE to be mapped to the same LUNs and Zones as your VMware. You need to create new LUNs, new Zones, new mappings. If the LUNs are properly mapped, you should see the disks in "lsblk" and "lsscsi" output. If you don't, the...
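    A minimal sketch of that verification step (host numbers and device names will differ on your system):

    ```shell
    # Rescan all SCSI hosts so newly mapped LUNs are detected without a reboot.
    for h in /sys/class/scsi_host/host*/scan; do echo "- - -" > "$h"; done
    lsblk    # the new LUNs should appear as additional disks (e.g. sdb, sdc)
    lsscsi   # confirms vendor/model and the SCSI address for each device
    ```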
  • bbgeek17
    You can do this: https://pve.proxmox.com/wiki/Root_Password_Reset You can also undo anything related to SDN in the /etc/network/ location. Any locale should have a colon; you can just try all on-screen keys in the user field first, so you can see what...
  • bbgeek17
    No worries. You can mark the thread as Solved by editing the first post and selecting the dropdown next to the subject. Cheers Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
  • bbgeek17
    This does not look like a correct path. I think you are missing /dev.
  • bbgeek17
    You should be able to run:

        qm disk rescan --vmid 3103

    This should bring the disk into the VM hardware panel, where you can delete it. As for the migration issue: it sounds like the QEMU process that previously attempted the migration failed. It left...
  • bbgeek17
    Hi @tdemetriou, PVE is based on a Debian userland with an Ubuntu-derived kernel. Block storage is handled by the Linux kernel. The process of connecting a SAN to a PVE host is the same as for any other Debian/Ubuntu host. The steps are probably listed...
  • bbgeek17
    bbgeek17 replied to the thread new install upgrade v8.4 to v9.
    Hi @tdemetriou, welcome to the forum. If you have an existing 8.4 installation you do not need an ISO to upgrade. The process is described here: https://pve.proxmox.com/wiki/Upgrade_from_8_to_9
  • bbgeek17
    It's not clear to me what the issue is. The snippet you posted applies to a new, empty disk. Your original post was about an existing root partition. That would be handled by something like this:

        #cloud-config
        growpart:
          mode: auto
          devices: ['/']
    ...
  • bbgeek17
    Hi @dchalon, welcome to the forum. I believe you will need to clone that new VM as a full clone. There is no unlink operation.
  • bbgeek17
    Hi @theuken, welcome to the forum. You can't. Netapp is not a suitable target for the ZFS/iSCSI scheme. I am not familiar with Virtucache, so I can't provide any advice there. The out-of-the-box solution is to use LVM with iSCSI. You may find this...
  • bbgeek17
    Have you checked your browser console for errors? Have you disabled any password managers or other extensions that might interfere? Have you previously installed any "enhancements" that affect the non-production subscription reminder...