Search results

  1. bbgeek17

    [SOLVED] WinXP conversion from esx to pve refuses to boot

    Hi @Elleni , PVE is based on the QEMU/KVM virtualization stack. While the Proxmox team makes some enhancements, the fundamentals of QEMU virtualization are universal. Just as ESXi uses VMware Tools for guest integration, QEMU uses VirtIO tools/drivers. These are typically developed by OS...
  2. bbgeek17

    [TUTORIAL] Inside Proxmox VE 9 SAN Snapshot Support

    Hi @spirit, Thanks for your patience. I’ve been meaning to get back to you, but things have been quite busy. On the caching side, I think there may be a bit of misunderstanding around subcluster allocation and the effect of l2_extended=on. In QCOW, enabling l2_extended increases the metadata...
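The subcluster-allocation feature discussed above can be tried directly with `qemu-img`, where the option is spelled `extended_l2`. A minimal sketch, assuming `qemu-img` is installed; the file name and sizes are illustrative:

```shell
# Create a qcow2 image with extended L2 entries (subcluster allocation).
# extended_l2=on splits each cluster into 32 subclusters; QEMU's docs
# suggest pairing it with a larger cluster size such as 128k, which is
# why the metadata footprint grows as described above.
qemu-img create -f qcow2 -o extended_l2=on,cluster_size=128k demo.qcow2 1G

# Inspect the format-specific metadata to confirm the setting took effect
# ("extended l2: true" appears under "Format specific information").
qemu-img info demo.qcow2
```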
  3. bbgeek17

    vm offline migration from cluster to cluster using Netapp Storage

    Not only that, but if there are dead DM devices, the lvs, pvs and other scan commands used by PVE will hang. This in turn will cause the stats daemon to hang. Having dead devices on the system will lead to unpredictable instability. Blockbridge : Ultra low latency all-NVME shared storage for...
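The hang described above can usually be traced with standard device-mapper tools before it takes down the stats daemon. A triage sketch, assuming `multipath-tools` and `lvm2` are installed and run as root; the WWID shown is hypothetical:

```shell
# List device-mapper devices and their state; a device whose backing LUN
# is gone typically still appears here but fails any I/O against it.
dmsetup info -c

# Check multipath maps for paths marked "failed faulty".
multipath -ll

# Non-blocking direct read against a suspect mapper device
# (36001... is a placeholder for the WWID reported above):
dd if=/dev/mapper/36001... of=/dev/null bs=4k count=1 iflag=direct

# If the LUN really was removed on the array, flush the stale map so
# lvs/pvs stop trying to scan it:
multipath -f 36001...
```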
  4. bbgeek17

    RAID expand - LVM-thin expand & resize

    Hi @dindon_tv , welcome to the forum. If the steps are executed properly, you should not lose the data. There are many threads on the forum about this, for example: https://forum.proxmox.com/threads/increase-local-lvm-and-vm-disk-size.121257/
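The usual sequence after growing the underlying RAID volume is: grow the PV, then grow the thin pool into the freed extents. A sketch assuming the default `pve` VG with thin pool `data` on `/dev/sdb` (both names are assumptions; adjust to your layout, and take a backup first):

```shell
# Assumed layout: data disk /dev/sdb holding VG "pve" with thin pool "data".
pvresize /dev/sdb                 # let LVM claim the new space on the PV
vgs pve                           # the VG should now report free extents
lvextend -l +100%FREE pve/data    # grow the thin pool into the free space
lvs pve                           # confirm the new pool size
```

Individual guest disks are then grown separately (e.g. via `qm resize`), as covered in the thread linked above.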
  5. bbgeek17

    Proxmox + shared LVM

    Normally when using Shared LVM the LV for a particular VM is only active on the node which owns the VM. You should not see a "dev" link for that LV. From your output, you are using the new QCOW/LVM technology. Keep in mind that it still has Experimental status. You may have run into something...
  6. bbgeek17

    vm offline migration from cluster to cluster using Netapp Storage

    Hi @Budgreg , welcome to the forum. Is this how you would describe a system in a ticket you open with Netapp? :-) What does it mean for software to be irritated? :-) This just removes a pool definition from PVE, the OS/Kernel are still very aware of the LVM structure and LUN presence. If you...
  7. bbgeek17

    Best Practise for Multipath iSCSI

    You are welcome, happy we could help. @johannes is correct. The main reason we haven't updated the lvm-shared-storage KB article is that the lvm snapshot bits are not yet ready for production. A considerable amount of focused development and testing is needed for it to be reliable in the way we...
  8. bbgeek17

    Please help with Proxmox VE 9 Cluster and Alletra B10000 Via iSCSI

    Hi @RonRegazzi , welcome to the forum. Most of your questions regarding best practices should be addressed in your storage vendor's documentation. Many storage vendors recommend using multiple subnets as best practices. That said, you may find this article helpful...
  9. bbgeek17

    iSCSI under performance

    @UdoB You are 100% correct and thanks for calling it out! Tables without units are incomplete. I've updated the table headers to clarify appropriate units. Cheers Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
  10. bbgeek17

    Can no longer mount shared NFS storage from external device.

    It is Perl. It is what PVE does to health-check NFS. You can look at the two CMD= lines and run those commands manually to see if they fail. Then you can troubleshoot why they fail. You should try testing that it is actually working by doing a large non-fragmented ICMP ping.
  11. bbgeek17

    Can no longer mount shared NFS storage from external device.

    My first impression of the information you provided - you have a network issue. Perhaps an MTU mismatch. Note that PVE health checks for NFS consist of RPC probing (showmount, rpcinfo). Those often use UDP. My next step is to use those commands directly and troubleshoot any issues you find...
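The probes mentioned above can be reproduced by hand; failures here explain a hung or greyed-out NFS storage. A sketch, assuming `nfs-common` tooling is installed; the server address is a placeholder:

```shell
# Reproduce PVE's NFS health probes manually.
rpcinfo -p 192.168.1.50          # list RPC services; PVE probes this over UDP
showmount -e 192.168.1.50        # export list, as PVE's health check does

# Test for an MTU mismatch with a large, non-fragmented ICMP ping.
# 8972 = 9000-byte jumbo MTU minus 28 bytes of IP/ICMP headers;
# use 1472 instead for a standard 1500-byte MTU.
ping -M do -s 8972 -c 3 192.168.1.50
```

If the large ping fails while a default-size ping succeeds, the MTU is mismatched somewhere on the path.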
  12. bbgeek17

    Adding Proxmox 8.4.13 Node to Existing 8.3.2 Cluster - Compatibility & Live Migration?

    If I remember correctly, the official answer is: it should work, but it’s not guaranteed - nobody explicitly tests upgrades between every possible combination of minor in-family releases. That said, the Proxmox team takes great care to avoid breaking existing environments. If a potential...
  13. bbgeek17

    Can no longer mount shared NFS storage from external device.

    Hi @akulbe, What happens when you execute: pvesm status Are you able to "ls"/access the /mnt/pve/VM-Linux and do you see the data there? There is more involved in the PVE/NFS relationship than port 2049.
  14. bbgeek17

    iSCSI under performance

    Hi @pmvemf, Following up on this, I asked our performance team to review the kernel iSCSI stack (what you referred to as the "Proxmox native initiator," which is in fact the standard Linux initiator). Our testing with PVE9 showed no functional issues with the Linux kernel iSCSI initiator. At...
  15. bbgeek17

    Proxmox proxy problems

    Hi @DJohneys , welcome to the forum. This is not PVE specific but rather standard Linux administration. There are many ways to do what you want: echo 'export http_proxy="http://proxy.example.com:8080"' >> ~/.bashrc echo 'export https_proxy="http://proxy.example.com:8080"' >> ~/.bashrc echo...
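Laid out on separate lines, the approach from the truncated snippet looks like this. The proxy URL is a placeholder; substitute your own:

```shell
# Persist proxy settings for future interactive shells, then load them
# into the current session.
echo 'export http_proxy="http://proxy.example.com:8080"'  >> ~/.bashrc
echo 'export https_proxy="http://proxy.example.com:8080"' >> ~/.bashrc
. ~/.bashrc
```

Note that APT does not read these shell variables; it needs its own `Acquire::http::Proxy` setting in a file under `/etc/apt/apt.conf.d/` if package downloads must also go through the proxy.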
  16. bbgeek17

    data lost for time window

    Hi @HaVecko, I don’t have any concrete suggestions for you, mainly because we simply don’t know what happened, only a theory. As a reminder, I’ve never worked with your particular backup vendor. It’s entirely possible that the whole theory is wrong and nothing is as it seems. You also haven’t...
  17. bbgeek17

    proxmox one node in cluster is showing up with question mark

    Hi @mosaab , welcome to the forum. I am guessing you had a PVE8 cluster and now added a PVE9 node? Search for "gluster" on this page: https://pve.proxmox.com/wiki/Roadmap Follow on with reading: https://forum.proxmox.com/threads/glusterfs-is-still-maintained-please-dont-drop-support.168804/...
  18. bbgeek17

    Best Practice: shared scratch disk on Proxmox host

    With this requirement ^ , do what Google suggested, which seems to be to install a CIFS server and share the location via CIFS. Cheers
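A minimal version of that suggestion, using Samba; the share name, path, and group are illustrative assumptions:

```shell
# Install the CIFS/SMB server and export a host directory.
apt install samba

# Append a share definition (names here are placeholders - adjust to taste).
cat >> /etc/samba/smb.conf <<'EOF'
[scratch]
   path = /srv/scratch
   browseable = yes
   read only = no
   valid users = @scratch-users
EOF

systemctl restart smbd
```

Guests then mount it like any other CIFS share, which keeps them decoupled from the host's storage layout.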
  19. bbgeek17

    data lost for time window

    With the above additional information I am even more convinced that there was likely a QEMU snapshot/filter/staging that was lost (not replayed) on reboot. It is not surprising that your Windows server got unsynced when its data went back; it's a security precaution in AD.
  20. bbgeek17

    data lost for time window

    I am a little confused about what data was lost - actual data inside the VM, i.e. files/databases/updates, or monitoring stats from an external service? You should probably invest some time to understand how this critical process works and how it affects your production. Since nobody knows what exactly...