bbgeek17's latest activity

  • bbgeek17
    bbgeek17 replied to the thread LVM-Thin oder ZFS.
    Hello @ce3rd, you might also be interested in the discussion in this thread: https://forum.proxmox.com/threads/understanding-qcow2-risks-with-qemu-cache-none-in-proxmox.175933/#post-816074
  • bbgeek17
    @Impact is correct: one VM has its TPM disk backed by local-lvm, the other by a directory-based .raw file. The raw file format does not support snapshots. Blockbridge : Ultra low latency all-NVME shared storage for Proxmox - https://www.blockbridge.com/proxmox
  • bbgeek17
    I'd say it's because of the .raw TPM disk. Don't use Directory if you can help it.
  • bbgeek17
    @unsichtbarre, if you feel that your question was answered, you can mark the thread as SOLVED by editing the first message and selecting the appropriate prefix from the subject dropdown.
  • bbgeek17
    bbgeek17 replied to the thread Hostname cloud-init.
    Hi @Collbrothers, welcome to the forum. The dump command _always_ shows only the output that refers to the _internal_ cloud-init configuration in PVE. It does not read or interpret your snippets. Once you use a custom snippet, the internal data is no...
  • bbgeek17
    Hi @Lugitsch IT, welcome to the forum. Please open a new thread with the following information:
    - pveversion from each host
    - cat /etc/pve/storage.cfg in TEXT format using CODE tags from each node
    - qm config [vmid] in TEXT format using CODE...
  • bbgeek17
    Duh!
    root@host1:/var/lib/iscsi/send_targets# pvcreate /dev/mapper/mpatha
      Physical volume "/dev/mapper/mpatha" successfully created.
  • bbgeek17
    You can try "ls -alR /dev | grep mpath". You will likely find it in /dev/mapper. You can also grep for dm-5 or 36000d310055d34000000000000000028. Cheers
  • bbgeek17
    Hi @unsichtbarre , You are more than halfway there. The only thing left to do is set up LVM. There are two main ways to configure iSCSI with Proxmox Virtual Environment. The first method is to use the built-in PVE iSCSI storage pool. The...
  • bbgeek17
    Thanks for your points. While GFS2 technically works, I would not recommend it for production use. This setup was only a PoC to demonstrate what could be achieved. We also added PBS as a qdevice to provide proper quorum handling for HA. For...
  • bbgeek17
    As our customers are businesses and enterprises, I'd never recommend to them to use unsupported technology combination. That said, necessity is the mother of invention/adaptation. The minimum requirements of GFS2 do not negate the minimum...
  • bbgeek17
    Great, feel free to mark this thread as SOLVED by editing the first post and selecting the appropriate prefix from the subject dropdown. Cheers
  • bbgeek17
    Hello, Thank you. This worked, I was afraid this command could stop the VM.
  • bbgeek17
    Hi @F4R, welcome to the forum. You need to run "qm disk rescan --vmid 100" to bring the disk into the VM configuration as "unused0". Then you can remove it from the VM hardware page.
  • bbgeek17
    If someone is submitting a request via the API, they might as well use the API to query the task status ;-) pvesh get /nodes/$NODE/tasks --output-format json | jq -r '[.[] | select(.type == "clusterjoin")] | sort_by(.endtime) | .[-1] | .status'...
  • bbgeek17
    bbgeek17 replied to the thread Can't create VMs in PVE-9.1.6.
    You can perform this directly from the console or via VNC. From what you’ve described, you are following all the correct steps. No one else has reported a similar issue. Since your environment has not yet been fully ruled out and there’s no...
  • bbgeek17
    bbgeek17 replied to the thread Can't create VMs in PVE-9.1.6.
    As a temporary test/fix, since you can create VMs, build a Linux desktop VM and try from there, if you can. Install debsums and run it (apt install debsums; see man debsums). Depending on the result of the debsums run, you may still want to...
  • bbgeek17
    Hi @Charrat, welcome to the forum. Other users in your situation reported that they were able to restore functionality once the automated patching was removed, and proper packages were reinstalled. So it seems like that should be a possibility...
  • bbgeek17
    You may have a race where the "pvesm" status has not yet propagated to the PVE cluster view. I've asked Claude to help you; obviously, I have not tested this for you. Good luck #!/bin/bash # ─── CONFIG...
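    The usual workaround for that kind of propagation race is to poll until the storage shows up, with a timeout. A minimal, untested sketch of the pattern; check_storage is a hypothetical stand-in (stubbed here so the sketch runs) for something like `pvesm status --storage "$STORAGE" | grep -q active`:

    ```shell
    #!/bin/bash
    # Hedged sketch of a poll-until-ready loop for a propagation race.
    # check_storage is a stub standing in for a real pvesm status check.
    check_storage() { [ -e /tmp/pvesm_ready ]; }

    timeout=10 interval=1 elapsed=0
    touch /tmp/pvesm_ready   # simulate the cluster view becoming consistent

    # Retry until the check passes or the timeout is exceeded.
    while ! check_storage; do
      if [ "$elapsed" -ge "$timeout" ]; then
        echo "timed out waiting for storage" >&2
        exit 1
      fi
      sleep "$interval"
      elapsed=$((elapsed + interval))
    done
    RESULT="storage is ready"
    echo "$RESULT"
    ```

    The loop bounds the wait, so a genuinely missing storage still fails fast instead of hanging the script forever.
    
    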
  • bbgeek17
    I think OP is likely in a situation where their hypervisor hosts do not have significant internal storage. They, or their customers, already own an entry- to mid-level SAN solution and want to maximize ROI from it. Typically, people in this...