Proxmox (8.4.0) / LINSTOR (8.1.2) PoC

msantosn

Aug 1, 2025
Hi, I am trying to set up a PoC for Proxmox and LINSTOR.

I have a 2-node cluster and followed the instructions here: https://linbit.com/blog/linstor-setup-proxmox-ve-volumes/ and the corresponding video, but I am unsure why I am getting the following error when creating a VM:


Code:
NOTICE
  Trying to create diskful resource (pm-2e2b977c) on (VXNL01A).
  Diskfull assignment on VXNL01A failed, let's autoplace it.
TASK ERROR: unable to create VM 103 - API Return-Code: 500. Message: Could not autoplace resource pm-2e2b977c, because: [{"ret_code":-4611686018407201828,"message":"Satellite 'VXNL01A' does not support the following layers: [DRBD]","details":"Auto-placing resource: pm-2e2b977c","error_report_ids":["688C9F57-00000-000002"],"obj_refs":{"RscDfn":"pm-2e2b977c"},"created_at":"2025-08-01T12:26:38.483976816Z"}] at /usr/share/perl5/PVE/Storage/Custom/LINSTORPlugin.pm line 434.
PVE::Storage::Custom::LINSTORPlugin::alloc_image("PVE::Storage::Custom::LINSTORPlugin", "drbd0", HASH(0x578425754450), 103, "raw", undef, 33554432) called at /usr/share/perl5/PVE/Storage.pm line 1036
eval {...} called at /usr/share/perl5/PVE/Storage.pm line 1036
PVE::Storage::__ANON__() called at /usr/share/perl5/PVE/Cluster.pm line 653
eval {...} called at /usr/share/perl5/PVE/Cluster.pm line 619
PVE::Cluster::__ANON__("storage-drbd0", undef, CODE(0x5784257682a0)) called at /usr/share/perl5/PVE/Cluster.pm line 698
PVE::Cluster::cfs_lock_storage("drbd0", undef, CODE(0x5784257682a0)) called at /usr/share/perl5/PVE/Storage/Plugin.pm line 650
PVE::Storage::Plugin::cluster_lock_storage("PVE::Storage::Custom::LINSTORPlugin", "drbd0", 1, undef, CODE(0x5784257682a0)) called at /usr/share/perl5/PVE/Storage.pm line 1041
PVE::Storage::vdisk_alloc(HASH(0x57842575db08), "drbd0", 103, "raw", undef, 33554432) called at /usr/share/perl5/PVE/API2/Qemu.pm line 580
PVE::API2::Qemu::__ANON__("scsi0", HASH(0x5784257691b0)) called at /usr/share/perl5/PVE/API2/Qemu.pm line 94
PVE::API2::Qemu::__ANON__(HASH(0x5784256387a8), CODE(0x57842575e918)) called at /usr/share/perl5/PVE/API2/Qemu.pm line 632
eval {...} called at /usr/share/perl5/PVE/API2/Qemu.pm line 632
create_disks(PVE::RPCEnvironment=HASH(0x57841f569820), "root\@pam", HASH(0x5784256387a8), "x86_64", HASH(0x57842575db08), 103, undef, HASH(0x5784256387a8), ...) called at /usr/share/perl5/PVE/API2/Qemu.pm line 1306
eval {...} called at /usr/share/perl5/PVE/API2/Qemu.pm line 1305
PVE::API2::Qemu::__ANON__() called at /usr/share/perl5/PVE/AbstractConfig.pm line 299
PVE::AbstractConfig::__ANON__() called at /usr/share/perl5/PVE/Tools.pm line 259
eval {...} called at /usr/share/perl5/PVE/Tools.pm line 259
PVE::Tools::lock_file_full("/var/lock/qemu-server/lock-103.conf", 1, 0, CODE(0x5784256e6690)) called at /usr/share/perl5/PVE/AbstractConfig.pm line 302
PVE::AbstractConfig::__ANON__("PVE::QemuConfig", 103, 1, 0, CODE(0x57841d01b428)) called at /usr/share/perl5/PVE/AbstractConfig.pm line 322
PVE::AbstractConfig::lock_config_full("PVE::QemuConfig", 103, 1, CODE(0x57841d01b428)) called at /usr/share/perl5/PVE/API2/Qemu.pm line 1362
PVE::API2::Qemu::__ANON__() called at /usr/share/perl5/PVE/API2/Qemu.pm line 1397
eval {...} called at /usr/share/perl5/PVE/API2/Qemu.pm line 1397
PVE::API2::Qemu::__ANON__("UPID:VXNL01A:00007982:00077F24:688CB27E:qmcreate:103:root\@pam:") called at /usr/share/perl5/PVE/RESTEnvironment.pm line 620
eval {...} called at /usr/share/perl5/PVE/RESTEnvironment.pm line 611
PVE::RESTEnvironment::fork_worker(PVE::RPCEnvironment=HASH(0x57841f569820), "qmcreate", 103, "root\@pam", CODE(0x578425767a78)) called at /usr/share/perl5/PVE/API2/Qemu.pm line 1424
PVE::API2::Qemu::__ANON__(HASH(0x5784256387a8)) called at /usr/share/perl5/PVE/RESTHandler.pm line 499
PVE::RESTHandler::handle("PVE::API2::Qemu", HASH(0x578422f69f10), HASH(0x5784256387a8)) called at /usr/share/perl5/PVE/HTTPServer.pm line 180
eval {...} called at /usr/share/perl5/PVE/HTTPServer.pm line 141
PVE::HTTPServer::rest_handler(PVE::HTTPServer=HASH(0x57841d01bce0), "::ffff:195.158.104.85", "POST", "/nodes/VXNL01A/qemu", HASH(0x578425769090), HASH(0x57842576aca0), "extjs") called at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 961
eval {...} called at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 935
PVE::APIServer::AnyEvent::handle_api2_request(PVE::HTTPServer=HASH(0x57841d01bce0), HASH(0x578425748cb0), HASH(0x578425769090), "POST", "/api2/extjs/nodes/VXNL01A/qemu") called at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1190
eval {...} called at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1182
PVE::APIServer::AnyEvent::handle_request(PVE::HTTPServer=HASH(0x57841d01bce0), HASH(0x578425748cb0), HASH(0x578425769090), "POST", "/api2/extjs/nodes/VXNL01A/qemu") called at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1662
PVE::APIServer::AnyEvent::__ANON__(AnyEvent::Handle=HASH(0x578425768ce8), "scsihw=virtio-scsi-single&name=Test0&vmid=103&scsi0=drbd0%3A3"...) called at /usr/lib/x86_64-linux-gnu/perl5/5.36/AnyEvent/Handle.pm line 1505
AnyEvent::Handle::__ANON__(AnyEvent::Handle=HASH(0x578425768ce8)) called at /usr/lib/x86_64-linux-gnu/perl5/5.36/AnyEvent/Handle.pm line 1315
AnyEvent::Handle::_drain_rbuf(AnyEvent::Handle=HASH(0x578425768ce8)) called at /usr/lib/x86_64-linux-gnu/perl5/5.36/AnyEvent/Handle.pm line 2015
AnyEvent::Handle::__ANON__() called at /usr/lib/x86_64-linux-gnu/perl5/5.36/AnyEvent/Loop.pm line 248
AnyEvent::Loop::one_event() called at /usr/lib/x86_64-linux-gnu/perl5/5.36/AnyEvent/Impl/Perl.pm line 46
AnyEvent::CondVar::Base::_wait(AnyEvent::CondVar=HASH(0x57841f481748)) called at /usr/lib/x86_64-linux-gnu/perl5/5.36/AnyEvent.pm line 2034
AnyEvent::CondVar::Base::recv(AnyEvent::CondVar=HASH(0x57841f481748)) called at /usr/share/perl5/PVE/APIServer/AnyEvent.pm line 1999
PVE::APIServer::AnyEvent::run(PVE::HTTPServer=HASH(0x57841d01bce0)) called at /usr/share/perl5/PVE/Service/pvedaemon.pm line 52
PVE::Service::pvedaemon::run(PVE::Service::pvedaemon=HASH(0x5784251da298)) called at /usr/share/perl5/PVE/Daemon.pm line 171
eval {...} called at /usr/share/perl5/PVE/Daemon.pm line 171
PVE::Daemon::__ANON__(PVE::Service::pvedaemon=HASH(0x5784251da298)) called at /usr/share/perl5/PVE/Daemon.pm line 390
eval {...} called at /usr/share/perl5/PVE/Daemon.pm line 379
PVE::Daemon::__ANON__(PVE::Service::pvedaemon=HASH(0x5784251da298), undef) called at /usr/share/perl5/PVE/Daemon.pm line 551
eval {...} called at /usr/share/perl5/PVE/Daemon.pm line 549
PVE::Daemon::start(PVE::Service::pvedaemon=HASH(0x5784251da298), undef) called at /usr/share/perl5/PVE/Daemon.pm line 659
PVE::Daemon::__ANON__(HASH(0x57841d0144b0)) called at /usr/share/perl5/PVE/RESTHandler.pm line 499
PVE::RESTHandler::handle("PVE::Service::pvedaemon", HASH(0x5784251da5e0), HASH(0x57841d0144b0), 1) called at /usr/share/perl5/PVE/RESTHandler.pm line 985
eval {...} called at /usr/share/perl5/PVE/RESTHandler.pm line 968
PVE::RESTHandler::cli_handler("PVE::Service::pvedaemon", "pvedaemon start", "start", ARRAY(0x57841d03b630), ARRAY(0x57841d034de0), undef, undef, undef) called at /usr/share/perl5/PVE/CLIHandler.pm line 594
PVE::CLIHandler::__ANON__(ARRAY(0x57841d014678), CODE(0x57841d4278c8), undef) called at /usr/share/perl5/PVE/CLIHandler.pm line 673
PVE::CLIHandler::run_cli_handler("PVE::Service::pvedaemon", "prepare", CODE(0x57841d4278c8)) called at /usr/bin/pvedaemon line 27
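
In case it is useful: I assume I can pull more detail on the controller with the error-report ID from the log above, and that `linstor node info` should show which layers each satellite supports. Both commands are from the stock linstor client; I have not pasted their output here.

Code:
# which layers/providers does each satellite support?
linstor node info

# full error report referenced in the TASK ERROR above
linstor error-reports show 688C9F57-00000-000002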

The setup is:
Node A - 10.201.3.12
Node B - 10.201.3.8

The only thing I did differently was to create an alias interface, but I doubt that is the problem:

Code:
root@VXNL01A:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir

drbd: drbd0
        content images, rootdir
        controller 10.201.3.12
        resourcegroup drbd0-rg
root@VXNL01A:~# linstor node list
╭─────────────────────────────────────────────────────────╮
┊ Node    ┊ NodeType  ┊ Addresses                ┊ State  ┊
╞═════════════════════════════════════════════════════════╡
┊ VXNL01A ┊ SATELLITE ┊ 10.201.3.12:3366 (PLAIN) ┊ Online ┊
┊ VXNL01B ┊ SATELLITE ┊ 10.201.3.8:3366 (PLAIN)  ┊ Online ┊
╰─────────────────────────────────────────────────────────╯
root@VXNL01A:~# linstor storage-pool list
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node    ┊ Driver   ┊ PoolName          ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName                   ┊
╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ VXNL01A ┊ DISKLESS ┊                   ┊              ┊               ┊ False        ┊ Ok    ┊ VXNL01A;DfltDisklessStorPool ┊
┊ DfltDisklessStorPool ┊ VXNL01B ┊ DISKLESS ┊                   ┊              ┊               ┊ False        ┊ Ok    ┊ VXNL01B;DfltDisklessStorPool ┊
┊ drbd0                ┊ VXNL01A ┊ LVM_THIN ┊ vg_nvme0/thinpool ┊     1.75 TiB ┊      1.75 TiB ┊ True         ┊ Ok    ┊ VXNL01A;drbd0                ┊
┊ drbd0                ┊ VXNL01B ┊ LVM_THIN ┊ vg_nvme0/thinpool ┊     1.75 TiB ┊      1.75 TiB ┊ True         ┊ Ok    ┊ VXNL01B;drbd0                ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
root@VXNL01A:~# linstor resource-group list
╭──────────────────────────────────────────────────────────────╮
┊ ResourceGroup ┊ SelectFilter          ┊ VlmNrs ┊ Description ┊
╞══════════════════════════════════════════════════════════════╡
┊ DfltRscGrp    ┊ PlaceCount: 2         ┊        ┊             ┊
╞┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄┄╡
┊ drbd0-rg      ┊ PlaceCount: 2         ┊ 0      ┊             ┊
┊               ┊ StoragePool(s): drbd0 ┊        ┊             ┊
╰──────────────────────────────────────────────────────────────╯
root@VXNL01A:~# linstor resource list
╭──────────────────────────────────────────────────────────────────╮
┊ ResourceName ┊ Node ┊ Layers ┊ Usage ┊ Conns ┊ State ┊ CreatedOn ┊
╞══════════════════════════════════════════════════════════════════╡
╰──────────────────────────────────────────────────────────────────╯
root@VXNL01A:~# lvs
  LV       VG       Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  thinpool vg_nvme0 twi-a-tz-- <1.75t             0.00   10.42
root@VXNL01A:~# pvs
  PV           VG       Fmt  Attr PSize  PFree
  /dev/nvme2n1 vg_nvme0 lvm2 a--  <1.75t    0


Packages:
ii proxmox-kernel-6.8 6.8.12-9 all Latest Proxmox Kernel Image
ii proxmox-kernel-6.8.12-9-pve-signed 6.8.12-9 amd64 Proxmox Kernel Image (signed)
ii proxmox-kernel-helper 8.1.1 all Function for various kernel maintenance tasks.
ii proxmox-mail-forward 0.3.2 amd64 Proxmox mail forward helper
ii proxmox-mini-journalreader 1.4.0 amd64 Minimal systemd Journal Reader
ii proxmox-offline-mirror-docs 0.6.7 all Proxmox offline repository mirror and subscription key manager
ii proxmox-offline-mirror-helper 0.6.7 amd64 Proxmox offline repository mirror and subscription key manager helper
ii proxmox-termproxy 1.1.0 amd64 Wrapper proxy for executing programs in the system terminal
ii proxmox-ve 8.4.0 all Proxmox Virtual Environment
ii linstor-client 1.25.4-1 all Linstor client command line tool
ii linstor-common 1.31.3-1 all DRBD distributed resource management utility
ii linstor-controller 1.31.3-1 all DRBD distributed resource management utility
ii linstor-proxmox 8.1.2-1 all DRBD distributed resource management utility
ii linstor-satellite 1.31.3-1 all DRBD distributed resource management utility
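
One thing I have not verified is whether the DRBD 9 kernel module is actually loaded on both nodes. As far as I understand, the satellite reports the DRBD layer as unsupported when it cannot find a usable module, and drbd-dkms/drbd-utils from the LINBIT repository would be needed if it is missing. My assumption for the check:

Code:
# LINSTOR needs DRBD 9.x; no file, or an 8.4 version line, means the module is missing or too old
cat /proc/drbd

# try loading it if the file does not exist
modprobe drbd

# is the DKMS module built for the running kernel?
dkms status | grep -i drbd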

Can someone recommend what to do from here?
 
I would also recommend installing the LINSTOR GUI package and trying to manage things from the GUI. It does a lot of the hard work for you that can snag you up if you're doing it manually through the CLI.
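
If the LINBIT repository is already configured (it has to be for your linstor-* packages), installing it should be roughly this; the package name and port are from memory, so treat it as a sketch:

Code:
apt install linstor-gui
# the GUI is served by the controller; browse to something like
# http://10.201.3.12:3370/ui/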
 
Oh, and is it because you don't have a controller listed in your node list?
Code:
╭────────────────────────────────────────────────────────────╮
┊ Node       ┊ NodeType  ┊ Addresses                ┊ State  ┊
╞════════════════════════════════════════════════════════════╡
┊ pve-m720-1 ┊ COMBINED  ┊ 10.10.4.205:3366 (PLAIN) ┊ Online ┊
┊ pve-m720-2 ┊ SATELLITE ┊ 10.10.4.154:3366 (PLAIN) ┊ Online ┊
┊ pve-m720-3 ┊ SATELLITE ┊ 10.10.4.250:3366 (PLAIN) ┊ Online ┊
╰────────────────────────────────────────────────────────────╯

You should see one that says "COMBINED". There should be at least one controller in the cluster.
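
Since your resource list is still empty, one way I know of to change the type is to delete and re-create the node entry. A sketch using your node name and IP; double-check against the LINSTOR docs before running it:

Code:
linstor node delete VXNL01A
linstor node create VXNL01A 10.201.3.12 --node-type combined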
 
The drbd0 device is NOT present in /dev. I think it would be if I had gone the route of setting it up with drbdadm.
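
For reference, I only checked for the device nodes, e.g.:

Code:
ls -l /dev/drbd*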

I also switched the node type to COMBINED, but it does not seem to make a difference, and in the original video and documentation it does not appear to have been set that way.

Thanks for the input.

If you have any other ideas, it would be appreciated.

Thank you.