Search results

  1. Unable to view ZFS status in GUI for specific pool

    I have two ZFS pools on this PBS server that were created from the GUI: the standard rpool for the OS boot, and a storage pool for backups. When attempting to view the details in the GUI for the storage pool, I'm met with the following error: unable to parse zfs status config tree - 0: at line 36...
  2. 4 node iscsi multipath not working on 2 of the nodes

    I logged in to all the missing sessions on the nodes again and everything appears normal. Thank you. Looks like we might have a switch flaking out. I'm curious why it wouldn't auto-remount them if I were to restart the node, however.
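
The manual re-login described in this snippet, plus making sessions persist across node restarts, can be sketched with generic open-iscsi commands (the thread's actual target and portal names are omitted here):

```shell
# Log back in to all configured iSCSI targets, as the poster did manually.
iscsiadm -m node --login

# Make sessions come back automatically after a node restart,
# which addresses the "why doesn't it auto remount" question:
iscsiadm -m node --op update -n node.startup -v automatic
```
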
  3. 4 node iscsi multipath not working on 2 of the nodes

    Seems to have issues connecting to the TrueNAS box (which is on 10.201.201.x and 10.202.202.x). The 201 path seems to fail on this node (fine on two others... very odd): Jun 29 07:20:31 c4-pve-01 iscsid[2639]: Connection1:0 to [target: iqn.2005-10.org.freenas.ctl:ssd-z2-x10-600gb-2, portal...
  4. 4 node iscsi multipath not working on 2 of the nodes

    Unfortunately rebooting doesn't resolve it, this was one of the first things I attempted. I'll try to look at the logs but not completely sure what to look out for.
  5. 4 node iscsi multipath not working on 2 of the nodes

    For the storage I added the ISCSI Target at the datacenter level and created an LVM on top of the disk, then added that also via the Proxmox GUI. No manual setup. I believe you actually helped me with the initial setup in...
  6. 4 node iscsi multipath not working on 2 of the nodes

    Doesn't appear to be any change pve-01:~# iscsiadm -m session --rescan Rescanning session [sid: 1, target: iqn.2005-10.org.freenas.ctl:ssd-z2-x10-600gb-2, portal: 10.202.202.5,3260] Rescanning session [sid: 2, target: iqn.2005-10.org.freenas.ctl:ssd-z2-x10-600gb-1, portal: 10.202.202.5,3260]...
  7. 4 node iscsi multipath not working on 2 of the nodes

    pve-01:~# lsscsi [0:0:0:0] disk ATA INTEL SSDSC2BB12 0370 /dev/sda [0:0:1:0] disk ATA INTEL SSDSC2BB12 0370 /dev/sdb [5:0:0:0] cd/dvd TSSTcorp DVD-ROM SN-108DN D150 /dev/sr0 [7:0:0:0] disk TrueNAS iSCSI Disk 0123 /dev/sdc [8:0:0:0] disk TrueNAS...
  8. 4 node iscsi multipath not working on 2 of the nodes

    PVE NODE 4 (working) (trimmed first part due to character limit) pve-04:~# multipath -v3 ===== paths list ===== uuid hcil dev dev_t pri dm_st chk_st vend/pro 36589cfc000000564f17ba1e2c35fde22 10:0:0:0 sdf 8:80 50 undef undef TrueNAS...
  9. 4 node iscsi multipath not working on 2 of the nodes

    Here are the multipath -v3 outputs on both: PVE NODE 1 (no longer working) pve-01:~# multipath -v3 Jun 30 09:08:41 | set open fds limit to 1048576/1048576 Jun 30 09:08:41 | loading //lib/multipath/libchecktur.so checker Jun 30 09:08:41 | checker tur: message table size = 3 Jun 30 09:08:41 |...
  10. 4 node iscsi multipath not working on 2 of the nodes

    I had multipath working in the past on all 4 nodes that are connecting to a TrueNAS share. I'm not sure when, but now 2 of the nodes won't utilize multipath anymore and have been struggling to figure out why. I've attempted to verify config files, wwid on the drives, have rebooted them and...
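
The checks this poster describes (verifying config files, WWIDs, and path state) typically map to commands like the following; the device name is a placeholder for one of the TrueNAS-backed paths:

```shell
# Show current multipath topology and per-path state (active/failed)
multipath -ll

# List active iSCSI sessions in full detail, one per portal
iscsiadm -m session -P 3

# Confirm the WWID the kernel reports for a given path device
# (/dev/sdc is a placeholder; compare against multipath.conf/wwids)
/lib/udev/scsi_id -g -u /dev/sdc

# Verbose path discovery, as in the -v3 outputs quoted in the thread
multipath -v3
```
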
  11. MPIO with Proxmox ISCSI and Truenas

    Great reply thank you for spending the time to explain all that, it helps. Going to review and chew on this for a bit, and read some more docs on multipath configuration options.
  12. MPIO with Proxmox ISCSI and Truenas

    I did just test again and tried to perform some disk actions on the Windows VM, but this hung the VM. After bringing the storage back online I had to force restart the VM. Ping was working the entire time; there was no real indication it was hung until trying to access it. Still an odd behavior to...
  13. MPIO with Proxmox ISCSI and Truenas

    It was a Windows VM for this test. Not sure what outputs to provide.
  14. MPIO with Proxmox ISCSI and Truenas

    Attempting to understand what's going on when the underlying storage "fails" while using iscsi multipath (testing a disaster scenario with a full storage outage). I can see both links go down using "multipath -ll". Oddly the VMs appear online and I'm able to ping from them, and to them. I waited...
  15. MPIO with Proxmox ISCSI and Truenas

    Alright so I add ISCSI into Storage first, then use vgcreate, then add LVM. When adding LVM you don't select the base storage as ISCSI but instead choose the previously created LVM under Existing Volume Groups. Is this correct?
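
The sequence this poster describes (iSCSI storage first, then vgcreate, then LVM on the existing volume group) roughly corresponds to the sketch below; /dev/mapper/mpatha and vg_iscsi are hypothetical names for the multipath device and volume group:

```shell
# 1. Add the iSCSI target under Datacenter -> Storage (GUI step).
# 2. Create the volume group on top of the multipath device:
pvcreate /dev/mapper/mpatha
vgcreate vg_iscsi /dev/mapper/mpatha
# 3. Add LVM storage in the GUI, selecting vg_iscsi under
#    "Existing volume groups" rather than the iSCSI base storage.
```
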
  16. MPIO with Proxmox ISCSI and Truenas

    For the life of me I cannot figure out how to get LVM working on top of an ISCSI share coming from a TrueNAS box. I believe multipath is working and I'm able to get the iscsi storage into proxmox, but not able to get LVM on top of it per the error below: mpatha...
  17. [SOLVED] Major issues after upgrading to 7.2

    Resolved: apt-get dist-upgrade Reading package lists... Done Building dependency tree... Done Reading state information... Done You might want to run 'apt --fix-broken install' to correct these. The following packages have unmet dependencies: libpve-common-perl : Depends: libproxmox-rs-perl...
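
The resolution quoted in this snippet amounts to finishing the interrupted upgrade; a minimal sketch of the sequence, run as root:

```shell
apt update                 # refresh package lists first
apt --fix-broken install   # resolve the unmet libpve-common-perl dependency
apt dist-upgrade           # complete the interrupted PVE 7.2 upgrade
```
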
  18. [SOLVED] Major issues after upgrading to 7.2

    root@pve:~# systemctl status pve* ● pve-ha-crm.service - PVE Cluster HA Resource Manager Daemon Loaded: loaded (/lib/systemd/system/pve-ha-crm.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Thu 2022-05-05 17:09:06 EDT; 5min ago Process: 4948...
  19. [SOLVED] Major issues after upgrading to 7.2

    Was on PVE 7.1 prior to update via web gui. The standard update and dist-upgrade command ran and I rebooted the server. Noticed upon boot-up that I can SSH to the machine, but the web interface does not work. Seems a lot of processes will not start. root@pve:~# pveversion -v proxmox-ve: not...
  20. Snapshot hangs if qemu-guest-agent is running / Cloudlinux

    Just ran into this today too. When using CloudLinux 8 with the QEMU guest agent enabled, it will lock up the VM on the freeze operation. Turning off the guest agent in Proxmox works with no issues.
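
Turning off the guest agent, as this snippet suggests, can also be done per VM from the CLI (100 is a placeholder VM ID):

```shell
# Disable the QEMU guest agent option for VM 100 (placeholder ID);
# snapshots will then skip the guest fs-freeze call that hangs the VM.
qm set 100 --agent 0
```

The same setting is reachable in the GUI under the VM's Options -> QEMU Guest Agent.
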
