I have two ZFS pools on this PBS server that were created from the GUI: the standard rpool for OS boot, and a storage pool for backups.
When attempting to view the details for the storage pool in the GUI, I'm met with the following error:
unable to parse zfs status config tree - 0: at line 36...
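In case it helps anyone searching for the same error: my first step was comparing what the GUI shows against the raw CLI output, since the parser is presumably choking on one of those lines (the pool name below is a placeholder for yours):

# line 36 of this output should be the one the error points at
zpool status -v storage
zpool list -o name,health,size,alloc,free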
I logged in to all the missing sessions on the nodes again and everything appears normal. Thank you. Looks like we might have a switch flaking out. I'm curious why it doesn't automatically re-log in to them after a node restart, however.
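For future reference, the automatic re-login behavior is controlled per target by open-iscsi, so this is roughly what I'd check (the IQN placeholder and values are examples, not verified on these boxes):

# default startup mode for newly discovered targets
grep node.startup /etc/iscsi/iscsid.conf
# per-target setting as recorded in the node database
iscsiadm -m node -o show | grep node.startup
# if a target shows "manual", switch it to log in automatically at boot
iscsiadm -m node -T <target-iqn> -o update -n node.startup -v automatic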
It seems to have issues connecting to the TrueNAS box (which is on 10.201.201.x and 10.202.202.x). The 201 path seems to fail on this node (fine on two others... very odd):
Jun 29 07:20:31 c4-pve-01 iscsid[2639]: Connection1:0 to [target: iqn.2005-10.org.freenas.ctl:ssd-z2-x10-600gb-2, portal...
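To narrow it down I've been poking at the failing path by hand, along these lines (the portal IP is a placeholder for the 10.201.201.x address of the TrueNAS box):

# which portals have live sessions on this node
iscsiadm -m session -P 1
# can the failing portal be reached at all
ping -c 3 <portal-ip>
# retry the login manually to surface the full error
iscsiadm -m node -T iqn.2005-10.org.freenas.ctl:ssd-z2-x10-600gb-2 -p <portal-ip> --login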
Unfortunately rebooting doesn't resolve it; this was one of the first things I attempted. I'll try to look at the logs, but I'm not completely sure what to look out for.
For the storage, I added the iSCSI target at the datacenter level, created an LVM on top of the disk, then added that also via the Proxmox GUI. No manual setup. I believe you actually helped me with the initial setup in...
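In /etc/pve/storage.cfg terms, the result looks something like this (the storage IDs and VG name are examples I'm using for illustration; the target is the one from the logs above):

iscsi: truenas-iscsi
        portal <portal-ip>
        target iqn.2005-10.org.freenas.ctl:ssd-z2-x10-600gb-2
        content none

lvm: truenas-lvm
        vgname <vg-name>
        shared 1
        content images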
pve-01:~# lsscsi
[0:0:0:0] disk ATA INTEL SSDSC2BB12 0370 /dev/sda
[0:0:1:0] disk ATA INTEL SSDSC2BB12 0370 /dev/sdb
[5:0:0:0] cd/dvd TSSTcorp DVD-ROM SN-108DN D150 /dev/sr0
[7:0:0:0] disk TrueNAS iSCSI Disk 0123 /dev/sdc
[8:0:0:0] disk TrueNAS...
PVE NODE 4 (working)
(trimmed first part due to character limit)
pve-04:~# multipath -v3
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/pro
36589cfc000000564f17ba1e2c35fde22 10:0:0:0 sdf 8:80 50 undef undef TrueNAS...
Here are the multipath -v3 outputs on both:
PVE NODE 1 (no longer working)
pve-01:~# multipath -v3
Jun 30 09:08:41 | set open fds limit to 1048576/1048576
Jun 30 09:08:41 | loading //lib/multipath/libchecktur.so checker
Jun 30 09:08:41 | checker tur: message table size = 3
Jun 30 09:08:41 |...
I had multipath working in the past on all 4 nodes that connect to a TrueNAS share. I'm not sure when it broke, but now 2 of the nodes won't utilize multipath anymore, and I've been struggling to figure out why. I've attempted to verify the config files and the WWIDs on the drives, have rebooted them and...
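For reference, these are the things I've been diffing between a working node and a broken one (/dev/sdc is just an example device):

multipath -ll                      # current maps and path states
cat /etc/multipath/wwids           # wwids multipathd is allowed to claim
/lib/udev/scsi_id -g -u /dev/sdc   # wwid the disk actually reports
multipathd show config             # effective merged configuration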
Great reply thank you for spending the time to explain all that, it helps. Going to review and chew on this for a bit, and read some more docs on multipath configuration options.
I did just test again and tried to perform some disk actions on the Windows VM, but this hung the VM. After bringing the storage back online I had to force restart the VM. Ping was working the entire time; there was no real indication it was hung until trying to access it. Still an odd behavior to...
Attempting to understand what's going on when the underlying storage "fails" while using iSCSI multipath (testing a disaster scenario with a full storage outage). I can see both links go down using "multipath -ll". Oddly, the VMs appear online and I'm able to ping from them, and to them. I waited...
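From what I've read since, this may come down to multipath queueing I/O indefinitely once every path is down, which would explain the silent hang. Something like this in /etc/multipath.conf should make I/O fail after a bounded time instead (the value is an example, not a recommendation), followed by "multipathd reconfigure" to apply it:

defaults {
    # "queue" holds I/O forever when all paths are gone (silent hang);
    # a number gives up after that many checker intervals and returns I/O errors
    no_path_retry 12
}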
Alright, so I add iSCSI into Storage first, then use vgcreate, then add LVM. When adding LVM you don't select the base storage as iSCSI, but instead choose the previously created volume group under Existing Volume Groups. Is this correct?
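In other words, something like this on one node first (device path and VG name are examples):

pvcreate /dev/mapper/mpatha        # use the multipath device, not /dev/sdX
vgcreate <vg-name> /dev/mapper/mpatha
# then Datacenter -> Storage -> Add -> LVM and pick <vg-name>
# under "Existing volume groups"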
For the life of me I cannot figure out how to get LVM working on top of an iSCSI share coming from a TrueNAS box. I believe multipath is working and I'm able to get the iSCSI storage into Proxmox, but I'm not able to get LVM on top of it, per the error below:
mpatha...
Resolved:
apt-get dist-upgrade
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
You might want to run 'apt --fix-broken install' to correct these.
The following packages have unmet dependencies:
libpve-common-perl : Depends: libproxmox-rs-perl...
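For anyone else landing here: apt itself points at the fix in that output, so the sequence that cleared it up was roughly:

apt --fix-broken install
apt-get update
apt-get dist-upgrade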
I was on PVE 7.1 prior to updating via the web GUI. The standard update and dist-upgrade commands ran, and I rebooted the server.
I noticed upon boot-up that I can SSH to the machine, but the web interface does not work. It seems a lot of processes will not start.
root@pve:~# pveversion -v
proxmox-ve: not...
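While it was in this state, this is what I used to see which services were actually down (unit names are the standard PVE ones):

systemctl status pveproxy pvedaemon pve-cluster
journalctl -b -u pveproxy --no-pager | tail -n 20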
Just ran into this today too. When using CloudLinux 8 with the QEMU guest agent enabled, it will lock up the VM on the freeze operation. Turning off the guest agent in Proxmox works with no issues.
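For reference, disabling it is a one-liner per VM (VMID 100 is an example); note that backups will then skip the agent's fs-freeze/fs-thaw step:

qm set 100 --agent 0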