I have a local SATA drive that I use for transferring files between two VMs. I am not able to set up a network share since one of them is almost always on a VPN and the network doesn't play well with that.
What I have done is attach the drive to the Proxmox host, and I was able to grab it by-ID.
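In case it helps anyone searching later, the command shape for that kind of passthrough is roughly this (a sketch only; the VM ID 101, the scsi1 slot and the by-id name are placeholders, not my actual values):

ls -l /dev/disk/by-id/ | grep ata                      # find the stable by-id name for the drive
qm set 101 -scsi1 /dev/disk/by-id/ata-EXAMPLE_SERIAL   # attach the raw disk to the VM by its stable ID
qm config 101 | grep scsi1                             # confirm the disk now shows up in the VM config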
I am...
That seemed to work out, and the hosts are back
One issue: it seems one of the VMs that was being moved when this all triggered ended up with a disk stuck between local storage and the SAN.
It's running on the SAN copy, but I can't remove the local one or migrate the machine because of it.
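The cleanup I have in mind once the lock clears is roughly this (a sketch; VM ID 100 and the volume name are placeholders, and I'd double-check with qm config which copy the VM actually references before freeing anything):

lvs pve                                    # see which logical volume is the stale local copy
qm config 100 | grep -i disk               # confirm the VM really points at the SAN volume
pvesm free local-lvm:vm-100-disk-1         # remove the orphaned local copy from storage
qm rescan --vmid 100                       # let PVE tidy up the config afterwards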
So I noticed over the weekend the locks seemed to drop, so I'm able to access the VG again. However, the machine still shows ? in the GUI and the tasks are still showing as running, but I can't stop them. The VMs themselves are running as expected. (See the sketch after the VG output below.)
--- Volume group ---
VG Name pve
System ID...
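On the grey ? status: when the VMs themselves are fine, the usual suspect is the node's status daemon being wedged rather than the guests. A minimal sketch of what to poke, assuming it really is just pvestatd/pvedaemon hanging and not the storage:

systemctl status pvestatd pvedaemon     # check whether the status/worker daemons are blocked
systemctl restart pvestatd              # pvestatd collects the data the GUI status comes from
systemctl restart pvedaemon pveproxy    # only if restarting pvestatd alone doesn't clear it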
I did notice that I have this task running; I can see it in the GUI task area. I'm guessing it's related to my lock. I'm not sure if there is a way to kill it from the CLI? (See the sketch after the lock folder listing below.)
root@Proxmox03:~# lsof | grep "/var/lock/lvm/"
root@Proxmox03:~#
which returns nothing.
I also looked in the LVM lock folder:
root@Proxmox03:/var/lock/lvm# ls
P_global V_pve V_pve:aux
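Since lsof shows nothing holding the lock files, the other angle is to go after the stuck task worker itself. A rough sketch, assuming the stuck entry's UPID string is copied from the GUI task log and that VM 100 is a placeholder ID:

cat /var/log/pve/tasks/active            # list the tasks PVE still believes are running
ps aux | grep 'UPID:Proxmox03'           # the worker process title contains the task's UPID
kill <PID>                               # SIGTERM the stuck worker; escalate only if it ignores it
qm unlock 100                            # clear any leftover lock on the affected VM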
On the other host I get the following:
Name Type Status Total Used Available %
local dir active 98559220 12012976 81496696 12.19%
local-lvm lvmthin active...
I have the same issue. When I try to query the storage that's having the problem, I get the following:
vgdisplay
displays the first volume group and then:
Giving up waiting for lock.
Can't get lock for pve.
Cannot process volume group pve
the volume group having issues should be the...
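One way to narrow this down (a rough sketch, assuming the default file-based locking under /run/lock/lvm) is to check whether anything actually holds the lock files and to read the VG metadata with locking bypassed:

vgs --readonly                        # reads on-disk metadata without taking any LVM locks
ls -l /run/lock/lvm/                  # the lock files vgdisplay is waiting on
fuser -v /run/lock/lvm/* 2>/dev/null  # show any process still holding them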
Running multipath -v3 I get the following, and I wonder if this might be part of the issue: "not in wwids file, skipping sde"
===== paths list =====
uuid hcil dev dev_t pri dm_st chk_st vend/pro
362cea7f099dce7002850e7f79912a523 0:2:0:0 sda 8:0 1 undef undef...
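If the only complaint for sde is that it isn't in the wwids file, whitelisting it explicitly might be worth a try. A sketch only (confirm the wwid yourself with scsi_id rather than trusting placeholders):

/lib/udev/scsi_id -g -u -d /dev/sde   # print the wwid multipath will use for this path
multipath -a /dev/sde                 # add that wwid to /etc/multipath/wwids
multipath -r                          # reload the multipath maps
multipath -ll                         # all four paths should now sit under a single map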
Yes I have, but I still appear to be stuck. I think the issue might be that all 4 targets share the same wwid, but again, the documentation seems straightforward yet it doesn't quite work even when everything appears to match up.
I'm having issues with my multipath configuration.
I have a Dell Unity that I'm connecting to over 4 targets, but when I look at my volume groups I get errors.
I have the following in my /etc/multipath.conf:
cat /etc/multipath.conf
defaults {
polling_interval 2
path_selector...
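For comparison, a full config in this shape is what I'd aim for; treat it strictly as a sketch, since the wwid, the alias and the tuning values below are placeholders rather than Dell-published recommendations for the Unity:

defaults {
        polling_interval        2
        path_selector           "round-robin 0"
        path_grouping_policy    multibus
        uid_attribute           ID_SERIAL
        failback                immediate
        no_path_retry           queue
        user_friendly_names     yes
}
blacklist {
        wwid .*
}
blacklist_exceptions {
        # placeholder wwid - use the value scsi_id reports for the Unity LUN
        wwid "36006016000000000000000000000aaaa"
}
multipaths {
        multipath {
                wwid    "36006016000000000000000000000aaaa"
                alias   unity-lun0
        }
}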
I currently have a cluster of 6 hosts. Looking at the PVE 7.0 release (excited, good work team) I see there is a path for upgrading from 6.4, which is great. The question, however, is that since I have a few nodes and a larger VM count, it will take me a while to upgrade each host and move everything around...
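For planning the rolling upgrade, the per-node steps boil down to roughly this (a sketch of the documented 6.4 -> 7.0 path; it assumes guests have been migrated off the node first and that you use the no-subscription repository):

pve6to7 --full                                              # run the upgrade checklist and fix anything it flags
sed -i 's/buster\/updates/bullseye-security/g;s/buster/bullseye/g' /etc/apt/sources.list   # move Debian repos to bullseye
echo "deb http://download.proxmox.com/debian/pve bullseye pve-no-subscription" > /etc/apt/sources.list.d/pve.list
apt update && apt dist-upgrade                              # the actual upgrade
reboot                                                      # boot into the PVE 7 kernel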