Hi there, Proxmox newbie here. This is probably a dumb question, but after re-reading the Storage wiki pages three times and searching these forums repeatedly, I still seem to be at a standstill.
So here's the situation: I installed Proxmox this afternoon onto two drives (ZFS mirror, /dev/sda and /dev/sdb). No issues with the installation. Next I inserted two more drives (/dev/sdc and /dev/sdd) into my box and tried creating some temporary LVM storages on them, just to familiarize myself with the software.
The first LVM, called "blah1", I created on /dev/sdc using the GUI: Node > Disks > LVM > Create Volume Group
The second LVM, called "blah2", I created on /dev/sdd in the shell, via: pvesm add lvm blah2 --vgname blah2vgxx
So far so good:
root@typhon:~# pvesm status
Name          Type     Status           Total        Used       Available        %
blah1          lvm   inactive               0           0               0    0.00%
blah2          lvm   inactive               0           0               0    0.00%
local          dir      active      1885338368      910848      1884427520    0.05%
local-zfs  zfspool      active      1884427692          96      1884427596    0.00%
Then I go to remove the LVMs:
root@typhon:~# pvesm remove blah1
root@typhon:~# pvesm remove blah2
root@typhon:~#
root@typhon:~# pvesm scan lvm
blah1
root@typhon:~#
root@typhon:~# pvesm status
Name          Type     Status           Total        Used       Available        %
local          dir      active      1885338368      910848      1884427520    0.05%
local-zfs  zfspool      active      1884427680          96      1884427584    0.00%
Both storages are gone, but "blah1" is still listed in the LVM list, and still showing in the LVM page in the GUI.
So how do I get rid of the "blah1" LVM fully? Is this what pvesm free is for? If so, how do I determine the VOLUME_ID to use?
Here's my storage.cfg file currently, just showing the zfs pool for the installation disks and nothing from /dev/sdc or /dev/sdd:
root@typhon:~# cat /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl

zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1
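For comparison, I assume the pvesm add command had briefly put an entry for blah2 in this file along these lines (just my guess at the format, not a copy of what was actually there):

lvm: blah2
        vgname blah2vgxx
        content images,rootdir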
Thanks for your help!