proxmox installed on softraid (zfs raid10) - local-lvm empty

vicspb

New Member
Sep 2, 2023
Hi!
I tried to install Proxmox on software RAID (ZFS RAID10). After rebooting, the local-lvm storage is shown with a question mark in the web interface.

journalctl -f shows the message "no such logical volume pve/data".

Code:
df -hT
udev             devtmpfs  36G     0   36G   0% /dev
tmpfs            tmpfs    7,1G  1,2M  7,1G   1% /run
rpool/ROOT/pve-1 zfs      1,8T  1,4G  1,8T   1% /
tmpfs            tmpfs     36G   63M   36G   1% /dev/shm
tmpfs            tmpfs    5,0M     0  5,0M   0% /run/lock
rpool            zfs      1,8T  128K  1,8T   1% /rpool
rpool/ROOT       zfs      1,8T  128K  1,8T   1% /rpool/ROOT
rpool/data       zfs      1,8T  128K  1,8T   1% /rpool/data
/dev/fuse        fuse     128M   52K  128M   1% /etc/pve
tmpfs            tmpfs    7,1G     0  7,1G   0% /run/user/0

Code:
lvmdiskscan
  /dev/sda2 [       1,00 GiB]
  /dev/sda3 [     930,51 GiB]
  /dev/sdb2 [       1,00 GiB]
  /dev/sdb3 [     930,51 GiB]
  /dev/sdc2 [       1,00 GiB]
  /dev/sdc3 [     930,51 GiB]
  /dev/sdd2 [       1,00 GiB]
  /dev/sdd3 [     930,51 GiB]
  0 disks
  8 partitions
  0 LVM physical volume whole disks
  0 LVM physical volumes

vgscan -vvvv output is in the attachment.


How can I restore local-lvm?

local-lvm is empty now, but I can't understand why it broke (right after installation) or how to restore it (without reinstalling).

Can you help me with it?
 

Attachments

  • vgscan.log (35 KB)
I tried to install Proxmox on software RAID (ZFS RAID10). After rebooting, the local-lvm storage is shown with a question mark in the web interface.

Well..., there is no LVM available when using ZFS ;-)

(While you can create a system using both technologies it is just unusual and unnecessary.)


You've got an rpool which gives you both a "normal" filesystem and devices (ZVOL) to be used for virtual disks in VMs.

The current state can be checked with zpool status and zfs list. Both commands have extensive man pages.
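For illustration, a minimal sketch of those checks, assuming the installer defaults (pool named rpool, VM disks as zvols below rpool/data); your layout may differ:

Code:
# pool health and the member disks of each mirror
zpool status rpool

# all datasets in the pool, including the root filesystem
zfs list -r rpool

# only the zvols (block devices that back VM disks), if any exist yet
zfs list -t volume -o name,volsize,used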


Have fun - and welcome to the club
 
Hi!
Thank you for the answer and the welcome =)

If I understand correctly, my mistake was installing Proxmox on RAID10 with ZFS, as selected in the installer menu?

Is the best choice now to remove lvm-thin and use ZFS as the array?


Code:
zpool status
  pool: rpool
 state: ONLINE
config:

    NAME                                       STATE     READ WRITE CKSUM
    rpool                                      ONLINE       0     0     0
      mirror-0                                 ONLINE       0     0     0
        ata-ST31000524AS_5VPC9G7A-part3        ONLINE       0     0     0
        ata-ST1000DM003-1CH162_S1DFV1G8-part3  ONLINE       0     0     0
      mirror-1                                 ONLINE       0     0     0
        ata-TOSHIBA_DT01ACA100_14ONM87NS-part3 ONLINE       0     0     0
        ata-TOSHIBA_DT01ACA100_Y3GZTXMNS-part3 ONLINE       0     0     0


Code:
zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             1.35G  1.76T   104K  /rpool
rpool/ROOT        1.34G  1.76T    96K  /rpool/ROOT
rpool/ROOT/pve-1  1.34G  1.76T  1.34G  /
rpool/data         96K   1.76T    96K  /rpool/data


I can't remove local-lvm because it is missing.
 
Is the best choice now to remove lvm-thin and create a ZFS array?
There is no LVM you could remove as far as I can see.

You have only the "rpool", shown in both "zpool list" and "zfs list". (Btw: please use [CODE] xyz [/CODE]-Tags for posting output of command line commands.) That "rpool" has a size of approx. 1.8 TByte. Is this plausible for you?

So... where is your problem? In the Proxmox GUI you should see two storages labeled "local" and "local-zfs". The first one is a directory-type storage (usually used for .iso images and/or container storage) and the latter is a block-device storage, usually used for the virtual disks of a virtual machine. Both work simultaneously without further configuration and are set up by default during a standard installation.

Please post the output of cat /etc/pve/storage.cfg to verify this.

----
Just to show a reasonable expectation, this is from my configuration:
Code:
~# grep -A4 -E "local|local-zfs" /etc/pve/storage.cfg
dir: local
        path /var/lib/vz
        content iso,backup,vztmpl
        prune-backups keep-daily=5,keep-hourly=3,keep-last=7,keep-monthly=4,keep-weekly=2,keep-yearly=2
        shared 0
--
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
        sparse 1

Have fun!
 
Thank you.

I use four 1000 GB disks in RAID10, and 1.8 TB is the real usable capacity.

Code:
 cat /etc/pve/storage.cfg
dir: local
    path /var/lib/vz
    content vztmpl,iso,backup

lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images
 
Then the question is why you got a "local-lvm" storage. Looks like you restored your config files from a backup and the old machine was using LVM?
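For reference, a minimal sketch of how to double-check that there really is no LVM behind that entry before touching the config, assuming the default storage name local-lvm from your storage.cfg (pvs/vgs/lvs are the standard LVM tools, pvesm is the Proxmox storage CLI):

Code:
# LVM view: on this machine all three should report nothing
pvs
vgs
lvs

# Proxmox view: a storage whose backing volume group is missing
# should show as inactive or raise the "no such logical volume" error
pvesm status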
 
At first I tried to install Proxmox on RAID (with RAID mode set in the BIOS setup), but after building the array in the RAID utility I still saw only 4 separate HDDs and could not find the array, so I installed Proxmox on the first HDD. After reboot the system again showed 4 separate devices. Then I searched the internet; on several forums people who tried to install Proxmox described problems with this kind of soft RAID, which matched my situation. After that I destroyed the RAID array, changed the BIOS setting to AHCI and installed Proxmox again. And this is what I see after rebooting the server.

Now I see the rpool of 1.99 TB and the disks. I can't understand where the thin-lvm entry came from, and I think figuring that out will help me understand Proxmox. That's all.

If it is only a leftover record shown in the web interface, can you tell me where I can remove it?
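
A minimal sketch of the two usual ways to drop such a stale entry, assuming it is really unused and keeps the default name local-lvm (the same can be done in the GUI under Datacenter → Storage):

Code:
# either delete the whole "lvmthin: local-lvm" block from the config file
nano /etc/pve/storage.cfg

# or remove the storage definition with the Proxmox storage CLI
pvesm remove local-lvm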
 
I removed this record from storage.cfg, and now journalctl -f no longer shows errors and the web interface shows the real situation.

Many thanks! :)
 
