RAID1 LVM Problem

spearox

Renowned Member
Jun 6, 2013
Hello,

I want to set up RAID 1 on a freshly installed Proxmox. I followed this guide:
http://www.petercarrero.com/content/2012/04/22/adding-software-raid-proxmox-ve-20-install

The first time this worked fine, but I had to reinstall the server, so I formatted both disks and reinstalled Proxmox. Now the RAID no longer works: if I follow the same steps and run lvscan I get the output below, and after a reboot the system fails to boot.
Code:
  WARNING: Duplicate VG name pve: Existing lt6acH-dXCA-8S1U-5Pdf-yG8W-1cbA-qeswny (created here) takes precedence over lefXWk-clug-BvTn-o3P9-NesF-uG5d-RxaDia
  WARNING: Duplicate VG name pve: lt6acH-dXCA-8S1U-5Pdf-yG8W-1cbA-qeswny (created here) takes precedence over lefXWk-clug-BvTn-o3P9-NesF-uG5d-RxaDia
  WARNING: Duplicate VG name pve: Existing lt6acH-dXCA-8S1U-5Pdf-yG8W-1cbA-qeswny (created here) takes precedence over lefXWk-clug-BvTn-o3P9-NesF-uG5d-RxaDia
  Couldn't find device with uuid 2F6xwT-YR2c-b2OO-MHSo-qz9u-t7I0-12PTHa.
  inactive          '/dev/pve/swap' [31.00 GiB] inherit
  inactive          '/dev/pve/root' [96.00 GiB] inherit
  inactive          '/dev/pve/data' [788.02 GiB] inherit

Can anyone help me? Any ideas?

Sorry for my bad English.
 
Hi,
because of your broken software RAID, the OS sees the LVM disks twice.
Check with the following commands:
Code:
pvs
vgs
lvs
As a quick fix you can remove one of the old RAID members (wipe the LVM info with "pvremove /dev/sdb2" - or whichever device you used).

Udo
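When both volume groups are named pve, a safer first step than pvremove is to rename the stale VG by its UUID, so only one "pve" remains and can be activated. A minimal sketch - the UUID here is the losing duplicate from the lvscan warnings above; substitute the one from your own output:

```shell
# List the VGs together with their UUIDs to tell the two "pve" groups apart
vgs -o vg_name,vg_uuid

# Rename the stale VG by UUID (example UUID copied from the warnings above)
vgrename lefXWk-clug-BvTn-o3P9-NesF-uG5d-RxaDia pve-old

# The remaining "pve" VG is now unambiguous and can be activated
vgchange -ay pve
```

Once the stale VG is renamed and you have confirmed which PV backs it, it can be removed with pvremove as suggested.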
 
I checked: there is no broken RAID. The disks are fully wiped and clean - no partitions, etc. Any other ideas?
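Even after formatting, old md and LVM superblocks can survive outside the partition table. A quick read-only way to double-check that the disks really are clean (assuming a typical /dev/sda and /dev/sdb layout - adjust the device names to your system):

```shell
# Look for leftover md RAID superblocks on the disks and their partitions
mdadm --examine /dev/sd[ab] /dev/sd[ab][0-9]*

# List any remaining filesystem/RAID/LVM signatures without erasing anything
wipefs /dev/sda /dev/sdb

# Only if stale signatures show up, erase them (destructive!):
# wipefs -a /dev/sdb2
```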
 
Both HDDs were fully wiped and Proxmox was freshly reinstalled, without any RAID attempt.
Here is the output:

blkid
Code:
/dev/sda1: UUID="0259f72e-b64c-42f2-9d3f-859ff67626b4" TYPE="ext4"
/dev/sda2: UUID="uba01V-juwl-LiNp-iPte-QPJc-hT34-k1CEta" TYPE="LVM2_member"
/dev/mapper/pve-root: UUID="78b55cc8-259b-4cad-9a8a-df796764b7e1" TYPE="ext4"
/dev/mapper/pve-swap: UUID="d5f2dc43-03e1-46f6-abef-b07fb1705255" TYPE="swap"
/dev/mapper/pve-data: UUID="5d684701-2988-4b85-8fe8-c3964f1c26be" TYPE="ext4"

dmsetup info
Code:
Name:              pve-swap
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 1
Number of targets: 1
UUID: LVM-T6d4fymXNbjxGmEnGFYgi83ar6BLFJPAptYUoBqtC1f8GLOGVPdw3xvVWdDAn4Yf


Name:              pve-root
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 0
Number of targets: 1
UUID: LVM-T6d4fymXNbjxGmEnGFYgi83ar6BLFJPAFIMLkviP5FDXzB1KkmWLc2q4bNuu5Uht


Name:              pve-data
State:             ACTIVE
Read Ahead:        256
Tables present:    LIVE
Open count:        1
Event number:      0
Major, minor:      253, 2
Number of targets: 1
UUID: LVM-T6d4fymXNbjxGmEnGFYgi83ar6BLFJPAWm9dAqis4O9tO4w60Iwu87yf2PIXtsLy

lvm.conf
Code:
devices {
    dir = "/dev"
    scan = [ "/dev" ]
    obtain_device_list_from_udev = 1
    preferred_names = [ ]
    filter = [ "a/.*/" ]
    cache_dir = "/run/lvm"
    cache_file_prefix = ""
    write_cache_state = 1
    sysfs_scan = 1
    multipath_component_detection = 1
    md_component_detection = 1
    md_chunk_alignment = 1
    data_alignment_detection = 1
    data_alignment = 0
    data_alignment_offset_detection = 1
    ignore_suspended_devices = 0
    disable_after_error_count = 0
    require_restorefile_with_uuid = 1
    pv_min_size = 2048
    issue_discards = 1
}


log {
    verbose = 0
    syslog = 1
    overwrite = 0
    level = 0
    indent = 1
    command_names = 0
    prefix = "  "
}


backup {
    backup = 1
    backup_dir = "/etc/lvm/backup"
    archive = 1
    archive_dir = "/etc/lvm/archive"
    retain_min = 10
    retain_days = 30
}


shell {
    history_size = 100
}


global {
    umask = 077
    test = 0
    units = "h"
    si_unit_consistency = 1
    activation = 1
    proc = "/proc"
    locking_type = 1
    wait_for_locks = 1
    fallback_to_clustered_locking = 1
    fallback_to_local_locking = 1
    locking_dir = "/run/lock/lvm"
    prioritise_write_locks = 1
    abort_on_internal_errors = 0
    detect_internal_vg_cache_corruption = 0
    metadata_read_only = 0
    mirror_segtype_default = "mirror"
    use_lvmetad = 0
}


activation {
    checks = 0
    udev_sync = 1
    udev_rules = 1
    verify_udev_operations = 0
    retry_deactivation = 1
    missing_stripe_filler = "error"
    use_linear_target = 1
    reserved_stack = 64
    reserved_memory = 8192
    process_priority = -18
    mirror_region_size = 512
    readahead = "auto"
    raid_fault_policy = "warn"
    mirror_log_fault_policy = "allocate"
    mirror_image_fault_policy = "remove"
    snapshot_autoextend_threshold = 100
    snapshot_autoextend_percent = 20
    thin_pool_autoextend_threshold = 100
    thin_pool_autoextend_percent = 20
    thin_check_executable = "/sbin/thin_check -q"
    use_mlockall = 0
    monitoring = 0
    polling_interval = 15
}


dmeventd {
    mirror_library = "libdevmapper-event-lvm2mirror.so"
    snapshot_library = "libdevmapper-event-lvm2snapshot.so"
    thin_library = "libdevmapper-event-lvm2thin.so"
}
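One thing that stands out in this lvm.conf is filter = [ "a/.*/" ], which accepts every device, so once an md array exists LVM can scan the same PV both through /dev/md1 and through the underlying raw partition, producing exactly the duplicate-VG warnings seen above. A hedged example of a stricter filter (the device names assume a typical two-disk setup; filter patterns are evaluated in order and the first match wins):

```
devices {
    # Accept the md arrays, reject the raw disk partitions underneath
    # them, and accept everything else. This way LVM never sees the
    # same PV twice through two different device paths.
    filter = [ "a|/dev/md.*|", "r|/dev/sd[ab].*|", "a/.*/" ]
}
```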
 

Hi spearox

I tried that procedure (petercarrero's) too and it worked, but there is a more consistent and easier one that makes PVE work well (it has for me for many months now, without any problem). See this link:
http://www.howtoforge.com/proxmox-2-with-software-raid

I normally use other commands, e.g. cfdisk, or "sfdisk" instead of fdisk to set the RAID partition type, for example:
Code:
sfdisk -c /dev/sdb 1 fd
sfdisk -c /dev/sdb 2 fd

But if you follow that tutorial step by step, PVE will run on software RAID - you will see that it is very easy.

Best regards
Cesar
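The core of that tutorial boils down to roughly the following sketch (device names /dev/sda and /dev/sdb and the two-partition layout are assumptions; the guide itself also walks through migrating the data and the bootloader):

```shell
# Mark the partitions on the second disk as Linux raid autodetect (type fd).
# This is the older sfdisk --change-id syntax, equivalent to "sfdisk -c".
sfdisk --change-id /dev/sdb 1 fd
sfdisk --change-id /dev/sdb 2 fd

# Create degraded RAID1 arrays with the second disk only; the first
# disk is added with "mdadm --add" once the data has been migrated.
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2

# The pve VG is then moved onto the array, roughly:
#   pvcreate /dev/md1
#   vgextend pve /dev/md1
#   pvmove /dev/sda2 /dev/md1
#   vgreduce pve /dev/sda2
```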
 
Hmm, I have now tried everything with gparted - new UUIDs - and formatted about 20 times. :) And now it WORKS! Thanks for the help.
 