HowTo: Proxmox VE 7 With Software RAID-1

When would sda1 and sda2 need to be resynced with sdb1 and sdb2? Are those ever updated during apt upgrade or apt full-upgrade?

I am assuming one would also need to be aware of which of those two disks the BIOS booted from if they get updated? For example, if you booted from sdb, you would need to resync that over to sda, not the other way around, correct?
 
Never use apt upgrade with PVE if you don't want to screw up your installation.
Wait, what? The upgrade button in PVE just executes apt-get dist-upgrade under Debian. I actually find that more dangerous than just executing apt-get upgrade. See the screenshot below of the web interface (which opens this pop-up window with the console).
 

Attachments

  • upgrade_via_webui.png
apt dist-upgrade and apt full-upgrade are fine. But apt upgrade doesn't make sure all dependencies are met, which might break stuff.

The docs seem clear, so why do you want to run apt-get upgrade?

apt-get upgrade will break your system, so do NOT do this.
 
apt dist-upgrade and apt full-upgrade are fine. But apt upgrade doesn't make sure all dependencies are met, which might break stuff.

Thanks for mentioning that. I did some research myself all over the Internet. But wow, this topic got big really fast.

I hear people saying "apt" is not the same as "apt-get" (which is correct). And you might want to use apt-get upgrade to avoid package removals or installing new packages, unless it reports that packages are being "held back"; in that case, run apt-get dist-upgrade to pick up the remaining updates.

Then indeed the question is why not just always use apt-get dist-upgrade? The answer is stability and predictability.

"As a system admin managing mission-critical servers running various services with different configured software. In that case, you cannot let the machine decide the removal of packages, no matter how ‘intelligent’ or ‘smart’ it is."

"You don’t want your meticulously configured system to behave strangely because some package was removed automatically by apt."

And yes, apt-get is the legacy command; apt is the newer one. So I think apt upgrade is actually fine (note: I'm saying apt, not apt-get). apt upgrade can install new packages. From the man page of the apt command:

upgrade - upgrade the system by installing/upgrading packages

EDIT: The only difference now is that you may want to remove packages manually, once you are sure they should go, using apt autoremove. In contrast, apt-get dist-upgrade will remove packages for you, which might break your system!

PS: Under Debian, apt upgrade can, for instance, also upgrade the kernel.
PPS: apt-get can be seen as a low-level command; apt is the better tool to avoid these kinds of issues.
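
For example, here is one way to preview what a dist-upgrade would install or remove before committing, using apt-get's simulate mode (a minimal sketch; the package lists will differ per system):

Code:
apt-get update
# dry run: lines starting with Inst/Remv show what would be installed/removed
apt-get -s dist-upgrade | grep -E '^(Inst|Remv)'
# safe upgrades only; anything needing new dependencies is "kept back"
apt-get upgrade
# then pick up the kept-back packages, reviewing the removal list carefully
apt-get dist-upgrade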
 
Then indeed the question is why not just always use apt-get dist-upgrade? The answer is stability and predictability.

"As a system admin managing mission-critical servers running various services with different configured software. In that case, you cannot let the machine decide the removal of packages, no matter how ‘intelligent’ or ‘smart’ it is."

"You don’t want your meticulously configured system to behave strangely because some package was removed automatically by apt."

I guess the difference here is that we are not talking about, say, a Debian server on which one has put several different third-party services (each developed, maintained and tested by different people/teams), but a Debian server that is assumed to be running only, e.g., PVE, which is fully developed, maintained and tested for/on Debian by one team, the Proxmox developers.

EDIT: The only difference now is that you may want to remove packages manually, once you are sure they should go, using apt autoremove. In contrast, apt-get dist-upgrade will remove packages for you, which might break your system!

This might be true for the "several (different) third-party services" server, but on, e.g., a PVE installation, not letting the upgrade remove packages that conflict with it might break your PVE (or at least give you trouble/errors):
The recommendation to always use full-upgrade is 100% correct, so just a slight clarification: while apt (not apt-get) installs new dependencies, it will never remove packages that have become obsolete and conflict with the upgrade (like python2 here, due to its EOL).
apt full-upgrade is always required for major upgrades in Debian in general, and for upgrades in Proxmox products, which follow a rolling release model.
https://forum.proxmox.com/threads/proxmox-ve-7-1-released.99847/post-463941


apt upgrade is used to install available upgrades of all packages currently installed on the system from the sources configured via sources.list(5). New packages will be installed if required to satisfy dependencies, but existing packages will never be removed. If an upgrade for a package requires the removal of an installed package the upgrade for this package isn't performed.
apt full-upgrade performs the function of upgrade but will remove currently installed packages if this is needed to upgrade the system as a whole.
https://manpages.debian.org/bullseye/apt/apt.8.en.html

apt-get upgrade is used to install the newest versions of all packages currently installed on the system from the sources enumerated in /etc/apt/sources.list. Packages currently installed with new versions available are retrieved and upgraded; under no circumstances are currently installed packages removed, or packages not already installed retrieved and installed. New versions of currently installed packages that cannot be upgraded without changing the install status of another package will be left at their current version. An update must be performed first so that apt-get knows that new versions of packages are available.
apt-get dist-upgrade in addition to performing the function of upgrade, also intelligently handles changing dependencies with new versions of packages; apt-get has a "smart" conflict resolution system, and it will attempt to upgrade the most important packages at the expense of less important ones if necessary. The dist-upgrade command may therefore remove some packages. The /etc/apt/sources.list file contains a list of locations from which to retrieve desired package files. See also apt_preferences(5) for a mechanism for overriding the general settings for individual packages.
https://manpages.debian.org/bullseye/apt/apt-get.8.en.html
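
In practice, for a PVE host this boils down to the sequence the man pages above imply (a minimal sketch):

Code:
apt update
apt full-upgrade
# afterwards, review and remove now-obsolete packages yourself:
apt autoremove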
 
When would sda1 and sda2 need to be resynced with sdb1 and sdb2? Are those ever updated during apt upgrade or apt full-upgrade?

I am assuming one would also need to be aware of which of those two disks the BIOS booted from if they get updated? For example, if you booted from sdb, you would need to resync that over to sda, not the other way around, correct?
While some here went off into the weeds, nobody actually answered my question: when would sda1 and sda2 need to be resynced with sdb1 and sdb2?
 
Hello, I come here to throw a bottle into the sea.

It is not possible to apply the mdadm procedure on Proxmox VE 7.1-8 after this step:

Code:
# Move data from /dev/sda3 to /dev/md0
pvmove /dev/sda3 /dev/md0

We get an error message:

Code:
  Insufficient free space: 127871 extents needed, but only 127839 available
  Unable to allocate mirror extents for pve/pvmove0.
  Failed to convert pvmove LV to mirrored.

We have tested on several PVE 7.1-8 servers; the result is identical.

We tested with the older, fully updated PVE version 6.4-13 and it works fine!

We don't understand. It is probably an LVM problem, but why doesn't it work anymore?

Code:
root@PVE7:~# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda3
  VG Name               pve
  PV Size               <499.50 GiB / not usable 2.98 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              127871
  Free PE               0
  Allocated PE          127871
  PV UUID               HLHwID-HmkC-JFdX-HkXd-ngXk-Sboc-p2fK42

  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               pve
  PV Size               499.37 GiB / not usable <1.94 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              127839
  Free PE               127839
  Allocated PE          0
  PV UUID               XQUEDm-swYp-j102-H0x1-kDih-qYPq-6vsGTF

root@PVE7:~# pvmove /dev/sda3 /dev/md0
  Insufficient free space: 127871 extents needed, but only 127839 available
  Unable to allocate mirror extents for pve/pvmove0.
  Failed to convert pvmove LV to mirrored.
root@PVE7:~#

How should we proceed?

Thanks for your help.
To get around this error, you need to create sdb3 with a larger size, for example(!) +1 GB. (At equal partition sizes, /dev/md0 ends up 32 extents smaller than sda3, presumably because the mdadm superblock and data offset take space at the start of the array, and sda3 has no free extents to absorb the difference.)
After transferring the LVs to the incomplete RAID 1 (sdb3 only), deleting sda3, and recreating sda3 with the new size (+1G) for inclusion in the RAID 1, it is important to re-read the partition table on the sda disk.
This can be done with the partx utility, for example, which is already present in the system by default:
partx -u /dev/sda

If necessary, I can give the complete sequence of commands.
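
In the meantime, here is a condensed sketch of that sequence (assuming the layout from the pvdisplay output above; adjust device names and sizes to your system):

Code:
# 1. create sdb3 slightly larger than sda3 (e.g. +1G), partition type fd00 (Linux RAID)
gdisk /dev/sdb
# 2. build a degraded RAID 1 on sdb3 only
mdadm --create /dev/md0 --level 1 --raid-devices 2 /dev/sdb3 missing
pvcreate /dev/md0
vgextend pve /dev/md0
# 3. md0 now has enough extents for the move
pvmove /dev/sda3 /dev/md0
vgreduce pve /dev/sda3
pvremove /dev/sda3
# 4. delete and recreate sda3 at the new size, then re-read the partition table
gdisk /dev/sda
partx -u /dev/sda
# 5. complete the mirror
mdadm --add /dev/md0 /dev/sda3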
 
For Proxmox 8.0:

px:~# cat /etc/debian_version
12.0
Add to /etc/apt/sources.list.d/pve-no-subscription.list the lines:
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
deb http://security.debian.org/debian-security bookworm-security main contrib
Comment out the lines in these files:
/etc/apt/sources.list.d/ceph.list
/etc/apt/sources.list.d/pve-enterprise.list
px:~# apt update && apt upgrade -y
px:~# reboot
px:~# apt-get install grub-efi-amd64
px:~# apt-get install mdadm
px:~# sfdisk -d /dev/sda > part_table
px:~# grep -v ^label-id part_table | sed -e 's/, *uuid=[0-9A-F-]*//' |grep -v sda3 | sfdisk /dev/sdb
px:~# partx -u /dev/sdb
px:~# lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
data pve -wi-a----- 1.00g
root pve -wi-ao---- 32.00g
swap pve -wi-ao---- 8.00g

Create sdb3 larger than sda3, for example(!) by +10G:
px:~# gdisk /dev/sdb
Command (? for help): n
Partition number (3-128, default 3):
First sector (2099200-1953525134, default = 2099200) or {+-}size{KMGTP}:
Last sector (2099200-1953525134, default = 1953525127) or {+-}size{KMGTP}: +42G
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): fd00
Changed type of partition to 'Linux RAID'
Command (? for help): w

px:~# gdisk -l /dev/sdb

Number Start (sector) End (sector) Size Code Name
1 34 2047 1007.0 KiB EF02
2 2048 2099199 1024.0 MiB EF00
3 2099200 90179583 42.0 GiB FD00 Linux RAID

px:~# mdadm --create /dev/md0 --level 1 --raid-devices 2 /dev/sdb3 missing
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
px:~# pvcreate /dev/md0
Physical volume "/dev/md0" successfully created.
px:~# vgextend pve /dev/md0
Volume group "pve" successfully extended
px:~# pvmove /dev/sda3 /dev/md0
/dev/sda3: Moved: 0.12%

/dev/sda3: Moved: 100.00%
px:~# vgreduce pve /dev/sda3
Removed "/dev/sda3" from volume group "pve"
px:~# gdisk /dev/sda
GPT fdisk (gdisk) version 1.0.9

Command (? for help): d
Partition number (1-3): 3
Command (? for help): n
Partition number (3-128, default 3):
First sector (2099200-1953525134, default = 2099200) or {+-}size{KMGTP}:
Last sector (2099200-1953525134, default = 1953525127) or {+-}size{KMGTP}: +42G
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300): fd00
Changed type of partition to 'Linux RAID'
Command (? for help): w
...
Do you want to proceed? (Y/N): y
...
The operation has completed successfully.
px:~# partx -u /dev/sda
px:~# mdadm --add /dev/md0 /dev/sda3
mdadm: added /dev/sda3
px:~# dd if=/dev/sda1 of=/dev/sdb1
px:~# dd if=/dev/sda2 of=/dev/sdb2
px:~# mdadm --detail --scan | awk '/ARRAY/ {print}' >> /etc/mdadm/mdadm.conf
px:~# update-initramfs -u -k all
px:~# mkdir /efi
px:~# mount /dev/sda2 /efi
px:~# grub-install --target=x86_64-efi --efi-directory=/efi --no-nvram --force-extra-removable /dev/sda
Installing for x86_64-efi platform.
Installation finished. No error reported.
px:~# umount /efi
px:~# mount /dev/sdb2 /efi
px:~# grub-install --target=x86_64-efi --efi-directory=/efi --no-nvram --force-extra-removable /dev/sdb
Installing for x86_64-efi platform.
Installation finished. No error reported.
px:~# reboot

https://forum.proxmox.com/threads/howto-proxmox-ve-7-with-software-raid-1.93745/
https://forums.debian.net/viewtopic.php?t=153302

Attention!
If the BIOS boot setting is not UEFI boot but legacy (normal) boot, then the GRUB boot error (disk 'lvmid/....' not found.) will still remain: GRUB will not have access to the LVM partition and the required /boot directory. You need to go into the BIOS and change to the UEFI boot type.
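
After the reboot, it is worth verifying that the mirror is healthy and in sync (sketch):

Code:
# RAID state and resync progress
cat /proc/mdstat
# detailed view of the array and its member disks
mdadm --detail /dev/md0
# confirm the pve volume group now lives on md0
pvs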
 
For Proxmox 8.0:

px:~# cat /etc/debian_version
12.0
Add to /etc/apt/sources.list.d/pve-no-subscription.list the lines:
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
deb http://security.debian.org/debian-security bookworm-security main contrib
Comment out the lines in these files:
/etc/apt/sources.list.d/ceph.list
/etc/apt/sources.list.d/pve-enterprise.list
px:~# apt update && apt upgrade -y
Somebody told me above in this thread to use apt dist-upgrade instead of apt upgrade. But I think it's fine.

Your code blocks won't work via ```. You need to use the "Insert code" button in this forum.

PS: Also, why would you install the grub-efi-amd64 package? That one is already installed, right?

PPS:

Create sdb3 larger than sda3, for example(!) by +10G
Why?? We are setting up a RAID 1 mirror with two identical disks.
 
When would sda1 and sda2 need to be resynced with sdb1 and sdb2? Are those ever updated during apt upgrade or apt full-upgrade?

I am assuming one would also need to be aware of which of those two disks the BIOS booted from if they get updated? For example, if you booted from sdb, you would need to resync that over to sda, not the other way around, correct?
Very good question; sorry it was left unanswered all this time.

The EFI partition contains the bootloader, the EFI boot manager, and other files essential for the operating system to boot in UEFI mode, including but not limited to motherboard firmware updates, Secure Boot keys, and potentially multiple kernels (if configured to store them there).

Assuming a new Linux kernel update, or especially a firmware package update, might touch this partition, that would be a good moment to sync these partitions again.
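
Following the dd approach used in the walkthrough above, a re-sync after such an update could look like this (a sketch, assuming you booted from sda, that sda2/sdb2 are the ESPs, and the /efi mountpoint from the walkthrough exists):

Code:
# copy the BIOS boot and EFI system partitions from the disk you booted from
dd if=/dev/sda1 of=/dev/sdb1
dd if=/dev/sda2 of=/dev/sdb2
# alternatively, re-run grub-install against the second disk's ESP
mount /dev/sdb2 /efi
grub-install --target=x86_64-efi --efi-directory=/efi --no-nvram --force-extra-removable /dev/sdb
umount /efi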

Or should we also set up a RAID 1 mirror on those partitions?
 
Hi, the howto works for 8.1.
Usually I install Debian with mdadm RAID configured during the Debian install process. This makes a RAID 1 only from the sda3/sdb3 partitions. Why not make the whole installation one RAID 1, to keep it in sync while updates are installed? I also use an mdadm RAID 1 from a pair of SSDs as a read/write cache for LVM-thin; consider adding this section to the howto.
 
There are reasons why mdadm isn't officially supported. See for example here: https://forum.proxmox.com/threads/z...ty-and-reliability-of-zfs.116871/#post-505697


And there is no need for RAID 1 on partition 2, as proxmox-boot-tool already syncs the bootloader between the disks.
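
On installs done with the PVE installer, checking and re-syncing the ESPs looks like this (sketch):

Code:
# list the ESPs proxmox-boot-tool is configured to maintain
proxmox-boot-tool status
# copy the current kernels and bootloader to all configured ESPs
proxmox-boot-tool refresh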
The main issue is the VM caching mode, which I never leave at the default value "none". Some scenarios are limited by hardware/budget/etc. I would use the officially supported Btrfs, but I haven't yet found proper instructions on how to add SSD caching in RAID 1 with bcache to an existing Btrfs RAID 1 of a pair of HDDs.
 
I'm keenly interested in this subject. I built up a server for a homelab setup using some items I had lying around and threw it all into a NAS-style case: 8th-gen i5 with 32 GB RAM. The boot/system drives will be SATA SSDs (2 x 1 TB), and I'm planning a 12 TB ZFS raidz1 pool (4 x 4 TB) of spinning rust. The system OS and VM virtual disks will live on the SSDs, which I plan to have in a mirrored pair; data will be on the ZFS pool. Knowing the overhead ZFS already puts on the system, I'm thinking of using mdraid for the system OS/VM virtual disk mirror and leveraging ZFS only for the spinning-rust data array. I'm guessing that, if building out from scratch, the best means to accomplish this is a Debian install with PVE overlaid on top. Is this a sound approach? What would be the recommended disk layout for the mirrored boot/system drives? I'll need enough of that 1 TB to house the VM virtual disks (~700-800 GB).

Grateful for any thoughts...
 
Grateful for any thoughts...
Nowadays, I almost exclusively use special devices with ZFS if spinning-rust disks are involved.

Simplified: you create the pool, add the SSDs as a mirrored special device to the pool, and set up a second dataset for PVE with special_small_blocks set to at least the recordsize, so that everything written there for the guest VMs goes directly to the SSDs. To move your OS dataset, you need to set special_small_blocks on the root dataset as well and send/receive it once to have the data moved to the SSDs.
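
A rough sketch of those steps (the pool and dataset names are hypothetical; double-check special_small_blocks against your recordsize before relying on it):

Code:
# add the two SSDs as a mirrored special vdev to an existing pool "tank"
zpool add tank special mirror /dev/disk/by-id/ssd-1 /dev/disk/by-id/ssd-2
# new dataset for guests: with special_small_blocks >= recordsize,
# all file data written here lands on the SSDs
zfs create -o special_small_blocks=1M tank/pve
# existing OS dataset: enable the property, then rewrite the data once
zfs set special_small_blocks=1M tank/root
zfs snapshot tank/root@move
zfs send tank/root@move | zfs receive tank/root-moved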
 
