[SOLVED] Unable to use all the SSD disks on the server

blcksys

New Member
Sep 13, 2019
I installed Proxmox VE from the command line (apt-get install proxmox-ve); everything went fine and works OK.
However, when I try to create VMs or allocate space for them on local storage, only one of my SSDs shows up (I can see only 921GB).

The server has two disks of 1TB each.
I can see them both on the Disks page, but if I try to create any LVM storage, it says "no disks unused".

So, how do I actually use both disks for the VMs?

---------------------

I'm updating the request with some more specific information.

So, these are the disks installed on the host:
[screenshot: disks.jpg]

And this is the Storage on Datacenter:
[screenshot: storage.jpg]

I suppose the host system and proxmox-ve are installed on the nvme0n1 disk (I say "suppose" because I have no idea how to check that), so I think that is why it gives me only ~900GB of local storage for VMs, instead of two storages or 2TB of space.
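
(For reference, a quick way to check which device holds the root filesystem; findmnt and lsblk are standard tools on a Debian-based install:)

Code:
# print the device backing the root filesystem
findmnt -n -o SOURCE /
# list all block devices with sizes, types, and mount points
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT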

[screenshot: local.jpg]

If I attempt to create a new LVM volume, this is what I get:

[screenshot: nounused.jpg]

Here is some information from the host:

Code:
root@cluster1 ~ # fdisk -l

Disk /dev/nvme1n1: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: SAMSUNG MZVLB1T0HALR-00000
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x74104ab3

Device         Boot    Start        End    Sectors   Size Id Type
/dev/nvme1n1p1          2048   67110911   67108864    32G fd Linux raid autodetect
/dev/nvme1n1p2      67110912   68159487    1048576   512M fd Linux raid autodetect
/dev/nvme1n1p3      68159488 2000407215 1932247728 921.4G fd Linux raid autodetect


Disk /dev/nvme0n1: 953.9 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: SAMSUNG MZVLB1T0HALR-00000
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xe9ea91e2

Device         Boot    Start        End    Sectors   Size Id Type
/dev/nvme0n1p1          2048   67110911   67108864    32G fd Linux raid autodetect
/dev/nvme0n1p2      67110912   68159487    1048576   512M fd Linux raid autodetect
/dev/nvme0n1p3      68159488 2000407215 1932247728 921.4G fd Linux raid autodetect


Disk /dev/md2: 921.2 GiB, 989175545856 bytes, 1931983488 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md0: 32 GiB, 34325135360 bytes, 67041280 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md1: 511 MiB, 535822336 bytes, 1046528 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Code:
root@cluster1 ~ # df

Filesystem     1K-blocks     Used Available Use% Mounted on
udev            32862252        0  32862252   0% /dev
tmpfs            6577380      876   6576504   1% /run
/dev/md2       949781556 21222524 880243064   3% /
tmpfs           32886896    46800  32840096   1% /dev/shm
tmpfs               5120        0      5120   0% /run/lock
tmpfs           32886896        0  32886896   0% /sys/fs/cgroup
/dev/md1          498532   152965    319404  33% /boot
/dev/fuse          30720       28     30692   1% /etc/pve
tmpfs            6577376        0   6577376   0% /run/user/0

Code:
root@cluster1 ~ # lsblk -l

NAME      MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
md0         9:0    0    32G  0 raid1 [SWAP]
md0         9:0    0    32G  0 raid1 [SWAP]
md1         9:1    0   511M  0 raid1 /boot
md1         9:1    0   511M  0 raid1 /boot
md2         9:2    0 921.2G  0 raid1 /
md2         9:2    0 921.2G  0 raid1 /
nvme1n1   259:0    0 953.9G  0 disk
nvme0n1   259:1    0 953.9G  0 disk
nvme1n1p1 259:2    0    32G  0 part
nvme1n1p2 259:3    0   512M  0 part
nvme1n1p3 259:4    0 921.4G  0 part
nvme0n1p1 259:5    0    32G  0 part
nvme0n1p2 259:6    0   512M  0 part
nvme0n1p3 259:7    0 921.4G  0 part

Code:
root@cluster1 ~ # pvesm status
Name         Type     Status           Total            Used       Available        %
local         dir     active       949781556        21222532       880243056    2.23%

Now, the question is: how do I actually use both disks for storage/VM deployment?
 
Both NVMe SSDs are used by mdraid as a RAID1 volume. Aside from that, we do not support mdraid setups, hence they are not tested by us.
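
You can confirm the mdraid layout yourself, for example:

Code:
# list all active md arrays and their member devices
cat /proc/mdstat
# detailed view of the array backing the root filesystem
mdadm --detail /dev/md2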
 
Is the mdraid setup dictated by the hosting provider, or was it an option in the proxmox-ve setup?
Can I get rid of that setup and use a standard one? How?
I'm sorry for all the questions; sadly, I am new to Proxmox, and even searching the docs I couldn't find anything related to this.
 
Is the mdraid setup dictated by the hosting provider, or was it an option in the proxmox-ve setup?
Hosting provider. As said above, we don't support it.

Can I get rid of that setup and use a standard one? How?
Ask your hosting provider if you can run your own installation on that server.

I'm sorry for all the questions; sadly, I am new to Proxmox, and even searching the docs I couldn't find anything related to this.
No worries. But your questions are general Linux and not Proxmox VE specific. ;)
 

So basically, installing a new Debian 10 system from the ISO would get rid of the RAID setup?
Or is it a physical setup on the server itself?
I am using Hetzner as the provider.
(First time working with RAID.)

I know these are general Linux questions now, but if you could help I would appreciate it a lot; I don't know where to bang my head anymore o_O

More Proxmox-related: in case I have to reinstall the host system, is it possible to create a backup of the VMs, export them, and import them again on the fresh install?
 

Yes, I installed proxmox-ve following their guide and some other instructions from the official Proxmox documentation.

I have a question though; it isn't related to Proxmox itself, but it should give me a starting point to fix this, since I haven't understood what a RAID setup is.
Does the RAID setup mean both disks are shown as one (so 1TB + 1TB, giving 2TB available on Unix), or are they just like a standard configuration with separate storage?

An example of my question:

a) RAID = disk 1 (1TB) + disk 2 (1TB) = shown as disk 3 (2TB)
b) RAID = disk 1 (1TB), disk 2 (1TB), shown as separate

Which one is it?

Because if it's the first case, I might have misconfigured the RAID during the Debian setup and left one of the disks unallocated.
If it's the second case, then I have zero clue how to make that space available.

Thank you very much for helping me.

---------------- EDIT

I'm updating the post for future use by anyone who has the same issue as me.
I wrote directly to Hetzner support, and this was their response:

Thank you for your reply.
As you can see, there is a RAID 1 configured, so you have only the capacity of one drive. This is the default configuration if you perform an automatic installation.

So I suppose that is the reason :rolleyes:
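
(For future readers: Hetzner's installimage configures this via its config file. A sketch of the relevant lines, assuming the standard installimage config format:)

Code:
## excerpt from a Hetzner installimage config (sketch)
SWRAID 1        # 0 = no software RAID, 1 = enable software RAID
SWRAIDLEVEL 1   # 1 = mirror (capacity of one disk), 0 = stripe (capacity of both)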

Which now brings me to the question:
How do I keep my current Proxmox configuration and all the VMs while reinstalling the host system?

If I do the following, will it work?

- Shutdown the VM
- Convert to Template
- Download Template on a second server/drive
- Reinstall the host system (wiping all drives, etc)
- Reinstall proxmox
- Upload the Template
- Clone Full from Template

Does it keep the VM as it was before?
 
Does the RAID setup mean both disks are shown as one (so 1TB + 1TB, giving 2TB available on Unix), or are they just like a standard configuration with separate storage?

An example of my question:

a) RAID = disk 1 (1TB) + disk 2 (1TB) = shown as disk 3 (2TB)
b) RAID = disk 1 (1TB), disk 2 (1TB), shown as separate

Which one is it?

You have RAID1, so you have redundancy: the space of one disk, but one disk can fail and you still have all your data.
There is also RAID0, which stripes your data across both devices so that you have the space of both. This dramatically increases the possibility of an unrecoverable crash of your system, and no sane person would run a production environment on it without a good backup concept.
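
For illustration only, these are roughly the mdadm commands behind the two layouts (a sketch; running them would wipe the member partitions):

Code:
# RAID1 (mirror): usable space of one disk, survives a single disk failure
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3
# RAID0 (stripe): usable space of both disks, any single disk failure loses everything
mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/nvme0n1p3 /dev/nvme1n1p3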

If I do the following, will it work?

- Shutdown the VM
- Convert to Template
- Download Template on a second server/drive
- Reinstall the host system (wiping all drives, etc)
- Reinstall proxmox
- Upload the Template
- Clone Full from Template

Does it keep the VM as it was before?

Shutdown, backup, copy off the server, reinstall (the only supported software RAID configuration would be ZFS), upload the backup, and restore.
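
A minimal sketch of that workflow with vzdump and qmrestore (the VM ID 100, the backup host, and the paths are placeholders):

Code:
# on the old host: stop the VM and write a compressed backup archive
vzdump 100 --mode stop --compress lzo --dumpdir /var/lib/vz/dump
# copy the archive off the server
scp /var/lib/vz/dump/vzdump-qemu-100-*.vma.lzo root@backup-host:/backups/
# on the freshly installed host: restore the archive as VM 100
qmrestore /backups/vzdump-qemu-100-<timestamp>.vma.lzo 100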
 

Well, if I reinstall, I'll do it without any RAID configuration, so I'll just have /dev/sda and /dev/sdb, and hopefully I'll be able to use the /dev/sdb space as well.

About the backup, could you be more specific? How do you do that? Isn't the backup too large to download/upload?
 
I solved the issue with the following:

- Converted the VMs to templates
- Uploaded the VM disks off the host
- Downloaded all the VM configs and the SSL certs
- Wiped everything from the host and reinstalled it without RAID
- Left one SSD out (unmounted)
- Installed Proxmox-VE
- Re-uploaded the VM disks to the correct place
- Re-uploaded the VM configs and removed the template flag: 1 line in the configs
- Re-uploaded the SSL certs
- Made a new LVM storage out of the second, unmounted disk (see the sketch below)
- Rebooted

All good now
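
(For reference, the LVM step roughly corresponds to the following; the disk /dev/nvme1n1 and the names "vmdata" and "vm-storage" are assumptions:)

Code:
# initialize the spare disk for LVM and create a volume group on it
pvcreate /dev/nvme1n1
vgcreate vmdata /dev/nvme1n1
# register the volume group as a Proxmox storage for VM disks and containers
pvesm add lvm vm-storage --vgname vmdata --content images,rootdir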
 
