Can't initialize NVMe SSD disk with GPT

I'm getting this error in Proxmox when trying to initialize an empty NVMe SSD drive (Samsung 970 Evo Plus):
Code:
Invalid partition data!
TASK ERROR: command '/sbin/sgdisk /dev/nvme0n1 -U R' failed: exit code 2
 
Was this NVMe part of another system before?

You may want to zap it (destroys all data left on it!) as root using the console: sgdisk --zap-all /dev/nvme0n1 and then retry.
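If it helps, the whole recovery could look like this on the console (just a sketch; the device name is taken from the error message above):

Bash:
# WARNING: destroys all partition data left on the disk!
sgdisk --zap-all /dev/nvme0n1
# print the (now empty) partition table to verify
sgdisk -p /dev/nvme0n1

After that, retrying the GPT initialization from the web UI should succeed.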
 
Thank you very much! It worked!

I have another question, please, if I may: I know about the vgs command, but I can't seem to find a command that explicitly shows the actual free, unused space on a given disk.
 
Free space can mean a few things, depending on the point of view and the volume manager and filesystem combination in use, especially as some technologies allow overcommitment and some do not.

So, what filesystem do you use, and which specific metric would you be interested in?
 
I don't need that anymore after managing to delete local-lvm and move the space it used to local.

But for the future: let's say that during install I limited the hdsize. Then vgs calculates the free space only out of the size I set during install. How do I see (instead of calculating it myself) the free space left on the entire physical disk?
 
Ah, so you mean the unpartitioned space. You could use the gdisk utility's "print partition table" command:
Code:
gdisk -l /dev/nvme0n1

This includes information like "Total free space", the individual partitions, and the unpartitioned space.
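For example (a sketch; the device name is a placeholder, and parted, if installed, can list the free gaps even more explicitly):

Bash:
# GPT view, includes a "Total free space" summary
gdisk -l /dev/nvme0n1
# alternative: print partitions plus free gaps with explicit sizes
parted /dev/nvme0n1 unit GiB print free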
 
Thank you! One last question PLEASE :) ... I want all my VMs to go through my firewall VM. I know I need to bridge all of them, but when I check the network on the host it shows the PVE IP I chose during install.

Do I need to add a new Linux Bridge for every VM, or can I just use the same vmbr0 for all of them and Proxmox will do the rest and route/switch between them?

The only thing that confuses me is the PVE host's IP assignment on that vmbr0 bridge.

I'm thinking of this solution: leave vmbr0 (the PVE's bridge) untouched, add a new bridge vmbr1 and make all the VMs use it as a "switch", assign this new bridge as a network device on the pfSense VM, and I think it should work.

What do you say?
 
Normally the firewall VM has two (virtual) network interfaces: one assigned to vmbr0 (which is connected to the outside) and one on a second bridge, for example "vmbr1", which effectively provides the function of a LAN switch for the VMs.

Then you configure all remaining VMs to use the vmbr1 LAN bridge, so they'll route through the firewall VM.

The Proxmox VE host can and should keep its address on vmbr0, that's just fine.
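Just for illustration, the relevant part of /etc/network/interfaces could look roughly like this (a sketch; the physical NIC name eno1 and the addresses are placeholders for whatever your setup uses):

Code:
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

auto vmbr1
iface vmbr1 inet manual
    bridge-ports none
    bridge-stp off
    bridge-fd 0

The FW VM then gets one vNIC on vmbr0 (WAN side) and one on vmbr1 (LAN side), and all other VMs get their vNIC on vmbr1.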
 
OK... I understand, but my setup is a little different. Also, I've read your post.

I'm passing a PCI-E Intel network card through to the firewall VM. On the motherboard I have only one NIC (Realtek).
All that is left for me to do is route all the VMs through a vmbr(x) bridge.

But I still have some questions (please bear with me a little):

1. Can I disable the physical connection of the motherboard NIC that hosts vmbr0 and make it internal only?

2. How do I route not only all my VMs but also the PVE host itself through the firewall VM?

Thank you,

(attached screenshot: Screen Shot 2020-08-25 at 22.13.11.png)
 
1. Can I disable the physical connection of the motherboard NIC that hosts vmbr0 and make it internal only?

Yes, but why would you want to do that? Then the Proxmox VE host itself would not be connected to the outside (and would not get updates and such). I mean, you could possibly also route it through the FW VM, if that's what you want?

2. How do I route not only all my VMs but also the PVE host itself through the firewall VM?

Ah OK, so that IS what you want :)

So, remove the "slave port" property from the one bridge you want to make internal, add a fixed IP (or change it to DHCP) from the FW-VM-provided IP range, and set the gateway of that internal bridge to the FW VM's internal IP.
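As a sketch, using vmbr0 as the internal bridge, the stanza could then look like this (the addresses are assumptions, e.g. a FW-VM LAN of 10.0.0.0/24 with the FW VM's internal IP at 10.0.0.1):

Code:
auto vmbr0
iface vmbr0 inet static
    address 10.0.0.2/24
    gateway 10.0.0.1
    bridge-ports none
    bridge-stp off
    bridge-fd 0

The FW VM's LAN vNIC stays attached to this bridge, so the host reaches the outside through the FW VM.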
 

My setup is currently like this: I physically connect the motherboard's NIC (vmbr0) to a switch that is controlled by the FW VM (via its passed-through PCI-E Intel card).

1. In order to achieve this internally, without any loop cabling, I need to follow your suggestion, correct?

2. If I do as you suggested and use DHCP, wouldn't that eliminate the option to access the host if the FW VM goes down?

3. If I keep a fixed address as the "gate" to the host, what addresses would all the VMs connected to that bridge get? That fixed IP, or does Proxmox know how to do switching/routing on these bridges (even when they are assigned a fixed IP)?

Maybe it's all just semantics but it can still be confusing.
 
You may want to zap it (destroys all data left on it!) as root using the console: sgdisk --zap-all /dev/nvme0n1 and then retry.
This and many other attempts did not work for one of my HDDs.
This did the job:
Code:
dd if=/dev/zero of=/dev/sd<X> bs=4M status=progress

Best regards, Peter
 
Yes, depending on what you want to do with the disk afterwards, there's more required.
In the installer we actually do a combination: clearing the first ~200 MiB with zeros using dd, plus an sgdisk zap and a ZFS label clear.

Writing the whole disk full of zeros does the trick, but it can take a lot of time and is usually overkill.

Next time, try the probably quicker:
Bash:
# WARNING: dangerous, triple check before executing
sgdisk --zap-all /dev/DEVICE        # destroy the GPT and MBR data structures
zpool labelclear -f /dev/DEVICE     # clear any leftover ZFS labels
dd if=/dev/zero of=/dev/DEVICE bs=1M count=200 conv=fdatasync status=progress   # zero the first 200 MiB
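To double check the result afterwards, a read-only verification sketch (nothing is written here):

Bash:
# list any remaining filesystem/RAID signatures without erasing them
wipefs --no-act /dev/DEVICE
# overview of the device and any leftover partitions
lsblk -o NAME,SIZE,TYPE,FSTYPE /dev/DEVICE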
 
Thanks, next time I'll try your advice. I still wonder what was stored on my intractable HDD: I did erase the protective MBR, the GPT and all partition tables with "dd count" and several other tools/attempts, but nothing helped. I would have liked to test your hints for a faster fix, but since I already wiped all data on this HDD, that is not possible anymore.
 