Setting up PVE

proxmoxia

New Member
Nov 17, 2020
I am planning on setting up Proxmox on a server with 2 x 480 GB SSDs.

The server will be used for test/dev purposes, and I expect to deploy 3-4 VMs on it.

This is new to me, so any recommendations on how I should partition this and then build it?

Should I install Proxmox directly from ISO or install on top of an OS of my choice, say Debian?

Thank you in advance!
 
hi,

This is new to me, so any recommendations on how I should partition this and then build it?
if you use the ISO, you can select the storage layout there, e.g. ZFS or LVM, and the partitioning will be done automatically.

Should I install Proxmox directly from ISO or install on top of an OS of my choice, say Debian?
the recommended way is with the ISO, but you can also install on top of Debian [0] without any problems

[0]: https://pve.proxmox.com/wiki/Install_Proxmox_VE_on_Debian_Buster
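
In short, the wiki linked above boils down to adding the Proxmox package repository on a plain Debian Buster system and installing the proxmox-ve metapackage; a rough sketch of the steps (check the wiki for the current details, and note the hostname must resolve to the server's IP first):

Code:
# add the no-subscription repository and its signing key
echo "deb http://download.proxmox.com/debian/pve buster pve-no-subscription" > /etc/apt/sources.list.d/pve-install-repo.list
wget http://download.proxmox.com/debian/proxmox-ve-release-6.x.gpg -O /etc/apt/trusted.gpg.d/proxmox-ve-release-6.x.gpg

# update and install Proxmox VE
apt update && apt full-upgrade
apt install proxmox-ve postfix open-iscsi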
 
I have this now installed on a single disk (480 GB).

Would I need to re-partition this single disk and split it into two or more partitions for ISOs and VMs/containers?
 
Should I install Proxmox directly from ISO or install on top of an OS of my choice, say Debian?
You only need to do that if you want to use mdraid or full system encryption.


Would I need to re-partition this single disk and split it into two or more partitions for ISOs and VMs/containers?
What do you want to accomplish?
Do you just want two separate drives?
Do you want the full capacity and more speed without safety (RAID0/stripe)?
Do you want safety but with only half the capacity usable (RAID1)?

What is your hardware?
ZFS, for example, likes reliable hardware (ECC RAM, SSDs with power-loss protection, a lot of RAM for caching), while LVM will work fine with cheaper hardware.
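
If RAM usage is a concern with ZFS, the ARC cache can be capped via a module option; a minimal sketch (the 8 GiB value is only an example):

Code:
# /etc/modprobe.d/zfs.conf - limit the ZFS ARC to 8 GiB (value in bytes)
options zfs zfs_arc_max=8589934592
# apply by rebuilding the initramfs and rebooting:
#   update-initramfs -u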
 
You only need to do that if you want to use mdraid or full system encryption.



What do you want to accomplish?
Do you just want two separate drives?
Do you want the full capacity and more speed without safety (RAID0/stripe)?
Do you want safety but with only half the capacity usable (RAID1)?

What is your hardware?
ZFS, for example, likes reliable hardware (ECC RAM, SSDs with power-loss protection, a lot of RAM for caching), while LVM will work fine with cheaper hardware.

The hardware is an E3-1270 v3 with 32 GB memory and a single 480 GB SSD.

This is for R&D/test purposes, so it can go all-out on performance.
 
I have installed Proxmox on top of Debian Buster, but how do I manage partitions since I only have one disk?

Proxmox doesn't seem to see the other partitions I have created, intended for ISO and VM data.

Code:
Device     Boot     Start       End   Sectors   Size Id Type
/dev/sda1  *         2048 195315711 195313664  93.1G 83 Linux
/dev/sda2       870678526 937701375  67022850    32G  5 Extended
/dev/sda3       195315712 273440767  78125056  37.3G 83 Linux
/dev/sda4       273440768 870676479 597235712 284.8G 83 Linux
/dev/sda5       870678528 937701375  67022848    32G 82 Linux swap / Solaris

Partition 2 does not start on physical sector boundary.
Partition table entries are not in disk order.

Code:
Disk /dev/sda: 480GB
Sector size (logical/physical): 512B/4096B
Partition Table: msdos
Disk Flags:

Number  Start   End    Size    Type      File system     Flags
1      1049kB  100GB  100GB   primary   ext4            boot
3      100GB   140GB  40.0GB  primary   ext3
4      140GB   446GB  306GB   primary   ext3
2      446GB   480GB  34.3GB  extended
5      446GB   480GB  34.3GB  logical   linux-swap(v1)
 
Proxmox doesn't identify partitions/pools/datasets by itself. You need to add each storage to Proxmox manually via the GUI ("Datacenter -> Storage -> Add") and choose the right storage type.
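
For reference, the same can be done from the shell with the pvesm tool; a sketch with placeholder names:

Code:
# register an already existing folder as a 'directory' storage for ISOs
pvesm add dir mystorage --path /path/to/existing/folder --content iso
# check which storages Proxmox now knows about
pvesm status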
 
Proxmox doesn't identify partitions/pools/datasets by itself. You need to add each storage to Proxmox manually via the GUI ("Datacenter -> Storage -> Add") and choose the right storage type.
Would they need to be pre-partitioned or can it be done all in Proxmox?
 
Those options are quite confusing. Adding a storage that way will not create it on your host; you just point Proxmox to an already existing storage you created manually earlier, so that Proxmox can use it. If you, for example, want a place to store your ISOs, you can add a storage of type "directory" that points to any already existing folder.
 
Those options are quite confusing. Adding a storage that way will not create it on your host; you just point Proxmox to an already existing storage so that Proxmox can use it.
Yeah, that would be best, but I'm currently just utilizing this unused dedicated server, which only has one disk :|
 
The storage model in Proxmox is very flexible: you can use directories, ZFS pools and datasets, NFS and CIFS shares, GlusterFS, Ceph, and LVM.

You may be better off installing Proxmox from the ISO image to begin learning; at least that way your install will have a common configuration as a starting point.

For example, ISO images and container templates are normally stored at /var/lib/vz on a standard install, but you can change this to whatever suits your setup.
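
On a standard install that default shows up in /etc/pve/storage.cfg roughly like this (the exact content types can vary per setup):

Code:
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup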

There is comprehensive documentation available at https://pve.proxmox.com/pve-docs/index.html
 
Yeah, that would be best, but I'm currently just utilizing this unused dedicated server, which only has one disk :|
I don't see the problem. You've got one disk and you already created two additional partitions. Now you just need to point Proxmox to those two partitions so Proxmox can utilize them.
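
Based on the partition table posted earlier, that could look roughly like this; a sketch, assuming /dev/sda3 is meant for ISOs and /dev/sda4 for VM data, and that both already carry the ext3 filesystems shown:

Code:
# mount the spare partitions (add /etc/fstab entries to make this survive reboots)
mkdir -p /mnt/iso /mnt/vmdata
mount /dev/sda3 /mnt/iso
mount /dev/sda4 /mnt/vmdata

# register them with Proxmox as directory storages
pvesm add dir iso-store --path /mnt/iso --content iso,vztmpl
pvesm add dir vm-store --path /mnt/vmdata --content images,rootdir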
 
You may be better off installing Proxmox from the ISO image to begin learning; at least that way your install will have a common configuration as a starting point.
Why are you installing Proxmox on top of Debian at all, and not just installing from the preconfigured Proxmox VE ISO? That way all partitions and folders are created for you. Using Debian is only useful if you need advanced stuff like mdraid, full system encryption and so on, which the installer of the PVE ISO doesn't support.
 
Why are you installing Proxmox on top of Debian at all, and not just installing from the preconfigured Proxmox VE ISO? That way all partitions and folders are created for you. Using Debian is only useful if you need advanced stuff like mdraid, full system encryption and so on, which the installer of the PVE ISO doesn't support.
Yeah figured I'd rebuild using the ISO now.
 
Yeah figured I'd rebuild using the ISO now.
Code:
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: C3C843DA-C836-4124-B07C-E067B6160FBE

Device       Start       End   Sectors   Size Type
/dev/sda1       34      2047      2014  1007K BIOS boot
/dev/sda2     2048   1050623   1048576   512M EFI System
/dev/sda3  1050624 937703054 936652431 446.6G Linux LVM

Partition 1 does not start on physical sector boundary.


Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes


Disk /dev/mapper/pve-root: 60 GiB, 64424509440 bytes, 125829120 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

This is what I have now.
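
To inspect what the installer created inside that LVM volume group (including the thin pool where guest disks land by default), the usual LVM tools work:

Code:
# volume group and logical volumes created by the PVE installer
vgs
lvs
# typically shows pve/root, pve/swap and the pve/data thin pool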
 
Got it up and running, finally.

Tested creating a container and that seems to work as well.
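
For reference, containers can also be created from the CLI with pct; a sketch where the VMID, template version, and storage are placeholders:

Code:
# fetch a template, then create and start a container from it
pveam update
pveam available --section system          # list downloadable templates
pveam download local debian-10-standard_10.7-1_amd64.tar.gz
pct create 100 local:vztmpl/debian-10-standard_10.7-1_amd64.tar.gz \
        --hostname testct --memory 1024 \
        --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 100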


Now this box has some IPs and I'd like to assign them to the VMs.

Am I doing this right with regards to the network configuration?

Code:
# network interface settings; autogenerated
# Please do NOT modify this file directly, unless you know what
# you're doing.
#
# If you want to manage parts of the network configuration manually,
# please utilize the 'source' or 'source-directory' directives to do
# so.
# PVE will preserve these directives, but will NOT read its network
# configuration from sourced files, so do not attempt to move any of
# the PVE managed interfaces into external files!

auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address 192.168.0.101/24

auto eno2
iface eno2 inet static
        address 192.168.0.102/24

auto eno3
iface eno3 inet static
        address 192.168.0.103/24

auto eno4
iface eno4 inet static
        address 192.168.0.104/24

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.100/24
        gateway 192.168.0.1
        bridge-ports eno1 eno2 eno3 eno4
        bridge-stp off
        bridge-fd 0
 
Well, you don't need an IP address on both the interface and the bridge, and I would just use the bridge address as the management IP.

Also, I would not put all of my physical ports on one bridge. If you want redundancy, then bridge two ports, but I can't see the benefit in putting them all together - it won't give you 4 Gb/s of bandwidth, if that's what you are thinking.

The usual approach with multiple host ports is to separate network traffic by function - e.g. host-to-host or host-to-storage, virtual-to-network, etc.
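
Applied to the config above, that could look roughly like this; a sketch that keeps only eno1 on the bridge and drops the per-NIC addresses:

Code:
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.0.100/24
        gateway 192.168.0.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
# eno2-eno4 left unconfigured until they are given a role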
 
Well, you don't need an IP address on both the interface and the bridge, and I would just use the bridge address as the management IP.

Also, I would not put all of my physical ports on one bridge. If you want redundancy, then bridge two ports, but I can't see the benefit in putting them all together - it won't give you 4 Gb/s of bandwidth, if that's what you are thinking.

The usual approach with multiple host ports is to separate network traffic by function - e.g. host-to-host or host-to-storage, virtual-to-network, etc.
So having the eno* interfaces with their own IP addresses is pointless, since the IP will be assigned manually when creating a container?

What about cloud-init?
 
You may choose to use one of your ports as a dedicated management port with an IP directly assigned, so that it is isolated and independent of VM traffic; there are circumstances where you might want that. Also, you could create a bridge on a port and not assign a default IP at all - it depends on what you're trying to achieve.
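
As a sketch of that second option, a VM-only bridge without a host IP (interface names are just examples):

Code:
iface eno2 inet manual

# guests attached to vmbr1 reach the network via eno2; the host itself has no IP here
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0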

I don't have much experience with cloud-init, but as far as I am aware, the only requirement is that web access is possible
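
For what it's worth, with VMs (not containers) a static IP can be handed in through the cloud-init options; a sketch, using VMID 100 as a placeholder:

Code:
# attach a cloud-init drive and set a static address for the first NIC
qm set 100 --ide2 local-lvm:cloudinit
qm set 100 --ipconfig0 ip=192.168.0.102/24,gw=192.168.0.1
qm set 100 --ciuser admin --sshkeys ~/.ssh/id_rsa.pub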
 
