Worth using ZFS for the main Proxmox installation on a single SSD disk?

Skyrider

Active Member
May 11, 2020
So I've been looking into Proxmox for the past few days, and I'm quite interested and impressed by how it works so far. The main thing that concerns me is that with a single 140GB SSD (I don't have any other drives), I can only use RAID0, since there is nothing else to build a RAID with.

So the question is: is it worth installing Proxmox on ZFS? Or should I install it on EXT4 instead and use ZFS for the containers?

Regards,
Skyrider
 
ZFS adds a little overhead, but offers way more features.

I would go for ZFS, especially because of compression and snapshots.

Also note that Proxmox integrates ZFS for things like snapshots and backups, so you will get more out of Proxmox.

It also makes things easier in an emergency, when you can simply import the pool and mount the ZFS volumes.
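
For example (a minimal sketch; rpool is the default pool name of a ZFS-based Proxmox install, and the dataset names below are just placeholders):

# Enable lz4 compression on the pool; child datasets inherit it
zfs set compression=lz4 rpool

# Snapshot a container's dataset before a risky change, and roll back if needed
zfs snapshot rpool/data/subvol-100-disk-0@before-upgrade
zfs rollback rpool/data/subvol-100-disk-0@before-upgrade

# In an emergency, import the pool from a live/rescue system under /mnt
zpool import -f -R /mnt rpool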


However, in your case, you can get SSDs starting at $20 for 128 GB.

It wouldn't hurt to drop one extra in and go for ZFS RAID1. Depends on what you are going to do with it.
 
@H4R0

Thanks for the reply!

Thing is, I'm paying for a root server, so I only get the space that has been allocated to me. I'm pretty sure the SSDs on the server are already set up in a RAID, but I highly doubt that does me any good. Does it?
 
In this case you should check whether a RAID controller is available. If it's a bare-metal root server, list the PCI devices to look for a controller, read its configuration, and make sure a RAID is set up and in good standing. Otherwise ask your hosting provider.

As this seems to be for production use, I would not go without a RAID.

Hardware RAID is fine if it is set up; then go with RAID0 ZFS on the Proxmox install.
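
Something along these lines would do it (a rough sketch; which vendor CLI applies depends on the controller):

# Look for a RAID/SAS controller among the PCI devices
lspci | grep -iE 'raid|sas|megaraid|perc'

# If one shows up, the vendor's CLI (e.g. storcli, perccli or ssacli) can report
# the array status; if nothing shows up, ask the hosting provider how the disks are set up.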
 
So the question is: is it worth installing Proxmox on ZFS? Or should I install it on EXT4 instead and use ZFS for the containers?


Yes. If you want more safety for your data, you can set up ZFS with copies=2 (so every block of data written to the pool is stored as two separate copies). ZFS encryption and snapshots can also be very useful!
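
For instance (a minimal sketch; the dataset names are placeholders):

# Keep two copies of every block on this dataset (guards against bad sectors,
# not against losing the whole disk)
zfs set copies=2 rpool/data

# Create an encrypted dataset protected by a passphrase
zfs create -o encryption=on -o keyformat=passphrase rpool/secure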

Good luck /Bafta
 
In this case you should check whether a RAID controller is available. If it's a bare-metal root server, list the PCI devices to look for a controller, read its configuration, and make sure a RAID is set up and in good standing. Otherwise ask your hosting provider.

As this seems to be for production use, I would not go without a RAID.

Hardware RAID is fine if it is set up; then go with RAID0 ZFS on the Proxmox install.

00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
Subsystem: Red Hat, Inc. Qemu virtual machine
Flags: fast devsel

00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
Subsystem: Red Hat, Inc. Qemu virtual machine
Flags: medium devsel

00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] (prog-if 80 [ISA Compatibility mode-only controller, supports bus mastering])
Subsystem: Red Hat, Inc. Qemu virtual machine
Flags: bus master, medium devsel, latency 0
[virtual] Memory at 000001f0 (32-bit, non-prefetchable)
[virtual] Memory at 000003f0 (type 3, non-prefetchable)
[virtual] Memory at 00000170 (32-bit, non-prefetchable)
[virtual] Memory at 00000370 (type 3, non-prefetchable)
I/O ports at c0e0
Kernel driver in use: ata_piix
Kernel modules: pata_acpi

00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01) (prog-if 00 [UHCI])
Subsystem: Red Hat, Inc. QEMU Virtual Machine
Flags: bus master, fast devsel, latency 0, IRQ 11
I/O ports at c0c0
Kernel driver in use: uhci_hcd

00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
Subsystem: Red Hat, Inc. Qemu virtual machine
Flags: medium devsel, IRQ 9
Kernel driver in use: piix4_smbus
Kernel modules: i2c_piix4

00:02.0 VGA compatible controller: Device 1234:1111 (rev 02) (prog-if 00 [VGA controller])
Subsystem: Red Hat, Inc. Device 1100
Flags: bus master, fast devsel, latency 0
Memory at f8000000 (32-bit, prefetchable) [size=64M]
Memory at febd0000 (32-bit, non-prefetchable) [size=4K]
Expansion ROM at 000c0000 [disabled] [size=128K]

00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device
Subsystem: Red Hat, Inc. Virtio network device
Physical Slot: 3
Flags: bus master, fast devsel, latency 0, IRQ 10
I/O ports at c000
Memory at febd1000 (32-bit, non-prefetchable) [size=4K]
Memory at fc000000 (64-bit, prefetchable) [size=16K]
Expansion ROM at feb80000 [disabled] [size=256K]
Capabilities: [98] MSI-X: Enable+ Count=14 Masked-
Capabilities: [84] Vendor Specific Information: VirtIO: <unknown>
Capabilities: [70] Vendor Specific Information: VirtIO: Notify
Capabilities: [60] Vendor Specific Information: VirtIO: DeviceCfg
Capabilities: [50] Vendor Specific Information: VirtIO: ISR
Capabilities: [40] Vendor Specific Information: VirtIO: CommonCfg
Kernel driver in use: virtio-pci

00:04.0 SCSI storage controller: Red Hat, Inc. Virtio SCSI
Subsystem: Red Hat, Inc. Virtio SCSI
Physical Slot: 4
Flags: bus master, fast devsel, latency 0, IRQ 11
I/O ports at c040
Memory at febd2000 (32-bit, non-prefetchable) [size=4K]
Memory at fc004000 (64-bit, prefetchable) [size=16K]
Capabilities: [98] MSI-X: Enable+ Count=9 Masked-
Capabilities: [84] Vendor Specific Information: VirtIO: <unknown>
Capabilities: [70] Vendor Specific Information: VirtIO: Notify
Capabilities: [60] Vendor Specific Information: VirtIO: DeviceCfg
Capabilities: [50] Vendor Specific Information: VirtIO: ISR
Capabilities: [40] Vendor Specific Information: VirtIO: CommonCfg
Kernel driver in use: virtio-pci

00:05.0 Communication controller: Red Hat, Inc. Virtio console
Subsystem: Red Hat, Inc. Virtio console
Physical Slot: 5
Flags: bus master, fast devsel, latency 0, IRQ 10
I/O ports at c080
Memory at febd3000 (32-bit, non-prefetchable) [size=4K]
Memory at fc008000 (64-bit, prefetchable) [size=16K]
Capabilities: [98] MSI-X: Enable+ Count=2 Masked-
Capabilities: [84] Vendor Specific Information: VirtIO: <unknown>
Capabilities: [70] Vendor Specific Information: VirtIO: Notify
Capabilities: [60] Vendor Specific Information: VirtIO: DeviceCfg
Capabilities: [50] Vendor Specific Information: VirtIO: ISR
Capabilities: [40] Vendor Specific Information: VirtIO: CommonCfg
Kernel driver in use: virtio-pci

00:1f.0 System peripheral: Intel Corporation 6300ESB Watchdog Timer
Subsystem: Red Hat, Inc. QEMU Virtual Machine
Physical Slot: 31
Flags: fast devsel
Memory at febd4000 (32-bit, non-prefetchable)
Kernel modules: i6300esb

The only information I can find is that it's using hardware RAID; there isn't much more about it on the server:

Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: QEMU Model: QEMU HARDDISK Rev: 2.5+
Type: Direct-Access ANSI SCSI revision: 05
 
The only information I can find is that it's using hardware RAID; there isn't much more about it on the server:

That's the output of the host running Proxmox?

It's a Linux VM itself, running under KVM; there's nothing about a RAID controller.

Can't tell how the host is configured.

You are probably fine.
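
If you want to confirm that from inside the machine, one quick check (assuming a systemd-based distro) is:

# Prints the hypervisor type, e.g. "kvm" for a KVM/QEMU guest, or "none" on bare metal
systemd-detect-virt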
 
That's the output of the host, which is currently running Ubuntu; I haven't installed Proxmox on the live production system yet. I'm gathering as much information as I can before I do so.

As for the configuration, do share what you are looking for, so I know what to look for :p
 
As for the configuration, do share what you are looking for, so I know what to look for :p

As H4R0 tried to explain: the output you showed us implies that this is already a virtualized system, in which you want to install another virtualizer (this is called nesting). That works fine for LXC, but you need support for nested KVM if you want to run KVM-based virtual machines. Therefore there is no RAID to set up on your side, because your system is already virtualized and is probably already on a RAID on the hypervisor itself; you don't need another RAID, so you can just install ZFS and you are good to go.
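
If you do want to run KVM guests inside, a quick way to check whether the CPU virtualization extensions are passed through to your VM (a simple check, assuming an x86 host) is:

# A count greater than 0 means vmx (Intel) or svm (AMD) is visible to the guest,
# so nested KVM machines should work
egrep -c '(vmx|svm)' /proc/cpuinfo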
 
