First I will try to explain our use case:
- Remote industrial hardware
- Only two drive ports, 1x SATA + 1x M.2
- HMI + DAQ functionality (Human Machine Interface and Data Acquisition)
- Deployed in hard-to-access areas, with no IT personnel on site
- Uptime is important
Now I will try to explain why we are considering PVE for this use case:
- Solid and reliable (Debian based)
- Native ZFS (one of the very few distros that offers official ZFS support)
- proxmox-boot-tool (ZFS would give us rootfs redundancy, and this seals the deal by also providing ESP redundancy)
Why?
1. This HW sits in remote, hard-to-reach areas with no IT personnel on site, so when a drive fails it is much easier to ship a replacement and have non-tech people physically swap it. These are people you would trust with their hands, but never behind a keyboard, and much less in a GRUB or dracut emergency shell. A RAID gives us a "progressive failure" scenario: people on site can do the physical maintenance without data loss and without downtime (planned maintenance), while we handle the rest remotely.
2. Uptime is important. RAID increases uptime.
Since HW RAID is not an option (no PCIe expansion) and ZFS seems to be the only feasible approach to unattended software RAID (Btrfs refuses to boot from a degraded array, and running it with the degraded flag permanently looks dangerous), ZFS seems to be our only choice. And since we don't fully trust community OpenZFS packages for a production environment like this, we are left with Proxmox and NixOS as the only two distros we found with official ZFS support. With proxmox-boot-tool adding ESP redundancy on top, Proxmox feels like the ultimate software drive-redundancy Linux distribution.
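For reference, this is the maintenance flow we hope this setup enables, written as a rough sketch rather than a tested procedure. It assumes a stock PVE install with a ZFS mirror named rpool and the usual partition layout (ESP on partition 2, ZFS on partition 3); /dev/sda and /dev/sdb are placeholders for the surviving disk and the freshly swapped one. The on-site person only swaps the physical drive; we would run the rest remotely:

```
# Check pool health; note the identifier of the failed/missing member
zpool status rpool

# Copy the partition layout from the healthy disk (/dev/sda) to the
# replacement (/dev/sdb) -- both device names are placeholders
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb                     # give the copied partitions new GUIDs

# Swap the failed member for the new ZFS partition and let it resilver
zpool replace -f rpool <failed-member-from-zpool-status> /dev/sdb3

# Re-create and register the new disk's ESP so the box still boots
# if the other disk is the next one to die
proxmox-boot-tool format /dev/sdb2
proxmox-boot-tool init /dev/sdb2
proxmox-boot-tool status
```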
The bad?
We don't need virtualization for anything in this application (or at least that's what we naively think), and at times it could even be counterproductive, e.g. DAQ hardware with drivers that need direct kernel/HW access (it could probably work with hardware passthrough and the like, but that's not officially supported). The same goes for the graphical output of the HMI via iGPU passthrough. So our potential approach is to completely gut the PVE working philosophy and install everything on the host itself, without using any of the virtualization features (roughly sketched after the questions below). This obviously sounds as bad as it gets, but what would the real-world consequences be?
So the actual questions are:
1. What are the real consequences of using PVE like that?
2. Can we uninstall all the virtualization-related services and software?
3. Does any of this make any sense?
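To make question 2 more concrete, this is roughly what we imagine by "uninstalling the virtualization parts": keep the PVE kernel, ZFS, and proxmox-boot-tool, and disable everything that only exists to serve VMs/containers. Completely untested on our side; the service names below are from a stock single-node PVE install and should be checked against `systemctl list-units 'pve*'`, and we have no idea what breaks downstream (which is exactly the question):

```
# Untested sketch -- disable the virtualization/cluster-facing services
# while keeping the base Debian system, the PVE kernel and ZFS.

# HA stack (pointless on a single, non-clustered node)
systemctl disable --now pve-ha-crm pve-ha-lrm

# Web GUI, API daemon, statistics collector, SPICE proxy
systemctl disable --now pveproxy pvedaemon pvestatd spiceproxy

# pve-cluster (pmxcfs) is a dependency of most of the above; whether the
# host stays healthy with it disabled is precisely what we are asking.
# systemctl disable --now pve-cluster
```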