Wish List (maybe some possible?)

derringer

Renowned Member
May 17, 2012
Hello,

I have a few suggestions for future updates, and also questions on whether anyone has solutions for some of them. Most of these have manual workarounds, but the product could be so much better if they were looked at. I am coming from a small-business perspective, not a large enterprise:

1. ZFS installs to dissimilar disk sizes -- I do not understand why we ship installers that do not allow this very basic option (TrueNAS imposes the same oddity). It is very easy in ZFS under Linux to create a mirror pool from dissimilar disks, with the smaller disk setting the maximum size. There are really only two options, which would be easy to prompt the user on, and I simply don't know why neither is allowed: 1. use the size of the smallest disk in the vdev for a two-disk mirror or RAIDZ, and 2. use the size of the smallest disk in the vdev for an X-disk mirror, which allows for larger pools. No, neither is 'optimal', but we are talking about the system waiting a millisecond or two for a non-identical drive. Using the smaller disk in a two-disk install mirror is very well supported in ZFS, so why is it not supported here? At least offer a two-disk mirror install option 'using the smaller disk.' It would take very little code to --force at the max size of the smaller disk, with a warning to the user at install time. Instead, we have to install as ZFS RAID0 and then spend five minutes in the CLI looking up UUIDs to make it a mirror. Just unnecessary for such a common real-world situation, at least during the install.
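For anyone hitting this, the post-install workaround I mean looks roughly like the following. The pool name matches a default Proxmox install, but the disk IDs are placeholders for your own, and on a boot disk you would also need to replicate the partition layout and bootloader onto the new disk (omitted here):

```shell
# Install to one disk alone, then attach the second (smaller) disk to
# turn rpool's single-disk vdev into a mirror. ZFS automatically caps
# the mirror at the smaller disk's capacity.
ls -l /dev/disk/by-id/                      # find stable IDs for both disks

# Hypothetical disk IDs; on a Proxmox root pool you attach the ZFS partition
zpool attach rpool \
    /dev/disk/by-id/ata-BIGDISK-part3 \
    /dev/disk/by-id/ata-SMALLDISK-part3

zpool status rpool                          # watch the resilver complete
```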

2. Two-host cluster support without HA (master-slave; not technically a cluster) -- This is very common in the homelab and also common, in my experience, in small business. I know the workarounds, but why not support it officially with GUI options? In this master-slave configuration the slave is just a 'hot backup' that receives ZFS replication every 15 minutes; the only migrations are admin-initiated, and there is no HA. I have tried several installs like this. It is technically a little more difficult, but it seems like it should be better supported given how common it is, especially when the GUI already has a separate 'HA' area. This is replication without HA and manual snapshot backup without HA, and it is done all the time in real-world small-business installations that don't need rapid failover -- manual failover with no shared storage is fine.
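As a sketch of the master-slave setup I mean, something like this runs from cron on the master every 15 minutes. The dataset, snapshot prefix, and slave hostname are all hypothetical:

```shell
#!/bin/sh
# Incremental ZFS send from the master to the 'hot backup' slave.
DATASET=rpool/data/vm-100-disk-0       # hypothetical VM disk dataset
SLAVE=pve-slave                        # hypothetical standby host

NEW="repl-$(date +%Y%m%d-%H%M)"
zfs snapshot "$DATASET@$NEW"

# The previous run's snapshot is now second-newest; use it as the base
LAST=$(zfs list -t snapshot -o name -s creation -H "$DATASET" \
         | tail -n 2 | head -n 1)

# Ship only the delta since the last snapshot
zfs send -i "$LAST" "$DATASET@$NEW" | ssh "$SLAVE" zfs recv -F "$DATASET"
```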

3. Replication via ZFS with a user-settable 'keep X snapshots.' This one seems like an absolute no-brainer. If you're using ZFS replication as a 15-minute-interval backup, it's pretty easy for corruption -- or whatever caused the first host to go down -- to be present in the few-minutes-old snapshot on the other host. Why not let the user specify how many snapshots to keep, and offer a rollback mechanism when manually recovering from a host outage on the other cluster machine? This is possible, again, by manually setting up replication in the CLI, but then you lose all the niceness of the 'Replication' and logging tabs in Proxmox. Why not make it an option in the cluster's 'Replication' tab? It seems like it would take very little time to add.
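Meanwhile, the 'keep X' pruning people script by hand looks something like this. The dataset name and snapshot prefix are hypothetical, and `head -n -N` is the GNU coreutils form, which Debian/Proxmox ships:

```shell
KEEP=8                                  # user-settable retention count
DATASET=rpool/data/vm-100-disk-0        # hypothetical dataset

# List replication snapshots oldest-first, keep the newest $KEEP,
# and destroy everything older.
zfs list -t snapshot -o name -s creation -H "$DATASET" \
  | grep "@repl-" \
  | head -n -"$KEEP" \
  | xargs -r -n 1 zfs destroy
```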

4. Why does one have to make manual CLI kernel-boot changes to support IOMMU PCI passthrough? It seems common enough to justify a GUI button that adds them.
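For the record, the manual steps are roughly these (Intel shown; AMD uses `amd_iommu=on`, and the exact VFIO module list varies by kernel version):

```shell
# /etc/default/grub: add the IOMMU flags to the kernel command line, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules: load the VFIO modules at boot
#   vfio
#   vfio_iommu_type1
#   vfio_pci

update-grub        # (or proxmox-boot-tool refresh on ZFS/UEFI installs)
reboot
```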

5. GUI ability to look at traffic on individual NICs, and a more customizable dashboard/reporting pane. Self-explanatory, but it would be nice to have more options and more customization in the dashboard panes.
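The CLI workaround today, for anyone who needs it (the interface name is a placeholder):

```shell
# Cumulative RX/TX byte and packet counters for one NIC
ip -s link show eno1

# Live traffic rates require an extra package
apt install iftop
iftop -i eno1
```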

6. Storing of a VM's config information when using ZFS replication. Why should I have to go to the CLI and issue a 'mv /etc/pve/nodes/... XXX.conf' to restore the VM on a different host when one goes down unexpectedly? (See, I have it memorized from how often I've had to look it up.) It seems like an easy fix: when attempting to restore, just copy and/or move the VM's .conf file to the other PVE server's /etc/pve/nodes/... directory. This trips up so many new users and is really unintuitive for the ZFS-replication form of emergency backup. (Please note that a lot of what I detail above is for people using ZFS replication as an emergency backup, so a look at how this feature is used in the real world in small businesses and homelabs could really help here, with some easy, low-effort development time given to it.) It's not really replicated if a 15-line .conf file is not also backed up -- and the really odd part is that it IS backed up in the shared cluster filespace, so why not support moving it automatically when restoring?
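For anyone searching, the recovery move is this one-liner. Node names and VMID 100 are placeholders; /etc/pve is the cluster-wide pmxcfs, so the move takes effect on all nodes:

```shell
# Claim the replicated VM on the surviving node after the original host dies
mv /etc/pve/nodes/dead-node/qemu-server/100.conf \
   /etc/pve/nodes/surviving-node/qemu-server/100.conf
```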


Thanks for the time. We are considering moving VMware production servers in small businesses to Proxmox, and some of these items would go a long way toward making paying for licensing the obvious decision when/if we do so. (If we decide to go with the product, we will absolutely pay the licensing fee. The point here is that many smaller shops use ZFS replication instead of shared storage for pseudo-high-availability with manual admin intervention, and there could be easy fixes to better support this. Obviously this use case is less visible than the larger enterprises', but the product is a great fit for the smaller enterprise, so please take a look.)


Thanks for reading-- maybe some others can chime in if they disagree/agree with any of this. I have used the software in various homelab and even small production instances since 2012, so I am familiar with where it has come from and think I'm at least knowledgeable on the small business use case.