I'm not sure if this is the best way to post this kind of feedback, but I've been trying Proxmox for home usage across various use cases.
Some background first:
- Intended usage: ad-hoc development and testing of VMs, containers, and maybe k8s or similar, as needed
- I run various services at home (webserver, DNS, email) and have a NAS to provide permanent storage. The services are expected to work and should ideally require the least effort to keep running, i.e. "tiny scale" production from my perspective.
- I've been using Unix/Linux for years, so I'm comfortable with that, and I'm learning the more "modern" container technologies, which don't seem easy to build at home without a lot of care.
- I don't need clustering, but I don't want to lose data.
Here are my thoughts:
- The good
- the GUI is great. It's not completely intuitive, but it's easy to use and gives you a pretty good overview of what's going on
- you provide an API for driving Proxmox. That's also good: in theory a Proxmox box/cluster can be "driven" remotely without logging on to the system
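To make that concrete, here is a minimal sketch in Python of driving the REST API from another machine, assuming an API token has been created; the host name and token below are placeholders for my own setup:

```python
# Sketch: querying a Proxmox cluster remotely over its REST API.
# The host and token are placeholders; the API listens on port 8006
# by default.
import json
import urllib.request

HOST = "pve.example.org"                 # placeholder Proxmox host
TOKEN = "root@pam!automation=xxxx-xxxx"  # placeholder API token

def api_url(path, host=HOST):
    """Full URL for an API path such as '/nodes'."""
    return f"https://{host}:8006/api2/json{path}"

def auth_headers(token=TOKEN):
    """API-token authentication header (no interactive login needed)."""
    return {"Authorization": f"PVEAPIToken={token}"}

def list_nodes():
    """GET /nodes: node names and statuses, without an SSH login."""
    req = urllib.request.Request(api_url("/nodes"), headers=auth_headers())
    with urllib.request.urlopen(req) as resp:
        return [(n["node"], n["status"]) for n in json.load(resp)["data"]]

# Usage (against a real host): print(list_nodes())
```

With token auth like this, a cron job or script on any box can watch or manage the server without a shell account on it.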
- There's a lot of good work done here, and Proxmox looks to be a solid product if your usage fits its intended user base.
- The bad
- pvesh seems hard to build if I want to run it ad-hoc on, say, an Ubuntu/Debian/CentOS/Fedora box outside the Proxmox server/cluster. If there are instructions, please point me to them. pvesh also seems to be "action based", not "intended state" based (like kubectl with YAML config).
- there's no direct Docker / k8s support in Proxmox. I can understand why, but there's a lot of interest there. While you support containers, LXC containers are not the same as Docker containers, and a lot of people would like an easy way to manage their containers "directly" in a Proxmox cluster; I've seen similar comments from others. Making Docker images, and the related network and storage management integration, work there too would bring in a lot more people.
- getting hold of images, either VM or LXC, seems hard. It's not 100% clear to me exactly how to do this. You have to find them in the appropriate format, download them, and then you can use them; the process works, but it feels clumsy. With Docker I can do docker run ... or docker pull ... and the image is pulled from a remote repository and downloaded; if it's tagged "latest", I'll get an update when one is available.
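To show the contrast: where Docker is one pull, fetching an LXC appliance template over the API is, as far as I can tell, a list-then-download dance. A hedged Python sketch of the two requests involved (host is a placeholder, and the template name in the usage note is just an example):

```python
# Sketch: the two API calls behind fetching an LXC appliance template,
# i.e. the closest thing I found to "docker pull". The endpoints are
# GET and POST /nodes/{node}/aplinfo; the host is a placeholder.
import urllib.parse

API = "https://pve.example.org:8006/api2/json"   # placeholder host

def aplinfo_url(node):
    """GET here lists the appliance templates the node knows about."""
    return f"{API}/nodes/{node}/aplinfo"

def download_request(template, storage="local"):
    """POST body asking the node to fetch one template onto a storage."""
    return urllib.parse.urlencode(
        {"template": template, "storage": storage}).encode()

# Usage: GET aplinfo_url("pve1") to browse the list, then POST the body
# from download_request("<template name from the list>") to the same
# URL -- two authenticated round trips instead of one pull.
```

Both requests still need the authentication header, and the list only covers the curated appliance index, not arbitrary remote repositories, which is really my point.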
- I think that providing a way of linking to external image sources would be good, so that we can simply search them, and potentially fetch the appropriate image automatically just by naming it.
- so: make it easy to find both ISO and VM or container images in remote repositories. Even if you don't add them by default (some people may not want them), providing a list of repos for the major distributions would let the VMs, or their appropriate cloud images, be used with less effort.
- I use CentOS 8 atm, and I'm aware of the debate around the switch to CentOS 8 Stream. I've now moved to Stream; others have chosen differently. I'm surprised that you don't provide a recent CentOS 8 ISO or runnable VM image (at the least), any CentOS 8 Stream ones, or even CentOS 9 Stream, which has been out for some time now. Again, making it easy to add a URL so they can be found would be really good, but I don't see clear documentation of how to do this.
- The VM and container (LXC) images work well, but these days the good practice is to keep as much state as possible out of the container image and link it in via mounts (overlay, or simply extra mounted disks) on external storage. Technically you can do that, but I don't see an easy way to upgrade an LXC container image when updates happen, which leaves me with the same inconveniences as managing a VM, even if it's lightweight. A good way to link the user/app data into the image would allow the base image to be updated more easily, and that would give something closer to a Docker experience.
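Bind mount points are the mechanism I've found for keeping state out of the image, and they cover most of this. A sketch of the relevant lines in an LXC config, where the vmid, dataset, and paths are examples from my setup:

```
# /etc/pve/lxc/101.conf -- vmid, dataset and paths are examples
# rootfs holds the (replaceable) OS image; mp0 bind-mounts app data
# from the host, so it survives swapping or upgrading the root image.
arch: amd64
hostname: web
rootfs: local-zfs:subvol-101-disk-0,size=8G
mp0: /tank/appdata/web,mp=/srv/www
```

The same mount can be added with `pct set 101 -mp0 /tank/appdata/web,mp=/srv/www`; what's missing is a supported way to then swap the rootfs for a newer base image.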
- storage is confusing. I installed with local ZFS (from a single HD), which seems to work fine, and I tried to attach my external zpool to the Proxmox server. That works, but I see there are multiple storage types, and ZFS seems not to support all types of usage (I really wanted all "user data" on my external zpool, with an absolute minimum on the rpool that holds the OS). Then I read that you can get fuller storage support if you export the zpool via NFS and mount it back onto the Proxmox box (i.e. self-mount). Very confusing and unclear.
- I wasn't able to download container images onto the external ZFS pool and use them from there. Again, the reasoning was not clear.
- if you can, please simplify the storage layer as seen by the user as much as possible, so the container/VM images can be downloaded easily and moved to the right place to be used. I couldn't use Ceph on a single box; it may be the recommended clustered storage solution, but it does not seem appropriate in my case.
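As far as I can tell, the content-type restriction on the "zfspool" storage type is what I was hitting: it only holds disk images and container root volumes, not ISOs or templates. A sketch of how I believe /etc/pve/storage.cfg wants this split, with pool and dataset names being mine:

```
# /etc/pve/storage.cfg -- pool/dataset names are examples
# "zfspool" storage can only hold VM disks and container root
# volumes, which is (I believe) why template downloads to it fail.
zfspool: tank
        pool tank/pve
        content images,rootdir

# ISOs, LXC templates and backups need a file-level storage, e.g. a
# "dir" storage pointed at a plain dataset on the same pool.
dir: tank-files
        path /tank/pve-files
        content iso,vztmpl,backup
```

If something like this distinction were surfaced clearly in the GUI at download time, it would have saved me a lot of head-scratching.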
Have others had similar experiences but found that proxmox did solve their problems? Thanks for sharing your thoughts.
Simon