Thanks for the feedback.
I couldn't see how to specify which pools the storage should use in the GUI. The default seems to use both the HDD and SSD pools.
I found this https://blog.svedr.in/posts/setting-up-ceph-fs-on-a-proxmox-cluster/
and it looks close to what I'm trying to do - use the HDD for...
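From the CLI this can be done with a device-class CRUSH rule, which is the part the GUI doesn't really expose. A rough sketch, assuming the OSDs already carry the "hdd" and "ssd" device classes (the pool and storage names here are just examples, not from the thread):

```shell
# Create a CRUSH rule that only places replicas on HDD-class OSDs
ceph osd crush rule create-replicated replicated_hdd default host hdd

# Create a pool and pin it to that rule so it never touches the SSDs
ceph osd pool create fileserver_hdd 128
ceph osd pool set fileserver_hdd crush_rule replicated_hdd

# Register it as a Proxmox RBD storage so it shows up in the GUI
pvesm add rbd fileserver-hdd --pool fileserver_hdd --content images,rootdir
```

The same idea with `class ssd` gives you a separate SSD-only pool, so container disks can be placed on one or the other explicitly.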
I'm looking for some suggestions for running our LAN file server for a small office. I have a three-node Proxmox cluster running Ceph, with an SSD pool and an HDD pool.
As a test, I created a Debian LXC on the HDD pool, gave it a large disk size and set up a Samba server. It was easy enough and...
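For reference, the Samba side of that test can be as small as one share stanza in `/etc/samba/smb.conf` inside the container. A minimal sketch (the share name, path, and group are placeholders, not taken from the post):

```shell
[office]
   path = /srv/share
   browseable = yes
   read only = no
   valid users = @office
```

After adding the stanza, `testparm` checks the syntax and `systemctl restart smbd` picks it up.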
Thanks for the tips. Here's what I got.
If it helps, I appended the CA bundle to the certificate file to create pveproxy-ssl.pem.
● pveproxy.service - PVE API Proxy Server
Loaded: loaded (/lib/systemd/system/pveproxy.service; enabled; vendor preset: enabled)
Active: active (running)...
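For anyone following along, appending the CA bundle looks roughly like this (filenames are examples; the target paths are the standard per-node cert locations Proxmox checks):

```shell
# Concatenate the issued wildcard cert and the provider's CA chain
cat wildcard.crt ca-bundle.crt > /etc/pve/local/pveproxy-ssl.pem
cp wildcard.key /etc/pve/local/pveproxy-ssl.key

# pveproxy only reads the files at startup
systemctl restart pveproxy
```

The leaf certificate must come first in the combined file, followed by the intermediate(s).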
I've installed a valid wildcard certificate in pveproxy-ssl.pem and pveproxy-ssl.key. I don't get any errors restarting pveproxy but
curl -vv https://localhost:8006
outputs:
* Expire in 0 ms for 1 (transfer 0x5574f1894f50)
......
* Expire in 0 ms for 1 (transfer 0x5574f1894f50)
* Trying...
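When curl hangs like that, it can help to check what certificate chain pveproxy is actually presenting, independently of curl. One way to do that (the hostname is a placeholder for your wildcard domain):

```shell
# Show subject, issuer and validity of the cert served on 8006
openssl s_client -connect localhost:8006 -servername host.your.domain </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates
```

If this prints the old self-signed cert, pveproxy is not reading the new files; if it prints nothing, the listener itself isn't answering.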
I'd like to free up some SATA ports for Ceph storage, so I'm thinking of moving my installation from a SATA RAID 1 to an NVMe "disk". Is there a preferred method?
Thanks in advance for any guidance.
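One common approach, sketched below, is a raw block-level clone from rescue media; the device names are examples and it assumes the NVMe is at least as large as the array and that nothing on it is mounted during the copy. A fresh install plus restoring a backup of `/etc/pve` is the other usual route.

```shell
# From a live/rescue environment: clone the assembled RAID 1 onto the NVMe
dd if=/dev/md0 of=/dev/nvme0n1 bs=4M status=progress conv=fsync

# Afterwards: reinstall the bootloader on the NVMe and check that
# /etc/fstab refers to filesystems by UUID rather than old device names
```

Either way, take a backup first; a clone of a larger array onto a smaller NVMe will silently truncate.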
I'm putting together a cluster and confused about fencing. It looks like the recommended way is to use a watchdog timer to reboot a host that is hung. In my previous experience with clusters, the remaining hosts would "STONITH" the non-responsive host. Is there a reason that this method isn't...
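As I understand it, Proxmox HA relies on self-fencing: a node that loses quorum stops updating its watchdog and reboots itself, rather than being shot by its peers. You can see which watchdog the HA stack is using on a node like this:

```shell
# softdog is the default unless a hardware watchdog module is configured here
cat /etc/default/pve-ha-manager

# The multiplexer that feeds the watchdog on behalf of the HA services
systemctl status watchdog-mux.service
```

So the watchdog approach plays the same role as STONITH, just driven from the fenced node's side instead of the survivors'.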