Hi,
I'm running a cluster of 8 PVE 5.4 nodes with Ceph RBD (HCI) as storage. For several reasons I would prefer to switch to NFS (NetApp All Flash) for shared storage, after upgrading PVE to the latest release.
My idea is to:
- configure NetApp and add the NFS storage to all PVE nodes
-...
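For the first step, a minimal sketch of how I'd add the storage (run once, since storage.cfg is cluster-wide; the storage ID, server address and export path below are placeholders):

# add the NetApp NFS export as shared storage for the whole cluster
pvesm add nfs netapp-nfs \
    --server 10.0.0.10 \
    --export /vol/pve \
    --content images,rootdir \
    --options vers=4.1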
Hi,
I'm running PVE 5.4 with Ceph (Hyper-Converged Ceph Cluster) for VM storage. I usually create a new VM by cloning an existing one.
Example: centos7-vm cloned to web-vm and mail-vm.
Can I safely delete centos7-vm, or will web-vm and mail-vm run into problems?
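In case it matters, I make the clones like this (VMIDs 100/101/102 are just examples); I believe --full gives an independent copy of the disks, while a linked clone would still depend on the source:

# full clones: independent copies, source no longer referenced
qm clone 100 101 --name web-vm --full
qm clone 100 102 --name mail-vm --full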
Thanks
Hi,
I'm running PVE 5.4 with Ceph. I noticed that a VM can be cloned even while it is powered on, and it seems to work fine. But is it safe to clone a running VM, or can the disk end up corrupted?
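What I'd like to do, as a sketch (VMID and names are just examples), is snapshot first and clone from the snapshot, hoping that gives a consistent state:

# take a snapshot of the running VM, then full-clone from that snapshot
qm snapshot 100 preclone
qm clone 100 103 --name web-vm2 --full --snapname preclone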
Thanks
Hi,
I'm running a PVE cluster with 8 nodes (Supermicro with Intel Xeon), each running PVE 5.4 and Ceph in hyper-converged mode. Ceph is configured with 3 monitors, 3x replication, and 6 OSDs per node (48 in total) with 2048 PGs. Each OSD is an Intel SSD D3-S4510 960GB.
Now it's time to do some...
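As a quick sanity check on the PG count (just the usual rule of thumb, nothing more):

# PGs per OSD = pg_num x replica / OSDs
# 2048 x 3 / 48 = 128  -> within the commonly suggested 100-200 per OSD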
Stefano, I bought yesterday :)
2 SuperMicro TwinPro (2029TP-HC0R), 4 nodes each; each node with:
2 CPU Xeon 4114
192 GB RAM
4 x 10Gbit SFP+ ports
2 x 128GB SSD SATADOM for OS
6 Intel D3-S4510 960GB for Ceph
Why do you also have 4 x 1Gbit ports? My intention is to use 2x10Gbit for Ceph and 2x10Gbit for Internet...
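As a sketch of what I have in mind for /etc/network/interfaces (interface names and addresses are placeholders, LACP assumed on the switch side):

# 2 x 10Gbit LACP bond for Ceph
auto bond0
iface bond0 inet static
    address 10.10.10.11
    netmask 255.255.255.0
    slaves enp1s0f0 enp1s0f1
    bond_miimon 100
    bond_mode 802.3ad

# 2 x 10Gbit LACP bond for VM/Internet traffic, bridged for the guests
auto bond1
iface bond1 inet manual
    slaves enp1s0f2 enp1s0f3
    bond_miimon 100
    bond_mode 802.3ad

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.11
    netmask 255.255.255.0
    gateway 192.168.1.1
    bridge_ports bond1
    bridge_stp off
    bridge_fd 0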
Hi Stefano,
I have now bought the same configuration (Supermicro Twin 2029TP-HC0R) with the intention of running a Proxmox+Ceph cluster. Have you already installed on the new hardware, and does everything work fine? Did you ask Supermicro to set the LSI 3008 to IT Mode?
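If useful, I was planning to verify the firmware mode from the installed system with Broadcom's sas3flash utility (assuming it is available on the node):

# lists adapter and firmware info; IT vs IR shows in the firmware product ID
sas3flash -list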
Thanks
Hi alexskysilk,
thanks for your suggestions, which are probably right. SSDs are half of my budget.
I've updated my configuration as follows for each of the 8 nodes:
CPU 2 x Intel Xeon 4114 10C/20T
RAM 12 x 16GB (192GB)
6 x SSD Intel D3-S4510 960GB
4 x 10Gbit SFP+
2 x 128GB SATADOM for Proxmox
and create...
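For the OSDs I assume one per data SSD, created with something like this on each node (device names are placeholders):

# repeat for each of the 6 Ceph SSDs
pveceph createosd /dev/sdb
pveceph createosd /dev/sdc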
Hi,
based on my budget I have updated my configuration to 6 x 1.92TB Intel D3-S4510 SSDs in each of the 8 nodes, for a total of 48 SSDs, and 192GB of RAM per node.
My question is: how much usable space can I count on for a safe environment with 3x replication?
For RAM, can I consider 64GB reserved for...
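For reference, my own rough math so far (assuming Ceph's default ~85% nearfull ratio and one node of headroom for recovery):

# raw capacity:           48 x 1.92 TB   = 92.16 TB
# after 3x replication:   92.16 / 3      = 30.72 TB
# below 85% nearfull:     30.72 x 0.85   = ~26.1 TB
# minus 1/8 node headroom: 26.1 x 7/8    = ~22.8 TB realistically usable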
Thanks to all for the information. Our VMs are small, but after your example I understand that 480GB SSDs are too small, so we are evaluating at least 960GB SSDs.
Currently we have VMs (40 for now, but the number will grow) hosted by a hosting provider. We need to migrate from a public cloud to a private infrastructure, so we are evaluating PVE. We are not interested in dedicated external storage via iSCSI or NFS, so we are looking at Ceph.
Each PVE node will be...
Thanks Tim,
I'm evaluating 4 or 8 nodes because the hardware will be SuperMicro Twin, where each 2U case holds 4 servers.
With 8 nodes with 6 SSDs each, we will have a total of 48 SSDs (so we'll be able to set up 48 OSDs). Is this a good configuration for PVE 5.3 in HCI mode with Ceph?
Hi,
how many nodes do you recommend for a Proxmox cluster with Ceph (HCI mode)? We would like to start with 4 nodes with 6 SSDs each, so we'll have 6 OSDs per node and the PVE OS on a SATA DOM.
The other option is 8 nodes with the same 6 SSDs each.
Is it fine to start with 4 nodes? Somebody...
Hello,
I'm interested in setting up a cluster of 4 nodes with Proxmox VE 5.3 and Ceph on the local nodes (HCI-style).
I'm not sure which hardware to buy; my idea is to buy 4 SuperMicro servers with Intel Xeon, 10Gbit Ethernet, and 6 Intel D3-S4610 SSDs each, for a total of 24 SSDs, like the SuperMicro Twin...
Hi,
I'm running the latest version of Proxmox Mail Gateway. Everything works fine, but sometimes the ctasd.bin process eats all CPU resources and the load keeps growing (10, 20, 30, ...) until I restart the "commtouch-ctasd" daemon (via the console). This is not related to email traffic volume; the problem happens...
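In case it helps others, this is what I run from the console when it happens (the unit name is how the daemon appears on my system):

# restart the Cyren/Commtouch classification daemon
systemctl restart commtouch-ctasd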