Search results

  1. Proxmox 5 and switch from Ceph RBD to NFS (and afterwards upgrade PVE)

    Hi, I'm running a cluster with 8 nodes of PVE 5.4 with Ceph RBD (HCI) as storage. For several reasons I would prefer to switch to NFS (NetApp All Flash) for shared storage, and afterwards upgrade PVE to the latest release. My idea is to: - configure the NetApp and add the NFS storage to all PVE nodes -...
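A rough sketch of such a switch, assuming a hypothetical storage ID (`netapp-nfs`), server address, export path, and VM/disk IDs; the `pvesm` and `qm move_disk` commands exist in PVE 5.x, but every concrete value below is a placeholder:

```shell
# Add the NFS share once on any node; /etc/pve is cluster-wide,
# so the storage definition propagates to all nodes.
pvesm add nfs netapp-nfs --server 192.0.2.10 --export /vol/pve --content images,rootdir

# Then move each VM disk online from Ceph RBD to the NFS storage
# (VM ID 100 and disk scsi0 are placeholders).
qm move_disk 100 scsi0 netapp-nfs --delete
```

Upgrading PVE only after the VMs are off Ceph keeps the Ceph upgrade out of the critical path, which matches the order described in the post.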
  2. Cloned VM, can I delete the source VM?

    Hi, I'm running PVE 5.4 with Ceph (Hyper-Converged Ceph Cluster) for VM storage. I usually create a new VM by cloning another VM. Example: centos7-vm cloned to web-vm and mail-vm. Can I safely delete centos7-vm, or will web-vm and mail-vm run into trouble? Thanks
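The answer hinges on the clone type: a full clone is an independent copy, so the source can be deleted; a linked clone shares the base image and depends on it. A sketch with placeholder VM IDs and names:

```shell
# Full clone: independent copy of all disks; centos7-vm could be
# deleted afterwards without affecting web-vm.
qm clone 100 101 --name web-vm --full

# Linked clone (no --full, source must be a template): shares the base
# image, and PVE refuses to destroy the template while clones exist.
qm clone 100 102 --name mail-vm
```

On Ceph RBD both variants are supported; linked clones are implemented as RBD snapshots/clones of the template's disks.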
  3. What is the right procedure for doing maintenance and upgrade to PVE and Ceph?

    Thanks, so running "# ceph osd set noout" before starting the upgrade is sufficient, followed by "# ceph osd unset noout" after the reboot.
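The noout flag stops Ceph from marking the rebooting node's OSDs out and triggering a rebalance. A minimal per-node maintenance loop, using the commands quoted in the post plus a health check between nodes:

```shell
ceph osd set noout        # suppress rebalancing while OSDs go down

# ... upgrade packages and reboot one node ...

ceph -s                   # wait until all OSDs are back up and PGs active
                          # before moving on to the next node

ceph osd unset noout      # after the last node: re-enable normal recovery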
  4. Clone a VM while it is powered on

    Hi, I'm running PVE 5.4 with Ceph. I noticed that we can clone a VM even while it is powered on, and it works fine. But is it safe to clone a VM while it is running, or can the disk become corrupted? Thanks
  5. What is the right procedure for doing maintenance and upgrade to PVE and Ceph?

    Hi, I'm running a PVE cluster with 8 nodes (Supermicro with Intel Xeon), each running PVE 5.4 and Ceph in hyper-converged mode. Ceph is configured with 3 monitors, replica x3, and 6 OSDs per node for a total of 48 OSDs and 2048 PGs. Each OSD is an Intel SSD D3-S4510 960GB. Now it's time to do some...
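The 2048 PG figure matches the classic rule of thumb from the Ceph documentation: target roughly 100 PGs per OSD, divide by the replica count, and round up to the next power of two. A small check of that arithmetic:

```python
def suggested_pg_count(num_osds: int, replica: int, target_per_osd: int = 100) -> int:
    """Ceph rule of thumb: (OSDs * target_per_osd) / replica,
    rounded up to the next power of two."""
    raw = num_osds * target_per_osd / replica   # 48 * 100 / 3 = 1600
    power = 1
    while power < raw:
        power *= 2
    return power

print(suggested_pg_count(48, 3))  # -> 2048
```

This is per pool; with several large pools the per-OSD target is split between them.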
  6. New 3-node cluster suggestion

    Stefano, I bought yesterday :) 2 SuperMicro TwinPro (2029TP-HC0R), with 4 nodes, each with: 2 Intel Xeon 4114 CPUs, 192 GB RAM, 4 x 10Gbit SFP+ ports, 2 x 128GB SSD SATADOM for the OS, 6 x Intel D3-S4510 960GB for Ceph. Why do you also have 4 x 1Gbit ports? My intention is to use 2x10Gbit for Ceph and 2x10Gbit for Internet...
  7. New 3-node cluster suggestion

    Hi Stefano, I have now bought the same configuration (Supermicro Twin 2029TP-HC0R) with the intention of running a Proxmox+Ceph cluster. Have you already installed on the new hardware, and does everything work fine? Did you ask Supermicro to set the LSI 3008 to IT mode? Thanks
  8. Number of nodes recommended for a Proxmox Cluster with Ceph

    Hi alexskysilk, thanks for your suggestions, which are probably right. SSDs are half of my budget. I've updated my configuration as follows for each of the 8 nodes: 2 x Intel Xeon 4114 10C/20T CPUs, 12 x 16GB RAM (192GB), 6 x Intel D3-S4510 960GB SSDs, 4 x 10Gbit SFP+, 2 x 128GB SATADOM for Proxmox, and create...
  9. Number of nodes recommended for a Proxmox Cluster with Ceph

    Hi, based on my budget I have updated my configuration to 6 x 1.92TB Intel D3-S4510 SSDs on each of the 8 nodes, for a total of 48 SSDs and 192GB of RAM per node. My question is: how much usable space can I count on for a safe environment with x3 replica? For RAM, can I consider 64GB of reserved RAM for...
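A back-of-the-envelope answer for the usable-space question: divide raw capacity by the replica count, then stay under Ceph's default nearfull warning threshold (85%). These are rough planning numbers, not exact figures, and they ignore headroom for rebalancing after a node failure:

```python
osds = 48
osd_size_tb = 1.92
replica = 3
nearfull = 0.85          # Ceph's default nearfull_ratio warning threshold

raw_tb = osds * osd_size_tb        # 92.16 TB raw across the cluster
usable_tb = raw_tb / replica       # 30.72 TB logical at replica x3
safe_tb = usable_tb * nearfull     # ~26.1 TB before nearfull warnings

print(round(raw_tb, 2), round(usable_tb, 2), round(safe_tb, 2))
```

To survive losing one of the 8 nodes and still rebalance, it is prudent to plan for even less than the nearfull figure.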
  10. Number of nodes recommended for a Proxmox Cluster with Ceph

    Thanks to all for the information. Our VMs are small, but after your example I understand that 480GB SSDs are too small, so we will evaluate at least 960GB SSDs.
  11. Number of nodes recommended for a Proxmox Cluster with Ceph

    Currently we have VMs (40 for now, but the number will grow) hosted by a hosting provider. We need to migrate from a public cloud to a private infrastructure, so we are evaluating PVE. We are not interested in dedicated external storage via iSCSI or NFS, so we are looking at Ceph. PVE nodes will be...
  12. Number of nodes recommended for a Proxmox Cluster with Ceph

    Thanks Tim, I'm evaluating 4 or 8 nodes because the hardware will be SuperMicro Twin, where each 2U case holds 4 servers. With 8 nodes with 6 SSDs each we will have a total of 48 SSDs (so we will be able to set up 48 OSDs). Is this a good configuration for PVE 5.3 in HCI mode with Ceph?
  13. Number of nodes recommended for a Proxmox Cluster with Ceph

    Hi, how many nodes do you recommend for a Proxmox cluster with Ceph (HCI mode)? We would like to start with 4 nodes with 6 SSD disks each, so we will have 6 OSDs per node and the PVE OS on a SATA DOM. The other option is 8 nodes with the same 6 SSD disks each. Is it fine to start with 4 nodes? Somebody...
  14. What hardware for a 4-node Proxmox cluster in HCI mode with Ceph?

    Hello, I'm interested in setting up a cluster of 4 nodes with Proxmox VE 5.3 and Ceph on local nodes (i.e. HCI). I'm not sure which hardware to buy; my idea is to buy 4 SuperMicro servers with Intel Xeon, 10Gbit Ethernet and 6 Intel D3-S4610 SSD disks each, for a total of 24 SSDs, like the SuperMicro Twin...
  15. ctasd.bin process eats all CPU resources

    Hi, I'm running the latest version of Proxmox Mail Gateway. Everything works fine, but sometimes the ctasd.bin process eats all CPU resources and the load grows (10-20-30 ...) until I restart the "commtouch-ctasd" daemon (via the console). This is not related to email traffic volume; the problem happens...
