Search results

  1. J

    RAID suggestions

    I use both BTRFS & ZFS for RAID-1 for the OS boot drives. For the VMs, I use ZFS (local) & Ceph (distributed).
  2. J

    hyperconverged pve: To Upgrade or simply stay with a running system

    If you have the time and storage to do full backups, do a clean install. Fewer issues that way. You can back up the VMs to Proxmox Backup Server. Proxmox Ceph allows you to upgrade one node at a time; just follow those instructions carefully. (A minimal upgrade sketch follows after these results.)
  3. J

    Moving from RAID to Ceph in 3 node cluster - will this work?

    It's considered best practice to put Corosync, Ceph Public, and Ceph Private traffic on separate networking infrastructure. I didn't set up my 5-node cluster this way, but I have a primary 10GbE link in an active/standby fault-tolerant setup: a second NIC on standby steps in if the primary fails. It's...
  4. J

    Advice needed for a 3-node proxmox HA cluster

    If you don't really want to deal with a switch, I suggest a 4 x 10GbE network card. I run a 3-node Ceph cluster on 12-year-old servers using a full-mesh 4 x 1GbE broadcast network: https://pve.proxmox.com/wiki/Full_Mesh_Network_for_Ceph_Server#Broadcast_Setup (a broadcast-bond sketch follows after these results). I use the last two ports on the...
  5. J

    Ceph Slow Ops

    I run a 3-node and a 5-node all-SAS-HDD Ceph cluster. The 3-node is on 12-year-old servers using a full-mesh 4 x 1GbE broadcast network. The 5-node Ceph cluster is on Dell 12th-gen servers using 2 x 10GbE networking to ToR switches. Not considered best practice, but the Corosync, Ceph Public &...
  6. J

    Best practices for Backup on SATA HDD

    I run PBS on a Dell R200 with 8GB RAM, using SAS HDDs running ZFS. Working fine. It's also the POM (Proxmox Offline Mirror) server.
  7. J

    Moving Windows Server install to Proxmox VE?

    Do you have a Linux SME (subject matter expert) on call? If not, you may want to contract for 3rd-party Proxmox support or stick with what you know, which is Windows. If you decide to continue with Proxmox, I highly recommend homogeneous hardware, meaning the same CPU, RAM, and storage. As for the...
  8. J

    Recommendations on Proxmox install, ZFS/mdadm/something else

    It's considered best practice to separate storage into OS and data (VMs, etc.) filesystems. I don't use SSDs, but if I did I would make sure they were enterprise quality to handle the intensive writes. In my case, the clusters are enterprise servers using SAS HDDs. It's also considered best practice to...
  9. J

    Dell R620 Boot Failed: Linux Boot Manager

    I tried installing PVE using UEFI once. I think I had the same issues you had, so I just set it up to boot in BIOS mode. Also, I'm using an H310 flashed to IT mode with https://fohdeesha.com/docs/perc.html
  10. J

    ceph: 5 nodes with 16 drives vs 10 nodes with 8 drives

    Best practice for Ceph is lots of nodes with fewer OSDs each, rather than fewer nodes with more OSDs each. This spreads out the I/O load. To avoid split-brain issues and maintain quorum, you'll want an odd number of nodes.
  11. J

    Hardware recommendation for painless update?

    Best bang for the currency will be used enterprise servers, specifically 13th-gen Dells. They have a built-in drive controller that can be used in either IR or IT mode (you need IT mode for ZFS/Ceph) and a built-in rNDC (rack network daughter card) upgradable to 10GbE networking (fiber or copper or both)...
  12. J

    Ceph performance issue

    I use the following optimizations in a 5-node 12th-gen Dell cluster using SAS drives (a shell sketch for these settings follows after the results):
    - Set write cache enable (WCE) to 1 on the SAS drives
    - Set VM cache to none
    - Set VM to use the VirtIO SCSI single controller and enable the IO thread and discard options
    - Set VM CPU type to 'host'
    - Set VM CPU...
  13. J

    PBS with HDDs

    Granted, flash is the way to go, but I do back up two Ceph clusters with a Dell R200 and a Dell R620 using SAS drives. These Dells are decommissioned but still functional, so they make great PBS servers. Not the fastest, but they back up/restore just fine. PBS benchmarks...
  14. J

    ceph production with 3 nodes and SAS HDD disks?

    It's true that you need a minimum of 3 nodes for Ceph, but it's highly recommended to get more nodes. That being said, I do run a 3-node full-mesh broadcast bonded 1GbE Ceph Quincy cluster on 14-year-old servers using 8x SAS drives per node (2 of them are used as OS boot drives with ZFS...
  15. J

    Shared SAS Storage and choice fs

    You may want to read this: https://forum.proxmox.com/threads/2-node-cluster-w-shared-disc.109269 If you really want 2-node "shared" storage, you can use ZFS replication and a 3rd non-cluster RPi/VM/PC as a QDevice for quorum (a QDevice sketch follows after these results). I run a full-mesh broadcast bonded 1GbE 3-node Ceph cluster on 14-year...
  16. J

    Help - CPU choice for proxmox

    It's true that Xeon E5s are EOL, but functionally there is nothing wrong with them. I use E5v4-L Xeons in production on both Proxmox and VMware.
  17. J

    Local disk vs CEPH for clustered applications?

    It's true that local storage is faster, but I use Ceph straight up for VMs. That includes apps that do their own replication. No issues.
  18. J

    Proxmox Offline Mirror released!

    I just created a Proxmox Offline Mirror instance. I've noticed the setup wizard for creating a Ceph mirror does not include an option to mirror the Quincy release of Ceph. How can I create it manually?
  19. J

    Ceph disk planning OSD journal

    You may want to use that SSD for the Ceph DB as well. It will help with writes to the SAS drives. The SSD is enterprise class, correct? Ceph eats consumer SSDs like nobody's business. Here are the Ceph VM optimizations I use (sketches follow after these results):
    - Set write cache enable (WCE) to 1 on SAS drives
    - Set VM cache to none...
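
Example sketches for selected results

For result 2 (hyperconverged upgrade): a minimal sketch, assuming a hyperconverged PVE/Ceph node, of backing up a VM to Proxmox Backup Server and then walking one node at a time through an upgrade. The VM ID (100) and storage name (pbs) are placeholders; the release-specific upgrade instructions remain the authoritative steps.

    # back up a VM to a PBS-backed storage before touching the node (ID and storage name are examples)
    vzdump 100 --storage pbs --mode snapshot

    # before upgrading/rebooting a hyperconverged node, keep Ceph from rebalancing
    ceph osd set noout

    # upgrade and reboot this node only
    apt update && apt full-upgrade

    # once the node and its OSDs are back and healthy, re-enable rebalancing and check status
    ceph osd unset noout
    ceph -s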
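For result 4 (full-mesh broadcast network): a sketch of one node's /etc/network/interfaces stanza for the broadcast setup on the linked wiki page. The interface names (eno3, eno4) and the address are placeholders; each node gets its own address in the same subnet, and the wiki page is the authoritative reference.

    # /etc/network/interfaces (fragment) - broadcast bond over the two mesh-facing ports
    auto bond0
    iface bond0 inet static
        address 10.15.15.50/24
        bond-slaves eno3 eno4
        bond-mode broadcast
        bond-miimon 100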
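For result 12 (Ceph performance issue): a sketch of applying those optimizations from the shell. The drive path, VM ID, storage, and disk volume are examples; check the sdparm output for your drives before saving the setting.

    # enable the drive write cache (WCE=1) on a SAS drive; --save keeps it across power cycles
    sdparm --set=WCE=1 --save /dev/sdX

    # example VM 100: VirtIO SCSI single controller, IO thread + discard + cache=none on the disk, CPU type 'host'
    qm set 100 --scsihw virtio-scsi-single
    qm set 100 --scsi0 rbd-pool:vm-100-disk-0,cache=none,iothread=1,discard=on
    qm set 100 --cpu host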
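For result 15 (2-node "shared" storage): a sketch of ZFS replication plus an external QDevice for quorum. The QDevice address, VM ID, replication job ID, target node, and schedule are all placeholders.

    # on the third machine (RPi/VM/PC) that only supplies the extra quorum vote
    apt install corosync-qnetd

    # on both cluster nodes
    apt install corosync-qdevice

    # from one cluster node, register the external vote
    pvecm qdevice setup 192.0.2.10

    # replicate VM 100's ZFS volumes to the other node every 15 minutes
    pvesr create-local-job 100-0 nodeB --schedule "*/15"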
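For result 19 (OSD journal/DB planning): a sketch of creating an OSD with its DB on the SSD. Device names and the DB size (in GiB) are placeholders.

    # OSD data on a SAS drive, RocksDB/WAL on the enterprise SSD; repeat per SAS drive
    pveceph osd create /dev/sdX --db_dev /dev/nvme0n1 --db_size 60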
