"this is not ideal, but still good enough"

But please be aware that only the overall throughput will increase with the number of users accessing the data. Each individual user will still see a data rate roughly equal to single-thread performance. I only wanted to make this very clear.
"i wont think so that ill do it. not looking to make any custom changes."

I don't know of an option that lets you partition the disk during the initial PVE setup. You would need to install PVE on the NVMe drive, boot a rescue system, shrink the filesystem and the LVM volumes, and then use the remaining space for the other purposes. If you know how to resize LVM and ext4, this should not be a big issue.
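To illustrate, this is roughly what the resize step could look like from a rescue system. It is a minimal sketch assuming the default PVE layout with an ext4 root on the "pve" volume group; the LV names, target size, and the optional removal of the local-lvm thin pool are examples, not a recipe for your exact setup:

```
# Run from a rescue/live system, not from the running PVE install.
e2fsck -f /dev/pve/root                    # filesystem must be clean before shrinking
lvreduce -L 64G --resizefs /dev/pve/root   # shrink the root filesystem and LV together
# Optionally remove the local-lvm thin pool if you don't want it on this drive:
# lvremove /dev/pve/data
# The freed extents in VG "pve" (or raw space on the NVMe) can then be used
# for OSD DB/WAL partitions or other purposes.
```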
To add to @Ingo S post: don't install Proxmox VE on the NVMe when it is also used for the OSD DB+WAL on the same drive. These are separate concerns and are hard to control together. If there are enough IOPS left after hooking all the OSDs up to the NVMe, you can move the MON DB (/var/lib/ceph/ceph-mon) to a partition on the NVMe to get better latency.
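Moving the MON DB is essentially a stop/copy/remount operation. A minimal sketch, assuming the new partition is /dev/nvme0n1p4 and the node is called node1 (both hypothetical); double-check the actual MON data path on your system (usually /var/lib/ceph/mon/ceph-<nodename>):

```
# Hypothetical example: adjust device, node name and paths to your setup.
systemctl stop ceph-mon@node1
mkfs.xfs /dev/nvme0n1p4                              # fresh filesystem on the NVMe partition
mount /dev/nvme0n1p4 /mnt
rsync -a /var/lib/ceph/mon/ceph-node1/ /mnt/         # copy the existing MON DB
umount /mnt
mount /dev/nvme0n1p4 /var/lib/ceph/mon/ceph-node1    # remount the copy over the original path
chown -R ceph:ceph /var/lib/ceph/mon/ceph-node1
systemctl start ceph-mon@node1
# Add the mount to /etc/fstab so it persists across reboots.
```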
What is the optimal install/configuration?
We are planning to get 2U servers, each with 8×3.5" HDDs and 2 U.2 NVMe drives (we might add two more later due to the high cost).
The servers can take up to 4 PCIe cards:
- 2×40 GbE network card
- Optane 200-300 GB for DB+WAL? Do I really need it, given that the SSD pool is quite fast (high-end U.2 NVMe) and the HDD pool barely gets any write access? But if I do choose to write to it, will the WAL make a large difference? (Should writes then be roughly as fast as the WAL drive?) See the sketch after this list.
- spare
- spare
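For reference on the DB+WAL question above, this is roughly how an HDD OSD with its DB/WAL on a faster device would be created on PVE. It is only a sketch with hypothetical device names; the exact option names should be checked against `pveceph osd create --help` (or `ceph-volume lvm create --help`):

```
# Hypothetical devices: /dev/sdb = HDD for data, /dev/nvme0n1 = Optane/NVMe for DB+WAL.
# PVE helper (option names may differ slightly between versions):
pveceph osd create /dev/sdb --db_dev /dev/nvme0n1 --wal_dev /dev/nvme0n1
# Roughly equivalent plain Ceph tooling:
# ceph-volume lvm create --data /dev/sdb --block.db /dev/nvme0n1p2 --block.wal /dev/nvme0n1p3
```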
What do you think is better:
- 2× SEDNA DDR4 Slot Mounting Adapter for M2 SSD, in RAID (SATA SSDs), for the Proxmox OS
- a 4x NVMe PCIe riser with 2 drives on it (2 slots spare), for the Proxmox OS
What is the suggested SSD size for Proxmox alone?