@Melanxolik, I encountered a similar problem two days ago.
In the end I did the same steps Q-wulf mentioned; it is a pain when you have a lot of large vDisks.
Sorry about this; unfortunately, at the moment I can't bring down the in-production SSHDs serving the Ceph pool. But I just updated the benchmark for a single SSHD; the results seem odd and are much slower compared to those with hardware RAID. https://forum.proxmox.com/threads/newbie-need-your-input.24176/...
Thanks for sharing.
I am trying to select which is best for an application server (web apps). In this case, what are the best criteria for choosing between KVM and LXC in Proxmox, if not using the above benchmark as a guideline?
Maybe this is not the right place to ask, but hopefully members here can point me in the right direction to find out these differences.
I created two guests (one KVM VM and one LXC container) to run benchmarks.
It seems LXC disk I/O is much slower than KVM with virtio. May I know what I should look into to get both the same...
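For what it's worth, one way to make the comparison fair is to run the exact same fio job inside both guests against the same underlying storage. This is only a sketch; the file path, size and runtime are assumptions to adjust to your setup:

    # run the identical job inside the KVM guest and inside the LXC container
    fio --name=guest-randwrite --filename=/root/fio.test --size=2G \
        --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
        --iodepth=32 --numjobs=1 --runtime=60 --time_based --group_reporting

It is also worth recording the KVM disk cache mode (cache=none vs writeback) and the storage backend used for the LXC rootfs, since differences there can account for a large part of the gap.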
Hi,
I am trying to understand the usable space shown in Proxmox under Ceph storage. I tried to google it but had no luck finding a direct answer. I would appreciate it if a senior member here could guide me on how to calculate usable space.
I have referred to...
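My rough understanding of the calculation so far, using my own setup as the example (please correct me if this is wrong):

    usable ≈ raw capacity / pool replica size
    raw:      4 nodes x 4 x 1 TB SSHD = 16 TB
    size=3:   16 TB / 3 ≈ 5.3 TB usable
    headroom: staying under the default ~85% near-full ratio ≈ 4.5 TB in practice

The journal SSDs don't add to usable capacity, and drives report slightly less than their nominal size once formatted, so the figure Proxmox shows will be a bit lower again.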
Hi Q-wulf, thanks for your detailed breakdown and explanation, much appreciated!
Let me take down some of the nodes to do a test run with your recommendation and come back with some results.
Hi guys, thanks for your great explanations and input. This definitely helps.
To avoid data loss, consumer SSDs with their short service life might not be an option now.
But then again, how difficult is it to replace or rebuild a faulty SSD used as a journal?
Replication will be on a host basis instead of per OSD...
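For my own reference, my rough understanding of replacing a failed journal SSD on a filestore OSD looks like the sketch below (assuming osd.0, and that the new SSD gets a journal partition prepared the same way as the old one; please correct me if the procedure is different):

    # stop the OSD (depending on the release this may be "service ceph stop osd.0")
    systemctl stop ceph-osd@0
    # flush the journal back to the data disk - only possible while the old SSD is still readable;
    # if the journal SSD died outright, the usual answer is to remove and re-create the affected OSDs
    ceph-osd -i 0 --flush-journal
    # replace the SSD, create the new journal partition, repoint the journal symlink, then:
    ceph-osd -i 0 --mkjournal
    systemctl start ceph-osd@0

And since the default CRUSH rule replicates across hosts, my understanding is that the other copies of those placement groups live on other nodes, so a dead journal SSD on one node should mean recovery/backfill rather than data loss, as long as the pool size is greater than 1.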
Getting more confused about Ceph storage now.
Some references mention that Ceph is shaping up to be the best storage architecture, but it seems it is susceptible to data-integrity issues as well.
Loss of a journal may lead to data loss, so what are the advantages of Ceph distributed block storage compared...
Hi,
Thanks for your input.
The whole setup is very similar to the Proxmox Ceph test, but I'm using 4 x 1TB SSHDs (no hardware RAID, just individual disks) for Ceph storage on each of 4 nodes, and SSDs basically for the journal and the OS.
Based on your description, do you recommend, instead of a journal on SSD...
What type of SSD are we talking about?
>>> Samsung EVO 850
I am assuming you will be using 1 SSD for OS and 1 SSD for journal?
>>> Yes, correct.
1.) At how many "--numjobs" does your SSD max out?
--numjobs=1 bw=25884KB/s, iops=6471
--numjobs=2 bw=42705KB/s, iops=10676
--numjobs=3...
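For anyone repeating this, the bandwidth/IOPS ratio above works out to 4k writes; a typical journal-style sync-write sweep looks like the sketch below (assumptions: /dev/sdX stands in for the journal SSD, the device is not otherwise in use, and the test overwrites data on it):

    for n in 1 2 3 4 5; do
        fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
            --numjobs=$n --iodepth=1 --runtime=60 --time_based \
            --group_reporting --name=journal-test-$n
    done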
Thanks for your reply.
It is a DGS-1210 (48 ports); backplane speed should be 104 Gbps.
The reason for doing so is that we don't have 10GbE interfaces on either the switch or the servers. To my knowledge, network bonding does help with network throughput, but maybe I am wrong here.
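For completeness, a typical Proxmox bond + bridge setup looks roughly like the sketch below in /etc/network/interfaces (interface names and addresses are placeholders; the bond mode is an assumption, and 802.3ad would need LACP configured on the DGS-1210):

    auto bond0
    iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode balance-rr

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0

One caveat: with 802.3ad/LACP and most other modes, a single TCP stream is still limited to one link's speed, so bonding mostly helps aggregate throughput across parallel connections; balance-rr can push a single stream further but may cause packet reordering.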
Hi all,
I am new to Proxmox and really impressed by PVE 4.0 HA and live migration of KVM VMs after testing with 3 nodes.
Unfortunately my management is not going to put in additional capital for a full set of new hardware, and currently we have the following hardware ready:
2 x Dell C6100 4 nodes...