Search results

  1. Problem with remove disk on ceph storage

    @Melanxolik, I encountered a similar problem two days ago. In the end I followed the same steps Q-wulf mentioned; it is a pain when you have many large vDisks. (The usual removal steps are sketched after this list.)
  2. Usable space on Ceph Storage

    Sorry about this; unfortunately, at the moment I can't bring down the in-production SSHD servers in the Ceph pool. But I just updated the benchmark for a single SSHD; the results seem weird and are much slower compared to those with hardware RAID. https://forum.proxmox.com/threads/newbie-need-your-input.24176/...
  3. Usable space on Ceph Storage

    Thanks for the clear example and explanation. Definitely another meaningful lesson for me today!
  4. Why LXC Disk I/O Slower than KVM?

    Thanks for sharing. I am trying to select what is best for an application server (web apps). In this case, what are the best criteria for choosing between KVM and LXC in Proxmox, if not using the above benchmark as a guideline?
  5. Usable space on Ceph Storage

    Do you mean on the same storage but with a different pool configuration, e.g. size = 2 for two replicas of each object?
  6. Why LXC Disk I/O Slower than KVM?

    Maybe this is not the right place to ask, but hopefully a member here can point me in the right direction to find the cause of this difference. I created 2 guests (KVM and LXC) to run benchmarks. It seems LXC disk I/O is much slower than KVM with virtio. May I know what I should look into to have both the same...
  7. Usable space on Ceph Storage

    Hi, I am trying to understand the usable space shown in Proxmox under Ceph storage. I tried to Google it but had no luck finding a direct answer. I would appreciate it if a senior here could guide me on how to calculate usable space (a worked sketch follows after this list). I had referred to...
  8. Newbie need your input

    Hi Q-wulf, thanks for your detailed breakdown and explanation, much appreciated! Let me take down some of the nodes to do a test run with your recommendation and come back with some results.
  9. Newbie need your input

    Hi guys, thanks for your great explanation and input. This definitely helps. To avoid data loss, the short service life of consumer SSDs might not be an option now. But again, how difficult is it to replace or rebuild a faulty SSD used as a journal (a replacement sketch follows after this list)? Replication will be on a host basis instead of OSD...
  10. Newbie need your input

    Getting more confused about Ceph storage now. Some references mention that Ceph is turning into the best storage architecture, but it seems it is susceptible to data-integrity problems as well. Loss of a journal may lead to data loss, so what are the advantages of Ceph distributed block storage compared...
  11. Newbie need your input

    Hi, thanks for your input. The whole setup is almost identical to the Proxmox Ceph test, but I'm using 4 x 1TB SSHDs (without hardware RAID, just individual disks) for Ceph storage across 4 nodes, with the SSDs basically for journal and OS. Based on your description, do you recommend that instead of the journal on SSD...
  12. Newbie need your input

    What type of SSD are we talking about? >>> Samsung EVO 850
    I am assuming you will be using 1 SSD for OS and 1 SSD for journal? >>> Yes, correct.
    1.) At how many "--numjobs" does your SSD max out? (The fio invocation behind these numbers is sketched after this list.)
    --numjobs=1: bw=25884KB/s, iops=6471
    --numjobs=2: bw=42705KB/s, iops=10676
    --numjobs=3...
  13. Newbie need your input

    AFAIK, an SSD is in general faster than an SSHD; an SSHD is just a combination of NAND flash with an HDD.
  14. Newbie need your input

    Thanks for your input. The journal will be on the SSD; the SSHD is the primary storage.
  15. Newbie need your input

    Thanks for your reply. It is a DGS-1210 (48 ports); the backplane should be 104 Gbps. The reason for doing so is that we don't have 10GbE interfaces on either the switch or the servers. To my knowledge network bonding does help with network throughput, but maybe I am wrong here (a bonding sketch follows after this list).
  16. Newbie need your input

    Hi all, I am new to Proxmox and really impressed by PVE 4.0 HA and live migration of KVM VMs after testing with 3 nodes. Unfortunately my management is not going to provide additional capital to invest in a full new hardware set, and currently we have the following hardware ready: 2 x Dell C6100 4 nodes...
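
The removal steps referenced in result 1 are not quoted in the snippet, so here is a minimal sketch of the standard Ceph OSD removal procedure of that era, assuming an example OSD id of 12 (the id is hypothetical) and systemd-managed daemons as on PVE 4.x:

    # Stop new data from being placed on the OSD and let it drain;
    # wait until the cluster reports active+clean again before continuing.
    ceph osd out 12

    # Stop the daemon on the node hosting the OSD.
    systemctl stop ceph-osd@12.service

    # Remove the OSD from the CRUSH map, delete its auth key,
    # and remove it from the OSD map.
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12

The physical disk can then be pulled; the rebalance onto the remaining OSDs is the slow part the poster calls "a pain" with many large vDisks.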
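For the usable-space question in results 5 and 7, a minimal sketch of how to read and estimate it, assuming a default replicated pool and the hardware described in these threads (4 nodes x 4 x 1TB, an assumption taken from result 11):

    # Raw and available capacity as Ceph sees it, plus per-pool usage.
    ceph df

    # The replica count of a pool ("rbd" here as an example name).
    ceph osd pool get rbd size

    # Rough usable space = raw capacity / replica count:
    #   16 TB raw, size 3  ->  ~5.3 TB usable
    #   16 TB raw, size 2  ->  ~8.0 TB usable
    # Plan for less in practice: Ceph warns at the near-full ratio
    # (85% by default), and data is rarely perfectly balanced.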
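On the journal question in result 9: with the filestore backend used in the PVE 4.x era, a journal on a still-working SSD can be moved without rebuilding the OSD. A sketch for a hypothetical osd.12, assuming the OSD can be stopped cleanly (an SSD that dies with unflushed journal entries generally means re-creating the OSDs that used it):

    ceph osd set noout                 # keep the cluster from rebalancing during the swap
    systemctl stop ceph-osd@12.service
    ceph-osd -i 12 --flush-journal     # flush pending entries to the data disk

    # Replace the SSD, then point the journal link at the new partition
    # (the UUID below is a placeholder) and initialize it.
    ln -sf /dev/disk/by-partuuid/NEW-UUID /var/lib/ceph/osd/ceph-12/journal
    ceph-osd -i 12 --mkjournal

    systemctl start ceph-osd@12.service
    ceph osd unset noout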
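The --numjobs figures in result 12 match the usual journal-SSD test: small synchronous direct writes, scaled across jobs. A sketch of a fio invocation that produces that kind of output; /dev/sdX is a placeholder, and the run is destructive to that device:

    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-test

    # Repeat with --numjobs=2, 3, ... ; the SSD "maxes out" at the job
    # count where bandwidth and IOPS stop scaling.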
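On the bonding question in result 15: a bond raises aggregate throughput across many connections, but a single TCP stream is still limited to one member link's speed, which matters for Ceph replication between two hosts. A sketch of an /etc/network/interfaces bond as used on Proxmox hosts (interface names and the address are assumptions; 802.3ad requires LACP configured on the DGS-1210):

    auto bond0
    iface bond0 inet manual
        bond-slaves eth0 eth1
        bond-miimon 100
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0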
