Search results

  1. Understanding ceph performance and scaling

    There definitely should have been at least some activity during the faults (or host reboots). At the very least, my test Splunk server should have been receiving a constant inflow of data. It does look like ceph has a way to know that the data on the OSDs that stayed up and were written to needs to be sent...
  2. Understanding ceph performance and scaling

    I have been using a pool with 2/1 replication for my lab VMs for the last 5 years, and the cluster has survived a couple of SSD losses and a lot of node reboots. Maybe I am just lucky, but I have grown to trust that ceph does OK even with the non-recommended configuration. If you know what you are doing and OK...
  3. What is the best way to mount a CephFS inside LXC

    I believe it's the host mount path first, then the mp parameter specifies where it should be mounted within the container, so it should be something like:
      mp0: /mnt/pve/cephfs/export1,mp=/mnt/export1,shared=1
      mp1: /mnt/pve/cephfs/export2,mp=/mnt/export2,shared=1
  4. Proxmox in ESXi. VMs have no access

    You would be better off using MAC learning on the VMware switch rather than Promiscuous mode. You can check the following link for your options: https://williamlam.com/2023/05/refresher-on-nested-esxi-networking-requirements.html
  5. Network optimization for ceph.

    Please don't forget that the 250MB/s throughput on HDDs is only for large sequential operations. You can expect it to drop to 2.5MB/s or lower for random I/O... Actually, with truly random IO you should get only around 320KB/s of throughput per disk (SATA drives doing 80 IOPS times 4K; the arithmetic is worked through in a short sketch after the list). I...
  6. What is the best way to mount a CephFS inside LXC

    So, you mounted both CephFS filesystems on each of your Proxmox nodes? After that, you just modify the container config and add the mp0 and mp1 lines, each with the 'shared=1' option.
  7. Suggestions/Feedback on Proxmox Cluster with Ceph

    I built my cluster 6 years ago, so I don't remember which resources I used at the time... I have built a cluster in a nested lab several times, and it was very straightforward. If you have your single-node environment, I encourage you to test in a virtual lab first; you just need three VMs with 3GB...
  8. Suggestions/Feedback on Proxmox Cluster with Ceph

    Rest assured that it is entirely possible to run a Proxmox cluster on 3 nodes with a single NVMe drive. I have actually run a similar configuration for several years. Note that you would basically end up with ceph storage equivalent to the size of your single NVMe drive (a quick calculation illustrating this is sketched after the list), as the best practice is...
  9. Ceph tier cache question

    Hi @plastilin, I played with enabling LVM cache on the OSD logical volume, and it worked, but I did not do any performance comparisons. I don't believe I noticed a huge difference, so eventually I decided it's not worth having the extra fault domain. Not sure how it's different from dm-cache...
  10. Network optimization for ceph.

    You can test your network using iperf3 (a small scripted example appears after the list). I believe the bottleneck should be your hard disks. Note that each OSD process can easily consume several GB of memory on the host, so a test of just 1GB might really be a test of how fast your nodes can read from or write to the memory cache. That's where the...
  11. What is the best way to mount a CephFS inside LXC

    It definitely allows me to migrate the containers between the nodes. I use Proxmox Backup Server, so I don't really use snapshots. I just checked, and indeed snapshots are disabled for configurations with the mount points. I guess you can use backups as a workaround. Note that there...
  12. Ceph Disks

    You don't need those extra logical volumes if you plan to use ceph. You also don't need that much space for your root volume. I use 32GB as a root partition, and it seems to be enough. You might also want to leave 8-16GB for swap, so I think that 40-64GB for the root LVM partition should be more...
  13. Ceph Disks

    Hi @MoniPM, great job noticing that. I had an older lab cluster that was running Proxmox 7.0, and I confirmed that it could not add a partition as an OSD, but after I updated to the current 7.4.3, I too was able to add that partition as an OSD. So it became supported somewhere between those versions...
  14. Tiny/Mini/Micro low-power hardware for PBS at home?

    The last GC took an hour and a half. Anyway, the local HDD seems to be quite fine as a PBS target. I see that you were talking about NFS storage backed by HDD, and I think I would agree: a PBS datastore on an NFS mount might not be a good idea and might not perform well even if backed by SSD (I...
  15. Ceph Disks

    I used USB sticks for the OS a few years ago, and that worked fine for about a year, never disconnecting or hanging up, but then one of the servers started to produce odd errors during updates (so I don't recommend USB sticks). I then switched to a USB SATA enclosure, but only for a few months, so I...
  16. Tiny/Mini/Micro low-power hardware for PBS at home?

    I looked at my logs: the last verification job took 2 hours, and the last prune finished in a minute. Nothing prevents you from trying it on your local Proxmox server, and you will see if it is going to work...
  17. Tiny/Mini/Micro low-power hardware for PBS at home?

    BTW, if you want to save the money, just add a disk to your existing Proxmox server, install PBS there, and do the backups to it. For additional protection you can back up the PBS data to your NAS. In my experience, HDDs for backups work just fine, but surely if you can afford SSDs that would be...
  18. Network problem bond+lacp

    In addition to @spirit's option, I would also double-check that it is cabled correctly to the intended ports on the switch. Assuming the partner port number parameter is the encoded port number, you probably connected to the wrong ports. I would start with one bond, confirm it's working fine, then...
  19. Tiny/Mini/Micro low-power hardware for PBS at home?

    Pretty sure any one of those servers will work. I had PBS running on an old Celeron-based mini-PC with a 2TB HDD, and it was fine. I just had to disable verification immediately after backups; it would consume too much CPU and make the backups fail.
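
The snippet in result 5 leans on a quick throughput calculation. Below is a minimal Python sketch of that back-of-the-envelope arithmetic, assuming only the numbers quoted in the post (roughly 250MB/s sequential, about 80 random IOPS, a 4K block size); everything else is illustrative.

    # Back-of-the-envelope HDD throughput estimate (numbers from result 5).
    SEQUENTIAL_MBPS = 250      # large sequential transfers, MB/s
    RANDOM_IOPS = 80           # rough random IOPS of a SATA HDD
    BLOCK_SIZE_KB = 4          # 4K random I/O

    random_kbps = RANDOM_IOPS * BLOCK_SIZE_KB            # 80 * 4 = 320 KB/s
    slowdown = SEQUENTIAL_MBPS * 1024 / random_kbps      # roughly 800x

    print(f"Random 4K throughput: {random_kbps} KB/s")
    print(f"Roughly {slowdown:.0f}x slower than sequential")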
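
Result 8 notes that a 3-node cluster with one NVMe drive per node ends up with usable ceph storage roughly equal to a single drive. Here is a tiny Python illustration of why, assuming the usual replicated pool with three copies (size=3); the 1TB drive size is a made-up example value.

    # Usable capacity of a replicated Ceph pool is roughly raw capacity divided
    # by the replication factor, before any overhead or headroom.
    nodes = 3
    drives_per_node = 1
    drive_tb = 1.0             # hypothetical 1TB NVMe in each node
    replica_size = 3           # three copies, the common default

    raw_tb = nodes * drives_per_node * drive_tb
    usable_tb = raw_tb / replica_size
    print(f"Raw: {raw_tb} TB, usable with {replica_size} copies: {usable_tb} TB")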
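
Result 10 suggests testing the network with iperf3. The sketch below is one possible way to script that from Python using iperf3's JSON output (the -J flag) for a default TCP test; the host name is a placeholder, and it assumes an iperf3 server ("iperf3 -s") is already running on the target node.

    # Hypothetical helper: run an iperf3 client against another node and report
    # the received throughput in Gbit/s. Requires iperf3 on both ends.
    import json
    import subprocess

    def iperf3_gbit(host: str, seconds: int = 10) -> float:
        out = subprocess.run(
            ["iperf3", "-c", host, "-t", str(seconds), "-J"],
            capture_output=True, text=True, check=True,
        ).stdout
        result = json.loads(out)
        # For a TCP test, the end summary carries the aggregate throughput.
        return result["end"]["sum_received"]["bits_per_second"] / 1e9

    if __name__ == "__main__":
        print(f"{iperf3_gbit('ceph-node2'):.2f} Gbit/s")  # placeholder host name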
