Search results

  1. A

    150 MB/s on an NVMe x3 Ceph pool

    Apologies, I should've said that from the beginning. The frontend network is 1 Gbit, the backend is 10 Gbit. The CPUs are at 20% at all times even though they are weak, and RAM usage on the cluster (3x replica, as noted) is below 70%.
  2. A

    150 MB/s on an NVMe x3 Ceph pool

    Hi, I have invested in SAMSUNG PM983 (MZ1LB960HAJQ-00007) x3 to run a fast pool on. However, I am only getting 150 MB/s from them. The frontend network is 1 Gbit, the backend is 10 Gbit. The CPUs are at 20% at all times even though they are weak, and RAM usage on the cluster (3x replica) is below 70%...
  3. A

    UPS APC to shutdown VMs?

    I am quite sure you could pass a USB device through to Docker.
  4. A

    UPS APC to shutdown VMs?

    I got this running with apcd in a Docker container checking the UPS over the network, with a crontab script looking for the file apcd creates. Simple setup (a sketch of such a script follows after this list).
  5. A

    Booting from USB stick works but not ext. SSD!

    Hi, I tracked this down to being the fault of my HP Z400. It worked fine with HP Z420s though, so I replaced the Z400s with Z420s :)
  6. A

    [SOLVED] Ceph Module 'telemetry' has experienced an error

    They upgraded the telemetry server VM and then this error message popped up. A restart of the mgr daemon solves the issue.
  7. A

    WAL/DB smaller than default 10%/1%?

    I am running about 34 GB for my 6 TB without issues. There is a flag specifying how big the journal is; however, I do all creation of new OSDs via the GUI...
  8. A

    Partition NVMe to host both journal and OSD?

    Tried with lvcreate, but it says the disk is busy, so I am back to finding out exactly which command is run in /usr/share/perl5/PVE/API2/Ceph/OSD.pm: print "creating block.$type on '$d->{dev}'\n"; -> print "creating block.$type on '$d->{dev}' with $cmd\n"; ...
  9. A

    Partition NVMe to host both journal and OSD?

    I tried that, but I fear that if I create it wrong, it might overwrite the whole disk. Hence I was actually digging into the Perl code to see what the actual command for "creating block.db on '/dev/nvme0n1'" is, so that I could add one more partition for OSD usage after the block.db partitions...
  10. A

    Partition NVMe to host both journal and OSD?

    Alwin, thank you for the reply. I was looking for the actual commands to partition my NVMe after I have added the RocksDB. The limitations of the disk are understood, and looking at my current graphs there is a LONG way to go before I hit the 200k IOPS bottleneck that these...
  11. A

    Partition NVMe to host both journal and OSD?

    I finally got my fast enterprise NVMe drives (Samsung PM983), added a few journals to one for smaller spinners, and would like to use the remaining disk space as an OSD / monitor DB / maybe swap. Are there any guides on how to achieve this? I tried to peek at the Proxmox tooling in the GUI to tap...
  12. A

    Big discovery on virtio performance

    @mgiammarco - Could you please provide a URL to these drivers?
  13. A

    Proxmox 6 and Ceph iSCSI?

    Hi, I am wondering whether Ceph has iSCSI builds for Debian so we can use them on Proxmox? I'm running iSCSI from a Windows VM right now and I don't think it's optimal; it has very low speeds. Thanks, A
  14. A

    [SOLVED] Ceph 14.2.3 / 14.2.4

    With that said, 14.2.4 was released today, fixing issues with ceph-volume in 14.2.3, so I do appreciate the lag between upstream releases and the Proxmox releases!
  15. A

    [SOLVED] Ceph 14.2.3 / 14.2.4

    I keep refreshing to see this version pop up as well, so here is an upbump! :)
  16. A

    Ceph Compression

    The pool itself is what decides which OSDs it resides on and also whether compression is used. There is no need to instruct the OSDs individually (a sketch of the pool-level options follows after this list).
  17. A

    Storage Idea

    My plan is to install Ceph, activate RGW (S3), and set up multisite sync with another Ceph cluster.
  18. A

    Any plans to add docker support?

    Has there ever been a reason to "never support Docker", though? Is it licensing, or what is the reason?
  19. A

    3-node Proxmox/Ceph cluster - how to automatically distribute VMs among nodes?

    Tokala, I just wanted to say thank you, that is truly awesome! :)
  20. A

    Ceph SSD for WAL

    Actually, 30 GB is enough for a journal; the 10% recommendation is just the "safe" recommendation, even though it wastes SSD space (a sketch of creating OSDs with a ~30 GB DB follows after this list).
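
Result 4 describes shutting VMs down when the UPS goes on battery. A minimal sketch of such a cron-driven script, assuming the monitoring container drops a flag file when power is lost; the path /var/run/ups-on-battery, the script name, and the 120-second timeout are illustrative and not taken from the thread:

    #!/bin/sh
    # check-ups.sh - run every minute from /etc/cron.d, e.g.:
    #   * * * * * root /usr/local/sbin/check-ups.sh
    FLAG=/var/run/ups-on-battery   # illustrative; use whatever file apcd actually writes

    if [ -f "$FLAG" ]; then
        # Ask every VM on this node to shut down cleanly, then power off the host.
        for vmid in $(qm list | awk 'NR>1 {print $1}'); do
            qm shutdown "$vmid" --timeout 120
        done
        shutdown -h now
    fi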
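
Result 16 notes that compression is configured on the pool rather than on individual OSDs. A minimal sketch using the standard BlueStore pool options; the pool name fastpool and the choice of lz4/aggressive are illustrative:

    # Every OSD holding the pool's placement groups picks these settings up automatically.
    ceph osd pool set fastpool compression_algorithm lz4
    ceph osd pool set fastpool compression_mode aggressive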
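
Results 8 to 11, together with results 7 and 20, revolve around giving each spinner a DB/WAL volume of roughly 30 GB on the NVMe and using the leftover space as another OSD, instead of the 10% default. A sketch of one way to do this with plain LVM and ceph-volume; the device names, volume group name, and sizes are illustrative, and this is not necessarily the exact command sequence the Proxmox GUI runs:

    # Turn the NVMe into an LVM volume group (this wipes anything already on it).
    vgcreate ceph-db /dev/nvme0n1

    # One ~30 GB DB/WAL logical volume per spinning OSD.
    lvcreate -L 30G -n db-sdb ceph-db
    lvcreate -L 30G -n db-sdc ceph-db

    # Create the OSDs with their RocksDB/WAL on the NVMe volumes.
    ceph-volume lvm create --data /dev/sdb --block.db ceph-db/db-sdb
    ceph-volume lvm create --data /dev/sdc --block.db ceph-db/db-sdc

    # Optionally turn the remaining free space into its own OSD.
    lvcreate -l 100%FREE -n osd-nvme ceph-db
    ceph-volume lvm create --data ceph-db/osd-nvme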
