Search results

  1. Ceph config changes

    No ideas? This must be possible. At least to test the configs.
  2. ceph osd poor performance, one per node

    Ah yes, IT mode, that was the name of it and the mode they are in. OSD 0 is now osd 9, but here is just a sample and it should be similar on the other two nodes. Two MX100s and one MX300. root@stor1:/etc/pve# hdparm -I /dev/sda | grep Model Model Number: Crucial_CT512MX100SSD1...
  3. ceph osd poor performance, one per node

    Yes, but a recovery hurts performance with so few OSDs. Anyway, they are Crucial MX100s and MX300s. They are connected to RAID controllers (LSI SAS1068e) that can act as HBAs (required specific firmware; that was a few years ago), so full access is given to the disks.
  4. Ceph config changes

    1) Is there a way to test my ceph.conf before it's applied, to make sure my settings are valid for the version I'm using? 2) I am currently using cephx auth and I would like to disable that and set it to none. Is there a way to do that on the fly?
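
    For the cephx part, a minimal sketch of what the ceph.conf change would look like (assuming the stock option names; as far as I know this takes effect when the daemons are restarted rather than truly on the fly):

    ```ini
    [global]
        # all three must agree across the cluster before restarting daemons
        auth_cluster_required = none
        auth_service_required = none
        auth_client_required = none
    ```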
  5. ceph osd poor performance, one per node

    Thanks for that. Unfortunately I can't run those tests on drives that are already in "production", as they wipe the drive. That is very interesting to know, though. If all of my drives are consumer and mostly the same model, why would I be getting such drastic differences on that perf test for just...
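
    A non-destructive alternative, sketched under the assumption that fio is installed and a scratch file on the OSD's filesystem is acceptable (the path and sizes below are placeholders): the same 4k sync-write pattern can be run against a file instead of the raw device, so nothing gets wiped.

    ```shell
    # Sync-write test against a scratch file rather than the raw disk, so it
    # is safe on drives already in production. Path/sizes are illustrative.
    fio --name=sync-write-test \
        --filename=/var/lib/ceph/osd/ceph-0/fio-scratch \
        --size=256m --bs=4k --rw=write --sync=1 \
        --runtime=30 --time_based
    rm -f /var/lib/ceph/osd/ceph-0/fio-scratch
    ```

    Numbers won't be identical to a raw-device test (the filesystem adds overhead), but a drive that collapses on sync writes should still stand out.
    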
  6. ceph osd poor performance, one per node

    Consumer, but with supercapacitors. Why do you think it matters in this particular situation?
  7. ceph osd poor performance, one per node

    I have 3 Ceph storage nodes with only 3 SSDs each for storage. They are only on SATA II links, so they max out at about 141MB/s. I am fine with that, but I have 1 OSD on each node that has absolutely awful performance and I have no idea why. It seems to be osd.0, osd.3, and osd.4 that are just awful...
  8. View VM Disk IO summary?

    In the GUI I can view the disk I/O of a guest, but I want to be able to see an overview of all my guests and how much disk I/O each one is using at that time. There are totals, but that isn't that useful. If it's available in the GUI per guest, how can I get that from the CLI from a...
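
    One way to get per-guest numbers from the CLI is the API via pvesh; the rrddata endpoint returns the same counters the GUI graphs, including disk read/write. The node name and VMID below are placeholders, and the exact option syntax may vary between PVE versions:

    ```shell
    # Per-guest I/O samples from the Proxmox API (node "stor1" and VMID 100
    # are placeholders). Each sample includes diskread/diskwrite counters.
    pvesh get /nodes/stor1/qemu/100/rrddata --timeframe hour
    ```

    Looping that over the VMIDs from `qm list` would give the across-all-guests overview.
    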
  9. linux-headers not automatically found

    BTW, why does this have to be done manually? Why can't apt-get find them properly when doing upgrades? It shouldn't require manual intervention every time.
  10. Proxmox installation via PXE: solution

    To me, it's not worth the hassle. Simply install Debian via PXE and then Proxmox on top.
  11. /etc/apt/apt.conf.d/75pveconf, why?

    Why do we have the following entry? root@stor2:/etc/apt/apt.conf.d# cat /etc/apt/apt.conf.d/75pveconf APT { NeverAutoRemove { "^pve-kernel-.*"; }; } I don't want to have to manually remove old kernels, and there is a stock NeverAutoRemove entry that already keeps the newest kernels...
  12. linux-headers not automatically found

    Why is it that when I update the Proxmox kernels, it doesn't automatically update/find the required support files as well? It's irritating that I have to manually install them with apt-get every time there is an update. root@stor2:~# apt-get dist-upgrade -y Reading package lists... Done Building...
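
    As a sketch (the pve-headers-&lt;kernel release&gt; package name follows the Proxmox convention; that the matching version is in your configured repos is an assumption), the headers for the currently running kernel can be derived and installed like this:

    ```shell
    # Derive the header package matching the running kernel and print it;
    # uncomment the apt-get line to actually install it (needs root).
    pkg="pve-headers-$(uname -r)"
    echo "$pkg"
    # apt-get install -y "$pkg"
    ```
    
    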
  13. Let’s Encrypt with Proxmox VE

    GUI for what? There shouldn't be any configuration needed.
  14. Let’s Encrypt with Proxmox VE

    Any ideas why this isn't being scripted fully? I mean, Proxmox is obviously a controlled environment, so it's not like there are variations.
  15. Incremental backup script for CEPH Volumes

    It hasn't worked well for me. Not sure if it's an NFS thing or what, but it never seems to finish. Maybe I'll just limit its loop to one so that it only tries it out on one small VM first.
  16. Incremental backup script for CEPH Volumes

    How has this worked out for you? How are you creating the RBD snapshots in the first place? Right now I only have a single Ceph pool, so I'm definitely looking for a way to do more efficient backups.
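
    One common pattern, sketched here with placeholder pool, image, and snapshot names, is `rbd snap create` followed by `rbd export-diff`, which writes out only the blocks changed since the previous snapshot:

    ```shell
    # Incremental RBD backup sketch; pool, image, snapshot names and the
    # backup path are placeholders. The very first run would use a plain
    # "rbd export-diff" (no --from-snap) to establish a baseline.
    POOL=rbd
    IMAGE=vm-100-disk-1
    PREV=backup-prev                 # last snapshot already exported
    CUR="backup-$(date +%F)"         # today's snapshot

    rbd snap create "$POOL/$IMAGE@$CUR"
    rbd export-diff --from-snap "$PREV" "$POOL/$IMAGE@$CUR" "/backup/$IMAGE-$CUR.diff"
    ```

    Restoring replays the baseline plus each diff in order with `rbd import-diff`.
    
    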
  17. PVE 4.1, systemd-timesyncd and CEPH (clock skew)

    It seems like it's still not being given the attention it deserves. It's been a few months since they commented on the bug. =(
  18. PVE 4.1, systemd-timesyncd and CEPH (clock skew)

    I really wish Proxmox would handle this better. We shouldn't need workarounds like this; time sync is very important for clusters and Ceph.
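
    For reference, the workaround in question (assumption: replacing systemd-timesyncd with chrony is acceptable in your cluster) amounts to:

    ```shell
    # On Debian, installing chrony takes over NTP duties; stopping and
    # disabling systemd-timesyncd makes the handover explicit. Needs root.
    apt-get install -y chrony
    systemctl stop systemd-timesyncd
    systemctl disable systemd-timesyncd
    ```

    Chrony's steady discipline of the clock is usually enough to make the Ceph "clock skew detected" warnings go away.
    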
  19. Problem with remove disk on ceph storage

    Great news, it's been resolved in 0.94.6, whenever we get that. Glad I put some pressure on them about it. Woo hoo!
  20. Problem with remove disk on ceph storage

    Hmm, that error actually keeps us from removing images? It's not obvious to me from that bug report that the two are related.
