Search results

  1. poor CEPH performance

    Any suggestions how to tune my Ceph setup?
  2. poor CEPH performance

    No, I've just followed the instructions in the PVE wiki for the upgrade.
  3. poor CEPH performance

    Sadly not. And yes, the 50 MB/s are from the Win10 install. I had a look at my Nagios graphs and they prove me wrong: perhaps it's just me, but compared to single nodes with RAID5 my Ceph cluster is slow.
  4. poor CEPH performance

    Different brands and models of 500 GB SATA disks. None, just the usage. Of course not. It runs in JBOD mode. Again: the problem popped up after the upgrade from PVE4 to 5 and got even worse by switching to Bluestore.
  5. poor CEPH performance

    Poor means a W10 setup takes about 30 minutes instead of less than 10 minutes, due to slow disks. VMs are slow. With my old PVE4 setup with Ceph and without Bluestore on the same hardware the problem did not exist. The old system was slower than single nodes with a RAID controller too, but not...
  6. poor CEPH performance

    Hi, I have a Ceph setup which I upgraded to the latest version and moved all disks to Bluestore. Now performance is pretty bad. I get IO delay of about 10 in the worst case. I use 10GE mesh networking for Ceph. DBs are on SSDs and the OSDs are spinning disks. Situation while doing a W10...
  7. Ceph OSD stopped and out

    Hmm, now the scrubbing errors are gone without me doing anything. Now I get: ~# ceph health detail HEALTH_WARN 1 osds down; 44423/801015 objects misplaced (5.546%) OSD_DOWN 1 osds down osd.14 (root=default,host=pve03) is down OBJECT_MISPLACED 44423/801015 objects misplaced (5.546%) # systemctl status...
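The usual first steps for a single down OSD like the one in this snippet can be sketched as follows (the OSD id 14 is taken from the post; adjust it for your cluster, and treat this as a hedged sketch, not a definitive recovery procedure):

```shell
# Check why the OSD daemon stopped; the systemd unit name assumes OSD id 14
systemctl status ceph-osd@14
journalctl -u ceph-osd@14 --since "1 hour ago"

# Try to bring the daemon back up
systemctl start ceph-osd@14

# If it stays up, mark it "in" again so the misplaced objects rebalance back
ceph osd in osd.14

# Watch recovery progress
ceph -s
```

If the daemon keeps crashing, the journal output is usually the place that shows whether the disk itself or the OSD's metadata is the problem.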
  8. Ceph OSD stopped and out

    Hi, I have a problem with one OSD in my Ceph cluster: # ceph health detail HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent OSD_SCRUB_ERRORS 1 scrub errors PG_DAMAGED Possible data damage: 1 pg inconsistent pg 7.2fa is active+clean+inconsistent, acting [13,6,16] #...
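For an inconsistent PG like the one reported here, a hedged sketch of the standard diagnosis and repair steps looks like this (the PG id 7.2fa comes from the post; whether a repair is safe depends on which replica is bad, so inspect before repairing):

```shell
# List which objects/shards in the damaged PG are inconsistent
rados list-inconsistent-obj 7.2fa --format=json-pretty

# If the inconsistency is a bad replica rather than lost data,
# ask Ceph to repair the PG from the healthy copies
ceph pg repair 7.2fa

# Re-check cluster health afterwards
ceph health detail
```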
  9. [SOLVED] Update: openrc / sysv-rc

    Hi, lately I've done (as usual) updates on my nodes and got: ********************************************************************** *** WARNING: if you are replacing sysv-rc by OpenRC, then you must *** *** reboot immediately using the following command: *** for file in...
  10. 3-node Proxmox/Ceph cluster - how to automatically distribute VMs among nodes?

    Exactly. It is ... I use Qnap or Synology devices for this.
  11. ata2 kernel message in KVM VM

    My fault ... the messages are sent by rsyslog from another device. My Linux VMs use virtio and XFS, not SATA and EXT4 ...
  12. 3-node Proxmox/Ceph cluster - how to automatically distribute VMs among nodes?

    No. I've been waiting for this feature for about 2 years now :-) I use shared NFS storage for templates and ISOs
  13. ata2 kernel message in KVM VM

    One of many Debian 9 KVM VMs logs once a day: Mar 13 22:34:37 - kernel ata2: hard resetting link Mar 13 22:34:37 - kernel ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Mar 13 22:34:37 - kernel ata2.00: configured for UDMA/133 Mar 13 22:34:37 - kernel sd 1:0:0:0: [sda] tag#9 FAILED...
  14. CephFS NFS-Ganesha

    Neither is Nutanix, and yet you can still use the HA storage there for other purposes. I'd welcome it if Ganesha were included in the Ceph packages again, for those who want to use it. At the moment "apt-get install nfs-ganesha-ceph" does not work. Even better would be right in the GUI...
  15. Proxmox VE Ceph Benchmark 2018/02

    My setup: initially set up with PVE4, Ceph Hammer and a 10 GE mesh network. Upgraded to 5.3. OSDs are 500GB spinning disks. Data: rados bench -p rbd 60 write -b 4M -t 16 --no-cleanup Total time run: 60.752370 Total writes made: 1659 Write size: 4194304 Object...
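The write benchmark quoted in this snippet is usually paired with read benchmarks and a cleanup pass. A sketch of the full sequence, assuming the same `rbd` pool as in the post:

```shell
# Write benchmark as in the post: 60 s, 4 MiB objects, 16 concurrent ops,
# keeping the objects so read benchmarks can follow
rados bench -p rbd 60 write -b 4M -t 16 --no-cleanup

# Sequential and random read benchmarks against the objects just written
rados bench -p rbd 60 seq -t 16
rados bench -p rbd 60 rand -t 16

# Remove the benchmark objects afterwards
rados -p rbd cleanup
```

Running the read tests separately is what makes `--no-cleanup` necessary on the write pass; without it the objects would be deleted before they can be read back.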
  16. Cluster traffic

    Thanks for pointing me to the documentation I was unable to find.
  17. Cluster traffic

    Hi, in my 3-node cluster I have a 10GE mesh network for Ceph traffic. I've read about "moving cluster traffic" to the Ceph network. How do I do this? TIA
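"Moving cluster traffic" in this context usually means pointing corosync at the addresses on the Ceph mesh network. A minimal sketch of the relevant fragment of /etc/pve/corosync.conf, assuming a 10.10.10.0/24 mesh network and a node named pve01 (both are assumptions, not taken from the thread; remember to increment config_version when editing this file):

```
totem {
  interface {
    ringnumber: 0
    bindnetaddr: 10.10.10.0   # assumed Ceph mesh network
  }
}

nodelist {
  node {
    name: pve01               # assumed node name
    ring0_addr: 10.10.10.1    # assumed address on the mesh network
  }
}
```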
  18. Ceph upgrade

    Hi, there are a few things which IMHO should be added to the Ceph upgrade pages in the wiki. Can I do this, and how do I get an account? Or should I write my additions down here? TIA
  19. W2016 shuts down with EventID 109 - Kernel-Power

    Hi, I run several Windows 2016 VMs on several hosts and clusters. All work as expected except one single W2016 server VM. About every week it shuts down with EventID 109, source: Kernel-Power. When I google this I can find many posts about faulty power supplies but nothing else. IMO it's pretty...

