Sadly not. And yes, the 50 MB/s is from the Win10 install.
I had a look at my Nagios graphs and they prove me wrong:
Perhaps it's just me, but compared to single nodes with RAID5 my Ceph cluster is slow.
Different brands and models of 500 GB SATA disks.
None, just the usage.
Of course not. It runs in JBOD mode.
Again: the problem popped up after the upgrade from PVE 4 to 5 and got even worse after switching to Bluestore.
Poor means a W10 setup takes about 30 minutes instead of less than 10 minutes, due to slow disks. VMs are slow. With my old PVE 4 setup with Ceph and without Bluestore on the same hardware the problem did not exist. The old system was slower than single nodes with a RAID controller too, but not...
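To put a rough number on the slowness, a short fio run against a scratch file in a Ceph-backed VM is one way to compare it with the RAID5 nodes; just a sketch, the target path and sizes are assumptions:

# 4k random write test against a throw-away file inside the VM (path is an assumption)
fio --name=randwrite --filename=/root/fio-test --size=2G --ioengine=libaio --direct=1 --rw=randwrite --bs=4k --iodepth=16 --runtime=60 --time_based --group_reporting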
Hi,
I've a Ceph setup which I upgraded to the latest version and moved all disks to Bluestore. Now performance is pretty bad. I get an IO delay of about 10 in the worst case.
I use 10GE mesh networking for Ceph. DBs are on SSDs and the OSDs are spinning disks.
Situation while doing a W10...
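To narrow down where the latency comes from, looking at per-OSD latencies while such a load is running is a good start; a minimal sketch:

# per-OSD commit/apply latency snapshot, run on a monitor node
ceph osd perf
# raw write benchmark against the rbd pool (4M objects, 16 threads)
rados bench -p rbd 60 write -b 4M -t 16 --no-cleanup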
Hmm, now the scrubbing errors are gone without me doing anything. Now I get:
~# ceph health detail
HEALTH_WARN 1 osds down; 44423/801015 objects misplaced (5.546%)
OSD_DOWN 1 osds down
osd.14 (root=default,host=pve03) is down
OBJECT_MISPLACED 44423/801015 objects misplaced (5.546%)
# systemctl status...
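In a case like this I'd check and restart the OSD service on the affected node and then watch recovery; a minimal sketch using the osd.14 / pve03 names from the output above:

# on pve03
systemctl status ceph-osd@14
systemctl restart ceph-osd@14
# verify it comes back up and the misplaced objects drain away
ceph osd tree
ceph -w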
Hi,
I've a problem with one OSD in my Ceph cluster:
# ceph health detail
HEALTH_ERR 1 scrub errors; Possible data damage: 1 pg inconsistent
OSD_SCRUB_ERRORS 1 scrub errors
PG_DAMAGED Possible data damage: 1 pg inconsistent
pg 7.2fa is active+clean+inconsistent, acting [13,6,16]
#...
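The usual first steps for an inconsistent PG are listing the inconsistent objects and then triggering a repair; a minimal sketch using pg 7.2fa from the output above:

# show which objects/shards are inconsistent
rados list-inconsistent-obj 7.2fa --format=json-pretty
# let Ceph repair the PG from the healthy replicas
ceph pg repair 7.2fa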
Hi,
lately I've done the usual updates on my nodes and got:
**********************************************************************
*** WARNING: if you are replacing sysv-rc by OpenRC, then you must ***
*** reboot immediately using the following command: ***
for file in...
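That warning only matters if you actually replaced sysv-rc with OpenRC; a quick check of what is installed (just a sketch):

# on a default PVE node this should show systemd-sysv and no openrc
dpkg -l | grep -E 'openrc|sysv-rc|systemd-sysv'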
One of many Debian 9 KVM VMs logs once a day:
Mar 13 22:34:37 - kernel ata2: hard resetting link
Mar 13 22:34:37 - kernel ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Mar 13 22:34:37 - kernel ata2.00: configured for UDMA/133
Mar 13 22:34:37 - kernel sd 1:0:0:0: [sda] tag#9 FAILED...
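Since the resets hit the emulated SATA controller, it may help to check which bus the virtual disk is attached to and, if possible, move it to virtio-scsi; a minimal sketch (VM id 100 is just a placeholder):

# on the PVE host: show the disk/controller part of the VM config
qm config 100 | grep -E 'ide|sata|scsi|virtio'
# switch the SCSI controller type; the disk then has to be reattached as scsiX
qm set 100 --scsihw virtio-scsi-pci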
Neither is Nutanix, and you can still use the HA storage there for other purposes. I'd like it if Ganesha were included in the Ceph packages again, for those who want to use it. At the moment "apt-get install nfs-ganesha-ceph" doesn't work. Even better would be right in the GUI...
My Setup:
Initially set up with PVE 4, Ceph Hammer and a 10 GE mesh network. Upgraded to 5.3. OSDs are 500 GB spinning disks.
Data:
rados bench -p rbd 60 write -b 4M -t 16 --no-cleanup
Total time run: 60.752370
Total writes made: 1659
Write size: 4194304
Object...
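From the numbers above the write bandwidth works out to roughly 1659 writes × 4 MiB / 60.75 s ≈ 109 MiB/s from that single bench client.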
Hi,
there are a few things which IMHO should be added to the Ceph upgrade pages in the wiki. Can I do this, and how do I get an account? Or should I write my additions down here?
TIA
Hi,
I run several Windows 2016 VMs on several hosts and clusters. All work as expected except one single W2016 server VM. About every week it shuts down with EventID 109, source: Kernel-Power. When I google this I find many posts about faulty power supplies but nothing else. IMO it's pretty...