Reads are done in parallel, so I would expect more than the rate we achieve now. dd is, in my book, a pretty good way to simulate single-threaded sequential reads.
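Something along these lines is what I have in mind; the file path is just a placeholder, and iflag=direct is used so the page cache does not hide the storage path:

# single-threaded sequential read of a 4 GiB test file, bypassing the page cache
dd if=/path/to/testfile of=/dev/null bs=1M count=4096 iflag=direct status=progress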
[In a RAID1] ... In a degraded state ... a subsequent failure on the already stressed disks will likely produce data loss.
Very...
No resources? Even without cache, the devices can fetch upwards of 2 GB/s with 128k block size. The network does 1.2 GB/s without jumbo frames. With two copies, the theoretical bandwidth should be, worst case (data only remote), 1.2 GB/s on an idle Ceph cluster. Or am I wrong?
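To double-check the raw link speed between two nodes, something like the following could be run (hostnames are placeholders, iperf3 assumed to be installed):

# on the receiving node:
iperf3 -s
# on the sending node, 30-second run:
iperf3 -c node2 -t 30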
So...
In the next maintenance window I will check the performance. What kind of performance would one approximately expect from a 4-node, 10GbE cluster with one 1 TB Samsung 970 NVMe SSD (bluestore) per host and a 2/1 replication rule?
We see only about 300-400 MB/s with 4k/1M blocksize...
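For comparison, a quick baseline straight from RADOS could look like this (the pool name is a placeholder; the write pass keeps its objects so the sequential-read pass has data to fetch):

# 60 s of 4 MiB writes with 16 threads, objects kept
rados bench -p testpool 60 write -b 4M -t 16 --no-cleanup
# 60 s of sequential reads against those objects
rados bench -p testpool 60 seq -t 16
# remove the benchmark objects afterwards
rados -p testpool cleanup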
I would suggest it is wise to modify that default down to 1 GB, as on hyperconverged clusters the memory obviously has to be shared. When migrating from DRBD on pre-existing installations (as in our case), this will bite.
Is there any reason against changing that setting on a running cluster OSD...
And I have read up on the documentation, and my reaction was ... o_O
bluestore_cache_autotune
    Default: true

osd_memory_target
    Default: 4294967296
Is this the Proxmox default? Every OSD will shoot for 4 gigabytes of cache? With 5 OSDs per node, that is roughly 20 GB of cache target on a 24 GB machine, so this of course explains the problem we have. Is it safe to set...
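In case it helps anyone landing here, lowering it could look like the following; the 1 GiB value and the injectargs form are only my assumption for this release, so please verify against the documentation for your Ceph version first:

# /etc/pve/ceph.conf, [osd] section: 1 GiB target per OSD daemon
[osd]
osd_memory_target = 1073741824
# push the value to running OSDs (restarting the OSDs one by one is the safer way to make it stick):
ceph tell osd.* injectargs '--osd_memory_target 1073741824'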
Hello all,
for one customer I run a 4-node Proxmox 5.4 Ceph cluster. Two of the nodes have 24 GB of memory and 5 OSDs each, and these nodes are each tasked with only one (important) VM.
On both nodes we see a disturbing pattern:
- after reboot, all the ceph-osd processes start out...
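For anyone who wants to track the same pattern, the growth shows up with plain ps, nothing Ceph-specific:

# resident memory of all OSD daemons, largest first; compare snapshots a few hours apart
ps -o pid,rss,etime,cmd -C ceph-osd --sort=-rss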
Uuuuuh, when inspecting the logfiles I found this:
Oct 28 21:22:12 vm2 kernel: [99920.391085] txg_sync D 000000000000000b 0 1175 2 0x00000000
Oct 28 21:22:12 vm2 kernel: [99920.391090] ffff88042c7ff608 0000000000000046 ffff880829dde400 ffff88042c7f0c80
Oct 28 21:22:12 vm2...
Well, please don't scare me ....
I knew ZFS has trouble in borderline cases, but with my vanilla setup I had hoped it would run alright.
Tonight I will set the parameter:
/etc/modprobe.d# cat zfs
options spl spl_taskq_thread_dynamic=0
And run the backup (full backup). Next...
Can I set that parameter at runtime?
/sys/module/spl/parameters# echo 0 > spl_taskq_thread_dynamic
Also, will it have performance implications? Sorry I am not very deep in the architecture of ZFS.
At the moment, I am between a rock and a hard place for taking Proxmox with ZFS into a...
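As a first sanity check, whether the running module even exposes that knob as writable can be looked at like this (same path as in my echo above):

# current value and file mode of the parameter
cat /sys/module/spl/parameters/spl_taskq_thread_dynamic
ls -l /sys/module/spl/parameters/spl_taskq_thread_dynamic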
Hello $all,
I am running a Proxmox 4.0 server for a customer, with qcow2 files and ZFS as storage.
ZFS is set up this way:
- 2x2 mirror
- 2 cache devices (SSD)
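For clarity, that layout corresponds roughly to a pool created like this; the pool and device names are placeholders, and ashift=12 is my assumption for 4K-sector disks:

# two mirrored pairs striped together, plus two SSDs as L2ARC cache
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd cache /dev/sde /dev/sdf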
The server has:
- 32 GB of RAM
- 16 GB used by the VMs (5 of them, all Windows XP to 7)
- 2 Xeon CPUs
For backup I run a...