Hello guys
I have 3 VDSs running Proxmox (Proxmox inside a VPS; I don't know if this is the wrong thing to do), all linked to a Ceph pool over 10 Gbit ports. The first node has a Micron 9400 MAX, and the others have Intel Gen 3 NVMe drives.
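(For context, this is roughly how I check the cluster layout from the main node; no output pasted here, just the commands:)
ceph -s
ceph osd tree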
When I run dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync on the main server on Ceph storage, I get:
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.97877 s, 180 MB/s
root@MariaDB:~# dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.86801 s, 183 MB/s
root@MariaDB:~# dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 5.17572 s, 207 MB/s
root@MariaDB:~#
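(For reference, a minimal fio sketch I could run instead of dd for a more controlled sync-write test; this assumes fio is installed in the guest, and the file path and size are placeholders:)
fio --name=synctest --filename=/tmp/fio-test.img --size=1G \
    --rw=write --bs=1M --ioengine=libaio --direct=1 --sync=1 \
    --iodepth=1 --numjobs=1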
But on the ZFS storage outside of Ceph I get:
[root@test ~]# dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.16568 s, 921 MB/s
[root@test ~]# dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.0955 s, 980 MB/s
[root@test ~]# dd if=/dev/zero of=/tmp/test1.img bs=1G count=1 oflag=dsync
1+0 records in
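(To be sure the test file actually lands on the Ceph-backed or ZFS-backed disk and not on a tmpfs, I check what is mounted behind /tmp with something like this; device names will differ per guest:)
df -hT /tmp
findmnt /tmp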
Does this command read and write the data across the whole Ceph cluster, or only on the main node?
Also
Is Ceph the main reason for that? I'm seeing 8-14% IO delay, and only on the main node of the cluster.
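(In case it helps, this is roughly how I'd check the pool's replication factor and per-OSD latency; "ceph-vm" is only a placeholder for my pool name:)
ceph osd pool get ceph-vm size
ceph osd perf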