I am running a PVE 5 cluster with Ceph Bluestore OSDs. They are HDD-only OSDs, connected over 2x1GBit bonds. Don't get me wrong here: it isn't in production yet, and I don't expect any fancy performance out of this setup.
I am actually quite impressed with its performance under Linux KVM guests. When I monitor the OSDs with atop on the respective hosts, they get quite busy when I run disk-intensive workloads on the Linux guests, and the Ceph panel in the web frontend shows quite nice IOPS and good read and write performance.
I installed a Windows Server 2016 VM and tried different settings. VirtIO SCSI with cache=none gave me about 20-30 MB/s write performance, which is much less than the Linux clients achieve, and looking at atop on the hosts, the OSDs are only about 30% busy. When I use cache=writeback and copy something onto the Windows machine, it runs at the full speed of the network link for a few gigabytes, but then it stalls at 0 MB/s for roughly as long as it had been running.
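For reference, this is roughly how I switched between the two cache modes I tested; the VM ID 100 and the storage/disk names are just placeholders for my setup, so adjust them to yours:

```
# Windows VM using the VirtIO SCSI controller
qm set 100 --scsihw virtio-scsi-pci

# Variant 1: cache=none  -> ~20-30 MB/s writes, OSDs only ~30% busy
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=none

# Variant 2: cache=writeback -> full link speed at first, then long stalls at 0 MB/s
qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback
```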
Are there any recommended settings for Windows machines, or is there a bug in the current Luminous version? Some advice would be great.