Search results

  1. simulate qemu-ceph read pattern with fio

    Hi Alwin, thanks for your answer. How so? The data itself is on the spinners; the SSDs hold the WAL and RocksDB. So I fail to see how this would help with cold data. Sure, when the data is hot, read performance is good. By creating a benchmark that does not flush the cache, e.g. do the same rados...
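
    A cold-read benchmark along those lines could be run from inside a VM with fio; a minimal sketch, assuming a spare test disk at /dev/vdb (hypothetical) and noting that direct=1 only bypasses the guest page cache, not any OSD-side caches:

        fio --name=seqread --filename=/dev/vdb --rw=read --bs=4M \
            --direct=1 --ioengine=libaio --iodepth=1 --runtime=60 --time_based
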
  2. simulate qemu-ceph read pattern with fio

    Hi, I am running a Proxmox 5.3 cluster with Ceph storage (5 x Dell 720xd, each with 12 x 4 TB spinners, 2 x DC S3700 for WAL and DB, 2 x 10 Gb ovs-slb-balance) and experience poor read performance in the VMs (~50 MB/s sequential read), while write performance is fine (1xx-2xx MB/s). I see similar...
  3. [SOLVED] Import VMWare Centos 7 machine

    I had a similar problem: a CentOS 7 VM migrated from VMware booted only with the SCSI adapter set to VMware PVSCSI. It turns out the virtio_scsi module was missing, so a new initramfs was needed: modprobe virtio_scsi, then dracut -f. Afterwards the switch to VirtIO SCSI and booting the VM worked. Learned...
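
    The quoted fix, spelled out (run inside the guest while it still boots from the PVSCSI controller; if the host-only dracut rebuild does not pick the module up on its own, it can be forced in with --add-drivers):

        modprobe virtio_scsi
        dracut -f
        # or: dracut -f --add-drivers virtio_scsi
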
  4. inconsistent monitors between ceph.conf and storage.cfg

    Thanks for the prompt reply. I will keep that in mind for the upgrade next year. :)
  5. inconsistent monitors between ceph.conf and storage.cfg

    Hi, when changing the Ceph monitors via the web GUI in our Proxmox cluster, I noticed that the settings in /etc/pve/storage.cfg did not update, while the ones in /etc/pve/ceph.conf did. This led to a situation where VMs could no longer be started (or migrated online), as the -drive parameter...
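
    For reference, the monitor list that the -drive parameter is built from sits in the RBD entry of /etc/pve/storage.cfg and may need a manual edit when it drifts out of sync with ceph.conf; a sketch with a placeholder storage ID and addresses:

        rbd: ceph-vm
                content images
                pool rbd
                monhost 10.0.0.1 10.0.0.2 10.0.0.3
                username admin
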
  6. [SOLVED] from zfs to ceph, problems creating osds

    Hi, I ran into a problem yesterday when adding OSDs to a Ceph cluster using pveceph createosd. pveceph reported success, but running ceph-deploy disk list showed the OSDs as unprepared, the ceph-osd services were not running, and /var/lib/ceph/osd was empty. ceph-disk zap, a simple wipefs, zeroing...
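
    A cleanup along the lines quoted above, with a placeholder device /dev/sdX (this destroys all data on the disk):

        ceph-disk zap /dev/sdX                        # clear partition table and Ceph labels
        wipefs -a /dev/sdX                            # remove leftover filesystem signatures
        dd if=/dev/zero of=/dev/sdX bs=1M count=200   # zero the first 200 MB for good measure
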