Search results

  1.

    OpenVSwitch, pfSense and CARP

    Sadly no one could help. :-( Switched back to classical Linux VLANs and bridge-utils. Now everything is working as expected. Thought I'd go with something more comfortable and fresh, but just too many issues arose.
  2.

    OpenVSwitch, pfSense and CARP

    Hello everyone, for HA of some services I'm trying to set up two pfSense firewalls on two different hosts, which are connected via a vRack at OVH. The network is configured on top of Open vSwitch with several VLANs, which are working great between the cluster nodes, except for the CARP of the...
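
    The snippet above describes VLANs on Open vSwitch under Proxmox; a minimal `/etc/network/interfaces` sketch of such a setup (interface names, VLAN tag, and address are illustrative placeholders, not taken from the thread):

    ```
    # Illustrative OVS setup via ifupdown (openvswitch-switch);
    # names and addresses are assumptions, not from the thread.
    auto vmbr0
    iface vmbr0 inet manual
        ovs_type OVSBridge
        ovs_ports eno1 vlan10

    auto eno1
    iface eno1 inet manual
        ovs_type OVSPort
        ovs_bridge vmbr0

    auto vlan10
    iface vlan10 inet static
        ovs_type OVSIntPort
        ovs_bridge vmbr0
        ovs_options tag=10
        address 10.0.10.2/24
    ```

    Note that CARP relies on multicast and a shared virtual MAC, which is one place such a setup can differ from a classic Linux bridge.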
  3.

    Ceph - Bad performance in qemu-guests

    Re: Ceph - Bad performance with small IO Thanks phildefer for the link and for clarifying the rbd-cache question. Changing the cache setting to writeback and tuning the debug and sharding settings really helped a lot: Thanks to everyone! :) @spirit or maybe someone else can answer: To further enhance...
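
    The "cache setting to writeback" mentioned here usually refers to two places: the librbd cache in `ceph.conf` and the disk cache mode of the guest. A sketch under that assumption (values are illustrative, not taken from the thread):

    ```
    # ceph.conf, client side -- illustrative values
    [client]
    rbd cache = true
    rbd cache writethrough until flush = true
    rbd cache size = 67108864    # 64 MiB
    ```

    In a Proxmox VM config, the corresponding piece would be `cache=writeback` on the disk line. The writethrough-until-flush guard keeps the cache safe for guests that do not send flushes.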
  4.

    Ceph - Bad performance in qemu-guests

    Re: Ceph - Bad performance with small IO Yeah well, that sure increases performance. The hosts are connected to UPSs, but how safe is it? Does this use the rbd cache or the host's RAM? Probably the same, but just to clarify.
  5.

    Hardware recommend

    Hello onedread, why don't you go with just a simple NAS server running openmediavault and an AMD AM1? For example: http://geizhals.de/at/asus-am1i-a-90mb0ia0-m0eay0-a1080718.html?hloc=de http://geizhals.de/at/?cat=cpuamdam1&xf=5_AES#xf_top All the features except 6 are possible with omv and the...
  6.

    Ceph - Bad performance in qemu-guests

    Re: Ceph - Bad performance with small IO Really some good insights in this thread: http://lists.opennebula.org/pipermail/ceph-users-ceph.com/2014-August/042498.html But disabling cephx left me with an unusable cluster. Had to revert the settings to get back to a working state. Do I need to...
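
    Disabling cephx involves three auth settings, and every monitor, OSD, and client has to carry the same values before daemons are restarted, which is one plausible reason the cluster became unusable. A hedged `ceph.conf` sketch:

    ```
    # ceph.conf [global] -- must be identical on ALL nodes and clients
    # before restarting any daemon; illustrative, not from the thread.
    [global]
    auth cluster required = none
    auth service required = none
    auth client required = none
    ```

    Reverting means setting all three back to `cephx`, again cluster-wide, before restarting.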
  7.

    Ceph - Bad performance in qemu-guests

    Re: Ceph - Bad performance with small IO Here's the config of the example Linux guest:
      balloon: 1024
      boot: dcn
      bootdisk: virtio0
      cores: 4
      ide2: none,media=cdrom
      memory: 4096
      name: dios
      net0: virtio=82:65:63:AF:2E:CF,bridge=vmbr0
      onboot: 1
      ostype: l26
      sockets: 1
      tablet: 0
      vga: qxl
      virtio0...
  8.

    Ceph - Bad performance in qemu-guests

    Re: Ceph - Bad performance with small IO I reverted the settings to default and there is no big difference. The min/max sync intervals were an experiment. I read this on the ceph-users mailing list, but obviously it didn't help much. It was for a much bigger cluster. The strange thing really is...
  9.

    Ceph - Bad performance in qemu-guests

    Re: Ceph - Bad performance with small IO Just finished defragmenting the OSDs and remounting them with the new mount options. Now the fragmentation factor is at maximum 0.34% over all OSDs. Also injected the new setting filestore_xfs_extsize. But still no improvement :-( Just to make sure...
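
    The injected `filestore_xfs_extsize` setting can also be made persistent in `ceph.conf`; a sketch, with commonly suggested OSD mount options that are an assumption on my part, not confirmed by the thread:

    ```
    # ceph.conf -- persistent form of the injected setting
    [osd]
    filestore xfs extsize = true

    # /etc/fstab entry for an OSD data partition -- device and
    # mountpoint are illustrative placeholders
    /dev/sdb1  /var/lib/ceph/osd/ceph-0  xfs  noatime,inode64  0 0
    ```

    Injecting via `ceph tell` only changes the running daemons; without the config entry the value is lost on restart.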
  10.

    Ceph - Bad performance in qemu-guests

    Re: Ceph - Bad performance with small IO The client is also Giant and there was some improvement, but the performance of small IO is still really bad. For example, an ATTO benchmark in a Windows guest: It seems to hit a limitation? I've been reading Sébastien Han's blog extensively - lots of...
  11.

    Ceph - Bad performance in qemu-guests

    Re: Ceph - Bad performance with small IO Hi Udo, thanks for your fast reply! I use Ceph Giant - version 0.87.1. The journals are symlinked to LVM volumes on Crucial M500 SSDs in RAID 1. Is this then still file-based? The read-ahead cache is already set to a higher value. If you mean: blockdev...
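
    The snippet breaks off at "blockdev"; the read-ahead tuning it alludes to is typically a one-shot `blockdev --setra`, optionally made persistent with a udev rule. A sketch (device pattern and values are illustrative assumptions):

    ```
    # One-shot: read-ahead in 512-byte sectors (8192 sectors = 4 MiB)
    blockdev --setra 8192 /dev/sdb

    # Persistent: /etc/udev/rules.d/80-readahead.rules, value in KiB
    SUBSYSTEM=="block", KERNEL=="sd[a-z]", ACTION=="add|change", \
        ATTR{bdi/read_ahead_kb}="4096"
    ```

    `blockdev --getra /dev/sdb` shows the current value, which is useful before and after the change.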
  12.

    Ceph - Bad performance in qemu-guests

    Hello everyone, first of all I want to say thank you to each and everyone in this community! I've been a long-time reader (and user of PVE) and have gotten so much valuable information from this forum! Right now the deployment of the Ceph cluster is giving me some trouble. We were using DRBD but...
