Search results

  1. M

    Ceph hardware

    No, we are only thinking about introducing it to our project. Yes, I know that on the other hand you can tune Ceph's rebuild parameters, but I think that when you have a lot of drives, your storage will always be rebuilding. With a RAID controller and array storage, the rebuild situation is different.
  2. M

    Ceph hardware

    We have 7 nodes with: Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz and a 10 Gbit/s network adapter. At the moment we have 10 Intel 800 GB SSDs and run over 200 customer virtual servers. All SSDs are part of the Ceph cluster, and we have some trouble when the cluster rebuilds. First, if you have...
  3. M

    Ceph hardware

    How many virtual machines do you want to run? What are the configurations of these virtual machines? I had a bad experience with 200 virtual machines on 30 OSD drives. Now we use only SSD drives in the Ceph cluster.
  4. M

    Upgrade gluster client

    I have the following GlusterFS versions on my server: rpm -qa|grep gluster glusterfs-libs-3.7.11-2.el6.x86_64 glusterfs-3.7.11-2.el6.x86_64 glusterfs-api-3.7.11-2.el6.x86_64 glusterfs-fuse-3.7.11-2.el6.x86_64 glusterfs-client-xlators-3.7.11-2.el6.x86_64 glusterfs-server-3.7.11-2.el6.x86_64...
  5. M

    CVE-2016-3710 very important problem

    Has Proxmox released an update for this problem? https://security-tracker.debian.org/tracker/CVE-2016-3710
  6. M

    [SOLVED] ZFS Performance Tuning

    Yes, that's why we have a big problem with performance degradation: the block sizes in the virtual machines and the zvols differ.
  7. M

    [SOLVED] ZFS Performance Tuning

    Could you run this command? fio --name=fio_test_file --direct=0 --rw=randwrite --bs=4k --size=20G --numjobs=16 --time_based --runtime=180 --group_reporting Matching the block size between ZFS and the internal filesystem of the virtual machine is very important; if you have mismatched block sizes, you will get a lot of over commit...
  8. M

    [SOLVED] ZFS Performance Tuning

    Think about the block size on the ZFS pool versus the VM's internal filesystem. For example: FreeBSD's default filesystem block is 32 KB, ext4 uses 4 KB, but a ZFS pool defaults to a 128 KB recordsize.
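
    The mismatch above can be checked and addressed with standard ZFS commands. A minimal sketch; the pool and dataset names (`rpool`, `vm-100-disk-1`) are placeholders for illustration, and note that a zvol's volblocksize can only be set at creation time, not changed afterwards:

    ```shell
    # Inspect the block size of an existing zvol backing a VM disk
    zfs get volblocksize rpool/data/vm-100-disk-1

    # Create a new zvol with a 4 KB block size to match an ext4 guest
    zfs create -V 32G -o volblocksize=4k rpool/data/vm-101-disk-1

    # For file-based storage, recordsize can be tuned on an existing dataset
    zfs set recordsize=32k rpool/data
    ```
    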
  9. M

    Ceph cluster with ssd tiering.

    We have 30 drives and performance is very degraded; that's why we chose to replace all disks with SSDs. Ceph is a very stable system; I don't know what you would have to do to kill a Ceph cluster.
  10. M

    kernel:BUG: soft lockup - CPU#6 stuck for 67s!

    Sometimes we see this problem and have connected it to cheap network adapters that have only two MSI-X vectors.
  11. M

    Different drives in ceph cluster

    Hi all. I have a small question about Proxmox. What do you think about Ceph with mixed drive types? I want to mix 1 TB SSHDs with 800 GB Intel SSDs. Do you think this will work fine? At the moment I have 5 nodes with 30 drives of 1 TB each, but I have a problem with drive performance, which is why...
  12. M

    install proxmox on zfs with ssd

    An interesting question about ZFS and TRIM: does Proxmox support TRIM on ZFS on Linux for SSD drives?
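
    For reference, ZFS on Linux gained TRIM support in release 0.8 (later than the era of this thread), via the `autotrim` pool property and a manual `zpool trim` command. A sketch, assuming a pool named `rpool`:

    ```shell
    # Enable automatic, continuous TRIM on the pool
    zpool set autotrim=on rpool

    # Or run a one-off TRIM pass and watch its progress
    zpool trim rpool
    zpool status -t rpool
    ```
    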
  13. M

    Ceph, change global options, How?

    Could you help me change global options in ceph.conf? I would like to add: osd recovery op priority = 2, osd max backfills = 1, osd recovery max active = 1, osd recovery threads = 1. And how can I apply the new settings without restarting Ceph and disconnecting clients?
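
    On Ceph releases of that era, runtime changes like these are typically applied with `injectargs` via `ceph tell`, which updates running daemons without a restart; the values still need to go into ceph.conf to survive a reboot. A sketch using the settings from the post:

    ```shell
    # Push the recovery/backfill settings to all running OSDs at once.
    # This takes effect immediately; no daemon restart, clients stay connected.
    ceph tell osd.* injectargs '--osd-recovery-op-priority 2 --osd-max-backfills 1 --osd-recovery-max-active 1'

    # Verify the change on one OSD (run on the host where osd.0 lives)
    ceph daemon osd.0 config get osd_max_backfills
    ```

    Note that injected values are not persistent; keep the matching lines in the `[osd]` section of ceph.conf so they are re-applied when a daemon restarts.
    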
  14. M

    Reinstall OS on primary node in cluster

    I have a small problem with a cluster on Proxmox 4. First, the story: when we created the cluster (https://pve.proxmox.com/wiki/Proxmox_VE_4.x_Cluster) we ran hp2# pvecm add IP-ADDRESS-CLUSTER, where IP-ADDRESS-CLUSTER is the IP address of the first node in the cluster. Can I reinstall the first node of the cluster and add...
  15. M

    Upgrade proxmox cluster 3.4 to 4.1 + Ceph

    You'll be lucky if you can do it :) I had a big problem with one node when I upgraded from 3.4 to 4.1.
  16. M

    Can we use lvm cache in Proxmox 4.x?

    But ZFS can't be used as shared cluster storage between nodes.
  17. M

    Node crushed, kernel panic

    Thanks for the tip about ZFS swap, that's interesting; I will try disabling swap on Monday.
  18. M

    Node crushed, kernel panic

    The backup storage is GlusterFS. I don't have any messages in syslog, only a few messages in kernel.log. I can reproduce it every time I run a backup task from the console or via an API query.
  19. M

    Node crushed, kernel panic

    Can nobody help me with this problem?
  20. M

    Node crushed, kernel panic

    Images from the server console at the moment vzdump crashed.
