Search results

  1. TwiX

    CEPH : SSD wearout

    It doesn't matter ;) I know what to check. But before buying lots of SSDs to replace them, I want to know the wearout value at which a drive must be replaced.
  2. TwiX

    CEPH : SSD wearout

    Hi, Unfortunately, the right values are provided by another attribute in my case. Cf. the values provided by the Dell iDRAC => 90% remaining. Here is the result of 'smartctl -a /dev/sda': smartctl 6.6 2016-05-31 r4324 [x86_64-linux-4.15.18-9-pve] (local build) Copyright (C) 2002-16...
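    A minimal sketch of checking the per-attribute wear value directly (assuming a SATA SSD at /dev/sda; the attribute name varies by vendor, e.g. Wear_Leveling_Count on Samsung SM863a or Media_Wearout_Indicator on Intel drives):

      # print only the SMART attribute table and filter the wear attribute;
      # the normalized VALUE column is the "% life remaining" that iDRAC reports
      smartctl -A /dev/sda | grep -iE 'wear|media_wearout'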
  3. TwiX

    CEPH : SSD wearout

    The correct wearout values are also confirmed by the Dell iDRAC:
  4. TwiX

    CEPH : SSD wearout

    Here are the S.M.A.R.T. values: 90% remaining for this drive. You can see that the GUI indicates the wrong wearout.
  5. TwiX

    CEPH : SSD wearout

    Hi, My 'oldest' Proxmox Ceph cluster is based on Samsung SM863a drives. After 3 years, wearout for some drives is below "88% remaining". I don't know whether these values are still safe. Below what wearout value is it recommended to replace an SSD drive? Thanks!
  6. TwiX

    Cloud-Init for beginners

    Thanks, I tried without the cloud-init drive and without removing the packages. It seems that everything is OK, and the OS starts quite fast (as usual).
  7. TwiX

    Cloud-Init for beginners

    Hi, I just provisioned some new VMs based on a debian10 template with cloud-init. It works as expected. So, after the first boot, do I need to keep the cloud-init drive attached, or can I then delete it? Also, should I remove the cloud-init package? Thanks!
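    For illustration only (the VM ID and the ide2 slot are assumptions; on Proxmox the cloud-init drive is commonly attached as ide2), detaching the drive after provisioning could look like:

      # remove the cloud-init disk entry from the VM config (hypothetical VM ID 100)
      qm set 100 --delete ide2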
  8. TwiX

    Ceph and a Datacenter failure

    Thanks. Did you solve your issue? Was it related to the Ceph min replicas setting?
  9. TwiX

    Ceph and a Datacenter failure

    Hi, What is the required bandwidth between these 2 datacenters? :)
  10. TwiX

    Corosync 3 - Kronosnet - link: host: x link: 1 is down

    Hi, Some nodes never show this message (for example dc-prox-22, dc-prox-24 and dc-prox-26). For the other nodes, it seems the message never appears at the same time, as you can see:
  11. TwiX

    Corosync 3 - Kronosnet - link: host: x link: 1 is down

    Hello, I just built 6 new PVE v6 nodes (up to date) with the same hardware. Each node has 2 links (2 LACP bonds on 2 Intel X520): bond0: 2x10 Gb (management and production VMs, MTU 1500); bond1: 2x10 Gb (Ceph storage, MTU 9000). bond0 is declared as the primary corosync link (link 0), bond1 as link 1...
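    To see which knet link corosync currently considers up on a given node, a simple check (assuming a standard corosync 3 / kronosnet install, as in this thread) is:

      # per-node link status as seen from the local node (knet transport)
      corosync-cfgtool -s
      # and filter the "link: ... is down" events from the journal
      journalctl -u corosync | grep 'link:'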
  12. TwiX

    Alternative to Samsung 863a for Ceph

    Hi, Take a look at the Intel DC S4610. They perform as well as the SM863a.
  13. TwiX

    Ceph nautilus - Raid 0

    Hi, First things first, I know it is not recommended to run Ceph on top of RAID 0 disks; however, that's what I did on 4 Dell R430 servers with a PERC 730 (with 6 15k SAS drives). I get pretty decent performance with it and have had absolutely no issues for the last 2 years. With full-SSD nodes I don't use...
  14. TwiX

    max cluster nodes with pve6?

    Hello, And regarding the bandwidth for a 36-node cluster, what kind of traffic (Mbps) should we expect?
  15. TwiX

    Corosync 3

    Thanks a lot :)
  16. TwiX

    Corosync 3

    Hi, Corosync 3 doesn't use multicast anymore; it uses unicast. OK, so I guess cluster traffic will grow a lot for clusters with more than 3 nodes? Thanks in advance, Antoine
  17. TwiX

    Ceph ext4 mysql

    Hi, We plan to reactivate barriers. The solution is to add lots of RAM for the MySQL innodb_buffer_pool_size. I guess you're right, 10GB with 20 SSDs is maybe not enough. Thanks to all of you guys :)
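    A minimal sketch of checking and raising the buffer pool (the 8G figure and the config file path are illustrative assumptions, not values from the thread):

      # current buffer pool size in bytes
      mysql -e "SHOW VARIABLES LIKE 'innodb_buffer_pool_size';"
      # raise it in the MySQL config (hypothetical file), then restart mysql:
      #   [mysqld]
      #   innodb_buffer_pool_size = 8G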
  18. TwiX

    Ceph ext4 mysql

    Would enabling MySQL binary logs and keeping ext4 nobarrier be safe enough?
  19. TwiX

    Ceph ext4 mysql

    Hi, Yes, benchmarks are done. The results are very similar to what the Proxmox team got. Benchmarking now with 40 VMs in production might not be meaningful. The MySQL instances each perform about 500 to 1000 queries/sec. If nothing can be done, I have no choice but to set barrier=1 and keep writeback for the VM...
  20. TwiX

    Ceph ext4 mysql

    Hi, I noticed that the only way to get low iowait with Ceph is to set the VM disk cache to writeback. But that's not enough: with MySQL (InnoDB) we still have high iowait under heavy load. We also have to disable barriers in the ext4 mount options. After that, disk performance is OK. On a 5-node cluster...
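    A hedged sketch of the two knobs described above (the VM ID, storage name and mount point are hypothetical; disabling barriers trades crash safety for latency, which is why the thread later re-enables them):

      # Proxmox side: set the VM disk cache mode to writeback (example IDs/storage)
      qm set 100 --scsi0 ceph-pool:vm-100-disk-0,cache=writeback
      # guest side: remount the MySQL data filesystem without write barriers
      mount -o remount,barrier=0 /var/lib/mysql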