Search results

  1. K

    [SOLVED] Ceph timeout and lost disks

    Hi,
    iperf
    [ ID] Interval       Transfer     Bandwidth
    [  6]  0.0-10.0 sec  2.73 GBytes  2.35 Gbits/sec
    [  4]  0.0-10.0 sec  2.73 GBytes  2.35 Gbits/sec
    [  5]  0.0-10.0 sec  2.73 GBytes  2.35 Gbits/sec
    [  3]  0.0-10.0 sec  2.73 GBytes  2.35 Gbits/sec
    [SUM]  0.0-10.0 sec  10.9 GBytes  9.39...
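    A result like the above can be reproduced with iperf in parallel-stream mode; the snippet does not show the exact invocation, so the server address and stream count below are placeholders.

      # on the receiving node (hypothetical address 10.102.166.131)
      iperf -s
      # on the sending node: 4 parallel TCP streams for 10 seconds
      iperf -c 10.102.166.131 -P 4 -t 10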
  2. K

    [SOLVED] Ceph timeout and lost disks

    Hi all, Thank you for your helpful answers. The system has 12x 1TB 7.2k SATA HDDs (4 nodes x 3 disks for Ceph). Yes, you are right, the system is better than before, but the disks are still very, very slow. The system uses BlueStore OSDs, therefore I cannot use an SSD for the journal. I moved a 10G...
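    As a side note, BlueStore has no filestore journal, but it can still place its RocksDB/WAL on a faster device. A minimal sketch, assuming ceph-volume is available; /dev/sdb (data HDD) and /dev/sdf1 (SSD partition) are placeholder devices, not taken from the thread.

      # create a BlueStore OSD with data on an HDD and the DB/WAL on an SSD partition
      ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/sdf1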
  3. K

    [SOLVED] Ceph timeout and lost disks

    Thanks for your hints. I reconfigured the whole system; all RAID controllers now use HBA mode, but it does not seem to be any better.
    2018-07-06 07:00:00.000157 mon.vCNT-host-1 mon.0 10.102.166.130:6789/0 62593 : cluster [WRN] overall HEALTH_WARN 1 nearfull osd(s); 1 pool(s) nearfull
    2018-07-06...
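    For a nearfull warning like this, per-OSD utilization can be checked with standard Ceph commands; a short sketch, where the OSD id in the last line is a placeholder.

      ceph health detail        # shows which OSDs / pools trigger the warning
      ceph osd df tree          # per-OSD usage and weight
      ceph osd reweight 3 0.9   # hypothetical OSD id: temporarily shift data off a nearly full OSD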
  4. K

    [SOLVED] Ceph timeout and lost disks

    It is an HP Apollo server. The hosts have a Smart Array P440 RAID controller, but all disks are in RAID0 for Ceph and RAID1 for the OS. The Ceph health is usually HEALTH_OK, but there are a lot of slow request events in the Ceph log file. For example: cluster [WRN] Health check failed: 6 slow requests are...
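    Slow requests like these can usually be traced back to specific OSDs; a sketch with standard commands, where the OSD id is a placeholder.

      ceph health detail                          # lists the OSDs with blocked requests
      # on the node hosting the implicated OSD:
      ceph daemon osd.5 dump_historic_ops | less  # hypothetical osd.5; shows the slowest recent operations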
  5. K

    [SOLVED] Ceph timeout and lost disks

    Yes, the cluster is working well.
    root@vCNT-host-4:~# pvecm status
    Quorum information
    ------------------
    Date:             Thu Jun 28 17:25:36 2018
    Quorum provider:  corosync_votequorum
    Nodes:            4
    Node ID:          0x00000002
    Ring ID:          1/436
    Quorate:          Yes
    Votequorum...
  6. K

    [SOLVED] Ceph timeout and lost disks

    All 4 nodes are the same.
    CPU(s): 48 x Intel(R) Xeon(R) CPU E5-2670 v3 @ 2.30GHz (2 Sockets)
    Mem: 256GB/node
    root@vCNT-host-4:~# ceph -s
      cluster:
        id:     fb926dd9-17b9-42fb-88d6-27f4944fd554
        health: HEALTH_OK
      services:
        mon: 4 daemons, quorum...
  7. K

    [SOLVED] Ceph timeout and lost disks

    Thanks for your reply. The system has a 10G optical network, so I don't think there is a network issue. The average speed between the nodes is 418.4MB/s (including disk IO). But I don't know where the problem may be. Could you help me with how I can use this to get more details about these errors...
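    One way to get more detail, assuming the default Proxmox/Ceph log locations, is to pull the slow-request entries out of the logs; the paths and patterns below are assumptions rather than anything shown in the thread.

      # recent slow-request entries in the cluster log (default path on PVE)
      grep 'slow request' /var/log/ceph/ceph.log | tail -n 20
      # the per-OSD logs on each node may show which individual operations were blocked
      grep 'slow request' /var/log/ceph/ceph-osd.*.log | tail -n 20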
  8. K

    [SOLVED] Ceph timeout and lost disks

    Hi all, I get "timeout" from Ceph many times, and some VMs lost their disks and cannot boot from them. I have not analyzed the logs yet, but I will update this post. Added: The Ceph log file contains a lot of similar entries: check update: 59 slow requests are blocked > 32 sec
  9. K

    KSM

    Thanks. So it means that when the host memory usage goes above 70-80%, KSM will start freeing memory, as long as I use the default settings. That is very useful.
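    On Proxmox VE this threshold comes from ksmtuned and is configurable; a sketch, assuming the stock /etc/ksmtuned.conf, where the value shown is the usual upstream default and not verified against this system.

      # /etc/ksmtuned.conf
      # ksmd only becomes active once free memory drops below this percentage,
      # i.e. with the default of 20 KSM kicks in around 80% host memory usage
      KSM_THRES_COEF=20

      systemctl restart ksmtuned   # apply after changing the value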
  10. K

    KSM

    Hi, I migrated VMs from 4.4 to 5.2. These VMs used KSM, but now the KSM usage is zero or minimal. Which memory setting determines when the system allows KSM to be used? Thanks
  11. K

    Convert to template

    Hi, I use the latest Proxmox version 5.5-2. My system is based on templates. After I reinstalled the whole system and created a cluster with a Ceph shared filesystem, I found this issue. I restored the VMs from backups. First issue: the original system was a template. I created a backup from this...
  12. K

    API interface status of rootfs

    Yes, I can reproduce it both ways. Thanks for your help.
  13. K

    API interface status of rootfs

    I use the API interface to manage the VMs and found an interesting thing in the response.
    Request: GET /api2/json/nodes/vhost-2/status
    Part of the response:
    {"data": {"rootfs": {"avail": 8452661248, "free": -79251603456, "total": 101327663104...
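    The same endpoint can be queried from the shell to confirm the negative "free" value; a sketch using pvesh against the node name from the post (output formatting may differ between versions).

      # query node status via the CLI wrapper around the same API path
      pvesh get /nodes/vhost-2/status
      # rootfs "free" would normally be expected to be roughly total minus used; here it comes back negative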
  14. K

    4 nodes ceph configuration

    Hi, thank you for your detailed answers, but I have to use what I have; I cannot expand the system. You are right, though, I will have double redundancy. What do you think about this solution: I am going to create 2x RAID0 with 2 disks each, with multiple volumes? In this case I can use ZFS for the OS to be mirrored...
  15. K

    4 nodes ceph configuration

    Hi, I would like to build a 4-node cluster with a shared filesystem using Ceph. All nodes are the same: 48-core Intel CPU, 2x 100GB SFP network, 4x 1TB regular disks, 256GB memory. I don't need high IO speed, but a stable state is important. I am going to create a RAID5 storage from the 4 disks...
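    For reference, the usual Ceph layout avoids RAID under the OSDs entirely: one OSD per disk and a replicated pool, so redundancy comes from Ceph itself. A minimal sketch; the pool name, PG count and replica counts below are assumptions, not a sizing recommendation.

      # one OSD per raw disk on each node, then a 3-replica pool
      ceph osd pool create vm-pool 128 128 replicated
      ceph osd pool set vm-pool size 3
      ceph osd pool set vm-pool min_size 2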
  16. K

    CT and ubuntu from 17.10

    Thanks a lot, it is working well. I have only one hint: the systemd-networkd service is disabled by default and needs to be enabled:
    sudo systemctl enable systemd-networkd
    sudo systemctl enable systemd-resolved
    Br, Istvan
  17. K

    CT and ubuntu from 17.10

    Hi, the network configuration in Ubuntu 17.10 has changed completely. The new tool (netplan) replaces the static /etc/network/interfaces file that had previously been used to configure Ubuntu network interfaces. Now you must use /etc/netplan/*.yaml to configure the interfaces. I downloaded this...
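    For reference, a minimal netplan file of the kind described, written here as a shell heredoc; the file name, interface name and addresses are placeholders.

      cat > /etc/netplan/01-netcfg.yaml <<'EOF'
      network:
        version: 2
        renderer: networkd
        ethernets:
          eth0:
            dhcp4: no
            addresses: [192.168.1.50/24]
            gateway4: 192.168.1.1
            nameservers:
              addresses: [192.168.1.1]
      EOF
      netplan apply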
  18. K

    Start a stopped qemu after rollback snapshot

    Thank you, it is very useful information.
  19. K

    Start a stopped qemu after rollback snapshot

    I don't know whether this is normal behavior or not. I stopped a VM, and later I rolled back to a previous snapshot. The VM state then became "running". Could you give me more detail on why this happened?
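    If I understand the behavior correctly, this usually happens when the snapshot was taken with RAM state included, so the rollback restores a running machine; that can be checked in the snapshot section of the VM config. A sketch, with the VM id and snapshot name as placeholders.

      qm listsnapshot 100
      # in /etc/pve/qemu-server/100.conf the snapshot section carries a vmstate
      # entry when RAM was included, e.g.:
      # [before-upgrade]
      # vmstate: local-lvm:vm-100-state-before-upgrade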
  20. K

    Merge linked clone disks

    I use only local storage. It is a mounted LVM (old-style qemu storage).
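    If the linked clone lives as a qcow2 file on that mounted storage, the backing chain can be flattened offline with qemu-img; the paths below are hypothetical and the VM should be stopped first.

      # flatten a linked clone into a standalone image (hypothetical paths)
      qemu-img convert -O qcow2 /mnt/old-lvm/images/101/vm-101-disk-1.qcow2 /mnt/old-lvm/images/101/vm-101-disk-1-flat.qcow2
      # then point the VM config at the new file, or rename it after verifying the VM boots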