Search results

  1. ceph performance 4node all NVMe 56GBit Ethernet

    I removed the username line, but the GUI still shows user: admin ...
  2. ceph performance 4node all NVMe 56GBit Ethernet

    I restarted the whole cluster .... /usr/bin/kvm -id 102 -chardev 'socket,id=qmp,path=/var/run/qemu-server/102.qmp,server,nowait' -mon 'chardev=qmp,mode=control' -pidfile /var/run/qemu-server/102.pid -daemonize -smbios 'type=1,uuid=45631093-fe88-4493-b759-297dd584725a' -name win2003 -smp...
  3. ceph performance 4node all NVMe 56GBit Ethernet

    Hmmm... set auth to none .... but now none of the VMs start anymore! How do I change the startup commands? (see the sketch for entry 3 after these results)
  4. ceph performance 4node all NVMe 56GBit Ethernet

    Tom, which steps must be done? The plan is to go into production in a very short time frame. Gerhard
  5. ceph performance 4node all NVMe 56GBit Ethernet

    Hi folks, what performance should I expect from this cluster? Are my settings OK? (see the benchmark sketch after these results) 4 nodes: system: Supermicro 2028U-TN24R4T+, 2-port Mellanox ConnectX-3 Pro 56Gbit, 4-port Intel 10GigE; memory: 768 GBytes; CPU: dual Intel(R) Xeon(R) CPU E5-2690 v4 @ 2.60GHz; Ceph: 28 OSDs, 24 Intel NVMe 2000GB...
  6. unexpected restart of node ?

    Hi Fabian, I updated the firmware of the Mellanox ConnectX-3 Pro to the latest release: mstflint -d 81:00.0 q Image type: FS2 FW Version: 2.40.7000 FW Release Date: 22.3.2017 Product Version: 02.40.70.00 Rom Info: type=PXE version=3.4.746 devid=4103 proto=0xff Device ID: 4103...
  7. unexpected restart of node ?

    Fabian, I just activated the persistent journal: mkdir /var/log/journal; systemctl restart systemd-journald.service (see the journal sketch after these results). Cabling is a Mellanox QSFP 3 m cable to a Mellanox SX1012 switch .. 12-port QSFP ... I see no errors on the Mellanox switch nor on the Mellanox cards in the nodes. Regards, Gerhard. On all 4...
  8. unexpected restart of node ?

    Fabian, I manually migrated vm100 to node4 at 10:25. Sadly, after 2 hours node1 restarted for an unknown reason... How do I solve this? I definitely have no clue :( (see the log sketch after these results) root@pve03:~# journalctl --since "2017-06-12" --until "2017-06-13" -u 'pve-ha-*' -- Logs begin at Tue 2017-06-06 21:07:08 CEST...
  9. unexpected restart of node ?

    Hi Fabian, no more unexpected restart attempts after I removed vm100 from HA! I have now put it back into HA without the group "all" ... let's see if this behavior returns :( Regards, Gerhard
  10. unexpected restart of node ?

    Fabian, I took the VM out of HA after another restart .... at 17:06:00 :( I also checked on all 4 nodes with omping whether a multicast problem might be causing this strange fencing and node restart (see the omping sketch after these results). I have no clue why this happens ... omping -c 10000 -i 0.001 -F -q 192.168.221.141 192.168.221.142...
  11. unexpected restart of node ?

    Fabian, thanks for this advice. I have no clue why this fencing occurs ... the cluster link is 40GbitE Mellanox .... and as far as I can see the Mellanox SX1012 switch has no issues ... just another restart occurred on node1; not all nodes have corresponding entries ... I'm confused ... root@pve04:~#...
  12. unexpected restart of node ?

    Fabian, yes, I enabled HA but defined no watchdogs .... node 2 restarted at 01:57:xx for no apparent reason; I see nothing in syslog or messages ... the only strange thing after the reboot is: Jun 9 04:17:34 pve02 kernel: [ 8398.132770] perf interrupt took too long (2502 > 2500), lowering...
  13. unexpected restart of node ?

    Hi folks, I have a strange problem. Fresh install of 4.4 on 4 nodes, all NVMe. Yesterday morning and again today a VM was stopped automatically and migrated to the next node. I can't find any errors in the logs; how do I debug this strange behavior? Worst of all, the VM has a MySQL instance as a slave; after the second...
  14. ceph journaldisk multi partitions on NVMe nightmare

    Fabian, thank you for the suggestions. We thought a fast and more durable journal disk would be better. 24 OSD disks on 4 nodes: 2000GB Intel SSD DC P3520, 2.5", PCIe 3.0 x4, bulk NVMe 2.5" in PCIe 3.0, 20nm, MLC, Sequential Read: 1700 MB/s, Sequential Write: 1350 MB/s, Random Read (100% Span)...
  15. ceph journaldisk multi partitions on NVMe nightmare

    Fabian, thanks, but how will Ceph decide this? I probably have an orphaned OSD now; destroying the OSD does not work because there are no entries in /var/lib/ceph/osd/* ... how do I recover from this? (see the OSD sketch after these results) Regards, Gerhard. Will this work? pveceph createosd /dev/nvme1n1 -journal_dev /dev/nvme0n1 pveceph...
  16. ceph journaldisk multi partitions on NVMe nightmare

    Hi folks, I just got my new cluster machines, all NVMe. I have one fast NVMe journal disk with 1.5 TB. You cannot partition it in the GUI, so I did this on the CLI (see the parted sketch after these results)... parted /dev/nvme0n1 mkpart journal01 1 250G mkpart journal02 250G 500G mkpart journal03 500G 750G mkpart journal04 750G 1000G mkpart...
  17. Shall I wait for v5 or setup my new cluster on 4.4 ?

    My first thought was Ceph on the most recent v5 versus v4.4... If a smooth migration from v4.4 to the upcoming v5 is possible in a production environment... I'll set the cluster up with v4.4. If not... Questions over questions....
  18. Shall I wait for v5 or setup my new cluster on 4.4 ?

    Hi! I am getting 4 new all-NVMe servers in the next few days ... should I set up v5 beta2 or v4.4? Regards, Gerhard
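
Sketch for entry 3 (changing the startup commands): on Proxmox VE the kvm command line is generated from the VM's config file, so the way to change it is to change the config (or pass raw arguments), not to edit the command itself. This is only a minimal sketch; VM ID 102 is taken from entry 2 and the '-no-hpet' argument is just a placeholder.

    qm config 102                        # show the VM configuration
    qm showcmd 102                       # print the kvm command line PVE would run
    vi /etc/pve/qemu-server/102.conf     # the startup command is generated from this file
    qm set 102 -args '-no-hpet'          # example of appending raw kvm arguments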
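
Benchmark sketch for entry 5: a quick way to get baseline numbers for an all-NVMe Ceph cluster is rados bench against a test pool. The pool name "bench", the 60-second runtime and the 16 threads are placeholders, not values from the thread.

    rados bench -p bench 60 write -b 4M -t 16 --no-cleanup   # 4M sequential writes
    rados bench -p bench 60 seq -t 16                        # sequential reads of the data written above
    rados -p bench cleanup                                   # remove the benchmark objects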
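
Journal sketch for entry 7: the same steps for a persistent journal, plus the tmpfiles call that sets the expected ownership/ACLs on the new directory and a check that older boots are now kept. Assumes a systemd-based node such as PVE 4.x.

    mkdir -p /var/log/journal                             # journald switches to persistent storage
    systemd-tmpfiles --create --prefix /var/log/journal   # fix owner/group/ACLs on the directory
    systemctl restart systemd-journald.service
    journalctl --list-boots                               # earlier boots should now be listed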
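
Log sketch for entry 8: journalctl can be pointed at the HA units directly, and watchdog-mux is the service that actually pulls the trigger on a fence. The dates are the ones from the post; run this on every node, not only the one that rebooted.

    journalctl --since "2017-06-12" --until "2017-06-13" -u pve-ha-crm -u pve-ha-lrm
    journalctl --since "2017-06-12" --until "2017-06-13" -u watchdog-mux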
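
Omping sketch for entry 10: omping only reports useful results when it is started on all nodes at roughly the same time. The node IPs are placeholders (the thread uses 192.168.221.141, .142, ...); the second, slower run is there to catch IGMP snooping/querier problems that only appear after a few minutes.

    # short burst test, started simultaneously on every node
    omping -c 10000 -i 0.001 -F -q <node1-ip> <node2-ip> <node3-ip> <node4-ip>
    # ~10 minute test to catch multicast dropping out over time
    omping -c 600 -i 1 -q <node1-ip> <node2-ip> <node3-ip> <node4-ip>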
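
OSD sketch for entry 15: one way to clean up the likely orphaned OSD before recreating it, assuming its id can be read from "ceph osd tree"; the id 12 below is a placeholder. The final createosd line is the one quoted in the thread.

    ceph osd tree                  # find the id of the orphaned OSD
    ceph osd out 12
    ceph osd crush remove osd.12
    ceph auth del osd.12
    ceph osd rm 12
    # recreate it with the journal on the shared journal NVMe, as in the thread:
    pveceph createosd /dev/nvme1n1 -journal_dev /dev/nvme0n1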
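
Parted sketch for entry 16: the same partitioning as a single non-interactive invocation with an explicit GPT label; only the four partitions visible in the post are shown, the remaining ones follow the same pattern.

    parted --script /dev/nvme0n1 \
        mklabel gpt \
        mkpart journal01 1MiB 250GB \
        mkpart journal02 250GB 500GB \
        mkpart journal03 500GB 750GB \
        mkpart journal04 750GB 1000GB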