Search results

  1. VNC Problem, Server disconnected (code: 1006)

    Sorry to bother. We had some network problems. All OK. Once again, thanks.
  2. VNC Problem, Server disconnected (code: 1006)

    Firefox, Chrome and Safari, all on the latest versions, on the latest macOS. I cleared the browser cache, no effect.
  3. VNC Problem, Server disconnected (code: 1006)

    Hello, after the last update I have a problem with the VNC console. Each time I move the mouse, touch a key, or maybe at random, it just disconnects. I rebooted all nodes twice, no effect. The problem is that I need to access some VMs that require fsck due to partition corruption (another old...
  4. Why would pvesm status -storage local be so slow?

    Same thing here with one of the compute nodes; on the other nodes it works fine... How did things work out?
  5. Ceph Optimization for HA. 7 nodes, 2 OSDs each

    Hello, Udo. Thanks for the suggestion. I'm back with the results of the first tests. First scenario: stop 2 OSDs on the same node. The Ceph cluster recovers without problems. Start the 2 OSDs that were stopped. The Ceph cluster recovers back to the initial state. There was no IO interruption on the VMs. Second...
  6. Ceph Optimization for HA. 7 nodes, 2 OSDs each

    I removed the ceph01 entry from the crushmap. Now I'll torture the Ceph cluster a little to see how it reacts. Be right back with the results. :)
  7. Ceph Optimization for HA. 7 nodes, 2 OSDs each

    I was thinking of raising it to 1024 once we grow the cluster to 20 OSDs or more. Right now we have only 14 OSDs, and the lower bound for 1024 is 10 OSDs. <quote> Less than 5 OSDs: set pg_num to 128. Between 5 and 10 OSDs: set pg_num to 512. Between 10 and 50 OSDs: set pg_num to 1024. </quote>...
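
    A minimal sketch of what that later increase could look like from the CLI, assuming a single pool named "rbd" (the pool name is not given in the thread); pg_num can only be increased, never decreased, and pgp_num has to be raised to match:

      # raise the placement-group count once the cluster has enough OSDs
      ceph osd pool set rbd pg_num 1024
      ceph osd pool set rbd pgp_num 1024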
  8. Ceph Optimization for HA. 7 nodes, 2 OSDs each

    Hello, thanks for the answers. We have only one pool. pg_num is 512, the lowest number according to pg calc. I chose 512 because pg_num cannot be reduced without deleting the pool if the number of OSDs is later reduced. <quote> it is mandatory to choose the value of pg_num because...
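
    For reference, the pg calc rule of thumb behind that 512, assuming the usual replica count of 3 (the thread does not state it): pg_num ≈ (OSDs × 100) / replicas, rounded up to a power of two, so 14 × 100 / 3 ≈ 467, which rounds up to 512. A hedged way to verify the current values ("rbd" is a placeholder pool name):

      ceph osd pool get rbd size      # replica count
      ceph osd pool get rbd pg_num    # current placement-group count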
  9. Ceph Optimization for HA. 7 nodes, 2 OSDs each

    Hello, everyone. After a lot of reading on the web and trying to tune Ceph, we were not able to make it HA. If one of the nodes is turned off, after some time we get partition corruption on the VMs. The idea is that if a node (2 OSDs) goes down, or if 2 OSDs on different nodes go down, the VMs...
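
    A hedged sketch of the first things worth checking for that scenario ("rbd" is a placeholder pool name): to survive a whole node (2 OSDs) or 2 OSDs on different nodes, the pool generally needs size=3 with min_size at most 2, and a CRUSH rule whose failure domain is the host rather than the OSD, so that I/O can continue while the lost copies are rebuilt:

      ceph osd pool get rbd size        # number of replicas kept
      ceph osd pool get rbd min_size    # replicas required before I/O is blocked
      ceph osd crush rule dump          # confirm the failure domain is "host"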
  10. DRBD in Proxmox 4?

    A journal on SSD improves the performance of the cluster. An enterprise-class SSD is required for better durability, since Ceph writes a lot to those journals. With more than 3 servers you will get decent performance. More servers and more drives (OSDs) = more IOPS. About the networks, you could bond 2 gigabit...
  11. After Ceph update from Hammer to Jewel, Ceph logs are not working

    We just finished the update from Ceph Hammer to Jewel following the tutorial. We ran into an OSD/journal problem that was solved (I notice that the tutorial was also updated, nice), and an SNMP problem (OSD graphs inside Cacti not working) that was also solved by adding snmp near ceph...
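
    A likely culprit after the Hammer-to-Jewel switch to the unprivileged ceph user is log files still owned by root; a minimal check and fix, assuming the default log path (not taken from the post):

      ls -hal /var/log/ceph/                # look for files still owned by root
      chown -R ceph:ceph /var/log/ceph/     # hand the log directory to the ceph user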
  12. OSD won't start after Ceph upgrade from Hammer to Jewel

    I found a workaround for the journal/udev problem. First,
      rm -f /var/lib/ceph/osd/<osd-id>/journal
      ln -s /dev/<ssd-partition-for-your-journal> /var/lib/ceph/osd/<osd-id>/journal
    then,
      sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/<ssd-drive-for-your-journal>
    (where 1 here...
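
    A hedged reconstruction of that workaround for a single OSD, using the ceph-<osd-id> directory layout shown later in this thread; <osd-id> and the /dev/<...> device names are placeholders for your own setup:

      systemctl stop ceph-osd@<osd-id>.service

      # recreate the journal symlink so it points at the SSD partition
      rm -f /var/lib/ceph/osd/ceph-<osd-id>/journal
      ln -s /dev/<ssd-partition-for-your-journal> /var/lib/ceph/osd/ceph-<osd-id>/journal

      # tag the partition with the Ceph journal type GUID so the udev rules
      # shipped with Jewel chown it to ceph:ceph on every boot
      sgdisk --typecode=1:45b0969e-9b03-4f30-b4c6-b4b80ceff106 /dev/<ssd-drive-for-your-journal>
      partprobe /dev/<ssd-drive-for-your-journal>

      systemctl start ceph-osd@<osd-id>.service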
  13. OSD won't start after Ceph upgrade from Hammer to Jewel

    I configured the journal on SSD following the tutorial in this post.
  14. OSD won't start after Ceph upgrade from Hammer to Jewel

    After the reboot, the permissions are back to root:
      root@ceph03:~# ls -hal /dev/sdd1
      brw-rw---- 1 root disk 8, 49 Jan 18 11:40 /dev/sdd1
      root@ceph03:~# ls -hal /dev/sdd2
      brw-rw---- 1 root disk 8, 50 Jan 18 11:40 /dev/sdd2
    and
      root@ceph03:~# ls -hal /dev/disk/by-partlabel/journal-2
      lrwxrwxrwx 1 root...
  15. OSD won't start after Ceph upgrade from Hammer to Jewel

    root@ceph03:~# readlink -f /var/lib/ceph/osd/ceph-2/journal
    /dev/sda2
    where /dev/sda is the HDD with one of the OSDs. The journal is declared in ceph.conf as
      [osd.2]
      osd journal = /dev/disk/by-partlabel/journal-2
      osd journal size = 10240
    and it is on an SSD drive (/dev/sdd1 for one OSD and /dev/sdd2...
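
    A hedged sanity check for the mismatch above: the symlink inside the OSD directory and the partlabel referenced in ceph.conf should resolve to the same SSD partition (paths mirror osd.2 on ceph03 from the post):

      readlink -f /var/lib/ceph/osd/ceph-2/journal    # what the OSD actually uses
      readlink -f /dev/disk/by-partlabel/journal-2    # what ceph.conf points at
      ls -hal /dev/disk/by-partlabel/                 # list all journal partlabels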
  16. OSD won't start after Ceph upgrade from Hammer to Jewel

    I changed the permissions back to ceph, waited a while, started the OSDs, and they are OK. I'm sure that if I reboot the server the permissions will switch back to root.
      root@ceph03:~# systemctl status ceph-osd@3.service
      ● ceph-osd@3.service - Ceph object storage daemon
        Loaded: loaded...
  17. OSD won't start after Ceph upgrade from Hammer to Jewel

    OK, I ran
      chown ceph: -R /dev/disk/by-partlabel/journal-2
      chown ceph: -R /dev/disk/by-partlabel/journal-3
    (the locations of the OSD journals). Everything was OK, the OSDs were up. I rebooted the server and the permissions are back:
      root@ceph03:~# ls -hal /dev/disk/by-partlabel/journal-2
      lrwxrwxrwx 1...
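
    If re-tagging the partition type GUID is not an option, a custom udev rule is a common way to make the ownership survive reboots; a minimal sketch, assuming the journals sit on sdd1 and sdd2 as in the listings above (the rule file name is arbitrary):

      # write a rule that restores ceph ownership of the journal partitions at boot
      printf '%s\n' \
        'KERNEL=="sdd1", OWNER="ceph", GROUP="ceph", MODE="0660"' \
        'KERNEL=="sdd2", OWNER="ceph", GROUP="ceph", MODE="0660"' \
        > /etc/udev/rules.d/90-ceph-journal.rules
      udevadm control --reload-rules
      udevadm trigger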
  18. OSD won't start after Ceph upgrade from Hammer to Jewel

    Sure,
      root@ceph03:~# ls -hal /var/lib/ceph/osd/ceph-2/
      total 60K
      drwxr-xr-x 3 ceph ceph  217 Jan 18 00:54 .
      drwxr-xr-x 4 ceph ceph 4.0K Jan 27  2016 ..
      -rw-r--r-- 1 ceph ceph  892 Nov 26 16:30 activate.monmap
      -rw-r--r-- 1 ceph ceph    3 Nov 26 16:30 active
      -rw-r--r-- 1 ceph ceph   37 Nov 26...