Search results

  1. [SOLVED] Redis in LXC Container - best practice/container image available?

    I did exactly that as a test yesterday ... looks good in general. I am still experimenting a bit
  2. [SOLVED] Redis in LXC Container - best practice/container image available?

    Hi All, I plan to run a Redis Master/Slave configuration and want to use Sentinel to get a kind of HA setup. So I thought about how best to run Redis. And indeed I could spin up multiple VMs for that ... or ... start using LXC containers. Is anyone here also running Redis in an LXC container...
  3. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    I did not really understand the network topics that were discussed above. I have a 7-node cluster on Intel NUCs ... Would this config also be an idea for me? (Sorry for the dummy questions)
  4. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    I also suspended the upgrade because of this issue ... I was just about to give it a second try today (because I would have some spare time over the next days to do the full upgrade for my 7 nodes) ... but then saw https://github.com/kronosnet/kronosnet/issues/261 (via...
  5. PVE with glusterfs as shared storage and "Filesystem gets read only"/On Boot

    Hi All, I use PVE 5.x (upgrade to 6.x pending until the corosync problems are solved) with a 7-node HA cluster. On the nodes, beside PVE, glusterfs (3-way replicated) is also installed as a shared FS. In general it works well. I get problematic situations if the glusterfs gets read-only for whatever...
  6. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    @Dominic Maybe also add that to the 5->6 upgrade info page as a known issue, and warn that clusters maybe should not be upgraded now?
  7. Packages kept back on upgrade to 6.0 with glusterfs official repo

    Yeeayy, it seems that they removed the packages for Debian Stretch from the Community Repo ... why ever ... I asked on the glusterfs mailing list ... but that is kind of off-topic here
  8. Packages kept back on upgrade to 6.0 with glusterfs official repo

    I started with v5, so I cannot really compare :-)
  9. Packages kept back on upgrade to 6.0 with glusterfs official repo

    Yes, 5.5-5.7 were not good ones ... 5.8 works well for me
  10. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    I did, and yes, I had some such lines in my logs too, but not many ... but yes, exactly on those days ...
    root@pm6:/var/log# zgrep MTU syslog.*.gz
    syslog.5.gz:Jul 31 07:26:26 pm6 corosync[1617]: [KNET ] pmtud: PMTUD link change for host: 6 link: 0 from 470 to 1366
    syslog.5.gz:Jul 31 07:26:27 pm6...
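
    The flattened log check in the snippet above can be sketched as a self-contained filter. The sample line is taken verbatim from the post; the scratch path `/tmp/knet-sample.log` is only an assumption for illustration, and on a real node you would run `zgrep` against the rotated syslogs directly:

    ```shell
    # Write one kNet PMTUD syslog line (copied from the post above) to a
    # scratch file; the /tmp path is hypothetical, for illustration only.
    printf '%s\n' 'Jul 31 07:26:26 pm6 corosync[1617]: [KNET ] pmtud: PMTUD link change for host: 6 link: 0 from 470 to 1366' > /tmp/knet-sample.log

    # Filter for link-MTU changes; on a real cluster node the equivalent
    # would be: zgrep MTU /var/log/syslog.*.gz
    grep 'PMTUD link change' /tmp/knet-sample.log
    ```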
  11. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    @Fabio M. Di Nitto Would this also help with the errors in the top posts (or in my post)? There was nothing with MTU errors there ... or is this something different?
  12. Packages kept back on upgrade to 6.0 with glusterfs official repo

    @Dominik But one question is still on my mind: in PVE 5 everything works up to the current glusterfs packages. They can simply be upgraded, and "glusterfs-common" and "glusterfs-client" get updated too, so PVE is using the newer glusterfs version as a client as well. As we know this works with all...
  13. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    Yes :-) Until my second issue with the glusterfs dependency on upgrade is solved, I'm blocked anyway :-(
  14. [SOLVED] PVE 5.4-11 + Corosync 3.x: major issues

    Would this also be testable for PVE 5.4? Because libknet was not in the list of installed packages when upgrading PVE 5.4 to Corosync 3 ... ?
  15. Packages kept back on upgrade to 6.0 with glusterfs official repo

    Maybe add a link to that section, as in "check known upgrade issues if they affect you", to the "preconditions" too, with a link down to the known issues? (So it is easier for people that follow checklists ;-) )
  16. Packages kept back on upgrade to 6.0 with glusterfs official repo

    I think it makes much sense to have it in both :-)
  17. Packages kept back on upgrade to 6.0 with glusterfs official repo

    I have sent the glusterfs "packager" an email and also pointed to the bug tracker ticket ... let's hope he can help
  18. Packages kept back on upgrade to 6.0 with glusterfs official repo

    A patch for the webpage? Thank you very much!
    PS: On my system the above command shows:
    root@pm7:~# dpkg -s glusterfs-common | egrep "Version|Conflict"
    Version: 5.8-1
    Conflicts: libglusterfs-dev, libglusterfs0