Search results

  1. Disk errors on FreeBSD 12.2 guest

    We have recently upgraded to kernel 5.15 and now we're having disk errors on one guest running FreeBSD 12.2. The config is: agent: 1 balloon: 8192 boot: cd bootdisk: virtio0 cores: 8 ide2: cephfs:iso/FreeBSD-11.4-RELEASE-amd64-disc1.iso,media=cdrom memory: 32768 name: VO-IRIS-Poller...
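
    If these errors turn out to be related to the disk's async I/O backend on the newer kernel (an assumption on my part, not confirmed in this thread), one thing that can be tried is pinning the virtio disk to an explicit aio mode. A rough sketch; the VMID, storage and volume name below are placeholders, not taken from the config above:

        # Hypothetical example: re-specify virtio0 with an explicit aio mode
        qm set 123 --virtio0 cephstore:vm-123-disk-0,aio=native
        # Fully stop and start the VM so the new drive option takes effect
        qm stop 123 && qm start 123
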
  2. All LXCs and VMs lost networking!

    Let me clarify that: Because everything is virtualised, I lost the firewalls and thus remote access too. The Remote Management interfaces of the nodes are configured on a non-public network, so I guess I'll have to find a secure way of accessing these via some out-of-band system.
  3. All LXCs and VMs lost networking!

    I have ifupdown2 installed... also, all guests on all nodes went offline # dpkg -l 'ifupdown*' Desired=Unknown/Install/Remove/Purge/Hold | Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend |/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad) ||/ Name...
  4. All LXCs and VMs lost networking!

    I had a really perplexing situation last night. I had previously upgraded one of four nodes running the latest pve to pve-kernel-5.15. Because the naming of the network interfaces was changed at some stage, I had to recreate the /etc/network/interfaces file with the new NIC names on all the...
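
    For reference, a minimal sketch of what the recreated file usually ends up looking like on a PVE node, with a single bridge and a placeholder NIC name (enp3s0 and the addresses are illustrative, not from this post):

        # /etc/network/interfaces (sketch)
        auto lo
        iface lo inet loopback

        iface enp3s0 inet manual

        auto vmbr0
        iface vmbr0 inet static
            address 192.0.2.10/24
            gateway 192.0.2.1
            bridge-ports enp3s0
            bridge-stp off
            bridge-fd 0

    With ifupdown2 installed, ifreload -a applies the change without a reboot.
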
  5. lxc doesn't start properly after upgrade to pve7

    It's a legacy rails app for which there is no upgrade budget at this stage :) Needs substantial work and it probably won't happen.
  6. lxc doesn't start properly after upgrade to pve7

    I think the only way forward to future-proof these older guests is to move them to KVM machines.
  7. Memory management: ceph

    Thanks! It seems that luminous doesn't have all the commands to manage this yet. I'm searching the docs now... I'm systematically upgrading this cluster to the latest version, but I need to understand how to limit the memory usage in the process. Since it's just a test and dev cluster, so...
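
    On Luminous itself, where the newer automatic-sizing knobs may not exist yet, the usual way to cap BlueStore memory is a static cache size in ceph.conf. A sketch with example values, not recommendations:

        # /etc/ceph/ceph.conf (sketch; restart the OSDs afterwards)
        [osd]
        bluestore_cache_size_hdd = 1073741824   # 1 GiB per HDD-backed OSD
        bluestore_cache_size_ssd = 2147483648   # 2 GiB per SSD-backed OSD
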
  8. Memory management: ceph

    I see that ceph manages memory automatically according to https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#automatic-cache-sizing Is the following normal for automatic cache sizing then? Granted, some of the machines have only 8GB of RAM and are used as storage nodes...
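
    For the releases that do have automatic cache sizing, the per-OSD memory budget is the osd_memory_target option described in that link. A sketch, assuming a release new enough to have the centralized config database:

        # Cap each OSD at roughly 2 GiB (example value for small 8 GB nodes)
        ceph config set osd osd_memory_target 2147483648
        # On older releases the same option can go into ceph.conf under [osd]
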
  9. Support External RBD And CephFS with Erasure Coded Data Pool

    Thanks for that, we'll definitely look into it. That is really good news!
  10. lxc doesn't start properly after upgrade to pve7

    So, is it one or the other for all LXCs? In other words, if I implement this kernel setting, will all containers revert to using cgroups instead of cgroupv2?
  11. lxc doesn't start properly after upgrade to pve7

    I have read through that, but something is not quite clear to me. In the Ubuntu 14.04 lxc image there is no /etc/default/grub as referred to by this linked reference. So should the systemd.unified_cgroup_hierarchy=0 parameter be set in the Proxmox node's kernel config?
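
    For what it's worth, that parameter is a host boot option, so it belongs on the Proxmox node's kernel command line rather than inside the container. A sketch for a GRUB-booted node (systemd-boot installs use /etc/kernel/cmdline plus proxmox-boot-tool refresh instead):

        # /etc/default/grub on the PVE node (sketch)
        GRUB_CMDLINE_LINUX_DEFAULT="quiet systemd.unified_cgroup_hierarchy=0"
        # then apply and reboot the node
        update-grub
        reboot
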
  12. [SOLVED] ceph health ok, but 1 active+clean+scrubbing+deep

    After updating to ceph 16 (pacific) on pve7, I have the following condition: ~# ceph health detail HEALTH_OK but ~# ceph status cluster: id: 04385b88-049f-4083-8d5a-6c45a0b7bddb health: HEALTH_OK services: mon: 3 daemons, quorum FT1-NodeA,FT1-NodeB,FT1-NodeC (age 13h)...
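
    A PG in active+clean+scrubbing+deep is just a routine background deep scrub, so seeing it alongside HEALTH_OK is expected. To find which PG it is, something like this works:

        # List PG states and filter for the one currently being scrubbed
        ceph pg dump pgs_brief | grep scrubbing
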
  13. lxc doesn't start properly after upgrade to pve7

    No, that's the only container. But then it's also the only container that was running Ubuntu 14.04 when the upgrade to pve7 was done. The lxc was running perfectly before though. Now, when I enter the lxc and start all the services manually, they run. But of course, that should not be, they...
  14. lxc doesn't start properly after upgrade to pve7

    If I start the networking (/etc/init.d/networking start), the network comes up. I can also start ssh then.
  15. lxc doesn't start properly after upgrade to pve7

    Yes, pct enter 138 works. I'm in the container now, but there's no network, which is probably the main problem. I'll dig around to see what I can find.
  16. OVH and Proxmox

    Been in contact with these, but not quite what I'm looking for. I'd like to have more "bare metal" where I can set up my own config, which is why I was hoping OVH would have a suitable offering.
  17. lxc doesn't start properly after upgrade to pve7

    I upgraded my nodes from PVE 6.4 to 7, checked in advance with pve6to7 for any issues and all seemed to have gone well, except I have one container that starts, but not properly. If I do pct start 138, no error is returned, but the container doesn't run, although it's reported as running. ~#...
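
    When pct start returns cleanly but the container never really comes up, one general way to see where it stops (a debugging sketch, not something from this thread) is to start it in the foreground with LXC debug logging:

        # Start container 138 in the foreground and write a debug-level log
        lxc-start -n 138 -F -l DEBUG -o /tmp/lxc-138.log
        # Inspect the log for the step that fails (often cgroup- or init-related on old guests)
        less /tmp/lxc-138.log
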
  18. OVH and Proxmox

    We are looking to add some services in Europe for clients and have been scouting for suitable hosting space. OVH seems to be one of the only options that offer Proxmox hosting. However, from their many options to select from, we can't quite figure out which is the most suitable one. We would...