Search results

  1.

    Ceph OSD Down & Out - can't bring back up - *** Caught signal (Segmentation fault) **

    Hi, I noticed that in my 3-node, 12-OSD cluster (3 OSDs per node), one node has all 3 of its OSDs marked "Down" and "Out". I tried to bring them back "In" and "Up", but this is what the log shows: My setup has the WAL and block.db on SSD, while the OSD data is on a SATA HDD. Each server has 2 SSDs, each SSD...
  2.

    Containers not starting.

    You will be fine with reboots. I rebooted my system multiple times. In fact, Debian containers are working fine. It looks like this issue is specific to 14.04 CTs. Makes me wonder what kind of testing is done before releasing these patches.
  3.

    After updates, LXCs not working

    I also had the very same issue... I actually posted a thread on this very forum a few minutes before you did, but got no responses ;) https://forum.proxmox.com/threads/containers-not-starting.50721/ The fix was to downgrade lxc-pve to an older version: apt-get install lxc-pve=3.0.2+pve1-2p
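    A minimal sketch of that downgrade (the version string is copied from the post above; holding the package is optional but keeps apt from pulling the broken build back in):

      apt-get install lxc-pve=3.0.2+pve1-2p
      apt-mark hold lxc-pve    # optional: prevent the package from being upgraded again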
  4.

    Containers not starting.

    I have fixed this. I downgraded the lxc-pve package to an older version (# apt-get install lxc-pve=3.0.2+pve1-2p). All good now. At least one other person is having this issue: https://forum.proxmox.com/threads/after-updates-lxcs-not-working.50698/
  5.

    Containers not starting.

    Hi All, I've got some containers that won't start up after an update. Nothing special about this container - it is on local storage (NVMe), but it just won't start. Proxmox says it's running, but the console shows a black screen and I can't connect to it or ping it over the network. Other containers are...
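    One way to get more detail than the black console (a sketch; <CTID> stands in for the container's numeric ID):

      pct start <CTID>                                  # shows the Proxmox-side error, if any
      lxc-start -n <CTID> -F -l DEBUG -o /tmp/lxc.log   # foreground start with a debug log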
  6.

    AppArmor, NFS Mounting

    The guide (https://pve.proxmox.com/wiki/Linux_Container) says
  7.

    Proxmox Ceph - Connect external workloads to proxmox Ceph

    Turns out that is not the right way to do it. The right way is to update the config so that the OSDs are running on the 10.10.10.0/24 network (dedicated 10 Gbps) and put the monitors/managers on your 172.16.254.0/24 (or whatever) network. In the scenario below, I am setting up a "Public" network for...
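    A minimal ceph.conf sketch of that split, assuming the two networks described above (clients/monitors on the LAN, OSD replication on the dedicated 10 Gbps link):

      [global]
      public_network  = 172.16.254.0/24   # monitors, managers and client traffic
      cluster_network = 10.10.10.0/24     # OSD replication/heartbeat on the dedicated NIC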
  8.

    Proxmox Ceph - Connect external workloads to proxmox Ceph

    Hi, I’ve got a 3-node Proxmox cluster running Ceph. I’m also running Kubernetes on top of Proxmox. Currently, my LAN is 172.16.254.0/24 and my Ceph network is 10.10.10.0/24 (separated over a different NIC and VLAN). How do I give my Kubernetes cluster access to Ceph? I want to create a separate...
  9.

    Proxmox 5.2 + Ceph Luminous

    Hi, I'm just wondering whether, with Proxmox, running Ceph monitors in a different subnet is a supported configuration. The official guide that walks you through configuring monitors only gives the option to create OSDs running on the nodes themselves, i.e. on the one subnet (as defined by the...
  10.

    How do you dist-upgrade LXC containers?

    As in the title. I am running 17.04, which has been EoL since Jan 2018. As a result, apt etc. are not working. If this were a VM, I'd just go the dist-upgrade route. Is there an easy way to upgrade the container(s) from 17.04 to the latest version, 17.10 or 18.04?
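    One hedged approach for an EoL release like 17.04 (these are the stock Ubuntu steps, nothing Proxmox-specific): point apt at the old-releases archive so the package tools work again, then step through the normal release upgrades from inside the container:

      sed -i 's/\(archive\|security\)\.ubuntu\.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
      apt-get update && apt-get dist-upgrade
      do-release-upgrade    # repeat per release: 17.04 -> 17.10 -> 18.04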
  11.

    Moving from ext4 storage (.raw) to thick provisioned-lvm

    As of the latest update, it is now possible to move containers from one storage to another (but the CT must be powered off)
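    The CLI equivalent appears to be pct move_volume (the GUI "Move Volume" action wraps the same thing); a sketch, assuming your pct is new enough to have it and using placeholder names:

      pct shutdown <CTID>
      pct move_volume <CTID> rootfs <target-storage>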
  12.

    Moving from ext4 storage (.raw) to thick provisioned-lvm

    Also, is it possible to shift a container using .raw disks to an lvm-thick datastore?
  13.

    Moving from ext4 storage (.raw) to thick provisioned-lvm

    # pveversion -v
    proxmox-ve: 5.1-42 (running kernel: 4.13.16-2-pve)
    pve-manager: 5.1-51 (running version: 5.1-51/96be5354)
    pve-kernel-4.13: 5.1-44
    pve-kernel-4.13.16-2-pve: 4.13.16-47
    pve-kernel-4.13.13-6-pve: 4.13.13-42
    pve-kernel-4.13.13-5-pve: 4.13.13-38
    pve-kernel-4.13.13-2-pve: 4.13.13-33...
  14.

    Moving from ext4 storage (.raw) to thick provisioned-lvm

    Hi Dietmar. First off, I tried doing it through the WebGUI, so the command will be whatever Proxmox generates internally. Secondly, I SSH'ed into the host and tried it from there: # pct restore 124 vzdump-lxc-136-2018_05_07-15_08_48.tar --storage my.storage
    unable to detect disk size - please...
  15.

    Moving from ext4 storage (.raw) to thick provisioned-lvm

    Short-term solution... I have managed to restore this VM back to its original location:
    pct restore 124 vzdump-lxc-136-2018_05_07-15_08_48.tar.lzo -rootfs my.storage:SIZE=10 -mp0 my.storage:SIZE=100,mp=/data
    It looks like the built-in backup doesn't like the fact that this container has a 2nd...
  16.

    Moving from ext4 storage (.raw) to thick provisioned-lvm

    Hi, I thought this would be as simple as creating a backup & restoring - but it looks like it is not. Restoring it to my lvm (thick) storage gives this error:
    TASK ERROR: unable to detect disk size - please specify mp0 (size)
    Restoring it back to the ext4 datastore (where this CT originally...
  17.

    pveceph stop and pveceph purge

    After much searching & trial and error, I don't believe there is any way to actually purge the packages. The problem is that this "pveceph" command is a black box. We have no idea what it is actually doing in the background. In the end, I removed the OSDs one by one. Here is the procedure to do...
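    The snippet is cut off, but the usual manual sequence for removing one OSD at a time looks roughly like this (a sketch; <ID> is the OSD number and any data on it is gone afterwards):

      ceph osd out <ID>
      systemctl stop ceph-osd@<ID>
      ceph osd crush remove osd.<ID>
      ceph auth del osd.<ID>
      ceph osd rm <ID>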
  18.

    pveceph stop and pveceph purge

    Ok, I now have this working (mostly). It appears the pveceph create <diskname> command simply aliases to sgdisk -Z <diskname>, then ceph-disk prepare --bluestore <diskname>, and finally ceph-disk activate <diskname>. Here are the commands I've used. In this example I'm preparing "sda" for use as...
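    Laid out as commands, that sequence would look something like this (a sketch for the /dev/sda example; on Luminous, ceph-disk activate is usually pointed at the data partition prepare just created, typically /dev/sda1):

      sgdisk -Z /dev/sda                       # zap any existing partition table
      ceph-disk prepare --bluestore /dev/sda
      ceph-disk activate /dev/sda1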
  19.

    pveceph stop and pveceph purge

    I didn't purge any packages, because I don't know which packages to purge. I installed these packages by typing "pveceph install --version luminous". This "pveceph" command is a black box. Is it possible to get a list of things this command does? The doc page for this command...
  20.

    pveceph stop and pveceph purge

    I thought that’s what pveceph stop and pveceph purge are supposed to do - but I digress. So after I run those 2 commands, what do I need to do to well and truly purge ALL the config? OSDs, CRUSH maps, data on disks - it can all go. I’m happy to start the setup from scratch.