Search results

  1. Ceph: active+clean+inconsistent

    It has been a couple of days since I tried repair. At the time I only had replication set to 2. I have since changed it to 3, but it was two when the issue arose. I can see from the logs that the primary cannot be read, but the other copies contain good data. I am using bluestore...
  2. Ceph: active+clean+inconsistent

    I have an issue where I have 1 pg in my ceph cluster marked as: pg 2.3d is active+clean+inconsistent, acting [1,5,3] I have tried doing ceph pg repair 2.3d but no success. I am following this guide to fix it: https://ceph.com/geen-categorie/ceph-manually-repair-object/ I have identified the... [a repair sketch follows the list]
  3. Upgrade Motherboard - Reinstall?

    Just wanted to report back that this went smoothly - after the new motherboard was installed, the onboard NIC name changed, so the network config had to be updated to reflect that (/etc/network/interfaces). Other than that, things were smooth. [an interfaces sketch follows the list]
  4. Upgrade Motherboard - Reinstall?

    Thank you, I will give it a go when the boards arrive. :)
  5. Upgrade Motherboard - Reinstall?

    I have a 3-way Proxmox VE cluster. Due to lack of PCIe slots on the servers' motherboards, I need to replace them. I will be changing the motherboards, but retaining all existing storage, CPU, RAM, network cards etc. Is it better for me to completely reinstall Proxmox VE and re-add it to the...
  6. VM not booting after restore

    I have a VM running on one PVE server which I backed up while it was running (snapshot). I've restored it to another PVE server, but when I start it, it won't boot. It's a standard KVM VM running Ubuntu Mate 18.04 and has two virtual disks, one for / and one for /home. All my other VMs restore and run...
  7. mon low on available space

    Ah, understood! Thanks, the root file system was nearly full on this node, which is what was causing the message. Cleared some space, and now the message has gone. [a mon space check sketch follows the list]
  8. mon low on available space

    I've got quite a fresh install of Proxmox with Ceph and things are working well. However, on the ceph Health page it reports "mon sb2 is low on available space". But when I check the OSDs, no disk is more than 6% used. Any idea what's going on? Note that I have Ceph set up with two crush...
  9. parse error - uncexpected '}' (500)

    Yes, the cluster members are fully up to date using the free repo configs (for now). In the end I just reinstalled - I didn't want to risk there being a problem with clustering down the line.
  10. parse error - uncexpected '}' (500)

    I'd like to gently bump this. This cluster will go into production and I can't afford for there to be any issues with HA. If I can't resolve it I'll need to reinstall it from scratch which I would prefer to avoid.
  11. parse error - uncexpected '}' (500)

    Thanks for the suggestion - I checked and both files are identical on all 3 cluster members.
    root@smiles3:~# diff /etc/corosync/corosync.conf /etc/pve/corosync.conf -s
    Files /etc/corosync/corosync.conf and /etc/pve/corosync.conf are identical
  12. parse error - uncexpected '}' (500)

    I have an issue where I get the following error when trying to create a new HA resource in my cluster (Datacenter -> HA -> Resources -> Add). parse error - uncexpected '}' (500) I am sure this is my own fault - I modified corosync.conf to add a separate corosync network as backup as per the... [a two-ring corosync sketch follows the list]
  13. Recommended method for secondary Ceph Pool

    Update: I was able to get this working using Ceph's new device class feature: https://ceph.com/community/new-luminous-crush-device-classes/ I added the HDDs as OSDs first via GUI. Then I created one fast crush rule (using NVMEs) and one slow crush rule (using HDDs) using the CLI: ceph osd... [a crush rule sketch follows the list]
  14. Recommended method for secondary Ceph Pool

    Has anything changed in this regard? Can it be done from GUI now with more recent PVE versions, or do I still need to manually edit the crushmap and create a second rule set?
  15. Understanding Ceph Failure Modes

    Alwin, many thanks for taking the time to explain things in such detail, and for the references, that's extremely helpful. It sounds like 5 servers is a nice luxury and something for us to work towards, but for now, a 3-way cluster with an independent warm standby is a workable level of risk for...
  16. Understanding Ceph Failure Modes

    I was reading the thread on recent Ceph benchmarks stickied in this forum, and saw some comments from PigLover about how the author of the benchmarks "make the claim about being able to run a 3-node cluster and still access the data with a node OOS. While it is "true", it is also dangerous...
  17. Online Resizing of / Partition

    I was able to get this done using guidance from: https://myshell.co.uk/blog/2012/08/how-to-extend-a-root-lvm-partition-online/ LVM is awesome. [a resize sketch follows the list]
  18. Online Resizing of / Partition

    I've deployed Proxmox VE 5.1 to a bare metal cloud instance that required me to deploy a pre-configured qcow2 image of Proxmox with cloud-init installed. It worked fine, things are up and running. However, because I wanted to keep the image size small, I made the whole disk only 8 GB, which is...
  19. Restore to multiple disks

    That's a good tip thank you, I wasn't aware of that. Unfortunately the naming is not the same (the cluster uses ceph), so that won't work in this situation. I guess it's not possible to specify it using qmrestore in some way?
  20. Restore to multiple disks

    We're using backup / restore scripts to move our VMs to a hot spare Proxmox box each day in case our main cluster goes down - in that case, we would simply start the restored VMs on the backup server and carry on. Here's a typical restore script we use: /sbin/lvremove -f... [a qmrestore sketch follows the list]

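Notes on selected results

A hedged sketch for the inconsistent-PG thread (results 1 and 2): the PG id 2.3d is taken from the thread itself; the commands are standard Ceph CLI from the Luminous/BlueStore era and nothing here is specific to that cluster.

    # show which PGs are inconsistent and why
    ceph health detail
    # list the objects that failed scrub in the affected PG (Luminous and later)
    rados list-inconsistent-obj 2.3d --format=json-pretty
    # ask Ceph to rebuild the damaged copy from an authoritative replica
    ceph pg repair 2.3d
    # follow scrub/repair progress
    ceph -w

With BlueStore checksums, repair can pick a good replica even when it is the primary copy that fails to read, which matches the situation described in result 1; raising the pool size from 2 to 3, as the poster did, also gives repair more good copies to work from.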
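For the motherboard swap in result 3, the fix was pointing the bridge at the renamed onboard NIC in /etc/network/interfaces. A minimal sketch in the classic PVE ifupdown style; the interface name enp3s0 and the addresses are placeholders, not values from the thread.

    auto lo
    iface lo inet loopback

    # the onboard NIC received a new predictable name after the board swap
    iface enp3s0 inet manual

    auto vmbr0
    iface vmbr0 inet static
        address 192.168.1.10
        netmask 255.255.255.0
        gateway 192.168.1.1
        # this is the line that has to be updated to the new NIC name
        bridge_ports enp3s0
        bridge_stp off
        bridge_fd 0

A reboot (or restarting networking) then brings the bridge up on the new port.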
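The "mon sb2 is low on available space" warning in results 7 and 8 refers to the filesystem holding the monitor's data directory (typically the root filesystem), not to OSD utilisation, which is why clearing space on / made it go away. A quick check, with the mon name sb2 taken from the thread:

    # free space on the filesystem backing the monitor store
    df -h /var/lib/ceph/mon
    # the warning fires when free space drops below mon_data_avail_warn (default 30%);
    # this queries it over the admin socket on the host running that mon
    ceph daemon mon.sb2 config get mon_data_avail_warn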
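Result 12's parse error followed a hand edit of corosync.conf to add a backup ring. A hedged sketch of the general shape a two-ring corosync 2.x config takes; the cluster name, addresses and node entries are made up, and on PVE the edit goes into /etc/pve/corosync.conf with config_version bumped on every change.

    totem {
      cluster_name: smiles
      config_version: 4
      version: 2
      rrp_mode: passive
      interface {
        ringnumber: 0
        bindnetaddr: 10.10.10.0
      }
      interface {
        ringnumber: 1
        bindnetaddr: 10.10.20.0
      }
    }

    nodelist {
      node {
        name: smiles1
        nodeid: 1
        quorum_votes: 1
        ring0_addr: 10.10.10.1
        ring1_addr: 10.10.20.1
      }
      # one node block per member, each with its own ring0_addr and ring1_addr
    }

    quorum {
      provider: corosync_votequorum
    }

A single unbalanced brace in these nested blocks is enough to produce the GUI's parse error, so diffing against a known-good copy, as result 11 does, is a sensible first check.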
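Result 13's CLI steps map onto the Luminous device-class feature it links to. A sketch with placeholder rule, pool and OSD names; the class is normally autodetected, so the first command is only needed when detection guesses wrong.

    # optionally tag OSDs by hand (hdd/ssd/nvme are the usual classes)
    ceph osd crush set-device-class nvme osd.0 osd.1 osd.2

    # one replicated rule per device class, both rooted at "default" with host failure domain
    ceph osd crush rule create-replicated fast default host nvme
    ceph osd crush rule create-replicated slow default host hdd

    # point each pool at the matching rule
    ceph osd pool set vm-fast crush_rule fast
    ceph osd pool set vm-slow crush_rule slow

The failure domain stays host in both rules, so each pool still spreads its replicas across servers; only the device-class filter differs.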
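The 8 GB root volume in results 17 and 18 can be grown without a reboot, which is essentially what the linked article walks through: grow the physical volume, then the logical volume, then the filesystem, all online. A sketch assuming an ext4 root LV at /dev/pve/root on a PV at /dev/sda3; both paths are assumptions, not values from the thread.

    # after enlarging the underlying disk or partition, let LVM see the new space
    pvresize /dev/sda3
    # hand every newly freed extent to the root LV
    lvextend -l +100%FREE /dev/pve/root
    # grow ext4 in place; this works while / is mounted
    resize2fs /dev/pve/root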
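Result 20's nightly hot-spare script is truncated, but the general shape of such a script is easy to hedge: pick the newest synced vzdump archive and restore it over the spare copy. The path, VM id and storage name below are placeholders; note that qmrestore's --storage option puts all restored disks on one storage, which is the limitation result 19 runs into.

    #!/bin/bash
    # newest backup archive for VM 101 synced over from the main cluster (placeholder path)
    ARCHIVE=$(ls -t /mnt/backup/dump/vzdump-qemu-101-*.vma.lzo | head -n 1)
    # overwrite the existing spare copy of VM 101 and place its disks on local-lvm
    qmrestore "$ARCHIVE" 101 --storage local-lvm --force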