Search results

  1. PBS2: proxmox-file-restore failed: Error: mounting 'drive-virtio0.img.fidx/part/["5"]' failed: all mounts failed or no supported file system (500)

    I have 2 Proxmox clusters, both on v7.0-10, each with a PBS running v2.0.7-1. On each cluster I have a Debian-based file server; their configurations are identical except that one is encrypted LVM, the other is unencrypted LVM. On the unencrypted one, file-restore works as...
  2. To Bridge or not to Bridge?

    Thanks Dominik, I did think that was the case, but I came across a tutorial this morning while looking for something else and the poster had a bridge for each interface, so I just wanted to sanity-check my setup.
  3. To Bridge or not to Bridge?

    In my cluster I have 3 network interfaces configured: the first is, by default, a port of vmbr0 and is for all of the day-to-day traffic. The other 2 interfaces have just been configured with IP addresses; one (1GbE) is for Corosync, the other (10GbE) is for CEPH. Should I have configured a bridge...
  4. Cluster Node unreachable during 'Move Disk'

    I've recently moved from a single node on a RAID10 to a 3 node cluster on CEPH, and since then one particular VM has had intermittent speed issues. The guest in question is a Windows Server 2008 R2; it serves web applications through IIS which pull data from a separate dedicated SQL Server (Linux), a...
  5. CEPH Nearfull Limit

    Thanks again Alwin. I've turned on the autoscaler in 'warn' mode and it doesn't tell me I should have more PGs. From what the pgcalc tells me, I should have a pg_num setting of 512; I've read on this forum that I should increase the PGs gradually (i.e. change it to 256 and let it rebalance...
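That gradual approach can be sketched as a loop emitting the standard `ceph osd pool set` commands. The starting pg_num of 128 and the pool name `rbd` are assumptions for illustration, not taken from the thread:

```python
# Sketch: step pg_num up in powers of two toward the pgcalc target,
# letting the cluster rebalance between steps.
# Starting value (128) and pool name ("rbd") are assumptions.
current, target = 128, 512
steps = []
while current < target:
    current *= 2
    steps.append(current)

for pg in steps:
    print(f"ceph osd pool set rbd pg_num {pg}")
    print(f"ceph osd pool set rbd pgp_num {pg}")
    # wait for rebalancing (HEALTH_OK) before issuing the next step
```

Each step doubles pg_num, matching the 256-then-512 increments described above. On recent CEPH releases pgp_num follows pg_num automatically, so the second command may be unnecessary.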
  6. CEPH Nearfull Limit

    Thanks Alwin, makes sense. I have several disks which are near the 80%, but most are well under that. Will CEPH rebalance automagically, or is this something I need to initiate?
  7. CEPH Nearfull Limit

    I've read that the Nearfull limit on CEPH is 80%, but where do I check that limit? My rbd currently says it has 77.58% usage, but the Performance metric on the CEPH health screen says it is at 56% usage. Which one do I worry about?
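Those two figures usually measure different things: the pool percentage reflects logical data against what the pool can still store, while the health screen reports raw bytes used across all OSDs, which includes replication. A rough sketch of the raw-usage side, assuming a 3x replicated pool; all capacities here are hypothetical, not taken from the post:

```python
# Hypothetical numbers -- only to show how replication inflates raw usage.
NEARFULL_RATIO = 0.80      # the 80% limit mentioned above; configurable per cluster

raw_capacity_tb = 30.0     # total raw capacity across all OSDs (assumed)
stored_tb = 5.6            # logical data stored in the pool (assumed)
replicas = 3               # default size of a replicated pool

raw_used_tb = stored_tb * replicas            # every object is written 3x
raw_used_pct = raw_used_tb / raw_capacity_tb  # the "health screen" style figure
print(f"raw usage: {raw_used_pct:.0%}")       # -> 56%
print(f"nearfull? {raw_used_pct >= NEARFULL_RATIO}")
```

Note the nearfull check is applied per OSD, so a single unbalanced disk can trip the warning even when cluster-wide usage looks comfortable.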
  8. After install Storage on Cluster not on Node

    If you've set up RAID1, both disks will already be in use. RAID1 is a mirror: it uses 2 disks and writes the same data to both, so should one disk die, you can still access your data.
  9. Going from Single Node to 3 Node HA

    Ok, so I have managed to do this and it has gone relatively smoothly. For those interested, this is how I did it. The first existing node (FEn) was running on 8x 300GB SAS disks in a hardware RAID10 configuration and contained 8 VMs. The second existing node (SEn) was running 3x 146GB SAS disks...
  10. SAS drives are shown as unknown type

    I have the same issue. All of my disks show as 'unknown' and I don't get any SMART values. They all show as 'OK' in the S.M.A.R.T. column, but clicking 'Show S.M.A.R.T. values' shows a pop-up with only the following information: Current Drive Temperature: 0 C Drive Trip Temperature...
  11. Going from Single Node to 3 Node HA

    Hi Wolfgang, thank you for the reply. The problem I have is that the existing node (Node A) requires reinstallation because it is currently configured on 8x 300GB SAS disks in a hardware RAID10, and I'm reluctant to nuke it before I've made absolutely sure I CAN restore the VMs successfully. I've...
  12. Going from Single Node to 3 Node HA

    Hi all, long-time lurker, new poster, please be gentle! :p I've been running a single node with 8 VMs in production for the best part of 2 years now and, happily, it has been rock solid. I also have a second node which I use for test VMs, and I also run a Windows 10 VM which I use as my...