Search results

  1. 1/3 mons down after pve6to7

    Following up on this issue. I cannot destroy the newly created prox-ceph3 monitor. When I attempt this, it gives the following error. Please see the outputs from the "ls -l /var/lib/ceph" and "ls -l /var/lib/ceph/mon/ceph-prox-ceph3/" commands above. Thank you.
  2. 1/3 mons down after pve6to7

    The output of the commands is in my previous post. Are there any permissions that need to be changed that you can see? Thank you very much for the assistance with this, it's greatly appreciated.
  3. 1/3 mons down after pve6to7

    root@prox-ceph3:/etc/pve# ls -l /var/lib/ceph
    total 48
    drwxr-xr-x 2 ceph ceph 4096 Aug 12 2020 bootstrap-mds
    drwxr-xr-x 2 ceph ceph 4096 Aug 12 2020 bootstrap-mgr
    drwxr-xr-x 2 ceph ceph 4096 Oct 20 2020 bootstrap-osd
    drwxr-xr-x 2 ceph ceph 4096 Aug 12 2020 bootstrap-rbd
    drwxr-xr-x 2 ceph...
  4. 1/3 mons down after pve6to7

    My ceph.conf:
    [global]
    auth_client_required = cephx
    auth_cluster_required = cephx
    auth_service_required = cephx
    cluster_network = 192.168.235.11/24
    fsid = 27c7fb73-57f0-4d1d-8801-1db89fc9b7c8
    mon_allow_pool_delete = true
    mon_host =...
  5. 1/3 mons down after pve6to7

    I followed the documentation and tried to destroy and recreate the monitor on prox-ceph3. This is the syslog:
    Oct 29 11:28:58 prox-ceph3 systemd[1]: Started Ceph cluster monitor daemon.
    Oct 29 11:28:58 prox-ceph3 ceph-mon[300861]: 2021-10-29T11:28:58.745-0400 7f39580ab580 -1 rocksdb: IO error...
  6. 1/3 mons down after pve6to7

    Thank you, do you happen to have a documented process for this? I searched for it and found these commands:
    pveceph destroymon prox-ceph3
    pveceph createmon prox-ceph3
    pvecm status
    Also, it may be necessary to remove the current mon, prox-ceph3, from ceph.conf (/etc/pve/ceph.conf)?
  7. 1/3 mons down after pve6to7

    I performed a PVE 6 to 7 upgrade last night following the Proxmox documented procedure. Two of our nodes came up without issue. One of our nodes is giving this error:
    1/3 mons down, quorum prox-ceph1,prox-ceph2
    mon.prox-ceph3 (rank 2) addr [v2:192.168.235.13:3300/0,v1:192.168.235.13:6789/0] is...
  8. PVE Backup to CIFS

    I gave up on CIFS and installed PBS on my NAS. Works great so far.
  9. PVE Backup to CIFS

    I will remove VM 106 from the main backup job and create a new one for it using a different compression mode for tonight, and then report back. Thank you.
  10. PVE Backup to CIFS

    I have created a PVE backup schedule for our production VMs to a network Windows based CIFS target. The backup schedule is in Stop Mode with ZSTD compression. Retention on the CIFS storage is 3 copies. All of our VMs back up successfully except one, VM 106. If I delete all of the VM 106 backup...
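The monitor destroy/recreate procedure discussed in the mon-related results above can be sketched as a short shell sequence. This is a hedged sketch, not a verified fix for the thread's issue: it assumes it is run on the affected node (prox-ceph3), and uses the current `pveceph mon destroy`/`pveceph mon create` subcommand spelling (the posts quote the older `destroymon`/`createmon` aliases). The `chown` step is an assumption based on the permissions discussion in the thread, since the Ceph daemons run as the `ceph` user.

```shell
# Hedged sketch: recreate a failed Ceph monitor on a Proxmox VE node.
# Run on the affected node (assumed here to be prox-ceph3).

ceph -s                          # check cluster health and current quorum
pveceph mon destroy prox-ceph3   # remove the broken monitor (older alias: pveceph destroymon)

# If destroy or recreate fails with permission/IO errors, check that the
# mon data directory is owned by the ceph user (assumption from the thread):
# chown -R ceph:ceph /var/lib/ceph/mon/ceph-prox-ceph3

pveceph mon create               # recreate the monitor on this node (older alias: pveceph createmon)
ceph -s                          # confirm all three monitors are back in quorum
```

These commands only make sense on a live Proxmox VE/Ceph cluster; on a healthy cluster the destroy step would remove a working monitor, so verify quorum state with `ceph -s` before running it.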