Search results

  1. [SOLVED] pve 6 to 7 and ceph-common issue

    wait, I have the ceph debs held.. let me retry
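
    (Not shown in the snippet, but the usual way to check for and release held packages before retrying the dist-upgrade is roughly the following; ceph-common here is only an example of a held package.)

      apt-mark showhold             # list packages currently on hold
      apt-mark unhold ceph-common   # release the hold (repeat for each held ceph package)
      apt update && apt dist-upgrade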
  2. [SOLVED] pve 6 to 7 and ceph-common issue

    I saw someone post something similar in the German forum. Attempting the upgrade from 6.4 to 7: apt dist-upgrade ... Reading package lists... Done. Building dependency tree... Done. Reading state information... Done. Calculating upgrade... Error! Some packages could not be installed. This may mean that you have...
  3. [SOLVED] file system choice for pbs on hardware

    https://forum.proxmox.com/threads/best-choice-for-datastore-filesytem.93921/ However, that is for PBS running as a virtual machine.
  4. [SOLVED] file system choice for pbs on hardware

    Hello, we are considering moving PBS to a RAID-10 zpool using six 4-TB NVMe disks. I was searching threads on the best file system type to use and saw concerns regarding ZFS. However, I cannot see a more reliable way than RAID-10 ZFS. Does anyone have another idea to consider?
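
    (For reference, a RAID-10 style pool is built from mirrored pairs striped together. A minimal sketch, assuming six NVMe disks and a pool named pbsdata, both placeholders:)

      zpool create -o ashift=12 pbsdata \
          mirror /dev/nvme0n1 /dev/nvme1n1 \
          mirror /dev/nvme2n1 /dev/nvme3n1 \
          mirror /dev/nvme4n1 /dev/nvme5n1   # three mirrors striped together = RAID-10 layout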
  5. [SOLVED] osd remove after node died

    Hello, I know there is a CLI procedure that we used 4-5 years ago to remove leftover OSDs, mons, etc. that show as out after a node died abruptly. Is there a newer way, or is dump/edit/restore of the ceph config still the way to do so? We use PVE 6.4 and Ceph Octopus. Thanks.
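
    (For reference, the CLI sequence referred to is roughly the following; osd.<id> and <hostname> are placeholders. Since Luminous, ceph osd purge combines the OSD removal steps into one command.)

      ceph osd out osd.<id>            # mark the dead OSD out
      ceph osd crush remove osd.<id>   # drop it from the CRUSH map
      ceph auth del osd.<id>           # remove its auth key
      ceph osd rm osd.<id>             # remove it from the OSD list
      ceph mon remove <hostname>       # remove a leftover monitor from the dead node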
  6. [PVE7] - wipe disk doesn't work in GUI

    On PVE 6.4: wipefs -fa /dev/nvme4n1; dd if=/dev/zero of=/dev/nvme4n1 bs=1M count=1000; udevadm settle; reboot. Note: after udevadm settle and a Reload in PVE, the drive still showed as an lvm2 member, while parted /dev/nvme4n1 p showed no partitions. I had tried many other things and am glad that at least those...
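
    (The same sequence, broken out with comments; /dev/nvme4n1 is the disk being cleared and every step is destructive to its data.)

      wipefs -fa /dev/nvme4n1                            # clear filesystem/RAID/LVM signatures
      dd if=/dev/zero of=/dev/nvme4n1 bs=1M count=1000   # zero the first 1000 MiB of the disk
      udevadm settle                                     # wait for udev to finish processing events
      reboot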
  7. [SOLVED] replacing server hardware and moving zfs disks issue

    Thanks for the responses. Because the systems were installed long ago, there is no way to add a 512M partition, so we will reinstall. PS: adding a node to the cluster and setting up Ceph are much easier than before; the documentation and GUI made it very easy.
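
    (The 512M partition being discussed is presumably the ESP that proxmox-boot-tool uses. On a disk that does have room for one, the rough sequence is the following; /dev/sdX2 is a placeholder for the freshly created 512M partition.)

      proxmox-boot-tool format /dev/sdX2   # format the new partition as an ESP
      proxmox-boot-tool init /dev/sdX2     # install the bootloader and register the partition
      proxmox-boot-tool status             # verify which partitions are configured for booting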
  8. [SOLVED] replacing server hardware and moving zfs disks issue

    Thanks, that is probably it. Do you happen to know where the solution for that is?
  9. [SOLVED] replacing server hardware and moving zfs disks issue

    Hello, I searched and I think I saw this issue reported before. We are replacing server hardware but moving the storage over to the new systems. The first system had a single-disk ext4 PVE install and all went well. The next one has ZFS RAID-1 and will not boot; instead a UEFI shell appears.. could...
  10. [SOLVED] is garbage collection needed on a remote sync system?

    Our remotes have much more disk usage [approx. 2x] than the source PBS system, so I am running garbage collection for the first time. AFAIK the syncs have always been set to 'Remove Vanished'. However, we have had some wrong configurations in the past, so I assume our issue with higher...
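
    (A garbage-collection run can also be started and watched from the PBS CLI, if memory serves on the exact subcommands; remote1 is a placeholder datastore name.)

      proxmox-backup-manager garbage-collection start remote1    # kick off GC on the datastore
      proxmox-backup-manager garbage-collection status remote1   # check progress / last run result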
  11. ceph update procedure

    Thanks, will use that next time.
  12. ceph update procedure

    The update procedure for Ceph changed this time [to 15.2.13]. This no longer works: # systemctl restart osd.target -> Failed to restart osd.target: Unit osd.target not found. So I used the following; these are quick notes, not well formatted: # 1 apt-get update && apt-get full-upgrade # 2-...
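
    (The osd.target unit was replaced by per-type targets such as ceph-osd.target. A rough sketch of the per-node loop, assuming each node is upgraded and restarted one at a time:)

      apt update && apt full-upgrade       # pull in the new ceph packages on this node
      systemctl restart ceph-mon.target    # then restart the daemons via the per-type targets
      systemctl restart ceph-mgr.target
      systemctl restart ceph-osd.target
      ceph -s                              # confirm the cluster is healthy before the next node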
  13. ceph update procedure

    In another Ceph thread: we want to be as cautious as possible on Ceph cluster upgrades. This is how we currently do Ceph upgrades, and I am not sure that it is cautious enough. Please advise: 1- do the mon systems first, 2- restart services with systemctl try-reload-or-restart...
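
    (One common extra precaution, not stated in the post, is setting the noout flag so data is not rebalanced away while a node's OSDs restart:)

      ceph osd set noout      # before restarting OSDs on a node
      # ... upgrade and restart that node's OSDs ...
      ceph osd unset noout    # once everything is back up
      ceph health detail      # double-check cluster health between nodes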
  14. [SOLVED] 'ceph-volume lvm zap' zfs drive issue

    sdf or sdg? Run the mount command I wrote above.
  15. [SOLVED] 'ceph-volume lvm zap' zfs drive issue

    'no medium found' and 'open and already in use' - check this: mount | grep sdf. I assume it is mounted; umount and try again. If that fails, umount and zap /dev/sdf. If you are sure you want to kill any data on /dev/sdf, then zap the disk. AFAIR there used to be a ceph zap disk command...
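
    (Put together, the suggested checks look roughly like this; /dev/sdf1 is only an example of whatever the mount check turns up, and the zap destroys all data on the disk.)

      mount | grep sdf                         # see whether anything on sdf is mounted
      umount /dev/sdf1                         # unmount whatever the check showed
      ceph-volume lvm zap /dev/sdf --destroy   # wipe the disk, removing LVM and partition data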
  16. Change ceph network

    Hello Marcos. First, your initial post of 'Wow ..' was not a good way to get help; it was insulting to me and probably others, and certainly not a good way to ask for help. The post was not polite or clear about what you are trying to accomplish. Question - what are you trying to do? ceph...
  17. Change ceph network

    This is an old thread, and PVE + Ceph are now much easier to understand and manage. Check the documentation.
  18. [SOLVED] turnkey repo

    Can someone point me to how to add TurnKey templates to shared storage?
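
    (Not answered in the snippet, but TurnKey templates are normally fetched with pveam onto a storage that has the container template content type enabled; a minimal sketch, with <storage> and <template> as placeholders.)

      pveam update                          # refresh the appliance template index
      pveam available | grep turnkey        # list the downloadable TurnKey templates
      pveam download <storage> <template>   # fetch one onto the (shared) storage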