Search results

  1. slow migrations

    Hello, I am seeing slower migrations with pve7 than with pve6. We do have a network issue that I have been trying to track down over the last week, which is probably the cause. However, I wanted to see if others have noticed slower migrations. Thank you for reading this.
  2. [SOLVED] proxmox-backup-client --exclude question

    Thank you Fabian. This is what worked in our case: proxmox-backup-client backup etc.pxar:/etc home.pxar:/home --exclude */.cache/ --exclude */Downloads/ --exclude */.thunderbird/*/ImapMail 2>&1
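
    The same command split out for readability (a sketch: the pxar targets and exclude globs are the poster's, with quoting added so the shell does not expand the patterns itself):

      # back up /etc and /home, skipping cache, Downloads, and Thunderbird IMAP mail
      proxmox-backup-client backup \
        etc.pxar:/etc \
        home.pxar:/home \
        --exclude '*/.cache/' \
        --exclude '*/Downloads/' \
        --exclude '*/.thunderbird/*/ImapMail' 2>&1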
  3. [SOLVED] proxmox-backup-client --exclude question

    Hello, I am trying to get --exclude to work. We do not want to put a .pxarexclude on each system. proxmox-backup-client backup etc.pxar:/etc home.pxar:/home --exclude home/*/.cache:home/*/.thunderbird/*/ImapMail I've tried a few different ways over the last hour. Each time I see: warning...
  4. [SOLVED] how to move pbs to new hardware

    After installing pbs and before doing any config: when creating a zpool, use the same name as on the original system. The transfer will create the zfs on the target. If the target exists, receive failed here. [ Could be because I had already set up a datastore on the target. ] In any case I am unsure on...
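
    A minimal sketch of the zfs send/receive transfer discussed in this thread, assuming a dataset named rpool/datastore and a reachable target host (both names are placeholders):

      # snapshot the datastore dataset and stream it to the new machine;
      # the receive creates the dataset on the target, and fails if a
      # dataset with that name already exists there
      zfs snapshot -r rpool/datastore@move
      zfs send -R rpool/datastore@move | ssh root@new-pbs zfs receive rpool/datastore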
  5. [SOLVED] how to move pbs to new hardware

    Probably there are just a few more things besides the datastore and /etc/proxmox-backup/ to scp / rsync over. Do you happen to know the location of keys etc? After that, move the IP address.. PS: thanks for the zfs send/recv suggestion.
  6. [SOLVED] how to move pbs to new hardware

    Hello, we are moving pbs to new hardware. zpools / zfs have been set up with the same names, and rsync of the datastore is in progress and will take some hours. For configuration, besides /etc/proxmox-backup/, is there anything else that needs to be copied over?
  7. [SOLVED] pve 6 to 7 and ceph-common issue

    So I went and removed pve, installed bullseye, rebooted and installed pve. So all is working. The issue started with the hold on ceph-common. Our other stand-alone system had no issue going from pve6 to pve7 once ceph-common was unheld.
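
    Based on this thread, a sketch of checking for and clearing package holds before retrying the 6-to-7 upgrade (apt-mark showhold lists whatever is actually held on the system):

      apt-mark showhold            # list held packages, e.g. ceph-common
      apt-mark unhold ceph-common  # clear the hold that blocks the upgrade
      apt update
      apt dist-upgrade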
  8. [SOLVED] pve 6 to 7 and ceph-common issue

    Also, this is a standalone system that does not use ceph.
  9. [SOLVED] pve 6 to 7 and ceph-common issue

    Unhold did not work: # apt dist-upgrade Reading package lists... Done Building dependency tree... Done Reading state information... Done Calculating upgrade... Done .. Fetched 158 MB in 12s (12.9 MB/s) W...
  10. [SOLVED] pve 6 to 7 and ceph-common issue

    Wait, I have the ceph debs held.. let me retry.
  11. [SOLVED] pve 6 to 7 and ceph-common issue

    I saw someone post something similar in the German forum --- attempting the upgrade from 6.4 to 7: apt dist-upgrade Reading package lists... Done Building dependency tree... Done Reading state information... Done Calculating upgrade... Error! Some packages could not be installed. This may mean that you have...
  12. [SOLVED] file system choice for pbs on hardware

    https://forum.proxmox.com/threads/best-choice-for-datastore-filesytem.93921/ However, that is for pbs running as a virtual machine.
  13. [SOLVED] file system choice for pbs on hardware

    Hello, we are considering moving pbs to a raid-10 zpool using six 4-TB nvme disks. I was searching threads on the best file system type to use and saw concerns regarding zfs. However, I can not see a more reliable way than raid-10 zfs. Does anyone have another idea to consider?
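
    For illustration, one way to lay six disks out as striped mirrors (zfs raid-10); the pool name and device paths below are placeholders, and /dev/disk/by-id/ names are preferable in practice:

      # three two-way mirrors striped together
      zpool create -o ashift=12 pbspool \
        mirror /dev/nvme0n1 /dev/nvme1n1 \
        mirror /dev/nvme2n1 /dev/nvme3n1 \
        mirror /dev/nvme4n1 /dev/nvme5n1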
  14. [SOLVED] osd remove after node died

    Hello, I know there is a cli way that we did 4-5 years ago to remove left-over osd's, mons etc. that show as out from an abruptly dead node. Is there a newer way, or is dump/edit/restore of the ceph config still the way to do so? We use pve 6.4 and ceph-octopus. Thanks
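
    A sketch of the newer single-command cleanup available on Octopus (the osd id and node name below are placeholders):

      ceph osd purge 12 --yes-i-really-mean-it   # crush remove + auth del + osd rm in one step
      ceph mon remove deadnode                   # drop the stale monitor entry
      ceph osd crush remove deadnode             # remove the now-empty host bucket, if one is left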
  15. [PVE7] - wipe disk doesn't work in GUI

    On pve6.4:
      wipefs -fa /dev/nvme4n1
      dd if=/dev/zero of=/dev/nvme4n1 bs=1M count=1000
      udevadm settle
      reboot
    Note: after udevadm settle, in pve after Reload the drive still showed as lvm2 member; parted /dev/nvme4n1 p showed no partitions. I had tried many other things and glad that at least those...
  16. [SOLVED] replacing server hardware and moving zfs disks issue

    Thanks for the responses. Due to the systems having been installed way back, there is no way to add a 512M partition, so we will reinstall. PS: adding a node to a cluster and setting up ceph are much easier than before. The documentation and gui made it very easy.
  17. [SOLVED] replacing server hardware and moving zfs disks issue

    Thanks, that is probably it. Do you happen to know where the solution is for that?
  18. [SOLVED] replacing server hardware and moving zfs disks issue

    Hello, I searched and I think I saw this issue reported before. We are replacing server hardware but moving the storage over to the new systems. The 1st system had a single-disk ext4 pve install and all went well. The next one has zfs raid-1 and will not boot; instead a uefi shell appears.. could...
  19. [SOLVED] is garbage collection needed on a remote sync system?

    Our remotes have much more disk usage [ approx 2x ] than the source pbs system, so I am running garbage collect for the 1st time. AFAIK the syncs have always been set to 'Remove Vanished'. However, we have had some wrong configurations in the past, so I assume our issue with higher...
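
    For reference, a sketch of starting and checking garbage collection on the sync target from the CLI (the datastore name is a placeholder):

      proxmox-backup-manager garbage-collection start remote-store
      proxmox-backup-manager garbage-collection status remote-store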