Search results

  1. slow migrations

    - we use ConnectX-4 and ConnectX-5 cards - the MLAG switches are Mellanox 2500sn running Cumulus. As for the loop, I am certain the cables go from the ConnectX-* cards to the switch. Cumulus has this command that shows the LLDP of each connection (see the LLDP sketch after this list). I ran this on both switches to verify that the port...
  2. slow migrations

    So in our case we think the network was the cause. I will leave the thread open as someone else posted their issue, and will wait a few days to make sure there is not a repeat.
  3. slow migrations

    Also, we get emails when ceph -s shows warnings and saw these (see the slow-ops sketch after this list): cluster: id: 220b9a53-4556-48e3-a73c-28deff665e45 health: HEALTH_WARN 1 slow ops, oldest one blocked for 82322 sec, mon.pve4 has slow ops services: mon: 3 daemons, quorum pve15,pve11,pve4 (age 22h)...
  4. slow migrations

    1- I am fairly sure it is related to this, seen in dmesg on the pve hosts (see the hung-task sketch after this list): # dmesg|grep hung [Sun Sep 12 07:39:32 2021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [Sun Sep 12 07:39:32 2021] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [Sun...
  5. slow migrations

    Hello, I am seeing slower migrations with pve7 than pve6. We do have a network issue that I have been trying to track down over the last week, which is probably the cause. However, I wanted to see if others have noticed slower migrations. Thank you for reading this.
  6. [SOLVED] proxmox-backup-client --exclude question

    Thank you Fabian, this is what worked in our case (see the --exclude sketch after this list): proxmox-backup-client backup etc.pxar:/etc home.pxar:/home --exclude */.cache/ --exclude */Downloads/ --exclude */.thunderbird/*/ImapMail 2>&1
  7. [SOLVED] proxmox-backup-client --exclude question

    Hello, I am trying to get --exclude to work; we do not want to put a .pxarexclude on each system. proxmox-backup-client backup etc.pxar:/etc home.pxar:/home --exclude home/*/.cache:home/*/.thunderbird/*/ImapMail I've tried a few different ways over the last hour. Each time I see: warning...
  8. [SOLVED] how to move pbs to new hardware

    After installing pbs and before doing any config: when creating a zpool, use the same name as on the original system. 0- the transfer will create the zfs on the target; if the target exists, receive failed here. [ could be because I had already set up a datastore on the target. In any case I am unsure on...
  9. [SOLVED] how to move pbs to new hardware

    Probably there are just a few more things besides the datastore and /etc/proxmox-backup/ to scp / rsync over. Do you happen to know the location of keys etc.? After that, move the IP address.. PS: thanks for the zfs send/recv suggestion (see the send/receive sketch after this list).
  10. [SOLVED] how to move pbs to new hardware

    Hello, we are moving pbs to new hardware. zpools / zfs have been set up with the same names; rsync of the datastore is in progress and will take some hours. For configuration, besides /etc/proxmox-backup/, is there anything else that needs to be copied over?
  11. [SOLVED] pve 6 to 7 and ceph-common issue

    So I went and removed pve, installed bullseye, rebooted, and installed pve, so all is working. The issue started with the hold on ceph-common. Our other standalone system had no issue with pve6 to pve7 when ceph-common was unheld.
  12. [SOLVED] pve 6 to 7 and ceph-common issue

    Also, this is a standalone system that does not use ceph.
  13. [SOLVED] pve 6 to 7 and ceph-common issue

    Unhold did not work: # apt dist-upgrade Reading package lists... Done Building dependency tree... Done Reading state information... Done Calculating upgrade... Done .. Fetched 158 MB in 12s (12.9 MB/s) W...
  14. [SOLVED] pve 6 to 7 and ceph-common issue

    Wait, I have the ceph debs held.. let me retry (see the apt-mark unhold sketch after this list).
  15. [SOLVED] pve 6 to 7 and ceph-common issue

    I saw someone post something similar in the German forum --- attempting an upgrade from 6.4 to 7: apt dist-upgrade Reading package lists... Done Building dependency tree... Done Reading state information... Done Calculating upgrade... Error! Some packages could not be installed. This may mean that you have...
  16. [SOLVED] file system choice for pbs on hardware

    https://forum.proxmox.com/threads/best-choice-for-datastore-filesytem.93921/ However, that is for pbs running as a virtual machine.
  17. [SOLVED] file system choice for pbs on hardware

    Hello, we are considering moving pbs to a raid-10 zpool using six 4-TB nvme disks. I was searching threads on the best file system type to use and saw concerns regarding zfs. However, I cannot see a more reliable way than raid-10 zfs (see the zpool sketch after this list). Does anyone have another idea to consider?
  18. [SOLVED] osd remove after node died

    Hello, I know there is a CLI way that we did 4-5 years ago to remove leftover OSDs, mons etc. that show as out from an abruptly dead node. Is there a newer way, or is dump/edit/restore of the ceph config still the way to do so (see the OSD cleanup sketch after this list)? We use pve 6.4 and ceph-octopus. Thanks
  19. [PVE7] - wipe disk doesn't work in GUI

    On pve6.4: wipefs -fa /dev/nvme4n1; dd if=/dev/zero of=/dev/nvme4n1 bs=1M count=1000; udevadm settle; reboot (see the wipe sketch after this list). Note: after udevadm settle, in pve after Reload the drive still showed as an lvm2 member, while parted /dev/nvme4n1 p showed no partitions. I had tried many other things and am glad that at least those...
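
A minimal sketch, assuming a Cumulus Linux switch with the NCLU tooling, of how the LLDP output mentioned in item 1 can be used to verify which switch port each ConnectX card lands on; the swp1/swp2 port names are placeholders, not taken from the thread.

```bash
# on each MLAG switch: list every port with the remote host and remote port seen via LLDP
net show lldp

# narrow the output to the ports that should face the ConnectX cards
# (swp1/swp2 are placeholder port names)
net show lldp | grep -E 'swp1|swp2'

# the same data is also available from lldpd directly
sudo lldpcli show neighbors
```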
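
A hedged sketch of how a "mon.pve4 has slow ops" warning like the one in item 3 is commonly investigated; mon.pve4 comes from the snippet, the rest is a generic approach, not the fix used in the thread.

```bash
# overall health plus which daemon is reporting slow ops
ceph -s
ceph health detail

# on the node hosting mon.pve4: inspect the operations the monitor has in flight
ceph daemon mon.pve4 ops

# restarting the affected monitor often clears a stale slow-ops counter
systemctl restart ceph-mon@pve4
```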
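
A small sketch showing how the hung-task messages quoted in item 4 can be collected, and the kernel knob the message itself refers to; silencing the warning does not fix the blocked I/O behind it.

```bash
# collect all hung-task reports with some surrounding context
dmesg -T | grep -iA 10 'blocked for more than'

# timeout (seconds) before the kernel reports a hung task
sysctl kernel.hung_task_timeout_secs

# the suggestion printed in dmesg only disables the warning:
# echo 0 > /proc/sys/kernel/hung_task_timeout_secs
```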
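
The working command from item 6 laid out with the long --exclude option spelled out (the forum rendering appears to have collapsed the double dash); the PBS_REPOSITORY value is a placeholder, not taken from the thread.

```bash
# repository is a placeholder; adjust user, host and datastore
export PBS_REPOSITORY='backup@pbs@pbs.example.com:datastore1'

proxmox-backup-client backup \
    etc.pxar:/etc \
    home.pxar:/home \
    --exclude '*/.cache/' \
    --exclude '*/Downloads/' \
    --exclude '*/.thunderbird/*/ImapMail' \
    2>&1
```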
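
A hedged sketch of the zfs send/receive approach mentioned in items 8-10 for moving a PBS datastore to new hardware; the pool/dataset names and the new-pbs hostname are placeholders, and, as the thread notes, receive fails if the target dataset already exists.

```bash
# on the old PBS host: snapshot the datastore dataset recursively
zfs snapshot -r rpool/datastore@move1

# stream it to the new host; the target dataset must not exist yet,
# otherwise zfs receive refuses the stream (as seen in the thread)
zfs send -R rpool/datastore@move1 | ssh new-pbs zfs receive rpool/datastore

# copy the PBS configuration as well
rsync -a /etc/proxmox-backup/ new-pbs:/etc/proxmox-backup/
```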
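
For items 11-14, a sketch of the generic apt-mark unhold workflow for releasing held ceph packages before the 6-to-7 dist-upgrade; the package names should be taken from what showhold actually reports on the system.

```bash
# list packages currently on hold
apt-mark showhold

# release the hold on the ceph packages reported above
apt-mark unhold ceph-common

# retry the upgrade
apt update
apt dist-upgrade
```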
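
A hedged sketch of the raid-10 style pool described in item 17, i.e. three mirrored pairs striped over six NVMe disks; the pool name and the /dev/disk/by-id paths are placeholders.

```bash
# three mirror vdevs striped together = raid-10 layout over six disks
zpool create -o ashift=12 pbspool \
    mirror /dev/disk/by-id/nvme-disk1 /dev/disk/by-id/nvme-disk2 \
    mirror /dev/disk/by-id/nvme-disk3 /dev/disk/by-id/nvme-disk4 \
    mirror /dev/disk/by-id/nvme-disk5 /dev/disk/by-id/nvme-disk6

zpool status pbspool
```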
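
For item 18, a hedged sketch of the newer CLI way to clean up the leftovers of an abruptly dead node under Octopus; OSD id 12 and host/mon name pve9 are placeholders, and ceph osd purge replaces the older crush remove / auth del / osd rm sequence.

```bash
# remove the dead OSD, its CRUSH entry and its auth key in one step
ceph osd out 12
ceph osd purge 12 --yes-i-really-mean-it

# drop the monitor that lived on the dead node from the monmap
ceph mon remove pve9

# remove the now-empty host bucket from the CRUSH map
ceph osd crush remove pve9
```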
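
The wipe sequence from item 19 laid out one step per line with comments; the device name /dev/nvme4n1 is the one quoted in the post.

```bash
# drop filesystem / RAID / LVM signatures from the disk
wipefs -fa /dev/nvme4n1

# zero the first 1000 MiB to clear leftover metadata
dd if=/dev/zero of=/dev/nvme4n1 bs=1M count=1000

# let udev settle before anything re-scans the disk
udevadm settle

# per the post, a reboot was still needed before the GUI stopped showing the disk as an lvm2 member
reboot
```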