Search results

  1. CEPH-Log DBG messages - why?

    Thank you for this. I gave up trying to solve this after reading comments from the devs somewhere that led me to believe they didn't want to make it possible to turn those messages off. Finally I can get rid of the spam and perhaps see the actually interesting log messages for once.
  2. wrong mtu after upgrade to 9

    Interface ens6f0 got 2 altnames: one enp3s0f0 and one enx followed by the MAC.
  3. wrong mtu after upgrade to 9

    See below for the relevant part of interfaces; I included only the interfaces for the NIC affected by this. This is the broken config. The fix was replacing enp3s0f0 with ens6f0 and enp3s0f1 with ens6f1, plus a reboot, and it was solved. The interface that had the wrong mtu was enp3s0f0, the rest...
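The rename fix described above would look roughly like this in /etc/network/interfaces. This is a minimal sketch only: the mtu value of 9000 and the `inet manual` method are assumptions, not details taken from the thread.

```text
# /etc/network/interfaces (sketch) -- the broken config referenced the old
# predictable names enp3s0f0/enp3s0f1; after the upgrade the NIC answers to
# ens6f0/ens6f1, so the stanzas must use the new names.
auto ens6f0
iface ens6f0 inet manual
    mtu 9000

auto ens6f1
iface ens6f1 inet manual
    mtu 9000
```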
  4. wrong mtu after upgrade to 9

    This issue is not on a virtio interface; it is on the physical interface of the host, and besides, mtu was set or else I would not have gotten any jumbo frames in the first place.
  5. wrong mtu after upgrade to 9

    I recently upgraded my 4-node Proxmox cluster from 8 to 9. After a while I ran into some odd problems, and after some troubleshooting it turned out the interface of one node did not get the correct mtu applied (I got one network with jumbo frames), resulting in one node with an mtu of 1500 and the...
  6. After updating ceph 18.2.2 each osds never start

    Fixed it: ceph balancer off; ceph osd dump; ceph osd rm-pg-upmap-primary <each upmap primary id from dump>; ceph osd rm-pg-upmap-items <each upmap item from dump>. Waited until the cluster was finished with backfills and then upgraded; appears to work.
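The removal step above can be scripted instead of typing each pgid by hand. A minimal sketch, assuming `ceph osd dump --format json` exposes `pg_upmap_items` and `pg_upmap_primaries` arrays keyed by `pgid`; verify the field names against your release before relying on it.

```python
import json

def upmap_removal_commands(osd_dump_json: str) -> list[str]:
    """Build `ceph osd rm-pg-upmap-*` commands from `ceph osd dump --format json`.

    Assumption (not confirmed by the thread): the dump contains
    `pg_upmap_primaries` entries like {"pgid": ..., "primary_osd": ...} and
    `pg_upmap_items` entries like {"pgid": ..., "mappings": [...]}.
    """
    dump = json.loads(osd_dump_json)
    cmds = []
    # One removal command per upmap-primary entry, keyed by pgid.
    for entry in dump.get("pg_upmap_primaries", []):
        cmds.append(f"ceph osd rm-pg-upmap-primary {entry['pgid']}")
    # One removal command per upmap-items entry, keyed by pgid.
    for entry in dump.get("pg_upmap_items", []):
        cmds.append(f"ceph osd rm-pg-upmap-items {entry['pgid']}")
    return cmds

# Example with a made-up dump fragment (hypothetical pgids):
sample = json.dumps({
    "pg_upmap_primaries": [{"pgid": "2.7", "primary_osd": 3}],
    "pg_upmap_items": [{"pgid": "2.1", "mappings": [{"from": 0, "to": 4}]}],
})
print(upmap_removal_commands(sample))
```

The generated strings can then be reviewed and piped to a shell, which keeps a record of exactly which upmap entries were removed before the upgrade.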
  7. After updating ceph 18.2.2 each osds never start

    Can you elaborate on how you removed the pg-upmap entries? I'm pretty sure I have also used the osdmaptool and set the balancer to upmap in the past.
  8. After updating ceph 18.2.2 each osds never start

    Downgrading back to 18.2.1-pve2 worked, but I would like to get this issue solved or find a workaround so I can update. See below for the requested info. ceph status cluster: id: 07f85e29-1217-46f1-a392-5cccfe47cd8c health: HEALTH_WARN Module 'dashboard' has failed...
  9. After updating ceph 18.2.2 each osds never start

    Upgraded one node to 18.2.2 and got this error; the monitor and osds on this node do not start. What is the easiest way of downgrading to the previous release? Never mind, I should read better.
  10. Ignore quorum requirement for specific vm

    Yes, I know. But for this specific vm there are no resource issues preventing it from starting; everything is local. It cannot be migrated away from this host and everything it needs is on this one host, so there will never be any conflict with anything else in the cluster (for example no shared...
  11. Ignore quorum requirement for specific vm

    Hello, a question, perhaps a bit of an odd one: is it possible to force one specific vm to start even when there is no quorum in the cluster? This specific vm is physically bound to a specific host and can never move or be run on any other host in the cluster because of pcie assignments, and it also...
  12. CEPH-Log DBG messages - why?

    Not a solution but this seems relevant to the problem: https://github.com/ceph/ceph/pull/47502 and https://tracker.ceph.com/issues/57049 My guess is that this behavior is a bug.
  13. CEPH-Log DBG messages - why?

    A question: is this still the right way to do it in ceph v17? The reason for asking is that I tried adding mon_cluster_log_file_level = info to ceph.conf under [global] and rebooted, but I still have all the spam in the log about pgmap.
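For reference, the setting the poster describes sits in ceph.conf like this. It is a sketch only, since whether the option is honored on a given release is exactly what the thread questions; on newer Ceph the runtime equivalent `ceph config set mon mon_cluster_log_file_level info` may be worth trying (treat that as an assumption to verify against your version).

```text
# /etc/ceph/ceph.conf (sketch) -- raise the cluster log file level so the
# per-second pgmap DBG lines stop being written
[global]
    mon_cluster_log_file_level = info
```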
  14. Crash Proxmox 7 mptSAS when starting VM that incl. HW passtrough.

    Never mind, I'm stupid and need to pay more attention to my iommu groups.
  15. Crash Proxmox 7 mptSAS when starting VM that incl. HW passtrough.

    Did you find a solution to this problem? I'm trying to get gpu passthrough working (got it to work and have been using it on another box); the moment the vm is started it boots just fine, BUT the sas controller that is not at all involved in this all of a sudden decides to drop all disks (luckily...
  16. iotop after 7.2 upgrde

    Hello, after the upgrade to 7.2 the iotop fields SWAPIN and IO are unavailable. I suspect this is related to the new kernel. Any suggestions how to fix this?
  17. lvm related problems (probably bugs)

    Hello. Two problems, partly related. Problem 1: when you have a vm with a disk on lvm (not in a thin pool), the lv gets deactivated whenever you turn off the vm. This is not such a great idea, because if you also have lvmcache enabled the dirty writeback blocks will not be committed from fast to...
  18. [SOLVED] Some ceph related questions (autoscale)

    That was exactly the problem, thanks. I had the built-in pool device_health_metrics set to use replicated_rule while the other pools were using device-class-based rules.
  19. nfs storage mount shows up inactive on some members

    I think I've found the cause, but it still is odd. As you can see above, the path on the nfs server itself is /mnt/data/backup/proxmox; the exported folder on this server is actually /mnt/data. So the nfs storage mount points to a subdirectory where I have my backups. So far so good. BUT I do...
  20. [SOLVED] Some ceph related questions (autoscale)

    Hello. I got a small pve cluster setup and I'm testing out ceph to see if it is something I want to use or not. The question I have is about the pg_autoscaler. If I look at the list of pools and check the column "Optimal # of PGs", this is listed as n/a, and if I hover over it I get a tooltip...