Search results

  1. R

    how to restart ceph

    in a few days I have a setting to change in ceph.conf. Which services need a restart to apply ceph.conf changes?
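
    A minimal sketch of the restart, assuming a PVE-managed Ceph node named pve1 and OSD IDs 0 and 1 (substitute your own node names and IDs); on PVE the daemons run as per-node systemd units, and which of them actually need a restart depends on the option being changed:

      systemctl restart ceph-mon@pve1.service    # monitor on this node
      systemctl restart ceph-mgr@pve1.service    # manager on this node
      systemctl restart ceph-osd@0.service ceph-osd@1.service    # OSDs on this node
      # or restart every Ceph daemon on this node in one go:
      systemctl restart ceph.target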
  2. R

    [SOLVED] Ceph Public/Cluster Networks another question

    So I had thought Ceph probably uses those addresses to prioritize networks. I will set the public network to the VM network range 10.1.0.0/16. Another question: should PBS access the storage over the public or the cluster network?
  3. R

    [SOLVED] Ceph Public/Cluster Networks another question

    We have had this in our ceph.conf for a few years: public_network = 10.11.12.0/24 and cluster_network = 10.11.12.0/24. PVE VMs run at 10.1.10.0/24 on a separate pair of switches used in an LACP bond. corosync.conf uses 2 other NICs and switches for cluster communications. Having read forum posts, pve...
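
    For reference, a minimal sketch of the split configuration discussed in this thread, using the ranges mentioned above (not a recommendation for other setups):

      [global]
      public_network  = 10.1.0.0/16
      cluster_network = 10.11.12.0/24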
  4. R

    [SOLVED] Ceph - Schedule deep scrubs to prevent service degradation

    Hello David, with Ceph 15 the script has the following issue:

      ceph-deep-scrub-pg-ratio: line 104: $2: unbound variable
      # line 104: while read line; do set $line; echo $1 $($DATE -d "$2 $3" +%s); done | \

    PS: thank you for this script, we've been using it for a few years.
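
    One way to sidestep the unbound-variable error, as a sketch: skip lines that do not carry the three expected fields before the positional parameters are reused (this assumes the script runs under set -u and that $DATE is GNU date, as in the original line):

      while read -r line; do
          set -- $line                        # refill $1 $2 $3 from the line
          [ "$#" -ge 3 ] || continue          # skip empty/short lines that trip set -u
          echo "$1 $($DATE -d "$2 $3" +%s)"   # first field plus the parsed timestamp as epoch seconds
      done | \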
  5. R

    [SOLVED] Corosync Redundancy question

    I want to manually edit corosync.conf to set ring network priorities. I am reading pve-docs/chapter-pvecm.html#pvecm_redundancy. My question is: how do I set the priority in the .conf? # pvecm create CLUSTERNAME --link0 10.10.10.1,priority=15 --link1 10.20.20.1,priority=20 this is the totem...
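
    A sketch of the equivalent in corosync.conf, assuming kronosnet links 0 and 1 as in the pvecm example above; the per-link priority goes into the interface subsections of the totem block as knet_link_priority. On PVE, edit /etc/pve/corosync.conf and increment config_version rather than editing the local copy directly:

      totem {
        # existing totem options stay unchanged
        interface {
          linknumber: 0
          knet_link_priority: 15
        }
        interface {
          linknumber: 1
          knet_link_priority: 20
        }
      }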
  6. R

    [SOLVED] ceph upgrade procedure

    Hello, I am looking to have our Ceph upgrade procedure updated. Here is what we do now, per 2017 notes: 1. apt update && apt full-upgrade 2. restart monitors, one after the other (wait for healthy...
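
    For the "wait for healthy" step between restarts, a small sketch (assuming the cluster normally reports HEALTH_OK; loosen the check if you accept HEALTH_WARN during upgrades):

      # after restarting each daemon, block until the cluster settles
      until ceph health | grep -q HEALTH_OK; do
          sleep 10
      done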
  7. R

    Zpools Not Importing After Power Failure

    So if there are files or directories at the mount point, try this: zfs set overlay=on tank (replace 'tank' with your zpool name).
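
    As a small usage sketch, assuming the pool is named tank and is already imported:

      ls /tank                  # see what is sitting on the mountpoint
      zfs set overlay=on tank   # allow mounting over a non-empty directory
      zfs mount -a              # retry mounting the datasets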
  8. R

    Zpools Not Importing After Power Failure

    Does the zfs mount point have directories like dump, template etc.? If so, there is a zfs option to fix this; I'll look for it, as I used it again a few weeks ago.
  9. R

    [SOLVED] ifupdown2 and bond

    Off topic: we are looking at upgrading Ceph switches. Currently using Quanta LB6M 10GbE; we have 40GbE cards. We think Mellanox/Nvidia switches running Cumulus Linux are the way to go. However, I know little about this subject. Is Cumulus a good fit in labs and clusters?
  10. R

    [SOLVED] ifupdown2 and bond

    Hello Spirit, during the switchover to ifupdown2 I noticed a few warnings which you are probably already aware of. I assume these will not cause an issue, but are there settings to avoid the warnings? # ifreload -a warning: bond0: attribute bond-min-links is set to '0' and...
  11. R

    [SOLVED] ifupdown2 and bond

    So after looking at the networking expertise of the ifupdown2 authors, I think we'll switch our PVE cluster to ifupdown2.
  12. R

    [SOLVED] ifupdown2 and bond

    auto bond0
    iface bond0 inet static
        address 10.11.12.80/24
        bond-mode active-backup
        bond-primary enp3s0f0
        bond-slaves enp3s0f0 enp3s0f1
        mtu 9000

    ## OLD
    auto bond0
    iface bond0 inet static
        address 10.11.12.80/24
        slaves enp3s0f0...
  13. R

    [SOLVED] ifupdown2 and bond

    thank you for the fast reply.
  14. R

    [SOLVED] ifupdown2 and bond

    Hello, on our PBS system a warning flashed about ifupdown2 or ifdown2 missing, so I installed it. After doing so, my existing bond did not work, so I had to change /etc/network/interfaces to the new bond directives. I have 5 PVE nodes to get the bond working on. My question: is ifupdown2...
  15. R

    [SOLVED] lxc backup fail dmesg info

    Hello, for LXC backups to PBS or local vzdump we get frequent fails on busy systems. ### from email:

      606: 2020-11-01 19:57:16 INFO: Starting Backup of VM 606 (lxc)
      606: 2020-11-01 19:57:16 INFO: status = running
      606: 2020-11-01 19:57:16 INFO: CT Name: bc-sys6-buster
      606: 2020-11-01 19:57:16...
  16. R

    Most efficient sync / prune / garbage collect strategy

    That makes sense. So we will always have GC set up at the remotes.
  17. R

    Most efficient sync / prune / garbage collect strategy

    I had assumed that a remote sync target did not need GC, since it should be a duplicate of the main PBS system. However, that is not the case: the remote had 2TB+ more disk usage after a couple of months of syncs, and running GC fixed that. Question: once pbs is stable, should gc still be...
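
    A minimal sketch of keeping GC scheduled on a remote, assuming a datastore named store1 (a hypothetical name) and the systemd-calendar-style schedule syntax PBS uses:

      proxmox-backup-manager datastore update store1 --gc-schedule 'daily'
      proxmox-backup-manager garbage-collection start store1    # or kick one off manually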
  18. R

    [SOLVED] 1 in 12 lxc backups fail on average

    Naturally, after marking this solved there were more fails like this in the last two days:
    - to local storage:

      INFO: starting new backup job: vzdump --all 1 --mode snapshot --mailnotification failure --compress zstd --quiet 1 --mailto fbcadmin --storage z-local-nvme
      INFO: skip external VMs: 108, 446...
  19. R

    [SOLVED] 1 in 12 lxc backups fail on average

    Just to complete this: a week or so later we had 1-2 fails per day for LXC and KVM. That seems to have been fixed by software updates for PBS and PVE/KVM.
