Search results

  1. No email notification for zfs status degraded

    Still the issue... also on PBS: whenever a ZFS degradation is detected, there is no alert of any kind (other than checking on the host itself).
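
    A minimal workaround sketch, assuming a working mail transport on the host and treating the address below as a placeholder: point the ZFS Event Daemon (zed) at a mailbox so a DEGRADED event sends mail.

      # zfs-zed is normally already installed on PVE/PBS
      # in /etc/zfs/zed.d/zed.rc set (placeholder address):
      #   ZED_EMAIL_ADDR="root@example.com"
      #   ZED_NOTIFY_VERBOSE=1
      systemctl restart zfs-zed   # pick up the changed zed.rc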
  2. RGW ceph client admin keyring creation

    Hi, I'm following this guide on how to create an S3-compatible storage. Unfortunately, adding the admin key as described: ceph -k /etc/ceph/ceph.client.admin.keyring auth add client.radosgw.pve1 -i /etc/ceph/ceph.client.radosgw.keyring, but also: ceph -k /etc/pve/priv/ceph.client.admin.keyring auth...
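
    A hedged sketch of those keyring steps, reusing the client name and paths from the post; on PVE the cluster admin keyring normally lives under /etc/pve/priv/:

      # generate a key for the gateway and give it the usual mon/osd caps
      ceph-authtool --create-keyring /etc/ceph/ceph.client.radosgw.keyring \
          --gen-key -n client.radosgw.pve1 --cap osd 'allow rwx' --cap mon 'allow rwx'
      # import it, authenticating with the admin keyring kept by PVE
      ceph -k /etc/pve/priv/ceph.client.admin.keyring \
          auth add client.radosgw.pve1 -i /etc/ceph/ceph.client.radosgw.keyring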
  3. multiple datastores on same ZFS pool

    Hi, we've got 2 locations, each with a PBS node and a PVE cluster. We want to back up both clusters and replicate them with each other. I started with only a ZFS pool called "tank" and mounted the backup datastore directly on it for each location. Now, after a few months, I think it is best to...
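
    A minimal sketch of a per-datastore layout, assuming the pool name "tank" from the post and hypothetical dataset/datastore names:

      # one ZFS dataset per datastore instead of pointing datastores at the pool root
      zfs create tank/backup-loc1
      zfs create tank/backup-loc2
      # register each dataset as its own PBS datastore
      proxmox-backup-manager datastore create loc1 /tank/backup-loc1
      proxmox-backup-manager datastore create loc2 /tank/backup-loc2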
  4. Proxmox windows cloud-init password problem after upgrade

    Hi, I see (through your previous posts) that you've managed to get cloudbase-init to work (or at least partially). I'm in the process of achieving the same but struggle to get DNS & password inserted. I adjusted the conf file to insert the NetworkConfigPlugin & SetUserPasswordPlugin. Only the IP and...
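
    For reference, a hedged sketch of what such a plugins line can look like in cloudbase-init.conf; the module paths below are the upstream defaults and are an assumption about this particular setup:

      # C:\Program Files\Cloudbase Solutions\Cloudbase-Init\conf\cloudbase-init.conf
      [DEFAULT]
      plugins=cloudbaseinit.plugins.common.networkconfig.NetworkConfigPlugin,cloudbaseinit.plugins.windows.setuserpassword.SetUserPasswordPlugin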
  5. global proxy settings through snippets?

    We'll fix it with a remotely pushed Ansible solution.
  6. VM failing to start

    TASK viewer: Use of uninitialized value in addition (+) at /usr/share/perl5/PVE/QemuServer/Cloudinit.pm line 496. Use of uninitialized value $data in print at /usr/share/perl5/PVE/Tools.pm line 254. Having the same
  7. global proxy settings through snippets?

    Is this possible, or should I rely on external tooling like pve-cloud-init-creator?
  8. global proxy settings through snippets?

    Hi, does anybody know how to insert proxy variables through snippets, so my cloud-init VM can reach out to get its updates during first boot? I'm looking for some documentation but cannot find it. Thanks in advance.
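
    One possible approach, sketched under the assumption that a snippets-enabled storage named "local" exists and that the VM id, proxy URL and file name are placeholders: a custom cloud-init user snippet that writes the proxy variables at first boot.

      # /var/lib/vz/snippets/proxy-user.yaml (hypothetical file name and proxy URL)
      #cloud-config
      write_files:
        - path: /etc/environment
          append: true
          content: |
            http_proxy=http://proxy.example.com:3128
            https_proxy=http://proxy.example.com:3128

      # attach it to the VM (id 100 is a placeholder)
      qm set 100 --cicustom "user=local:snippets/proxy-user.yaml"

    Note that a custom user snippet replaces the user data PVE would otherwise generate, so settings like ciuser and sshkeys have to be carried into the snippet as well.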
  9. [SOLVED] corosync no active links

    The active/backup config fixed it. Going to add some 2x 1 Gbit single-lane cards for corosync in the future.
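
    For reference, a sketch of what such an active/backup bond can look like in /etc/network/interfaces; the interface names and address are assumptions:

      auto bond1
      iface bond1 inet static
              address 10.10.10.16/24
              bond-slaves eno3 eno4
              bond-mode active-backup
              bond-miimon 100
      # corosync's link0 (ring0_addr) then points at this bond's address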
  10. [SOLVED] corosync no active links

    So corosync actually misbehaves on an 802.3ad bond with dynamic channeling :eek: Why is it so sensitive, and what could eventually go wrong? Does it do a better job on a single-NIC setup? Or with 2x 1 Gbit and separate IPs on different switches? I'll give active/backup a try. Thanks...
  11. [SOLVED] corosync no active links

    Hi, our new cluster is being prepared for use, and while slowly starting to use it I see some messages in syslog: Dec 25 06:27:52 prxa06 pmxcfs[4868]: [dcdb] notice: data verification successful Dec 25 06:34:26 prxa06 corosync[5041]: [TOTEM ] Retransmit List: b819f Dec 25 06:35:35...
  12. ceph pool compression lz4

    deduplication would be nice :-D
  13. ceph pool compression lz4

    Changed aggressive to force, which leads to some changes.
  14. ceph pool compression lz4

    Hi, to save some space on our SSD pool I've enabled compression on the pool: ceph osd pool set VMS compression_algorithm lz4 and ceph osd pool set VMS compression_mode aggressive. With ceph df detail I can get some details, but I cannot verify whether it works. Any hints? Is "ratio" needed as well...
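
    A quick way to check, sketched with the pool name from the post; the compression columns only show an effect once newly written data is actually compressible:

      # confirm the settings landed on the pool
      ceph osd pool get VMS compression_algorithm
      ceph osd pool get VMS compression_mode
      # USED COMPR / UNDER COMPR in the detailed df output show the effect
      ceph df detail
      # bluestore compression counters, run on the node hosting osd.0
      ceph daemon osd.0 perf dump | grep -i compress

    The required ratio (compression_required_ratio) has a default of 0.875, so it normally does not have to be set for compression to kick in.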
  15. [SOLVED] OSD outdated and OUT?

    My goodness... human error... Looking at my first downloaded OSD log, I found a typo in my cluster network... Changed it, works like a charm :)
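
    For anyone hitting the same symptom, a short sketch of where to look; the file path is the PVE default:

      # the cluster/public network subnets live in the shared ceph.conf
      grep -E 'cluster_network|public_network' /etc/pve/ceph.conf
      # after correcting the subnet, restart the OSDs on the affected node
      systemctl restart ceph-osd.target
      ceph -s    # should move back towards HEALTH_OK as the OSDs rejoin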
  16. [SOLVED] OSD outdated and OUT?

    This is what ceph -s looks like:

      root@utr-tst-vh03:~# ceph -s
        cluster:
          id:     67b4dbb5-1d5e-4b62-89b0-46ff1ec560fd
          health: HEALTH_WARN
                  1 filesystem is degraded
                  1 MDSs report slow metadata IOs
                  7 osds down
                  2 hosts (8 osds) down...
  17. [SOLVED] OSD outdated and OUT?

    After a recent PVE upgrade (non-enterprise) taking Ceph from 16.2.5 to 16.2.7, the converged storage went offline on my test cluster. Following the advice to get my monitors and managers up again, 2 out of 4 hosts gained an upgraded OSD. After a couple of reboots, many OSDs (12 out of 16) went...
  18. [SOLVED] CEPH MON fail after upgrade

    Restarting all nodes, monitors and managers doesn't do the trick. systemctl restart ceph-osd.target does not fix it either. Now I'm seeing OSDs going offline.
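
    For reference, a sketch of the per-daemon units as they exist on a PVE node; the hostname and OSD id are placeholders:

      systemctl restart ceph-mon@pve1.service    # monitor on this node
      systemctl restart ceph-mgr@pve1.service    # manager on this node
      systemctl restart ceph-osd@0.service       # one specific OSD
      systemctl restart ceph-osd.target          # all OSDs on this node
      journalctl -u ceph-osd@0.service -e        # why an OSD stays down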
  19. [SOLVED] CEPH MON fail after upgrade

    Ahem, sorry... 2 of the 4 machines have upgraded 16.2.7 OSDs. The other 2 are still on 16.2.5 and show the upgrade symbol. Anything to do, or just wait?