Search results

  1. [SOLVED] Failing connection to pbs after setting a custom certificate in it.

    # cat .config/proxmox-backup/fingerprints
    cat: .config/proxmox-backup/fingerprints: No such file or directory
  2. [SOLVED] Failing connection to pbs after setting a custom certificate in it.

    Yes, checked multiple times. Checked with wget and curl (before reverting to the autogenerated certificate). Connecting to the PBS web GUI with the custom certificate installed works; only the pve -> pbs connection fails.
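
    A minimal sketch of that kind of check from a PVE node; the hostname is a placeholder (8007 is the default PBS API port), not a value from the thread:

      # show the certificate chain PBS actually serves
      openssl s_client -connect pbs.example.com:8007 -showcerts </dev/null
      # curl only succeeds if the chain validates against the system CA store
      curl -v https://pbs.example.com:8007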
  3. [SOLVED] Failing connection to pbs after setting a custom certificate in it.

    Hello, after uploading a custom certificate made with our internal PKI, PBS is no longer accessible from our PVE clusters, even though I updated the cert fingerprint in the PBS storage configuration on the PVE side. The clusters show this error: proxmox-backup-client failed: Error: error trying to connect...
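
    A sketch of the fingerprint update described above, assuming the default PBS port; the storage ID, hostname, and fingerprint value are placeholders, not values from the thread:

      # read the SHA-256 fingerprint of the certificate PBS currently serves
      openssl s_client -connect pbs.example.com:8007 </dev/null 2>/dev/null \
          | openssl x509 -noout -fingerprint -sha256
      # write that fingerprint into the PBS storage entry on the PVE side
      pvesm set my-pbs-storage --fingerprint AA:BB:CC:...:FF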
  4. File restore failing from a directory with thousands of files

    Thank you. IMHO, use of `proxmox-backup-client map` as detailed in #2 is much more practical (and really great for large restores, in general). rob
  5. File restore failing from a directory with thousands of files

    Here's the log. I guess the salient part is this OOM:
    Out of memory: Killed process 53 (proxmox-restore) total-vm:434868kB, anon-rss:49756kB, file-rss:5000kB, shmem-rss:0kB, UID:0 pgtables:224kB oom_score_adj:0
  6. File restore failing from a directory with thousands of files

    I prepared a little screencast: https://www.resolutions.it/nextcloud/index.php/s/PGydmnnTTMsCMPF
    The error text says: "Connection closed before message completed (500)"
  7. File restore failing from a directory with thousands of files

    A workaround: use the `proxmox-backup-client map` command to map the remote (encrypted) snapshot to /dev/loop0 and mount it as a local device.
    proxmox-backup-client map "my_snapshot_name" "my_drive_image" --repository my_pbshost:my_datastore_name --ns my_namespace --keyfile...
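
    Filling out the sequence around that command, a sketch reusing the placeholders from the post; the keyfile path, mount point, and target path are also illustrative:

      # map the encrypted snapshot to a local loop device (the command prints the device name)
      proxmox-backup-client map "my_snapshot_name" "my_drive_image" \
          --repository my_pbshost:my_datastore_name --ns my_namespace --keyfile /path/to/keyfile
      # mount read-only; for a partitioned image use /dev/loop0p1 instead
      mount -o ro /dev/loop0 /mnt/restore
      # copy the problem directories with ordinary tools, then clean up
      cp -a /mnt/restore/huge_dir /target/path/
      umount /mnt/restore
      proxmox-backup-client unmap /dev/loop0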
  8. File restore failing from a directory with thousands of files

    Hello, our PVE 7.2-4 cluster is running backups of a file server with some huge directories (one of these contains more than 75000 subdirectories). File restore from the GUI fails while trying to list the content of these dirs. At the moment the only way I found to restore contents from these dirs is to point to...
  9. Slow Tape Backups on HP LTO4

    Hello Dietmar. The PBS datastore is an ext4 filesystem on an LV in a large VG where other LVs are used as BackupPC pools. The physical drives are a bunch of enterprise SAS disks in RAID5. The tape library is FC-connected. You are right, these LVs are often under heavy utilization. I tried...
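
    One way to confirm that the shared spindles are the bottleneck is to measure raw sequential read throughput from the datastore LV while the other pools are busy; the LV path is a placeholder:

      # direct-I/O sequential read test, bypassing the page cache
      dd if=/dev/vg_backup/lv_pbs of=/dev/null bs=1M count=4096 iflag=direct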
  10. Slow Tape Backups on HP LTO4

    Hello, I am testing PBS 2.0-9 on an old HP MSL6030 tape library with two LTO4 drives inside. The first backup seems quite slow compared with the usual ones (made with the dump command). dump: 2426.42GB in 47 hours, so more or less 14MB/s; pbs: 135GB in 6 hours, so about 6MB/s. Here's the job detail: ()...
  11. RBD mirroring slow in proxmox

    After some research, I came across this post on the ceph-users ML: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-August/028898.html «If you are trying to optimize for 128KiB writes, you might need to tweak the "rbd_journal_max_payload_bytes" setting since it currently is defaulted to split...
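
    Assuming a reasonably recent Ceph, one way to apply the tweak quoted above; the pool, image, and the 8 MiB value are placeholders to be tuned, not recommendations from the thread:

      # per-image override; alternatively set "rbd journal max payload bytes"
      # in the [client] section of ceph.conf
      rbd config image set mypool/myimage rbd_journal_max_payload_bytes 8388608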
  12. RBD mirroring slow in proxmox

    I am trying to play with rbd-mirror as well, following the howto on the wiki: https://pve.proxmox.com/wiki/Ceph_RBD_Mirroring The primary cluster is composed of seven nodes, each with four 2TB Bluestore OSDs with SSD cache. The secondary cluster has three nodes with a similar configuration. Both...
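
    When following that howto, replication health can be checked from either cluster; the pool name is a placeholder:

      # summary of mirroring health plus per-image replication state
      rbd mirror pool status mypool --verbose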
  13. [SOLVED] ceph-mgr failing to start on one node after nautilus migration

    CLI, ok. 3 MONs, ok. Given that under normal circumstances we will not have more than one node down, I guess it could be an issue if we had two (with a MON on board) down. msgr2, yes. I carefully followed the (very accurate indeed) guide. bye, rob
  14. [SOLVED] ceph-mgr failing to start on one node after nautilus migration

    Found that nice MGR administration section in the web interface. Destroyed and recreated the manager; problem solved :) Thanks, rob
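
    A sketch of the CLI equivalent of that destroy/recreate cycle, assuming a PVE-managed Ceph and using the node name that appears later in this thread as the manager ID:

      # on the affected node: remove the broken manager, then create a fresh one
      pveceph mgr destroy pvenode2
      pveceph mgr create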
  15. [SOLVED] ceph-mgr failing to start on one node after nautilus migration

    Yes, I noted that /etc/ceph/ceph.client.admin.keyring was not aligned with the same file on the other nodes, and manually replaced it, but this did not change anything. Here it is (I redacted node names):
    [global]
        auth client required = cephx
        auth cluster required = cephx
        auth...
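
    Rather than copying the keyring between nodes by hand, it can be re-exported from the cluster's auth database, which guarantees it matches what the MONs expect; a sketch:

      # regenerate the local admin keyring from the cluster itself
      ceph auth get client.admin -o /etc/ceph/ceph.client.admin.keyring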
  16. [SOLVED] ceph-mgr failing to start on one node after nautilus migration

    One of the managers refuses to start after a seven-node cluster migration to PVE 6 and Nautilus; here is the debug trace:
    # /usr/bin/ceph-mgr -d --cluster ceph --id pvenode2 --setuser ceph --setgroup ceph --debug_ms 1 2>&1 | tee ceph-mgr.start.log
    2019-11-28 12:17:39.136 7f40b23a1dc0 1 Processor...
  17. xterm.js fonts not rendered correctly

    Hello, on some clients (mainly Ubuntu 18.04) it seems I'm hit by https://github.com/xtermjs/xterm.js/issues/1170 I was not able to apply the mentioned workarounds; maybe someone has succeeded in fixing this? Thanks, rob
  18. 4.15 based test kernel for PVE 5.x available

    All nodes upgraded and now running 4.15.17-13. All is well :) Nice job, Thomas! rob