Search results

  1. PBS vs Freenas + BackupPC (review)

    I would greatly welcome that. I like the pbs-server side of how the backups are stored. I like being able to download the individual files and sort them into namespaces. The server side is just great. The client side makes me weep. :) So there are 4 main things I want to back up: I have a...
  2. PBS vs Freenas + BackupPC (review)

    So I've installed PBS on a new server (physical), because the idea seems sound to me. This is basically a comparison of my current setup vs switching to PBS. I haven't decided what I'm going to do yet here... Current Setup: FreeNas with an NFS export for proxmox to backup into, and then...
  3. Spice - Passthru USB Webcam

    [Wed Jun 15 17:56:21 2022] usb 1-4.1: new full-speed USB device number 4 using xhci_hcd [Wed Jun 15 17:56:22 2022] usb 1-4.1: not running at top speed; connect to a high speed hub [Wed Jun 15 17:56:22 2022] usb 1-4.1: config 1 interface 1 altsetting 7 endpoint 0x81 has invalid maxpacket 2688...
  4. Add server as non-voting member of cluster?

    I didn't even think about that. OK, great idea!
  5. Spice - Passthru USB Webcam

    I'm attempting to passthru a Logitech c270 webcam via SPICE USB redirection, and only getting a black video screen. The VM sees the device, for example, zoom knows the model and type, but the video feed is just black. Has anyone gotten this to work?
  6. Add server as non-voting member of cluster?

    The idea is that I want to build a proxmox server to run my pfsense. I want it to normally have as few interdependencies as possible, so it should be able to come up on its own if the whole network is down. However, I want to be able to move the VM off for a few minutes for patching of the...
  7. Add server as non-voting member of cluster?

    Thank you. Hrmm, that might not solve my actual problem, which is that I don't want to mess up my existing quorum.
  8. Move EFI disk fails on between (RBD storage and RBD storage) or (RBD storage and lvm-thin) while VM is running

    Oddly, I've noticed that mine move, but it crashes the VM midway through the move, and once it gets to the other side, I have to then reboot it.
  9. Add server as non-voting member of cluster?

    I'm wondering if I can add a server as a non-voting member of a cluster. I currently have a cluster of 5 machines, and want to build one more that is more or less dedicated to running a specific application. However, I would like the ability to disk-migrate a VM over to the main cluster for...
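One way this is often done (not stated in the thread itself) is to give the extra node zero quorum votes in corosync. A minimal `/etc/pve/corosync.conf` fragment sketching the idea — the node name, id, and address are placeholders, and any edit to this file must also bump `config_version`:

```
nodelist {
  node {
    name: newnode         # placeholder hostname
    nodeid: 6             # placeholder node id
    quorum_votes: 0       # member of the cluster, but contributes no vote
    ring0_addr: 10.1.1.6  # placeholder address
  }
}
```

With zero votes, the node can join and migrate VMs, but losing or rebooting it never affects the existing quorum.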
  10. [SOLVED] Ceph performance after upgrade to pacific extremely slow

    So, yes, 100%, I think I've figured it out. There were two problems. The one that woke me up and freaked me out was this one: (Month average) The 14th was the day I upgraded ceph. However, I also did a general update on that day, and got: 2022-05-14 06:45:15 status installed...
  11. [SOLVED] Ceph performance after upgrade to pacific extremely slow

    I might be on to something.. on saturday or sunday, I upgraded pve-qemu-kvm, because I saw something in another thread about it causing issues with ceph and backups. After doing so, I moved 2 of my heaviest-use VMs back and forth between nodes, and it looks like the massive slowdown has...
  12. [SOLVED] Ceph performance after upgrade to pacific extremely slow

    OK, interesting.. I normally don't keep snapshots around, but I found one vm, which is semi-active, that had a really old snapshot.. I've just told it to delete, and now I see a ton of snaptrims running. Maybe I'll let those go and see what happens? I've never really seen a snaptrim run...
  13. [SOLVED] Ceph performance after upgrade to pacific extremely slow

    Full runs below. It's horrific. root@alphard:~# rados bench -p bench 10 write --no-cleanup hints = 1 Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects Object prefix: benchmark_data_alphard_1594585 sec Cur ops started finished...
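The benchmark quoted in these snippets can be reproduced on a scratch pool. A hedged sketch — the pool name `bench` and the PG count are assumptions, and the pool should be deleted afterwards:

```shell
# Create a throwaway pool and run the same write benchmark quoted above.
ceph osd pool create bench 32
rados bench -p bench 10 write --no-cleanup   # 10 s of 4 MiB object writes
rados bench -p bench 10 seq                  # sequential reads of what was written
rados -p bench cleanup                       # remove the benchmark objects
ceph osd pool delete bench bench --yes-i-really-really-mean-it
```

`--no-cleanup` keeps the written objects around so the read benchmark has something to read.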
  14. [SOLVED] Ceph performance after upgrade to pacific extremely slow

    No impact: root@alphard:~# rados bench -p bench 10 write --no-cleanup hints = 1 Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects Object prefix: benchmark_data_alphard_1575585 sec Cur ops started finished avg MB/s cur MB/s last...
  15. [SOLVED] Ceph performance after upgrade to pacific extremely slow

    Hrmm, seems to have not solved anything. I'm not seeing any snaptrims running. I also tried upgrading qemu-kvm, but that seems to have done nothing (even after restarting or migrating the VMs around). What I see is just massive IO load, for no reason. If I look at the VMs, none of them are...
  16. how to install ceph influx module?

    Errr.. I didn't install the influxdb python module via pip; I installed it via apt. I don't know if it will notice the pip one.
  17. how to install ceph influx module?

    I'm a liar, it was broken. Just fixed it. apt install python3-influxdb root@felis:~# ceph mgr module enable influx Error ENOENT: module 'influx' reports that it cannot run on the active manager daemon: influxdb python module not found (pass --force to force enablement) (re-start mgr in gui)...
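The fix described in this snippet amounts to installing the Python bindings the mgr module imports, then enabling it. A sketch, assuming a Debian-based PVE node:

```shell
apt install -y python3-influxdb   # provides the 'influxdb' python module the mgr needs
ceph mgr module enable influx     # fails with ENOENT while the module is missing
ceph mgr fail                     # fail over to a standby mgr so the module is loaded
```

Restarting the mgr from the GUI, as the snippet mentions, accomplishes the same thing as `ceph mgr fail`.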
  18. [SOLVED] Ceph performance after upgrade to pacific extremely slow

    Curious, did you see any snaptrims running? I never see any running at all. Either way, I'm thinking I'll try that. osd.0: osd_pg_max_concurrent_snap_trims = '1' (not observed, change may require restart) Interesting, might have to restart all osds...
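The setting shown in the snippet can be applied at runtime and persisted; the value 1 simply mirrors the snippet, not a recommendation. As the `(not observed)` note says, `injectargs` changes to this flag only take effect after an OSD restart:

```shell
ceph tell 'osd.*' injectargs '--osd_pg_max_concurrent_snap_trims=1'
ceph config set osd osd_pg_max_concurrent_snap_trims 1  # persist across OSD restarts
ceph pg stat   # look for pgs in snaptrim / snaptrim_wait while trimming runs
```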
  19. how to install ceph influx module?

    ceph mgr module enable influx It should be included as part of core-modules I think. I just rebuilt my mgr node last night, and found it working fine with no special packages when I failed back over to it. I do remember having to run a bunch of commands to setup the destination. However I...
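The "bunch of commands to setup the destination" likely refers to the influx module's options, which on recent Ceph releases live in the central config store. Hostname, database, and credentials below are placeholders:

```shell
ceph config set mgr mgr/influx/hostname influx.example.com
ceph config set mgr mgr/influx/port 8086
ceph config set mgr mgr/influx/database ceph
ceph config set mgr mgr/influx/username ceph
ceph config set mgr mgr/influx/password secret
```

Because these settings are stored cluster-wide, a rebuilt mgr node picks them up automatically on failover, which would explain why it "worked fine with no special packages".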
  20. [SOLVED] Ceph performance after upgrade to pacific extremely slow

    Some examples of horrible performance: Iperf3 tests from all 5 nodes look pretty much identical: ----------------------------------------------------------- Server listening on 5201 ----------------------------------------------------------- Accepted connection from 10.1.1.9, port 41392 [...
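The iperf3 runs mentioned here follow the usual client/server pattern; the address shown is the one from the snippet:

```shell
iperf3 -s                 # on the node under test (listens on port 5201)
iperf3 -c 10.1.1.9 -t 10  # from each peer node, a 10-second throughput test
```

Near-identical results from all five nodes, as described, rule the network out as the bottleneck.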