I would greatly welcome that. I like the pbs-server side of how the backups are stored. I like being able to download the individual files and sort them into namespaces. The server side is just great. The client side makes me weep. :)
So there are 4 main things I want to back up:
I have a...
So I've installed PBS on a new server (physical), because the idea seems sound to me. This is basically a comparison of my current setup vs. switching to PBS. I haven't decided what I'm going to do here yet...
Current Setup:
FreeNAS with an NFS export for Proxmox to back up into, and then...
[Wed Jun 15 17:56:21 2022] usb 1-4.1: new full-speed USB device number 4 using xhci_hcd
[Wed Jun 15 17:56:22 2022] usb 1-4.1: not running at top speed; connect to a high speed hub
[Wed Jun 15 17:56:22 2022] usb 1-4.1: config 1 interface 1 altsetting 7 endpoint 0x81 has invalid maxpacket 2688...
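For anyone hitting the same messages, a quick way to confirm what speed the device actually negotiated (12M = full-speed, 480M = high-speed):
lsusb -t    # prints the USB topology with each device's negotiated speed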
I'm attempting to pass a Logitech C270 webcam through via SPICE USB redirection, and I'm only getting a black video screen. The VM sees the device (Zoom, for example, knows the model and type), but the video feed is just black. Has anyone gotten this to work?
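In case it's useful for comparison, direct host passthrough of the same device (rather than SPICE redirection) would be roughly this; VMID 100 is a placeholder, and 046d:0825 is what lsusb usually reports for a C270:
qm set 100 -usb0 host=046d:0825    # attach the host USB device straight to the VM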
The idea is that I want to build a Proxmox server to run my pfSense. I want it to normally have as few interdependencies as possible, so it should be able to come up on its own if the whole network is down. However, I want to be able to move the VM off for a few minutes for patching of the...
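To keep the dependency chain short, the autostart side of that would be something like this (VMID is a placeholder):
qm set 100 --onboot 1 --startup order=1,up=30    # boot pfSense first, give it 30s before other guests start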
I'm wondering if I can add a server as a non-voting member of a cluster. I currently have a cluster of 5 machines and want to build one more that is more or less dedicated to running a specific application. However, I would like the ability to disk-migrate a VM over to the main cluster for...
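On the corosync side, what I'm imagining is a node entry with zero quorum votes, along these lines (name/ID/address are placeholders, and I haven't tested whether the PVE tooling tolerates it):
node {
  name: appnode
  nodeid: 6
  quorum_votes: 0
  ring0_addr: 10.1.1.6
}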
So, yes, 100%, I think I've figured it out.
There were two problems. The one that woke me up and freaked me out was this one (month-average graph):
The 14th was the day I upgraded Ceph. However, I also did a general update that day, and got:
2022-05-14 06:45:15 status installed...
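That looks like the /var/log/dpkg.log format, so pulling everything installed that day is just:
grep '^2022-05-14 .* status installed' /var/log/dpkg.log    # list every package installed on the 14th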
I might be on to something. On Saturday or Sunday, I upgraded pve-qemu-kvm, because I saw something in another thread about it causing issues with Ceph and backups. After doing so, I moved 2 of my heaviest-use VMs back and forth between nodes, and it looks like the massive slowdown has...
OK, interesting. I normally don't keep snapshots around, but I found one VM, which is semi-active, that had a really old snapshot. I've just told it to delete, and now I see a ton of snaptrims running. Maybe I'll let those go and see what happens? I've never really seen a snaptrim run...
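For anyone wanting to watch the same thing: the snaptrim / snaptrim_wait PG states show up in the PG summary, so something like this works (plain Ceph CLI, nothing Proxmox-specific):
watch -n 5 'ceph pg stat'    # PG state counts update as the trim progresses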
Full runs below. It's horrific.
root@alphard:~# rados bench -p bench 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_alphard_1594585
sec Cur ops started finished...
No impact:
root@alphard:~# rados bench -p bench 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_alphard_1575585
sec Cur ops started finished avg MB/s cur MB/s last...
Hrmm, that seems not to have solved anything. I'm not seeing any snaptrims running. I also tried upgrading qemu-kvm, but that seems to have done nothing (even after restarting or migrating the VMs around).
What I see is just massive IO load, for no reason. If I look at the VMs, none of them are...
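For reference, this is roughly how I'm looking for the load (iostat comes from the sysstat package):
ceph osd perf    # per-OSD commit/apply latency, to spot one slow disk dragging the pool
iostat -x 5      # per-device utilisation and await on each node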
I'm a liar, it was broken. Just fixed it.
apt install python3-influxdb
root@felis:~# ceph mgr module enable influx
Error ENOENT: module 'influx' reports that it cannot run on the active manager daemon: influxdb python module not found (pass --force to force enablement)
(restart the mgr in the GUI)...
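For anyone who'd rather skip the GUI, the CLI equivalent should be roughly this (mgr name taken from the prompt above; I haven't verified this path myself):
ceph mgr fail felis                  # force a failover away from the active mgr
systemctl restart ceph-mgr@felis     # or just restart the daemon on that node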
Curious, did you see any snaptrims running? I never see any running at all. Either way, I'm thinking I'll try that.
osd.0: osd_pg_max_concurrent_snap_trims = '1' (not observed, change may require restart)
Interesting, I might have to restart all OSDs...
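If a restart really is needed, I'd guess the per-node version is something like this; the value can also be persisted via the config database instead of injectargs (available on recent Ceph releases):
ceph config set osd osd_pg_max_concurrent_snap_trims 1    # persist the setting cluster-wide
systemctl restart ceph-osd.target                         # restart every OSD on the current node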
ceph mgr module enable influx
It should be included as part of the core modules, I think. I just rebuilt my mgr node last night and found it working fine with no special packages when I failed back over to it.
I do remember having to run a bunch of commands to set up the destination. However I...
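I don't have my exact history, but the destination setup was along these lines (option names per the influx mgr module docs; the host and credentials here are placeholders):
ceph config set mgr mgr/influx/hostname influxdb.example.lan
ceph config set mgr mgr/influx/database ceph
ceph config set mgr mgr/influx/username ceph
ceph config set mgr mgr/influx/password secret
ceph config set mgr mgr/influx/interval 30    # seconds between reports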
Some examples of horrible performance:
Iperf3 tests from all 5 nodes look pretty much identical:
-----------------------------------------------------------
Server listening on 5201
-----------------------------------------------------------
Accepted connection from 10.1.1.9, port 41392
[...