What if your house caught fire?
When defining your production environment, you really need to define what your disaster recovery criteria are. This is true irrespective of what hardware or configuration you use. If you have a maximum downtime requirement, consider a replicated remote environment for...
Compression in flight. If you just booted the guest, its RAM is mostly zeros.
Yeah, I remember there was discussion on the subject around 2018, and some folks managed to get it working, but it had to be compiled from source. The results were not encouraging at the time, but that has more to do with the...
Thanks for the data, but I'm not certain how this addresses the question "was wondering whether rdma was worth it."
Parenthetically:
Compression. Unless you have a way to fully load the VM's RAM with incompressible data, this isn't all that impressive.
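If you want a fairer test, one way (a rough sketch, assuming a Linux guest with headroom to spare; the mount point and sizes are just examples) is to fill most of the guest's RAM with random data before migrating:
$ mount -t tmpfs -o size=12G tmpfs /mnt/ramfill               # tmpfs pages sit in guest RAM
$ dd if=/dev/urandom of=/mnt/ramfill/junk bs=1M count=11000   # random data compresses poorly
A migration timed against that kind of working set says a lot more than one taken right after boot.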
Are you referring to OFED, or the in-kernel drivers? Mellanox/NVIDIA don't support it, true, but I imagine it would be pretty trivial to apt install rdma-core and set the rdma switch in nfsd.conf. I have to imagine you tried that though...
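For reference, roughly what I had in mind (a sketch assuming Debian/Ubuntu packaging and a recent nfs-utils, where the server-side settings live in /etc/nfs.conf; option names may differ on your build):
$ apt install rdma-core
# server side, in /etc/nfs.conf (or a drop-in under /etc/nfs.conf.d/):
#   [nfsd]
#   rdma=y
#   rdma-port=20049
$ systemctl restart nfs-server
# client side, mount over RDMA:
$ mount -o vers=4.2,proto=rdma,port=20049 server:/export /mnt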
I never had the need to squeeze out those extra 10%, so I...
Good thought. There are no compute resources sharing nodes with the OSDs, but I had nevertheless already changed osd_scrub_load_threshold to 3.5. Your comment prompted me to walk over my OSD nodes to see what's happening there.
Lo and behold, they're all busy, mostly with OSD load.
Now to figure...
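For anyone following along, roughly the checks involved (a sketch; assumes a Ceph release with the centralized config database):
$ ceph config set osd osd_scrub_load_threshold 3.5
$ ceph config get osd osd_scrub_load_threshold
# then on each OSD node, compare the actual load average against that threshold:
$ uptime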
I've been wrestling with this issue for over a month now, and I can't seem to get past it.
I have two PGs that haven't been scrubbed since June:
$ ceph health detail | grep "not scrubbed since 2024-06"
pg 17.3dc not scrubbed since 2024-06-01T20:46:29.042727-0700
pg 17.137 not scrubbed...
You mentioned that your guest has no connectivity.
What you aren't providing is your GUEST'S network configuration, as in whatever the equivalent of /etc/network/interfaces is in your guest's operating system. It is not possible for us to continue troubleshooting the guest without seeing what's configured inside it.
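To be concrete, for a Debian-style guest it would be something like this (interface name and addresses are placeholders, not your actual config):
# /etc/network/interfaces inside the guest
auto ens18
iface ens18 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.1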
Then it's option 1 :)
There are two ways to deal with this:
1. Create a dedicated router VM (see the sketch below). Map the uplink provided by your colo to the VM as eth0, and add a second virtual NIC attached to vmbr0. The router will respond to all 5 IPs, and you can NAT traffic to any logical internal address based...
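A rough sketch of the NAT half of option 1 (iptables shown, nftables works just as well; the addresses are documentation examples, not your actual IPs):
# inside the router VM: eth0 = colo uplink carrying the public IPs, eth1 = the NIC on vmbr0
$ sysctl -w net.ipv4.ip_forward=1
$ iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# forward one public IP's HTTPS traffic to an internal VM, e.g. 203.0.113.11 -> 10.0.0.11
$ iptables -t nat -A PREROUTING -d 203.0.113.11 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.11:443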
It all depends on how your ISP is delivering the IPs to you. You probably have some sort of device at the head end of your network that's provided by your ISP. This device can be set to pass the IPs through, or it could be set up as a NAT.
If it's delivering the IPs directly, all you need to do...
This is usually assigned to your router, which in turn can/should be set up to NAT internally. There is almost no use case where you want your hypervisor/VMs facing directly out to the internet: here be dragons.
So you've shown benchmarks with 35.9K IOPS @ 140 MB/s, which is FANTASTIC for these drives (I wouldn't expect that to last as the TBW grows). Where did you see 450 MB/s, and under what benchmark?
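Posting the exact command would make the numbers comparable; something along these lines (purely illustrative parameters, not what I assume you ran):
$ fio --name=randwrite --filename=/path/to/testfile --size=10G --rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based
$ fio --name=seqwrite --filename=/path/to/testfile --size=10G --rw=write --bs=1M --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based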
Yes.
Not quite. You need to use mdadm to make the underlying RAID.
Don't use a single-parity volume set, ESPECIALLY with consumer-grade drives. You're better off making a single mirror (sketch below) and using the third drive for other purposes.
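If it helps, a minimal sketch of the mirror route with mdadm (device names are examples and the paths assume a Debian-derived system; double-check your devices before running anything destructive):
$ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
$ mkfs.ext4 /dev/md0
# persist the array across reboots
$ mdadm --detail --scan >> /etc/mdadm/mdadm.conf
$ update-initramfs -u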