Hello!
As some of you may have heard, a huge security hole has been found in the Java package "log4j".
(https://www.bleepingcomputer.com/news/security/new-zero-day-exploit-for-log4j-java-library-is-an-enterprise-nightmare/)
So my question is, does Proxmox also use this package?
Do we admins...
Hello!
I just stumbled upon this thread because I currently have the same situation of transferring VMs between clusters.
Is there any prediction as to when this feature might be included in a stable version?
Or some kind of roadmap?
Yes I know...
We currently run our production VMs on a single iSCSI server, which performs much better than Ceph at the moment
(no problems at all regarding performance).
Because that single iSCSI server is our bottleneck, we wanted to move to Ceph now, so it's very disappointing to run into these hangs etc.
Yes, that's true!
That was also my thought.
The problem is that I don't know much about fio.
The specialist told me that I need to use --ioengine=rbd,
but the Ceph cluster is not reachable from inside the VM, so that doesn't make sense to me...
I mean, the randwrite results without --ioengine=rbd were also OK...
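If I understand it correctly, the rbd engine talks to the cluster directly, so it would have to be run from a host with Ceph client access (e.g. a Proxmox node), not from inside the VM. A rough sketch of what I think that would look like (pool, image, and client name are just placeholders, and the test image has to exist first):

rbd create --size 2048 testbench/fio-test
fio --ioengine=rbd --clientname=admin --pool=testbench --rbdname=fio-test --rw=randwrite --bs=4k --iodepth=32 --name=rbd-test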
And here is the fio run from inside a VM:
fio --rw=randwrite --name=test --size=2G
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1
fio-3.16
clock setaffinity failed: Invalid argument
Starting 1 process
Jobs: 1 (f=1)...
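Looking at it again, I suspect this default run (psync, iodepth=1, no direct I/O) mostly measures the guest page cache. If I rerun it, something like this might be more meaningful (the parameters are just my assumption for a 4k random-write test):

fio --name=test --rw=randwrite --bs=4k --ioengine=libaio --direct=1 --iodepth=32 --size=2G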
Yes sure!
Here are the results:
rados bench -p testbench 10 write --no-cleanup
hints = 1
Maintaining 16 concurrent writes of 4194304 bytes to objects of size 4194304 for up to 10 seconds or 0 objects
Object prefix: benchmark_data_103380
sec Cur ops started finished avg MB/s cur MB/s last...
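If it helps, I can also run the matching read benchmarks and then clean up the test objects afterwards (same pool as the write test above):

rados bench -p testbench 10 seq
rados bench -p testbench 10 rand
rados -p testbench cleanup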
Yes, I have also heard about the autoscaler.
I will ask the specialist again if it is really necessary.
So the PGs are not the problem then... :/
I don't remember ever having set the block size.
Could that also be a cause?
I don't know whether you have to pay attention to this under Ceph as well (?).
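If I understand the docs correctly, the closest thing on the Ceph side would be the BlueStore allocation size of the OSDs. I assume it could be checked like this on a recent release (option names taken from the Ceph docs; older versions may only expose them via ceph daemon osd.N config get):

ceph config get osd bluestore_min_alloc_size_hdd
ceph config get osd bluestore_min_alloc_size_ssd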
Thank you very much for your reply :)
Well, the 32 PGs were recommended/set by the specialist...
He said that when the autoscaler starts to complain about too few PGs, I should increase them.
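For reference, I assume this is how I would check what the autoscaler currently suggests (provided it is enabled on the cluster):

ceph osd pool autoscale-status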
Well, at the beginning I had entered the Ceph mon IPs as below:
192.168.x.x,192.168.x.x,192.168.x.x
I've read an article where they recommended using ";" instead.
Now it seems to work better?!
Can someone confirm this?
I am completely confused ...
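For context, this is roughly what the entry in /etc/pve/storage.cfg looks like now (storage ID and pool name are placeholders, IPs masked as above):

rbd: ceph-storage
    monhost 192.168.x.x;192.168.x.x;192.168.x.x
    pool testbench
    content images
    username admin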
I've already seen this post:
https://forum.proxmox.com/threads/krbd-and-external-ceph-slow-vm-disk-use-100.88684/
But this does not seem to be a solution for us.
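In case someone else wants to test this: as far as I understand, krbd can be toggled per storage (the storage ID here is a placeholder; running guests have to be restarted for the change to take effect):

pvesm set ceph-storage --krbd 1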