Did it work?
I installed cloud-init with yum install cloud-init on CentOS 7 and also on Debian 9; the issue is really annoying.
Thank you anyway.
Had the same issue: SSH key access works without setting anything up, but getting root access with the password from PuTTY does not. It happens with both CentOS 7 and Debian 9.
It seems there is a bug with the Cloud-init root password. I understand that Cloud-init works with SSH key access, but normal root password access...
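For context, this is roughly how I'm setting the credentials on the Proxmox side; a minimal sketch, where VM ID 100 and the password are just examples:

qm set 100 --ciuser root                      # have cloud-init configure the root user (VM ID is an example)
qm set 100 --cipassword 'example-password'    # this is the password that then fails at login from PuTTY
qm cloudinit dump 100 user                    # newer qm versions: inspect the generated user-data to check the password made it in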
After a few hours of searching I figured something out, though I'm not sure it really works. This is how it looks now:
# begin crush map
tunable choose_local_tries 0...
I followed https://ceph.com/community/new-luminous-crush-device-classes/
I added the rules and they seem fine, but I'm not sure why Ceph starts...
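For anyone else following that post, the device-class rule itself comes down to one command instead of hand-editing the decompiled map; a sketch, where the rule and pool names are just examples:

ceph osd crush rule create-replicated replicated_hdd default host hdd   # rule that only picks hdd-class OSDs
ceph osd pool set test crush_rule replicated_hdd                        # move a pool onto the new rule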
I have one KVM guest with a 5 TB Ceph raw disk and Cloud-init active. The issue is that the system is on one partition, vda1, and
I can see only 2 TB of it...
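One guess, and it is only a guess: a hard 2 TB ceiling usually points at an MBR (msdos) partition table, which cannot address a partition past 2 TiB. This would confirm it:

parted /dev/vda print    # "Partition Table: msdos" would explain the 2 TB cap; gpt can use the full 5 TB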
If I use NVMe for the Proxmox OS without any RAID, and that drive fails, will I lose everything on that node? Even the OSDs and...
It seems quite strange that I get better performance with just 2 x 6 TB SATA drives while using Filestore instead of Bluestore?!
rados bench -p test 60...
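For anyone reproducing the numbers, the usual full form of that benchmark is something like this (pool name and duration as above):

rados bench -p test 60 write --no-cleanup    # write test; keep the objects so a read test can follow
rados bench -p test 60 seq                   # sequential read of the objects just written
rados -p test cleanup                        # remove the benchmark objects afterwards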
If I add more drives, will it give better performance? Also, I'm not sure whether I need to add more journal devices or one P3700 will be enough per node?
Well, I'm not sure what the issue is; I've spent 4 days trying to figure it out!
I'm using a Mellanox SX6025 non-blocking unmanaged 56 Gb/s SDN switch; I'm not sure this will even work if I increase the MTU. I'm using...
Allocated VMs = 0, nothing on the system
What disks do you use exactly = Seagate 7200 rpm, 256 MB cache, 6 Gb/s
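On the MTU question above, what I would try is roughly this; the interface name and peer address are examples, and MTU 9000 assumes the switch passes jumbo frames:

ip link set dev eth4 mtu 9000    # raise the MTU on the Ceph-facing interface
ping -M do -s 8972 10.10.10.2    # 8972 payload + 28 header bytes = 9000; verifies the whole path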
I'm running Ceph and not sure if this is the best speed for my configuration? Using 3 OSDs with 5 TB enterprise hard drives and an NVMe P3700, Bluestore...
osd commit_latency(ms) apply_latency(ms)
8 65 65
7 74 74
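For comparison, per-OSD numbers like the table above are what ceph osd perf reports:

ceph osd perf    # commit/apply latency in ms for every OSD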
I did some tests. Per server:
75 GB RAM
SM863 SSD for the host OS
Dual-port Mellanox 56 Gb/s
3 x 5 TB hard-drive OSDs per server, 9 OSDs total
1 x P3700 journal per...
Can anyone give me advice on this, and what is the best configuration I can go with?
I was able to set up the Mellanox dual-port 56 Gb/s FDR card, but only without bonding.
root@c18:~# rados -p test bench 10 write...
I had to use a different subnet on eth4 or eth5, or even on both, when using bonding; otherwise the network does not come fully up and the bonded IPs don't ping...
Yes, in both.
I was testing something; if it works, I will remove the subnets from eth4 and eth5 and leave them up with no address configured, as in the sketch below.
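What I mean in /etc/network/interfaces is roughly this; just a sketch, assuming active-backup mode and an example address, with the slaves carrying no address at all:

auto bond0
iface bond0 inet static
        address 10.10.10.1             # example address; the bond owns the IP
        netmask 255.255.255.0
        bond-slaves eth4 eth5
        bond-mode active-backup        # assumption; 802.3ad would need matching switch support
        bond-miimon 100

auto eth4
iface eth4 inet manual                 # no subnet here

auto eth5
iface eth5 inet manual                 # no subnet here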
I will test with...