Well, it doesn't seem very consistent from what I see :( - if I reboot the machine, disk changes are also detected on the ZFS pool.
It might be related to VMware? - but it should behave the same as a physical server.
Felix
It works on and off - but if I remove a disk and add a new one with the same SCSI ID from VMware, it will not be detected/updated unless I reboot.
If I create a new disk with a previously unused SCSI ID, it works automatically.
any clues?
I tried it, and it does not work - the disks remain. If I reboot Proxmox the disk is "gone" as expected - I am testing on VMware 5.5.
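For reference, replaced or removed disks can usually be forced to show up without a reboot via the kernel's sysfs interface. A sketch (needs root; the host number `host0`, SCSI address `0:0:3:0`, and device name `sdo` are hypothetical - substitute your own from `lsscsi` or `lsblk`):

```shell
# Tell the kernel a disk was removed (hypothetical device sdo):
echo 1 > /sys/block/sdo/device/delete

# Rescan a single SCSI device after swapping it in VMware
# (hypothetical address host:channel:target:lun = 0:0:3:0):
echo 1 > /sys/class/scsi_device/0:0:3:0/device/rescan

# Or rescan the whole SCSI bus on a given host adapter:
echo "- - -" > /sys/class/scsi_host/host0/scan
```

Deleting the stale device first matters here: if the old entry with the same SCSI ID is still present, a rescan alone will not pick up the replacement.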
  poolraid5    ONLINE  0  0  0
    raidz1-0   ONLINE  0  0  0
      sdo      ONLINE  0  0  0
      sdn      ONLINE ...
Thanks :) - and are you also using the S3610 for the journal? Are the drives all SATA or PCIe? Is the 10GbE storage network running jumbo frames?
In general, how big should the journal drive be?
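For reference, the upstream Ceph documentation gives a rule of thumb for FileStore journal sizing: at least twice the product of the expected throughput and the sync interval. A sketch with hypothetical numbers (500 MB/s is an assumed SSD throughput; 5 s is the default `filestore max sync interval`):

```shell
# osd journal size >= 2 * (expected throughput * filestore max sync interval)
throughput_mb_s=500    # hypothetical sustained journal write speed, MB/s
sync_interval_s=5      # default filestore max sync interval, seconds
journal_size_mb=$((2 * throughput_mb_s * sync_interval_s))
echo "${journal_size_mb} MB"   # -> 5000 MB
```

In practice the journal partition is small; the common constraint on a shared SSD is how many OSD journals it serves, not capacity.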
Hi
I am a Ceph rookie; I have been working with lots of storage systems and find Ceph very, very interesting - I have also been looking at Nutanix, which looks very cool but is also too expensive :(
A 3-node setup with SSD/PCIe journal logs and SAS spinning drives, or maybe all-flash, 10GbE on...
Thank you very much. So how would it work if I have 3 nodes running storage and x other nodes for compute only - how can they talk to the Ceph system on the storage nodes?
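For what it's worth, compute-only nodes just need the Ceph client bits plus the cluster config and a keyring; they then talk to the monitors and OSDs over the network like any other client. A sketch (the hostname `storage-node`, pool `rbd`, and image `mydisk` are hypothetical):

```shell
# On a compute-only node: install the client tools (Debian/Ubuntu shown)
apt-get install ceph-common

# Copy the cluster config and a keyring from a storage node (hypothetical host)
scp storage-node:/etc/ceph/ceph.conf /etc/ceph/
scp storage-node:/etc/ceph/ceph.client.admin.keyring /etc/ceph/

# Now the node can reach the cluster directly:
rbd -p rbd ls          # list block images in the (hypothetical) rbd pool
rbd map rbd/mydisk     # map a hypothetical image as a local /dev/rbdX device
```

With Proxmox specifically, the same idea applies: you add the external Ceph pool as RBD storage in the GUI, pointing it at the monitor addresses of the storage nodes.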