https://pve.proxmox.com/wiki/Proxmox_VE_2.0_Cluster#Remove_a_cluster_node
According to that you are supposed to shut down the node(s) and make sure they do not come back.
Well, I did just that. I had 4 nodes for testing; turns out I did not need 2 of them.
So, I shut down the 2 nodes, and...
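(For reference, if I read that wiki page right, the removal itself is then done from one of the remaining nodes, roughly along these lines - node names here are just examples:)
pvecm nodes            # list cluster members, note the names of the powered-off nodes
pvecm delnode node3    # drop the first dead node from the cluster config
pvecm delnode node4    # and the second one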
Yeah, I knew that about Mellanox :)
I haven't used InfiniBand in almost a couple of years now, and even then it was just 10G over copper.
Cable prices sure have come down since then! :)
The web GUI will move to the internal IP?
Hmm, that means I need a VM outside the cluster for VPN, or a VPN device to get access, or an nginx reverse proxy.
I have already added the nodes to the cluster, so if I change the hosts entries, will they move to using those addresses without a reboot?
So let's say these:
MTS3600Q-1BNC Switch: http://www.ebay.com/itm/Mellanox-MTS3600Q-1BNC-InfiniBand-Fiber-Switch-36x-QDR-20-40-Gb-QSFP-Ports-/131579163377?hash=item1ea2bab2f1
Goes together well with ConnectX-2 HCA...
I had 32GB on the testing machines :)
For the RAID5 arrays I only looked at the actual drive measurements; with RAID you may see much higher numbers going to the device, but further optimizations happen afterwards at the device level.
Use iostat -xz to see all the merges going on. But even then you can't always be sure...
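Roughly like this, with 5-second intervals:
iostat -xz 5
# rrqm/s and wrqm/s are the read/write requests merged per second,
# r/s and w/s are what actually hits the device after merging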
Wow, I haven't taken a look at InfiniBand switches in a while.
Damn, these are cheap used!
ConnectX-2 cards are $50-60 a pop, and 36-port switches start at around the $500 mark!
What should I look for in a switch when choosing one?
I think even with the lower-than-expected throughput I'm already sold...
If that is SSD IOPS, that's weak, but I guess you are talking about HDDs?
Use case differences and optimizations.
Did you look at the actual device IOPS, or the figures before merges and kernel optimizations?
Even after that, you are hard pressed to get a real figure, since there are more optimizations done on the drive...
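If you want the raw counters instead of iostat's rates, /proc/diskstats has the merged request counts too - something like this, with sda just as an example:
awk '$3 == "sda" { print "reads:", $4, "merged:", $5, "writes:", $8, "merged:", $9 }' /proc/diskstats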
Actually incorrect; at least with software RAID the mount drops to read-only the moment you've lost one drive too many.
This gives you the opportunity to repair it without damage to the data; after getting the drives back online a resync will happen.
ZFS does not do this; it will happily continue on...
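For example, with an md array - /dev/md0 and /dev/sdc1 here are just placeholders - the repair goes roughly like this once the underlying problem is fixed:
cat /proc/mdstat                    # see which arrays are degraded or inactive
mdadm --detail /dev/md0             # check which members dropped out
mdadm /dev/md0 --re-add /dev/sdc1   # re-add the lost member, the resync then starts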
Those Adaptec adapters were capable of running HW RAID or JBOD; naturally I used JBOD.
Actually, what I read up on back then claimed it was stable and ready when I tried it. I ran FreeNAS first on the same hardware, btw.
YES - the hardware was faulty (SATA cables), but common sense says that...
I did use it for a short while under *BSD too. The aforementioned configs were a consumer-grade motherboard + PSU, Kingston ECC RAM (and lots of it), an Opteron CPU, a rackmount chassis, and I think I used Adaptec adapters for the SATA connections.
It was stable after swapping in "cheapo" SATA cables.
You are very right - very limited budget. But most get this wrong - I have no problem spending money, as long as the per-capacity/performance/node or whatever ratio is RIGHT.
That's just what our target segment is - 99.9% of our customers are private persons, storing whatever not-so-essential data. For...
Thanks for the input! :)
I did not know about the cluster sync network! In none of the examples I've seen has there been any mention of such. So does the sync network carry rebalancing and other OSD-to-OSD traffic, or what does it do? Does it need as much bandwidth?
Ofc I haven't been considering InfiniBand...
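(After a quick look at the Ceph docs: it seems the cluster network is for the OSD-to-OSD replication/recovery traffic, and it's set in ceph.conf with something like this - the subnets here are made up:)
[global]
    public network = 10.0.1.0/24     # client and monitor traffic
    cluster network = 10.0.2.0/24    # OSD replication, recovery, heartbeats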
Hi,
It seems Proxmox defaults to using the public IP address.
Is editing hosts the only way to change this to the internal address? Do I need just hostname entries in there, or FQDN (hostname.domain.com) ones?
What are the drawbacks?
Do I change the hosts files and reboot to apply the changes, or is there a way to do...
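Something like this in /etc/hosts on each node is what I have in mind (node names and the 10.x addresses are just examples):
10.10.10.1    pve1.example.com    pve1
10.10.10.2    pve2.example.com    pve2
10.10.10.3    pve3.example.com    pve3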
Nice setup! Sounds to me like you are also using Dell cloud nodes, or the Supermicro version?
Dell cloud nodes use Tyan & Supermicro motherboards at least.
We are still planning which sizes we will be deploying; the first cluster ofc will be built out slowly, but it's quite possible we will...
Glad to hear from someone who has used ceph for years! :)
Our target market and business model are such that 30 nodes is nothing; even 64 nodes is nothing. We do pure mass-market applications at a limited margin. Sometimes we only have 2-3 users on a node!
Have you had your OSDs 60%+ full? Someone...
Now that is not very sensible at all -- to use RAID underneath Ceph :(
Kinda ruins the idea of Ceph.
I don't use RAID cards; unless you go with the costliest options, they tend to ruin performance and sometimes even reliability. I've long suspected that at least Adaptec makes the cards go...
Yes, the usual Ceph tools seemed to work - but I have no idea what else pveceph does, so no idea whether it works in practice.
Tbh, it needs to be done somehow, no matter what. If it is not possible, this is a showstopper issue. But we are not yet at a point where Ceph deployment is practical; we need to...
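To be clear about what I mean by the usual tools: roughly this kind of thing, pointed at an existing partition instead of a whole disk (device names are just examples, and I'm not sure how well pveceph or the GUI would show such an OSD afterwards):
ceph-disk prepare /dev/sdb1     # prepare the partition as an OSD
ceph-disk activate /dev/sdb1    # bring it up and register it with the cluster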
Dietmar, these definitely are NOT decisions belonging to Proxmox development but to the users. There are many use cases which require something other than raw, direct SATA/SAS devices! Ours included.
We are planning to build a sizeable cluster, and if we cannot use partitions, it would mean wasting *half* of...