Let's just say that 35 MB/s read and 100 MB/s write look wrong. You should see at best a two-fold increase in reads over your writes (unless your OSDs are so slow that, without the SSDs, they only manage around 17 MB/s in a replication-2 pool).
As a comparison point, I have a home Ceph cluster...
So I just noticed something I never noticed before, and I am not sure whether it is a bug or not.
I have 2 ceph pools.
Pool Ceph_A and Ceph_B, for simplicity's sake. They are both erasure-coded pools with a cache pool in front of them (afaik a cache tier is mandatory for EC pools).
Now, because I had issues...
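For reference, the usual way such a cache-tiered EC pool gets set up looks roughly like this (pool names and PG counts here are placeholders, not my actual values):

```shell
# Create the EC data pool and a replicated pool to act as its cache tier
ceph osd pool create Ceph_A 128 128 erasure
ceph osd pool create Ceph_A_cache 128

# Attach the cache pool in front of the EC pool
ceph osd tier add Ceph_A Ceph_A_cache
ceph osd tier cache-mode Ceph_A_cache writeback   # cache absorbs writes
ceph osd tier set-overlay Ceph_A Ceph_A_cache     # client I/O goes through the cache
```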
I am not aware of another way. There very well might be one, though. I have a tendency to skip over other possibilities once I've found one that works for me.
paging @dietmar
Well, it's not easier; it is just displayed differently.
In both options you need to make sure not to "block" the wrong ports, and to open the right ones.
For someone starting out, I'd suggest you use Proxmox and skip pfSense, especially if you do not need a firewall because your Proxmox server...
You seem to replicate across your nodes by using the "host" bucket (which is good). Since you have 3 nodes, I assume you replicate with a size of 3. That's all good news.
Now the bad news.
1.)
You have differently sized disks. You can see this from the "weight" column in "osd tree" and also from the pg...
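For context, those weights can be inspected and, if necessary, adjusted per OSD. A sketch (osd.3 and the weight value are just examples, not taken from your cluster):

```shell
# Show size, CRUSH weight, utilisation, and PG count per OSD
ceph osd df tree

# Lower the CRUSH weight of an over-full OSD so it receives fewer PGs
ceph osd crush reweight osd.3 0.90
```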
Yeah, that works; follow the steps I provided above.
You know that Proxmox has a firewall built in, right?
https://pve.proxmox.com/wiki/Proxmox_VE_Firewall
And you also know you should be able to do NAT straight out of the box, right?
https://pve.proxmox.com/wiki/Network_Model
That way...
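For reference, the masquerading setup described on that wiki page looks roughly like this in /etc/network/interfaces (addresses and bridge names are placeholders):

```
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o vmbr0 -j MASQUERADE
```

Your VMs then use 10.10.10.1 as their gateway and get NAT'd out via vmbr0.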
I assume you want something like this:
Inet <-> Pfsense <-> Proxmox <-> VM ?
You'd basically do the following:
Assign 2 vmbrX bridges to your Proxmox host:
vmbr0 -> eth0
vmbr1 -> no ethX
Assign pfSense 2 vNICs:
vNIC1 -> vmbr0 -> Inet
vNIC2 -> vmbr1 -> internal Proxmox side
Assign your VMs a vNIC on...
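As a sketch, the bridge side of that setup could look like this in /etc/network/interfaces (eth0 and the bridge numbering are assumptions matching the list above):

```
auto vmbr0
iface vmbr0 inet manual
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0

auto vmbr1
iface vmbr1 inet manual
    bridge_ports none
    bridge_stp off
    bridge_fd 0
```

vmbr1 has no physical port on purpose; it only carries the internal traffic between pfSense and the VMs.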
Can't you just block SSH access from 0.0.0.0/0, allow your Proxmox IPs, and then get into your cluster via a specific range, or from inside the cluster?
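A sketch of what that could look like in /etc/pve/firewall/cluster.fw (the source range is a placeholder):

```
[OPTIONS]
enable: 1

[RULES]
# Allow SSH only from a trusted range, drop it from everywhere else
IN SSH(ACCEPT) -source 192.168.1.0/24
IN SSH(DROP)
```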
What's your replication like?
Replication 3 via host?
If you can, post the following parts from your crush map:
I assume you have already tried using virtio with IO thread, right?
Can you execute the following commands and provide us the output in code tags?
ceph osd tree
ceph pg dump | awk '...
Yes, you need to fully disintegrate / destroy / purge / remove the cluster on ALL nodes.
Then you change your host entries and check that you can ping the IPs.
Then you create a NEW Cluster on your nodes.
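The cluster re-creation itself is just a couple of pvecm commands; roughly (cluster name and IP are placeholders):

```shell
# On the first node:
pvecm create my-cluster

# On each additional node, join via the first node's new IP:
pvecm add 192.168.1.10

# Verify quorum afterwards:
pvecm status
```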
That won't work.
You will need to do the following steps in the order they are written here:
1) Destroy the cluster (as in, remove it from ALL nodes) - by doing a reinstall, or better yet, by following the steps on the Proxmox wiki here...
Yes and no.
You use bonding for failover and for load balancing, which in turn can increase your parallel throughput.
You'd have to read up on the available bonding modes for Linux native bonds and/or Open vSwitch.
Some Examples:
With balance-rr mode (via a Linux native bond) on a dual 1G link...
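A balance-rr bond under a bridge would look roughly like this in /etc/network/interfaces (NIC names and addresses are placeholders):

```
auto bond0
iface bond0 inet manual
    bond-slaves eth0 eth1
    bond-mode balance-rr
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.2
    netmask 255.255.255.0
    bridge_ports bond0
    bridge_stp off
    bridge_fd 0
```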
I have a hard time understanding the following part:
This is mainly due to the level of english used.
Are you trying to say the following?
If that is what you are trying to say, then this would be my reply:
Q1: Where did you implement these changes? On the original node(s)?
Q2: Did...