
Performance issues for production usage #251

Open
zhp007 opened this issue May 16, 2024 · 3 comments

zhp007 commented May 16, 2024

We plan to use Olric in production. We are building our cache service with Olric as an embedded Go library.

Olric embedded servers are accessed through a gRPC endpoint. Requests are directed to this endpoint and then evenly distributed among the servers using load balancing.

Each server creates one Olric instance with the following config:
Config env: "wan"
PartitionCount: 271
ReplicaCount: 2
ReplicationMode: AsyncReplicationMode

Each server creates one EmbeddedClient from the Olric instance, and one DMap from the client. The gRPC get and set requests are all handled by this one DMap through its Get and Put operations.
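
For reference, here is roughly what the setup looks like (a minimal sketch; the DMap name "cache" and the log-based error handling are placeholders, not our actual service code):

package main

import (
	"context"
	"log"

	"github.com/buraksezer/olric"
	"github.com/buraksezer/olric/config"
)

func main() {
	// One Olric instance per server, with the configuration listed above.
	c := config.New("wan")
	c.PartitionCount = 271
	c.ReplicaCount = 2
	c.ReplicationMode = config.AsyncReplicationMode

	ctx, cancel := context.WithCancel(context.Background())
	c.Started = func() {
		cancel() // the node has joined the cluster and is operational
	}

	db, err := olric.New(c)
	if err != nil {
		log.Fatalf("olric.New: %v", err)
	}
	go func() {
		if err := db.Start(); err != nil {
			log.Fatalf("olric.Start: %v", err)
		}
	}()
	<-ctx.Done() // wait for the Started callback

	// One EmbeddedClient and one DMap per server; all gRPC get/set requests
	// go through this single DMap handle.
	e := db.NewEmbeddedClient()
	dm, err := e.NewDMap("cache")
	if err != nil {
		log.Fatalf("NewDMap: %v", err)
	}

	if err := dm.Put(context.Background(), "some-key", "some-value"); err != nil {
		log.Fatalf("Put: %v", err)
	}
	gr, err := dm.Get(context.Background(), "some-key")
	if err != nil {
		log.Fatalf("Get: %v", err)
	}
	value, err := gr.String()
	if err != nil {
		log.Fatalf("decode: %v", err)
	}
	log.Printf("got: %s", value)
}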

Cluster setup for the testing:
3 pods, each with 4G memory and 2 CPUs.

Load testing scenario:
Key: UUID, value: Random 16 bytes
Test flow: Set a key/value pair -> wait 10ms -> Get the same key/value pair
The flow is executed 1000 times per second
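
The driver loop is roughly equivalent to this sketch (hypothetical code, not our actual load generator; latency recording is omitted and github.com/google/uuid is just an example UUID library):

import (
	"context"
	"crypto/rand"
	"log"
	"time"

	"github.com/buraksezer/olric"
	"github.com/google/uuid"
)

// runLoadPattern drives the flow above against an already created DMap:
// ~1000 iterations per second, each one doing Put -> 10 ms wait -> Get on a
// fresh UUID key with a random 16-byte value.
func runLoadPattern(ctx context.Context, dm olric.DMap) {
	ticker := time.NewTicker(time.Millisecond) // ~1000 starts per second
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			go func() {
				key := uuid.NewString()
				value := make([]byte, 16)
				rand.Read(value)

				if err := dm.Put(ctx, key, value); err != nil {
					log.Printf("put %s: %v", key, err)
					return
				}
				time.Sleep(10 * time.Millisecond)
				if _, err := dm.Get(ctx, key); err != nil {
					log.Printf("get %s: %v", key, err)
				}
			}()
		}
	}
}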

Result:
P99 set is 4.5ms, P99 get is 6ms. These are much higher than we expected.

Any issues or suggestions for our usage and setup?
Will creating more than one EmbeddedClient and/or DMap in each server help?
Any other config settings or tunings we need to care about?
Any other performance tuning suggestions?

Thanks in advance!

buraksezer self-assigned this May 16, 2024

buraksezer (Owner) commented

Hey @zhp007,

The setup, configuration, and results seem normal to me.

P99 set is 4.5ms, P99 get is 6ms. These are much higher than we expected.

What are the results for other percentiles? The Go runtime causes latency fluctuations. It might be a good idea to play with the GC parameters.
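
For example, a starting point could look like the snippet below (the values are only placeholders to experiment with; the same knobs are also available through the GOGC and GOMEMLIMIT environment variables):

import "runtime/debug"

// tuneGC is a starting point for GC experiments in the embedded process.
func tuneGC() {
	// Run GC less often by letting the heap grow more between cycles
	// (the default GOGC value is 100).
	debug.SetGCPercent(200)
	// Keep the heap under a soft limit below the 4 GiB pod limit (Go 1.19+).
	debug.SetMemoryLimit(3 << 30)
}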

Will creating more than one EmbeddedClient and/or DMap in each server help?

The EmbeddedClient/DMap implementation is thread-safe. With the same client instance, you can create any number of goroutines. Two CPUs should be good enough to get a rough idea of the performance characteristics.
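
For example, something like the following is safe with a single shared handle (a rough sketch with made-up names, just to illustrate the sharing):

import (
	"context"
	"fmt"
	"log"
	"sync"

	"github.com/buraksezer/olric"
)

// hammer runs n goroutines concurrently against the same DMap handle.
// No extra locking or per-goroutine clients are needed.
func hammer(ctx context.Context, dm olric.DMap, n int) {
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			key := fmt.Sprintf("key-%d", i)
			if err := dm.Put(ctx, key, "value"); err != nil {
				log.Printf("put %s: %v", key, err)
				return
			}
			if _, err := dm.Get(ctx, key); err != nil {
				log.Printf("get %s: %v", key, err)
			}
		}(i)
	}
	wg.Wait()
}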

Any other config settings or tunings we need to care about?

Currently, there are no other configuration options to improve performance. Still, you have two replicas, and Olric is trying to fetch all accessible values from the cluster before returning the result to the client. This is called the Last Write Wins (LWW) policy. It compares the timestamps in the returned DMap entries and the most up-to-date result wins. It's not possible to turn off this behavior explicitly. We could implement that quickly, but disabling LWW decreases consistency.

See this:

if dm.s.config.ReadQuorum >= config.MinimumReplicaCount {

ReadQuorum is one by default. We could add a boolean flag to turn off fetching the values from replicas, but this would hurt consistency.

The other thing to know is that if you request a key/value pair that does not belong to the node, the node finds the partition owner, fetches the pair from the owner, and returns it to the client. So, there is no redirection message in the protocol. It works as a reverse proxy.

See this:

func (dm *DMap) Get(ctx context.Context, key string) (storage.Entry, error) {

zhp007 (Author) commented May 16, 2024

@buraksezer Thanks for the quick reply!

What are the results for other percentiles?

Set P50 is 0.8ms, P90 is 1.7ms, P95 is 2.3ms
Get P50 is 1.5ms, P90 is 2.4ms, P95 is 3.1ms

We also tried ReplicaCount=1; there is no change for Set, while Get has better performance:
P50 is 0.5ms, P90 is 1.2ms, P95 is 2ms, P99 is 4ms

With the same client instance, you can create any number of goroutines.

If we create a worker pool of goroutines with the same client, each worker having its own DMap with the same name, and then use the worker pool to process incoming requests rather than a single DMap, will it help improve performance?
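
Roughly something like this sketch (request, startWorkers, and the DMap name "cache" are made-up names for illustration):

import (
	"context"

	"github.com/buraksezer/olric"
)

// request is a hypothetical unit of work coming from the gRPC layer.
type request struct {
	key   string
	value []byte
	done  chan error
}

// startWorkers spawns n workers that share one EmbeddedClient; each worker
// opens its own DMap handle with the same name and consumes requests from a
// shared channel.
func startWorkers(ctx context.Context, e *olric.EmbeddedClient, n int, reqs <-chan request) error {
	for i := 0; i < n; i++ {
		dm, err := e.NewDMap("cache") // same DMap name for every worker
		if err != nil {
			return err
		}
		go func(dm olric.DMap) {
			for req := range reqs {
				req.done <- dm.Put(ctx, req.key, req.value)
			}
		}(dm)
	}
	return nil
}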

Olric is trying to fetch all accessible values from the cluster before returning the result to the client. This is called the Last Write Wins (LWW) policy. It compares the timestamps in the returned DMap entries and the most up-to-date result wins.

Won't the owner always win in this case, or am I missing something? It would be great if we could have an option to read from the owner/primary only.
IIUC, the difference is that when the owner is down, LWW can still read from the backup, while with this new option we would have to wait for a new node to come up and receive the data from the backup, during which the data is unavailable?

Also, the calls to dm.lookupOnOwners and dm.lookupOnReplicas in getOnCluster are sequential. Can they be parallelized to speed up Get?
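
What I have in mind is something like this generic sketch (not Olric's actual internals; the entry type and the function shapes are hypothetical):

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// entry stands in for whatever the lookups return (a versioned DMap entry).
type entry struct{}

type lookupFunc func(ctx context.Context) ([]entry, error)

// lookupConcurrently runs the owner and replica lookups in parallel instead
// of one after the other and merges the results before the LWW comparison.
func lookupConcurrently(ctx context.Context, onOwners, onReplicas lookupFunc) ([]entry, error) {
	g, ctx := errgroup.WithContext(ctx)
	var owners, replicas []entry
	g.Go(func() error {
		var err error
		owners, err = onOwners(ctx)
		return err
	})
	g.Go(func() error {
		var err error
		replicas, err = onReplicas(ctx)
		return err
	})
	if err := g.Wait(); err != nil {
		return nil, err
	}
	return append(owners, replicas...), nil
}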

Besides, the condition dm.s.config.ReadQuorum >= config.MinimumReplicaCount is always true, so dm.lookupOnReplicas will always be called.

The other thing to know is that if you request a key/value pair that does not belong to the node, the node finds the partition owner, fetches the pair from the owner, and returns it to the client.

Yes, I understand that the additional hop and data transfer can increase overhead/latency.

buraksezer (Owner) commented

P50 is 0.5ms, P90 is 1.2ms, P95 is 2ms, P99 is 4ms

I think these numbers are pretty good. Olric is implemented in Go.

If we create a worker pool of goroutines with the same client, each worker having its own DMap with the same name, and then use the worker pool to process incoming requests rather than a single DMap, will it help improve performance?

I have never tried such a thing before, but the increased amount of context switching may reduce performance at some point.

Won't the owner always win in this case, or am I missing something? It would be great if we could have an option to read from the owner/primary only.

It depends on what happened in the cluster before you ran the request. If you are adding and removing nodes frequently, you may encounter such anomalies. Only the owner node has read-write rights on its partitions, but it applies the LWW policy to return the most up-to-date result.

Olric is an AP store, which means it always chooses availability over consistency.

IIUC, the difference is that when the owner is down, LWW can still read from the backup, while with this new option we would have to wait for a new node to come up and receive the data from the backup, during which the data is unavailable?

When a node goes down, a new partition owner is assigned immediately and starts processing the incoming requests. There is no active anti-entropy mechanism in Olric. It only tries to read keys from members (using the routing table, based on a consistent hash algorithm) and applies read-repair if it's enabled.

Also, the calls to dm.lookupOnOwners and dm.lookupOnReplicas in getOnCluster are sequential. Can they be parallelized to speed up Get?

It may improve performance in some cases, but I think the increased number of parallel network calls may decrease overall performance for some workloads. This should be carefully designed and tested.

Besides, the condition dm.s.config.ReadQuorum >= config.MinimumReplicaCount is always true, so dm.lookupOnReplicas will always be called.

Yeah, this enables LWW implicitly.
