This repository has been archived by the owner on Dec 5, 2022. It is now read-only.

Proposal to use pre-rendered content in tessellate-fragment #79

Open
semonte opened this issue Mar 16, 2017 · 3 comments

@semonte
Contributor

semonte commented Mar 16, 2017

Currently when the tessellate-fragment gets a request, it

  • fetches sources (HTTP)
  • fetches bundle (HTTP)
  • fetches content (HTTP)
  • renders the fragment by loading the bundle (JS) and content (JSON) into server memory and running the bundle script in a Node virtual machine (vm) context.

This consumes server resources, and the same steps are repeated for every single request. When there are multiple requests per second, this does not scale.
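For context, the render step is roughly equivalent to the sketch below (a simplification; the function names and the bundle's export shape are assumptions, not the actual tessellate-fragment code):

const vm = require('vm');

// Simplified sketch of the current per-request path (fetchHttp, bundleUrl,
// contentUrl and the render() export are illustrative assumptions).
async function renderFragment(fetchHttp, bundleUrl, contentUrl) {
  const bundleSource = await fetchHttp(bundleUrl);          // bundle.js as text
  const content = JSON.parse(await fetchHttp(contentUrl));  // content.json

  // Evaluate the bundle in a fresh VM context on every request.
  const sandbox = { module: { exports: {} } };
  vm.runInNewContext(bundleSource, sandbox);

  // Assuming the bundle exports a render(content) -> HTML string.
  return sandbox.module.exports.render(content);
}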

A proposal to resolve this is to add a pre-rendered fragment store. tessellate-fragment could (by configuration) look up a pre-rendered fragment by URL and language key. If there is a hit, the HTML can be sent to the client immediately, skipping all of the steps above. For popular URLs the performance gains would be significant and clients would enjoy faster response times.
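A minimal sketch of that lookup, keyed on URL and language (the store API, the header used for the language key, and the function names are placeholders):

// Sketch of the proposed pre-rendered lookup (prerenderStore and
// renderFragment are placeholders, not existing code).
async function serveFragment(ctx, prerenderStore, renderFragment) {
  const key = `${ctx.request.url}:${ctx.request.headers['accept-language']}`;

  const cachedHtml = await prerenderStore.get(key);
  if (cachedHtml) {
    // Hit: skip fetching sources/bundle/content and the VM render entirely.
    ctx.body = cachedHtml;
    return;
  }

  // Miss: fall back to the full render path.
  ctx.body = await renderFragment(ctx);
}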

@semonte
Contributor Author

semonte commented Mar 16, 2017

@mfellner I'd really like to hear your ideas regarding this.

@mfellner
Contributor

This is certainly a useful optimisation we should implement. Just to be clear: we're simply talking about caching here. Where "simply" means it's actually really complicated, since it's caching.

First, let's look at all the steps where we might add a cache:

  1. fetch resources for rendering
    • we can use ETags for caching
    • we can use a faster resource store
  2. render HTML from resources and context
    • we can cache the rendered html

Intuitively, 1 should be slower than 2, since the network is usually slower than CPU time.

In that case we can optimise by adding ETag-based caching to tessellate-request. The cache itself should probably live in memory or on disk - ideally we'd use a tiered cache with those two layers (a networked cache doesn't make sense here). Maybe node-cache-manager could help, for example. Another idea would be to make the storage for at least some resources (e.g. bundles) faster, e.g. by using Redis or by moving it onto the same physical node as the fragment-service.
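For illustration, an ETag-validated fetch sitting in front of an in-memory cache could look roughly like this (cache-manager API approximately as in its 2.x releases; httpGet and the response shape are placeholders, not existing code):

const cacheManager = require('cache-manager');

// In-memory tier; a disk- or Redis-backed store could be added as a second
// tier via cacheManager.multiCaching([...]).
const memoryCache = cacheManager.caching({ store: 'memory', max: 1000, ttl: 300 });

async function fetchWithEtag(httpGet, url) {
  const cached = await memoryCache.get(url); // { etag, body } or undefined

  const headers = cached ? { 'If-None-Match': cached.etag } : {};
  const res = await httpGet(url, { headers });

  if (res.status === 304 && cached) {
    return cached.body; // unchanged upstream, reuse the local copy
  }

  await memoryCache.set(url, { etag: res.headers.etag, body: res.body });
  return res.body;
}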

Now, server-side rendering with React is slow too. Instead of caching rendered results we could also look into alternative, faster renderers. But caching is probably the bigger win. There are some things to consider here:

  • rendering uses multiple inputs (see the cache-key sketch after this list):
    • bundle.js
    • content.json
    • HTTP request context
    • potentially data fetched by React components
  • cache-invalidation can happen in two ways:
    • client-based (fragment client needs to send invalidation information)
    • server-based (fragment checks for new resources and re-renders or not)
  • cache can be multi-tiered:
    • memory
    • disk
    • redis
  • cache must be thread-safe and scalable
    • now that the fragment runs multiple processes, we might write to the cache from two different processes in parallel
    • memory/disk would be shared by processes on 1 node, redis would be shared by multiple nodes
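To make the first point concrete, here is one way to combine those inputs into a single render-cache key; it would also support server-based invalidation, since the key changes whenever a resource version changes (field names are assumptions, and data fetched by React components at render time cannot be captured in such a key):

const crypto = require('crypto');

// Sketch of a render-cache key derived from the rendering inputs above
// (bundleEtag/contentEtag/requestContext are assumed names).
function renderCacheKey({ bundleEtag, contentEtag, requestContext }) {
  return crypto
    .createHash('sha256')
    .update(bundleEtag)              // changes when bundle.js changes
    .update(contentEtag)             // changes when content.json changes
    .update(JSON.stringify({         // only the request parts that affect output
      path: requestContext.path,
      language: requestContext.language,
    }))
    .digest('hex');
}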

Those are just my initial thoughts; maybe I forgot something important. What is everyone else's opinion?

@semonte
Contributor Author

semonte commented Mar 17, 2017

  2. render HTML from resources and context
    • we can cache the rendered html

The initial proposal is about caching just the rendered content. Caching bundles, content and sources adds more complexity while providing small(er) performance benefits. A big bottleneck is the rendering phase.

I don't think we need to decide now which cache will be implemented; caching could be enabled via configuration. For us https://aws.amazon.com/elasticache/ seems like a good choice, but this should not be hardcoded into the application code.

Maybe some code will make my thinking clearer:

return router.get('fragment', '/fragment', async ctx => {
  const { headers, query } = ctx;
  let html;
  if (cacheOn) {
    html = await getCacheProvider().getOrCreate(headers, query);
  } else {
    html = await renderTheSlowWay(headers, query);
  }
  ...
});

CacheProvider would be an interface for the cache. Deployed services can then use whatever cache fits them.
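To illustrate, the interface could be as small as one method, matching the call in the snippet above (everything here is a sketch; the key scheme and backends are placeholders):

// Illustrative CacheProvider: a memory-backed implementation of the
// getOrCreate(headers, query) call used above. A Redis/ElastiCache-backed
// provider would implement the same method and be selected by configuration.
class MemoryCacheProvider {
  constructor(render) {
    this.render = render;   // e.g. renderTheSlowWay
    this.store = new Map();
  }

  async getOrCreate(headers, query) {
    const key = `${query.url}:${headers['accept-language']}`; // assumed key scheme
    if (!this.store.has(key)) {
      this.store.set(key, await this.render(headers, query));
    }
    return this.store.get(key);
  }
}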
