Would BigPipe be applicable in this case?


I was reading about how large websites work and came across BigPipe.

What is BigPipe (in short)

BigPipe is a technique created by Facebook to reduce page load time: the page content is divided into parts called "pagelets", which can be loaded simultaneously or one by one.
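
Conceptually, the server first sends a skeleton with empty placeholders and then streams each pagelet, which a small script injects as it arrives. A rough sketch of the client side (not Facebook's actual code; the field names are illustrative):

    // Conceptual sketch of BigPipe's client side (illustrative only).
    // The server streams a call like this as each pagelet finishes rendering:
    function onPageletArrive(pagelet) {
        // pagelet = { id: 'news_feed', html: '<div>...</div>' }
        document.getElementById(pagelet.id).innerHTML = pagelet.html;
        // Any CSS/JS the pagelet declares would also be loaded at this point.
    }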

I'm working on a rather large project and thought about putting this technique into practice, but I'm not sure it applies.

My pages are technically loaded only once, i.e. the main JS and CSS are loaded in the index. On that page there is a view, and it is into this view that pages are loaded through jQuery's load() function; that is how my website works, the pages are fetched and rendered there. Note that these pages contain no extra files, just compressed HTML.
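
Roughly, the navigation looks like this (a simplified sketch; the selector and URLs are just placeholders):

    // Simplified sketch of the navigation described above.
    $(document).on('click', 'a.menu-link', function (e) {
        e.preventDefault();
        // Fetch the page's compressed HTML and inject it into the single view.
        $('#view').load($(this).attr('href'));
    });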

I use an internal and external caching system for files that do not change frequently, such as images.

My question: would it be useful to use BigPipe in this case, or would it make no difference in performance?

asked by anonymous 11.10.2015 / 00:44

1 answer


TL;DR

The gain, if any, will probably be small, because BigPipe does not seem to be a suitable technique for the scenario described in the question.

Performance is a box of surprises

It is not possible to state whether BigPipe will affect an application's performance positively, or even negatively, without actually implementing it and collecting before-and-after data.

In addition, any optimization should be done within a relevant context. Users have different habits and needs when using a system. BigPipe helps parallelize the rendering of a composite page, but what if the biggest problem for the user is a query that takes too long to return? In that case you can invest all your resources in rendering optimizations and the user will still be dissatisfied, because the information that matters to him still takes too long to arrive.

Only start thinking about complex optimization techniques if performance really is a problem and a diagnosis has been made, so that you can be confident the proper technique has been selected. Never apply optimization techniques without this, because the cost to implement and maintain them is high and the money will be thrown away.

Why not use BigPipe

According to the system description, there are not several sections of the page that need to be displayed at the same time.

A parallel implementation that only ever displays a single piece of content at a time will just add overhead to the system, that is, unnecessary processing.

Alternatives

For a system that loads all of its resources only once, the concern is the user's first access to the home page.

The first step is to put everything that is static, such as images, styles, and scripts, minified on a CDN to ensure the shortest possible download time for those resources.

Once this has been done, measure the load time without caching. If it is not satisfactory, you can use some simple techniques:

Flush header early

Some template engines render the entire page and only then send it to the user. Others do buffering. The problem is that, in these cases, the browser has to wait for the server-side processing before it receives the header and can start downloading the declared styles and scripts.

Change this to a flush that sends the entire <head> tag in the first milliseconds of the request. That way, while the server thinks, the browser is already downloading resources.

For this to work even better, declare all <script> tags in the header with the async attribute so the browser does not block the rest of the page while loading the scripts.
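
A minimal sketch of the idea, assuming a Node.js server; the asset paths and the buildPageBody helper are hypothetical placeholders for your real rendering:

    // Sketch: send the <head> immediately, then finish the body when it is ready.
    var http = require('http');

    var head = '<!DOCTYPE html><html><head>' +
        '<link rel="stylesheet" href="/css/main.css">' +
        '<script src="/js/main.js" async></script>' + // async: does not block the page
        '</head><body>';

    http.createServer(function (req, res) {
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.write(head); // flush the header early: downloads start right away

        buildPageBody(req).then(function (body) { // slow work (queries, templates)
            res.end(body + '</body></html>');
        });
    }).listen(3000);

    // Hypothetical stand-in for the real server-side rendering.
    function buildPageBody(req) {
        return new Promise(function (resolve) {
            setTimeout(function () { resolve('<h1>Content</h1>'); }, 200);
        });
    }

The key point is that the <head> goes out over the wire before the expensive work even starts, so the browser downloads styles and scripts in parallel with the server's processing.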

Background preloading

A simple way to give a sense of fluidity is to preload the resources of the screens the user will access next while he is busy doing something else.

A very simple example would be to preload the system's static resources asynchronously while the user is logging in or looking at the home page.

It's almost cheating, but this way, when the user accesses a system screen, the browser will already have the resources cached.
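
For example, a sketch of doing this in the browser with jQuery (the asset URLs are illustrative):

    // Sketch: prefetch the assets of screens the user is likely to open next.
    $(function () {
        var assets = ['/js/dashboard.js', '/css/dashboard.css', '/img/sprites.png'];
        assets.forEach(function (url) {
            var link = document.createElement('link');
            link.rel = 'prefetch'; // low-priority download that warms the browser cache
            link.href = url;
            document.head.appendChild(link);
        });
    });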

Caution with cache

Systems that need to scale should try not to store state. Creating stateless services makes it easier to deploy the system, avoids delays from cold caches (uninitialized caches), avoids incorrect stale data, and so on.

Caches done wrong are a tremendous headache later: a real source of problems, limitations and bugs.

Distributed caching is costly for a small system, since synchronizing the information also takes time, and it becomes complicated for a large system.

A cache is like adding mutable state to the system: you will always have to worry about whether that state is properly updated and how the update will be done. It is often not worth it.

Performance depends on implementation

Regardless of which optimization technique you apply, the quality of the implementation will always speak louder.

A poorly implemented BigPipe can increase the total time until the page is ready for the user. A poorly implemented cache can be worse than querying the database directly. An SQL query that does a table scan may be slower than a REST call to an external service.

These examples are why the greatest concern should always be a good implementation of the system, and less so any specific performance-improvement technique.

13.10.2015 / 08:30