Which option is best in terms of performance?

1

I need to make all the products disappear from the screen, but some of them are already not visible. So my question is: will it perform better if I hide only the products that are currently visible, instead of calling .hide() on every product, even the ones that are already invisible?

For example, which of the two code snippets below would perform better?

$(".produto").hide().attr('invisible',true);

OR

$(".produto[invisible=false]").hide().attr('invisible',true);
    
asked by anonymous 03.07.2015 / 16:15

3 answers

1

Simple selectors like .produto will always be faster to search with than complex selectors.

However, for changing properties it is likely that [invisible=false] is faster, since changing DOM elements that are already in the body of the page is more costly. In this case the query returns fewer elements, which makes the .hide() step faster because there are fewer elements to process.

Classes and performance

However, a class query is usually faster than an attribute query, so instead of using .attr('invisible', true), use a class (this assumes your stylesheet defines something like .hide { display: none; }), for example:

Hide:

$(".produto:not(.hide)").addClass("hide");

Show:

$(".produto.visible").removeClass("hide");

If you want to use the animated effect:

$(".produto:not(.hide)").hide(400, function() {
    $(this).addClass("hide");
});

$(".produto.hide").removeClass("hide").show();

But note that the gain from this kind of optimization is minimal.

The :hidden and :visible selectors

You can also use the :hidden and :visible selectors, which can be handy, for example:

$(".produto:visible").hide();

and

$(".produto:hide").show();
    
03.07.2015 / 16:24
2
  

Warning: unfortunately jsperf, which would be the most practical tool for testing what I say in this answer, has stability issues and I am not able to use it right now. So bear in mind that any of my arguments could be confirmed or disproved by a real performance test. And do not forget that the best option may vary from browser to browser.

To understand the possible performance bottlenecks in these code snippets, let's see what operations they do:

  • $(...) - Creates a jQuery collection containing all the elements that match the given selector. Whenever possible, jQuery uses the native document.querySelectorAll function for this.
  • .hide() - Loops over that collection, hiding the elements one by one.
  • .attr('name', value) - Runs another loop over the collection, setting the attribute value on each element, one by one.
The difference between your two code snippets is in operation 1, but it can affect the other two operations as well, as the sketch below illustrates.
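
To make those three operations concrete, here is a rough, simplified sketch of what the first chained call in the question boils down to (an illustration only, not jQuery's actual implementation):

// Operation 1: selection (simple selectors can be delegated to the native API)
var elements = document.querySelectorAll(".produto");

// Operation 2: the loop performed by .hide()
for (var i = 0; i < elements.length; i++) {
    elements[i].style.display = "none";
}

// Operation 3: the loop performed by .attr('invisible', true)
for (var j = 0; j < elements.length; j++) {
    elements[j].setAttribute("invisible", "true");
}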

In theory, selecting only the elements of a class (such as ".produto") is faster than selecting the elements of that class that also meet some additional criterion (such as ".produto[invisible=false]"). However, this may depend on how the selection algorithm is implemented in each browser. Also in theory, selecting by attribute is slower, which again depends on the implementation.

But assuming the first selector really is faster, and the other operations (the loops) are the same in the two code snippets you posted, then the first version is faster than the second, right? Not necessarily! The first version potentially selects more elements of the page, since it is less restrictive than the second, and that can influence the execution time of the loops.

Imagine a page with many products (say, 5,000), but only 10 of them visible. The first version may select faster, but it selects 5,000 elements and then iterates over those 5,000 elements twice. The second version selects only 10 elements and runs the 2 loops over those 10, which in theory is much faster and can compensate for the slower selection.
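
To put numbers on that trade-off (5,000 and 10 are just the figures assumed in the example above), the two selections produce collections of very different sizes, and the loops inside .hide() and .attr() scale with those sizes:

var all = $(".produto");                          // faster selection, ~5,000 elements
var visibleOnly = $(".produto[invisible=false]"); // slower selection, ~10 elements
console.log(all.length, visibleOnly.length);      // the subsequent loops run over these counts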

Conclusion

On performance questions like this, the only way to do a realistic test is to reproduce all the conditions of your application and see how each variation behaves. What matters is how they behave under those conditions, not in every possible situation. And if you test the two variations in your application, with real users, in the various browsers you intend to support, and both perform well, forget the performance issue. The cost (in time and effort) of micro-optimizations of this kind is far greater than the performance improvement they may provide, which is generally negligible.
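
A simple way to run such a test is to time both variations directly in the browser console on a real page; a minimal sketch using console.time (the labels are arbitrary, and the page state has to be restored between runs):

console.time("hide all");
$(".produto").hide().attr('invisible', true);
console.timeEnd("hide all");

// restore the page (show the products again) before timing the second variation
console.time("hide only visible");
$(".produto[invisible=false]").hide().attr('invisible', true);
console.timeEnd("hide only visible");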

        
03.07.2015 / 23:25

1

The first one will run faster, since it only needs to find all the elements with the produto class, while the second snippet has to find the same elements and then also check the invisible attribute of each one.

03.07.2015 / 16:20