How does semantics / indexing work with AngularJS?

I have always wondered about this: AngularJS is a framework that is in constant use.

But I have a question about how it works with crawlers (Googlebot, for example).

Do they actually run the JavaScript and interpret the code to get the information and display a site built on the platform?

With Angular, the HTML theoretically does not contain the information "yet"; the controllers and so on have to run first.

The question is: How does semantics / indexing work with Angular?

    
Answer 1

According to this post, Google's crawler renders pages that contain JavaScript and navigates through the listed states.

Interesting parts of the post (free translation):

  

[...] we decided to try to interpret pages by running JavaScript. It is hard to do this at a large scale, but we decided it was worth it. [...] In recent months, our indexing system has been rendering a large number of web pages the way a regular user would see them with JavaScript enabled.

If resources such as JavaScript or CSS in separate files are blocked (say, with robots.txt) so that Googlebot cannot retrieve them, our indexing system will not be able to see your site the way a regular user does.

We recommend allowing Googlebot to retrieve your JavaScript and CSS so that your content can be indexed better.

Recommendations for Ajax / JS can be found at this link.

If you want to serve the content of an Angular application to crawlers that do not support this kind of functionality, you need to pre-render the content. Services such as Prerender.io exist for exactly that.
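As a rough sketch of how such pre-rendering can be wired up (assuming a Node.js server with Express; the user-agent list, the route and the prerenderedPages lookup are purely illustrative, not part of any specific library), a crawler request can be detected and answered with a static snapshot while regular users still get the normal AngularJS application:

    // Sketch only: intercept requests from crawlers and serve a pre-rendered
    // snapshot instead of the empty AngularJS shell.
    var express = require('express');
    var app = express();

    // Illustrative list of crawler user agents.
    var CRAWLER_UA = /googlebot|bingbot|yandex|baiduspider|facebookexternalhit/i;

    // Hypothetical store of snapshots, e.g. generated with a headless browser
    // or fetched from a service such as Prerender.io.
    var prerenderedPages = {
      '/': '<html><head><title>Home</title></head><body>Rendered content</body></html>'
    };

    app.use(function (req, res, next) {
      var isCrawler = CRAWLER_UA.test(req.headers['user-agent'] || '') ||
                      req.query._escaped_fragment_ !== undefined; // old AJAX crawling scheme
      if (isCrawler && prerenderedPages[req.path]) {
        return res.send(prerenderedPages[req.path]); // static HTML for the crawler
      }
      next(); // everyone else gets the normal client-side app
    });

    app.use(express.static('public')); // the AngularJS application files
    app.listen(3000);

Services such as Prerender.io essentially do this detection and snapshot generation for you; the sketch only illustrates the idea of handing crawlers fully rendered HTML instead of an empty template.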

    
Answer 2

Crawlers (Googlebot, for example) read the page as plain text: they first check the meta tags, then the comments, then they strip out all the code and read the whole text without it. The reason is to increase processing speed and to reduce errors caused by fields that are hidden or nodes that are removed during execution. Crawlers do not run any kind of browser technology; they only read the file. Angular is JavaScript like any other, so its elements are ignored. Only the items relevant to SEO (optimization) are taken into account in their indexing.

You can find part of this explanation in the Google article Understanding Web Pages Better.

To get a better sense of this plain-text view, request the page in question with cURL or Lynx, which are tools commonly used by crawlers.
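For example, assuming curl is installed (the URL is just a placeholder), the following fetches a page roughly the way a text-only crawler would, so you can inspect the HTML that exists before any JavaScript runs:

    curl -L -A "Googlebot" http://example.com/ -o page.html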

For better indexing, we recommend creating a robots.txt file and XML sitemaps.
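A minimal sketch of what a robots.txt might look like (the domain and paths are placeholders; note that, per the Google quote in the previous answer, JavaScript and CSS should not be blocked):

    # robots.txt, served from the site root
    User-agent: *
    Disallow: /admin/
    Sitemap: http://example.com/sitemap.xml

And a minimal sitemap.xml that it points to:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- one <url> entry per indexable page -->
      <url>
        <loc>http://example.com/</loc>
      </url>
    </urlset>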

    
Answer 3

One tip I can give you: take the course they offer. It is quick and easy, and it will help you understand better how semantics works:

link

    