Identify if all characters are equal [duplicate]


I have the following code that works perfectly.

In it I have a string, and I check whether all of its characters are equal:

var numbers = '1111121111',
    firstNumber = numbers.substr(0,1),
    numbersEquals = true;

for (let i = 1; i < numbers.length; i++) {
  if (numbers[i] != firstNumber) numbersEquals = false;
}
console.log('numbersEquals = ' + numbersEquals);

Is there an easier way, or a built-in method, to do this?

I think I'm using a lot of code to do something simple.

asked by anonymous 19.12.2016 / 19:56

4 answers


Using RegExp.prototype.test() with a backreference, do the following:


This returns true or false depending on whether all the characters are equal.

/^(.)\1*$/.test("xxxx"); // true
/^(.)\1*$/.test("xxxy"); // false

test() has been available since the ECMAScript 3rd edition (1999). It will work in basically all browsers.
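As a sketch, this kind of check can be wrapped in a small reusable helper; note the backreference \1, which is what actually enforces that every character matches the first captured one (the name allEqual is mine, not from the answer):

```javascript
// Returns true when every character in the string equals the first one.
// Note: an empty string yields false here, because the pattern
// requires at least one character.
function allEqual(str) {
  return /^(.)\1*$/.test(str);
}

console.log(allEqual('xxxx')); // true
console.log(allEqual('xxxy')); // false
```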

19.12.2016 / 19:59

There are many ways to do this. I don't think yours is wrong; it's very clear and didactic. But there are shorter ways, like Mr Felix's. Converting to an array enables some tricks too, for example:

var numbers = '1111121111';
numbers.split('').every(function(num, i, arr) { return num == arr[0] }); // false

Or even shorter in ES2015:

const numbers = '1111121111';
[...numbers].every( (num, i, arr) => num == arr[0] ); // false
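It is worth noting that every() short-circuits on the first mismatch, unlike the loop in the question, which keeps scanning after setting the flag. A minimal sketch of this as a helper (the name is mine):

```javascript
// every() stops iterating at the first character that differs
// from the first one; an empty string is vacuously "all equal".
const allEqualEvery = str => [...str].every(ch => ch === str[0]);

console.log(allEqualEvery('1111121111')); // false
console.log(allEqualEvery('1111111111')); // true
```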
19.12.2016 / 20:05

With ES6 (ECMAScript 2015), you can use Set:

const unicos = [...new Set(numbers)];


var numbers = '1111112111';

if (new Set(numbers).size > 1)
  console.log("Not all values are equal");
else
  console.log("All values are equal");


As @bfavaretto suggested, you can get the number of elements in the Set object via its size property.
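The same idea fits in a one-line helper, a sketch with a name of my own choosing:

```javascript
// All characters are equal when the Set of characters has at most
// one element (the empty string counts as "all equal" here).
const allSame = str => new Set(str).size <= 1;

console.log(allSame('1111112111')); // false
console.log(allSame('aaaa'));       // true
```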

19.12.2016 / 20:04

I would do it exactly the way you did, because it should be the fastest solution. But there are several alternatives, one of them:

var numbers = '1111121111';
console.log('numbersEquals = ' + (numbers.replace(new RegExp(numbers.substr(0, 1), 'g'), "").length == 0));
numbers = '1111111111';
console.log('numbersEquals = ' + (numbers.replace(new RegExp(numbers.substr(0, 1), 'g'), "").length == 0));
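One caveat with this replace-based approach: building a RegExp from the first character breaks when that character is a regex metacharacter such as '.' or '*'. A hedged sketch that escapes it first (both helper names are mine):

```javascript
// Escape regex metacharacters so the character is matched literally.
const escapeRegExp = s => s.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');

function allEqualByReplace(str) {
  if (str.length === 0) return true; // treat the empty string as trivially equal
  const pattern = new RegExp(escapeRegExp(str[0]), 'g');
  // If removing every occurrence of the first character empties the
  // string, then all characters were equal to it.
  return str.replace(pattern, '').length === 0;
}

console.log(allEqualByReplace('....')); // true
console.log(allEqualByReplace('1112')); // false
```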
19.12.2016 / 20:09
How does semantics / indexing work with AngularJS?

I always wonder, AngularJS is a framework that is constantly being used.

But I have a question about how it works for crawlers (example googlebot).

Do they even run the javascript and interpret the code to get the information and show the site developed on the platform?


With the angular HTML theoretically does not have information "yet", it is first necessary to trigger the controllers and such.

The question is: How does semantics / indexing work with Angular?


According to this post, Google's crawler renders pages that contain JavaScript and navigates through the listed states.

Interesting parts of the post (free translation):


[...] we decided to try to interpret pages by running JavaScript. It's hard to do this on a grand scale, but we decided it was worth it. [...] In recent months, our indexing system has been serving a large number of web pages the way a regular user would see them with JavaScript enabled.


If resources such as JavaScript or CSS in separate files are blocked (say, with %code%) so that Googlebot cannot retrieve them, our indexing system will not be able to see your site the way a regular user does.


We recommend allowing Googlebot to retrieve your JavaScript and CSS so that your content can be better indexed.

Recommendations for Ajax / JS can be found at this link .

If you want to serve an Angular application's content to crawlers that do not support this kind of functionality, you need to pre-render the content; there are services intended for exactly this.


Crawlers (e.g. Googlebot) read plain text: they first validate meta tags, then comments; then they strip out all the code and read the whole text without it. The reason is to increase processing speed and to reduce errors from fields that are hidden or nodes that get removed during execution. Crawlers do not run any kind of browser technology; they only read the file. Angular is JavaScript like any other, so its elements are ignored. Only items relevant to SEO (optimization) are taken into account during indexing.

You can find part of my explanation in Google's article Understanding Web Pages Better.

To better understand this plain-text view, make a request for the page in question with cURL or Lynx, which are technologies commonly used by crawlers.

For better indexing, we recommend creating a robots.txt file and XML sitemaps.


One tip I can give you: take the course they offer. It's quick and easy, and you'll better understand how the semantics works: