Good practices
So I can say that either you did not understand what you were told about variable declaration, or you were taught wrong, and that happens a lot.
I often see things that made sense in the '60s or '70s being repeated to this day as if they were still true. People learn by recipe. That is, they memorize "good practices" without learning how something really works or why they do it. Learning why is more important than learning what.
For decades computers had less than a thousandth of the processing power we have today and less than a millionth of the memory. Compilers needed to be simple: they avoided hard work and forced the programmer to help them. That constraint no longer exists, but what it once required became legend.
In fact, it was never ideal to do this. It is more readable to declare a variable as close to its use as possible; it is easier to follow what the code is doing.
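A minimal sketch of the difference (the names here are illustrative, not from the question):

    using static System.Console;

    public class Program {
        public static void Main() {
            // Declared far from its use: the reader has to carry
            // "total" in their head through unrelated code.
            int total;
            var precos = new[] { 10, 20, 30 };
            WriteLine("Itens: " + precos.Length);
            total = precos[0] + precos[1] + precos[2];
            WriteLine(total);

            // Declared at the point of use: nothing to track.
            var soma = precos[0] + precos[1] + precos[2];
            WriteLine(soma);
        }
    }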
Smaller surface
The smaller the scope, the less damage a variable can do when something goes wrong. The shorter its lifetime, the less memory it occupies, even if it is just stack space.
If a variable is not used outside a block, there is no reason to declare it outside of it; there is no gain in doing so. Even if that were good practice, it would have to be justified, and nobody can justify it.
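For example (a hypothetical snippet, just to illustrate the scope):

    using static System.Console;

    public class Program {
        public static void Main() {
            var valores = new[] { 5, 10, 15 };
            if (valores.Length > 0) {
                // "primeiro" is only needed inside this block,
                // so it is declared inside it.
                var primeiro = valores[0];
                WriteLine(primeiro);
            }
            // "primeiro" no longer exists here; no later code
            // can touch it by accident.
        }
    }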
Even so, following good practices should never be the goal of code. Working properly, meeting the requirements, and being readable and easy to maintain is what should always come first.
I answered something about this in C.
Technically worse
Note the absurdity: you declare a variable, spend time assigning a value to it, and shortly thereafter assign another value, discarding the first. An int is cheap, but there are types for which assigning a value is very expensive. Every time I see someone assigning a value that is never used, I feel like crying.
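A sketch of the pattern being criticized; CarregaRelatorio is a hypothetical helper standing in for real work:

    using static System.Console;

    public class Program {
        // Hypothetical helper standing in for some real, useful work.
        static string CarregaRelatorio() => "conteudo do relatorio";

        public static void Main() {
            // Wasteful: an expensive value is built and immediately
            // discarded without ever being read.
            var relatorio = new string('x', 1000000);
            relatorio = CarregaRelatorio();
            WriteLine(relatorio.Length);

            // Better: assign only the value that will actually be used.
            var relatorio2 = CarregaRelatorio();
            WriteLine(relatorio2.Length);
        }
    }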
Even if you do not need all that performance (and the gain is not big anyway), avoiding something totally unnecessary is not just optimizing, it is simplifying.
Remember that declaring a variable always involves an assignment, at least in C#, even if implicit. Luckily, or by chance, depending on your point of view, reference types that are declared but not explicitly assigned have a very low cost, because only the reference has to be zeroed. It is not free, but it is similar to assigning an integer.
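A minimal sketch of that zeroing, using array elements (which, like fields, are implicitly initialized; local variables must be assigned explicitly before being read):

    using static System.Console;

    public class Program {
        public static void Main() {
            // Reference slots are implicitly zeroed to null, roughly
            // the same cost as writing an integer zero.
            var nomes = new string[3];
            var numeros = new int[3];
            WriteLine(nomes[0] == null); // True
            WriteLine(numeros[0]);       // 0
        }
    }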
I've always been instructed to create the variables before any operation
This is true; you cannot use a variable before declaring it :) But it does not have to be long before, it can be just before.
ReSharper is absolutely right about this.
Semantic difference
Note that there is a semantic difference between these codes. The first creates one variable and changes its value; there may be cases where you want that, but this does not seem to be one of them. The second creates several variables, one per loop iteration. Yes, each pass creates a new a, distinct from the one before.
But do not think this costs more or takes more memory: the variable's scope also ends at the end of each iteration, so the new one is created over where the previous one was. There is no extra cost in doing this beyond having to assign a value.
It may seem like the same thing, and in general it behaves the same, but because each variable has a different identity it can make a real difference if the variable is captured by a closure, for example.
If the closures created in the loop were stored in a list for later execution, the values would differ. The first code captures one single variable, so the value would be the same in all closure instances. In the second code, each closure captures a new variable on each pass, so each would see a different value. This is very important. I believe that in this case ReSharper would not give that suggestion.
The following code shows this:
    using System;
    using static System.Console;
    using System.Collections.Generic;

    public class Program {
        public static void Main() {
            var acoes = new List<Func<int>>();

            // A single variable "a" shared by every closure.
            var a = 0;
            for (var i = 0; i < 5; i++) {
                a = i;
                acoes.Add(() => a * 2);
            }
            foreach (var acao in acoes) {
                WriteLine(acao());
            }

            acoes = new List<Func<int>>();

            // A new variable "b" on each iteration; each closure
            // captures its own.
            for (var i = 0; i < 5; i++) {
                int b = i;
                acoes.Add(() => b * 2);
            }
            foreach (var acao in acoes) {
                WriteLine(acao());
            }
        }
    }
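For the record: the first loop prints 8 five times, since all five closures share the same a, whose final value is 4; the second prints 0, 2, 4, 6 and 8, since each closure captured its own b.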
See it working on dotNetFiddle and on CodingGround.
Until C# 4 there was a bug in the compiler and even the second form behaved wrong, just like the first.