What defines good logic?


I was thinking about some code issues I had just fixed and remembered something my Android development teacher used to say: "Good logic has few lines of code." I asked a few people, who told me that it depends. But depends on what? It seems obvious that less code means faster processing. Does anyone disagree with this and have a concrete example?

    
asked by anonymous 01.07.2014 / 03:55

3 answers


What your teacher said is generally true. One problem is that some people misunderstand the idea of "the less code, the better" and end up writing "spaghetti code" just to use fewer lines.

Having less code does not mean better performance in every case.

It depends on the features of the language you are working with and on your purpose.

A simple example with JavaScript:

With jQuery, we can select an element this way:

elemento = $('#id_do_elemento');

Simple, isn't it?

Let's see how it looks without jQuery:

elemento = document.getElementById('id_do_elemento');

Also simple, but it uses more characters. So does this mean that using jQuery performs better?

No, because a 400 kB library had to be loaded just to have this kind of feature.

Of course, if the library is needed for more complex tasks, or if even its very small functions are used extensively, then using the library is recommended.

So good logic also lies here: having the good sense to discern what is best to use in each situation.
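
For instance (a minimal sketch, assuming a plain browser DOM and no library), if all you need are simple lookups by id, a tiny helper of your own already gives you the short syntax without paying for the 400 kB download:

    // Hypothetical helper: jQuery-like brevity for lookups by id,
    // using only the built-in DOM API.
    function $id(id) {
      return document.getElementById(id);
    }

    // Assumes an element with this id exists on the page.
    var elemento = $id('id_do_elemento');
    elemento.textContent = 'ok';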

    
01.07.2014 / 09:50

I believe the old maxim "divide and conquer" is good practice. Code of reasonable size, well modularized and well object-oriented, helps with memory management, but above all it helps with maintaining the code. When the application grows large and has many people using it, problems and/or new feature requests arise, and that is when you find out whether you are going to go crazy or simply add a module.
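
As a rough illustration of that modularity (the pricing rules here are invented, not from the question), keeping each responsibility in its own small function means a later change touches only one place:

    // Each responsibility in its own small function: changing the
    // discount rule later affects only applyDiscount.
    function subtotal(items) {
      return items.reduce(function (sum, item) {
        return sum + item.price * item.quantity;
      }, 0);
    }

    function applyDiscount(value, rate) {
      return value * (1 - rate);
    }

    function formatCurrency(value) {
      return 'R$ ' + value.toFixed(2);
    }

    var items = [{ price: 10, quantity: 3 }, { price: 5, quantity: 2 }];
    console.log(formatCurrency(applyDiscount(subtotal(items), 0.1))); // R$ 36.00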

    
01.07.2014 / 04:07

Let's assume that your logic or algorithm is near-optimal (i.e. mathematically speaking, there is no different procedure or test that reaches the same result in a substantially smaller number of steps). That is, your conceptual logic is well defined. It remains to realize this logic through code - an engineering problem, let's say - which demands a different kind of strategy. It is in this context, I believe, that your teacher's statement fits.

Code does not exist in a vacuum. It is embedded in an environment that has its own routines, built-in libraries and external libraries, in addition to code you wrote yourself. All of these are resources that you may or may not be able to use in your favor when planning a solution.

Code reuse

In general, if you can reuse existing code, your solution will be simpler and more concise. This is not always true (e.g. if the existing code requires a multitude of parameters and configuration and you only intend to use a tiny subset of its functionality), but as a rule it is. And even though the complete solution has more code overall, there is a separation of responsibilities between your logic and each of your dependencies.
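
A small sketch of the idea (the task itself - removing duplicates and sorting - is only for illustration, and the reusing version assumes an ES2015 Set is available):

    // Doing everything by hand: more lines, and every one of them
    // is your responsibility to maintain.
    function uniqueSorted(values) {
      var seen = {};
      var result = [];
      for (var i = 0; i < values.length; i++) {
        if (!seen[values[i]]) {
          seen[values[i]] = true;
          result.push(values[i]);
        }
      }
      return result.sort(function (a, b) { return a - b; });
    }

    // Reusing what the environment already offers: the logic you
    // actually own shrinks to a single expression.
    function uniqueSortedReusing(values) {
      return Array.from(new Set(values)).sort(function (a, b) { return a - b; });
    }

    console.log(uniqueSortedReusing([3, 1, 3, 2, 1])); // [1, 2, 3]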

So the first step toward simple logic is to know your environment well: what resources are available, and whether or not they apply to your case.

Performance

Contrary to common sense, less code does not necessarily mean better performance, even when external libraries are not involved. A classic example is sorting algorithms, where the simplest ones are usually the least efficient. To achieve the best performance it is often necessary to complicate things, exploit peculiarities of your domain, and abandon the "purity" of your conceptual model in favor of the solution most appropriate to the context.
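
To make the sorting example concrete (a sketch, not tuned code): insertion sort takes very few lines yet is O(n²), while merge sort takes noticeably more code but stays O(n log n) even in the worst case:

    // Insertion sort: very little code, but quadratic in the worst case.
    function insertionSort(a) {
      for (var i = 1; i < a.length; i++) {
        var v = a[i], j = i - 1;
        while (j >= 0 && a[j] > v) {
          a[j + 1] = a[j];
          j--;
        }
        a[j + 1] = v;
      }
      return a;
    }

    // Merge sort: more code and more concepts, but O(n log n) guaranteed.
    function mergeSort(a) {
      if (a.length <= 1) return a;
      var mid = Math.floor(a.length / 2);
      var left = mergeSort(a.slice(0, mid));
      var right = mergeSort(a.slice(mid));
      var merged = [], i = 0, j = 0;
      while (i < left.length && j < right.length) {
        merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
      }
      return merged.concat(left.slice(i), right.slice(j));
    }

    console.log(insertionSort([5, 2, 4, 1, 3])); // [1, 2, 3, 4, 5]
    console.log(mergeSort([5, 2, 4, 1, 3]));     // [1, 2, 3, 4, 5]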

The question is: is this really necessary? In the vast majority of cases, the answer is no - you do not need your program's performance to be the best it can possibly be. Taking Daniel Omine's example: OK, loading a 400 kB library to save a few characters seems like a bad idea, but is it? First, the jQuery code is cleaner, easier to write, and suffers fewer browser-incompatibility problems. Second, JavaScript code is often small and activated in response to user actions, so performance is not critical. Third, thanks to caching you can load this library once and use it at several different points in your system.

That is, if* by sacrificing a little performance you can have simpler, more concise code, it is worth considering that possibility.

* if, and only if - when your code serves as the basis for many other systems (e.g. you are developing a library that may be used in heavy calculations), it is worth sacrificing simplicity in the name of performance.

Level of abstraction

Finally, to have concise code you need to program at the right level of abstraction. If you need to read a data structure from a file, do something with it, and save the results to another file, it is not good to mix the different responsibilities in the same piece of code (e.g. opening the file and reading its bytes; interpreting those bytes as data types of the language; putting this data into the structure you are going to use; using the structure).
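
A rough Node.js sketch of that separation (the file names and the ';'-separated format are invented for illustration):

    var fs = require('fs');

    function readLines(path) {                 // I/O only: file -> lines of text
      return fs.readFileSync(path, 'utf8').split('\n').filter(Boolean);
    }

    function parseRecord(line) {               // text -> language data types
      var parts = line.split(';');
      return { name: parts[0], value: Number(parts[1]) };
    }

    function totalValue(records) {             // uses the structure
      return records.reduce(function (sum, r) { return sum + r.value; }, 0);
    }

    function writeResult(path, total) {        // I/O only: result -> file
      fs.writeFileSync(path, String(total));
    }

    var records = readLines('entrada.csv').map(parseRecord);
    writeResult('saida.txt', totalValue(records));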

Often this means reusing existing code, as discussed earlier. In other cases, it is a matter of breaking your problem into smaller problems (i.e. "divide and conquer", as mentioned by Zanoldor), solving each of them and then composing the final solution from the individual solutions.

In other words, the total code you write may even be large, but each individual function or module should preferably be concise.

Conclusion

Good logic does not necessarily have few lines of code, but overly long code can be a sign that there are problems with your logic (where there's smoke, there's fire). It may be a case of reinventing the wheel, of premature optimization, or of not being at the right level of abstraction. In another question I gave an answer going into the question of code concision in a bit more detail, with the caveat that there are many cases where a good algorithm does require fairly extensive code.

    
01.07.2014 / 16:45