Let's assume that your logic or algorithm is quasi-optimal (i.e., mathematically speaking, there is no different procedure or test that reaches the same result in substantially fewer steps). That is, its conceptual logic is well defined. It remains to realize this logic in code, an engineering problem, let's say, which demands a different kind of strategy. It is in this context, I believe, that your teacher's statement fits.
Code does not exist in a vacuum. It is embedded in an environment that has its own routines, built-in libraries and external libraries, in addition to the code you wrote yourself. All of these are resources that you may or may not use in your favor when planning a solution.
Code reuse
In general, if you can reuse existing code, your solution will be simpler and more concise. This is not always true (e.g. if the existing code requires a multitude of parameters and configuration, and you only intend to use a tiny subset of its functionality), but as a rule it will be. And even if the full solution contains more code overall, there is a clear separation of responsibilities between your logic and each of your dependencies.
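As a small illustrative sketch (the function names here are hypothetical), compare a hand-rolled word counter with one that leans on the environment's built-ins:

```javascript
// Hand-rolled: you own every line, including the edge cases
// (e.g. keys that collide with Object.prototype properties).
function countOccurrences(words) {
  const counts = {};
  for (let i = 0; i < words.length; i++) {
    const w = words[i];
    if (Object.prototype.hasOwnProperty.call(counts, w)) {
      counts[w] = counts[w] + 1;
    } else {
      counts[w] = 1;
    }
  }
  return counts;
}

// Reusing the environment (built-in Map and Array.prototype.reduce):
// shorter, and the tricky parts are the runtime's responsibility.
function countOccurrencesReuse(words) {
  return words.reduce(
    (m, w) => m.set(w, (m.get(w) ?? 0) + 1),
    new Map()
  );
}
```

The second version is not just shorter; it delegates the key-handling pitfalls to `Map`, which is exactly the "separation of responsibilities" mentioned above.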
So to arrive at simple logic, the first step is to know your environment well: to know what resources are available and whether or not they apply to your case.
Performance
Contrary to common sense, less code does not necessarily mean better performance, even when external libraries are not involved. A classic example is sorting algorithms, where the simplest ones are usually the least efficient. To achieve the best performance, it is often necessary to complicate things, to exploit peculiarities of the domain, and to abandon the "purity" of the conceptual model in favor of the solution most appropriate to the context.
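A sketch of this trade-off: insertion sort is short and conceptually pure but O(n²), while counting sort needs more code and only works by exploiting a peculiarity of the domain (values are small non-negative integers), yet runs in O(n + k):

```javascript
// Insertion sort: conceptually simple, few lines, O(n^2) comparisons.
function insertionSort(a) {
  const out = a.slice();
  for (let i = 1; i < out.length; i++) {
    const x = out[i];
    let j = i - 1;
    while (j >= 0 && out[j] > x) {
      out[j + 1] = out[j];
      j--;
    }
    out[j + 1] = x;
  }
  return out;
}

// Counting sort: more code, and it only applies when the inputs are
// small non-negative integers (a peculiarity of the domain),
// but it sorts in O(n + k) instead of O(n^2).
function countingSort(a, maxValue) {
  const counts = new Array(maxValue + 1).fill(0);
  for (const x of a) counts[x]++;
  const out = [];
  for (let v = 0; v <= maxValue; v++) {
    for (let c = 0; c < counts[v]; c++) out.push(v);
  }
  return out;
}
```

The faster version is longer, less general, and further from the abstract notion of "sorting by comparison"; that is exactly the point.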
The question is: is this really necessary? In the vast majority of cases, the answer is no: you do not need your program's performance to be the best it can be. Taking the example Daniel Omine cited, ok, loading a 400 kB library to save a few characters seems like a bad idea, but is it? First, the jQuery code is cleaner, easier to write, and has fewer browser-incompatibility problems. Second, JavaScript code is often small, and activated in response to user action, so performance is not critical. Third, thanks to caching, you can load this library once and use it at several different points in your system.
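To make the "fewer incompatibility problems" point concrete, here is a sketch of what cross-browser event binding used to look like by hand, versus what a library like jQuery gives you (the element and handler names are hypothetical):

```javascript
// Old-style cross-browser event binding: the incompatibility
// handling lives in *your* code.
function addClickHandler(el, handler) {
  if (el.addEventListener) {
    // standards-compliant browsers
    el.addEventListener("click", handler, false);
  } else if (el.attachEvent) {
    // old Internet Explorer
    el.attachEvent("onclick", handler);
  }
}

// With jQuery, the same intent is a single line, and the library
// absorbs the browser differences for you:
//   $("#save").on("click", handler);
```

Multiply this by every event, every DOM query and every AJAX call, and the 400 kB starts to pay for itself in code you no longer write or maintain.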
That is, if* by sacrificing a little performance you can have simpler, more concise code, it is worth considering that possibility.
* "If", and only if: when your code serves as a basis for many other systems (e.g. you are developing a library that may be used in heavy calculations), it is worth sacrificing simplicity in the name of performance.
Level of abstraction
Finally, to have concise code it is necessary to program at the right level of abstraction. If you need to read a data structure from a file, do something with it, and save the results to another file, it is not good to mix different responsibilities in the same piece of code (e.g. opening the file and reading its bytes; interpreting those bytes as data types of the language; putting that data into the structure you are going to use; using the structure).
Often this means reusing existing code, as discussed earlier. Other times, it is a matter of breaking your problem into smaller problems (i.e. "divide and conquer", as mentioned by Zanoldor), solving each of them and then composing the final solution from the individual solutions.
In other words, the total code you write may even be large, but each individual function or module should preferably be concise.
Conclusion
Good logic does not necessarily fit in a few lines of code, but extended code can be a sign that there are problems with your logic (where there's smoke, there's fire). It may be a case of reinventing the wheel, of premature optimization, or of programming at the wrong level of abstraction. In another question I gave an answer detailing the question of code concision a little more, with the caveat that there are many cases where a good algorithm does require rather extensive code.