It is neither good nor bad to use wildcards (*). It depends on your goal.
-
I want to be a good programmer (probably your case)
A good programmer knows whether they will need all the columns of a row (for example, in a "show all data" style listing) or only a few of them (in a subquery, for example), and will select ONLY the ones they need.
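As a minimal sketch of that idea (the tables and columns here are hypothetical, just for illustration):

```sql
-- "Show all data" style listing: every column is wanted, so the wildcard is reasonable.
SELECT * FROM customers;

-- Only a few columns are needed, so name them explicitly
-- (here, only id and name, filtered through a subquery).
SELECT id, name
FROM customers
WHERE id IN (SELECT customer_id FROM orders);
```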
As a good programmer, you also know that columns will be added to your tables over time, and you know that the only way to keep your system working without code changes when that happens is to use the wildcard (*). A framework usually uses the DB metadata to find out which columns exist, or even keeps that information in its configuration. Either way, this information ends up cached by someone, whether on the host running the application (PHP, for example) or on the DB host. Someone always has to do that work.
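For example, one way to discover which columns exist is to query the standard INFORMATION_SCHEMA, which databases such as MySQL, PostgreSQL and SQL Server expose (the table name below is hypothetical):

```sql
-- List every column of a given table, as a framework might do
-- when it inspects the DB metadata.
SELECT column_name, data_type
FROM information_schema.columns
WHERE table_name = 'customers';
```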
But in joins, where we combine different tables, columns with the same name may appear (now or in the future), and the best approach there is to specify the columns one by one, giving aliases ("nicknames") to the columns that share a name.
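A sketch of that, assuming hypothetical orders, customers and products tables where both customers and products have a name column:

```sql
-- Both tables have a "name" column; listing the columns explicitly
-- and giving them aliases avoids the ambiguity.
SELECT c.id,
       c.name AS customer_name,
       p.name AS product_name,
       o.total
FROM orders o
JOIN customers c ON c.id = o.customer_id
JOIN products  p ON p.id = o.product_id;
```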
Maybe you are the person responsible for the project's budget, or you are simply aware that you should minimize the money spent on the DB. Some servers limit traffic in terms of the number of queries, others limit the amount of data transferred. In general, you will only need to worry about (*) when you cannot afford to move a lot of data, or when the requirements-gathering step (the stage prior to programming) has established that there will be many records in the DB or heavy network usage.
Ah, and if there is heavy network traffic, the number of queries also becomes a problem: you can save a lot on the data transferred, but there is not much you can do about multiple/concurrent access. (At that point you get into CAP/ACID theory, but that is beyond the scope of basic programming and, in general, no longer involves SQL DBMSs.)
The price of Internet bandwidth is falling, and so are the prices of DB servers; so you will worry less and less about how your system accesses the DB and more about how it presents itself to the user.
-
I want to keep my client (or my boss / my company) in my hands (you can't)
Well, in the old days some programmers acted in bad faith, ignoring programming principles in order to "keep the job / the customer." They thought that making changes to the system difficult was the way to do that, and so they would never use the wildcard. But that has changed a lot: what matters is the data in the DB, not the system that uses it.
If you study a little, you will see that it is easy to find out everything about a DB you did not create, and from that you will know how the system should work. This is called reverse engineering. There is also, by watching the system run, reengineering, which is making one system emulate another.
In other words, there is no longer any dependency on the guy who wrote queries without the wildcard; you will simply be fired, or you will tarnish your reputation in the market with other clients.
Summarizing: there is no basis for this warning nowadays (back in the early days of computing, yes, there was), and you will probably always use the wildcard, except in joins, where columns with the same name may appear in different tables and handling that is more difficult.
I think the people who downvoted did not understand. Everything depends on the goal, and the goal can change. Today I deliver the system to the client and it is 100% suited to what was asked. But tomorrow it may not be (the customer calls asking for maintenance). If we always design "a system prepared for the future", we end up stuffing the solution with theory, creating large, overly complex software that earns you a negative review today. Robust software should be used for problems that require robustness, not for cases where the system will most likely never have larger requirements.