Some time ago (2013? 2012?) I developed a system very similar to the one in the question. I had a few million objects and a dozen tags, and each object could have zero or more tags associated with it. I had to filter these objects by sets of tags when searching. Similar, no?
Since the number of tags per object was, in my case, theoretically unlimited (new tags could be added to the system after it went live), the solution proposed in the question did not work for me, i.e. I could not use a table with columns "tag1", "tag2", etc. Besides, that scheme does not allow proper indexing (see my first comment on @mgibsonbr's answer).
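To illustrate why, a fixed-column scheme forces every query to OR across all of the tag columns, so no single index can serve the search. A hypothetical sketch (table and column names are my own, just for illustration):

-- Hypothetical fixed-column scheme that does not scale:
-- every new tag requires changing the schema, and the query
-- has to OR across all columns, defeating any single index
select objectId
from objects
where tag1 = 'searched-tag-1'
   or tag2 = 'searched-tag-1'
   or tag3 = 'searched-tag-1'; -- ...and so on for every tag column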
Since I needed a lot of performance (queries answered in "seconds"), at the time I compared several solutions, including the two proposed by @mgibsonbr in their answer.
Below are my results, if memory serves!
Trade-offs
The solution "1." of @mgibsonbr has the disadvantage of possibly taking up a lot of disk space (since you will have the "characters" of the repeated tags countless times in the whole table). This is a disadvantage because it forces your database to have to read many "pages" of your hard disk, so you have to turn the disk a lot and move the read head very much, which can have great latency. The advantage is that you only do 1 select to get your result.
Already the solution "2." of @mgibsonbr uses less disk space (because in the giant table only the ids of the tags will be saved - and if you use the numeric type of size appropriate to the maximum number of tags, you can reduce to 4, 2 or even even 1 byte per line). Thus, you can read more lines per page read from the disk reducing latency. In contrast, your select would probably have a join:
select tags_objects.*
from tags_objects, tags
where tags.id = tags_objects.tagId
  and (tags.name = 'searched-tag-1'
       or tags.name = 'searched-tag-2') -- etc...
This join is the culprit for the performance problems of this solution.
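For context, the two tables behind that join would look roughly like this; the column names follow the query above, and the exact types are my assumption.

-- Hypothetical schema for solution "2.": the giant table stores only numeric ids
create table tags (
    id   smallint     not null primary key,  -- sized for the maximum number of tags
    name varchar(100) not null
);

create table tags_objects (
    objectId bigint   not null,
    tagId    smallint not null  -- 4, 2 or even 1 byte per row, depending on the type
);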
More efficient solution (in my specific use case)
In the end, the most efficient solution I found was to use solution "2." with two separate selects. The first select fetches the ids of the tags, and the second select uses those ids against the giant table. It is as if I had done the join "manually", as sketched below.
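A minimal sketch of the two-step approach, assuming the hypothetical schema above:

-- Step 1: resolve the tag names to their numeric ids (tiny table, fast)
select id
from tags
where name in ('searched-tag-1', 'searched-tag-2');

-- Step 2: filter the giant table using only the ids returned by step 1
-- (here assuming step 1 returned ids 3 and 7)
select objectId
from tags_objects
where tagId in (3, 7);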
This was advantageous for me because, in my case, I could cache the ids of the tags in my application. The cache was refreshed by a background thread (doing a "full scan" on the tiny table that holds the tags and their ids every "X" seconds). In practical terms, the "synchronous" part of the query ended up being just a select on the giant table filtering on the numeric "tagId" column, which is much cheaper than doing the join.
Obviously, for performance reasons, you need to put an index on the "tagId" column of the giant table.
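Something along these lines, again using the hypothetical table name from the sketches above:

-- Index the giant table on the tag id so step 2 does not scan every row
create index idx_tags_objects_tagId on tags_objects (tagId);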
Before implementing this solution, my queries took ~1 min to ~2 min with, if I remember correctly, 5 tags. After all this, I managed to bring the query time down to around ~10 s!
Considerations
It is quite hard to predict beforehand which solution will perform best in a case like this, because it really depends on the characteristics of your project. I hope this answer gives you some guidance in your search for the most efficient solution for your specific case.