In fact, as far as I know, the term has no specific definition in computing; it comes from science itself, and we do practice some kind of science. Probably not inventing new things, but applying already-invented science, which we more often call engineering.
Since I am not a specialist in science and do not know how the term is used there, I cannot say whether the cited example counts as a false positive; to me it is simply an error, though I will not guarantee that. I consider it an error because the requirement is to flag the data as invalid when it contains no useful information: a lone space is useless information in most cases and seems useless in this one. So either the requirement was poorly captured, or the implementation does not follow the requirement correctly.
False positive is a term from statistics, specifically from the part dealing with inference. It occurs when a result can be misinterpreted. If you can declare the false positive to be wrong, then it is simply an error. The same goes for the false negative. It remains a false result only when it is an acceptable outcome at some level of analysis. I do not think the term applies unless we are doing statistical inference.
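To make the four inference outcomes concrete, here is a minimal sketch (the function and its names are mine, not from any standard library) that labels a prediction against the ground truth:

```python
# Classify each (prediction, truth) pair into one of the four
# inference outcomes: true/false positive/negative.
def outcome(predicted: bool, actual: bool) -> str:
    if predicted and actual:
        return "true positive"
    if predicted and not actual:
        return "false positive"   # flagged, but actually fine
    if not predicted and actual:
        return "false negative"   # missed a real case
    return "true negative"

print(outcome(True, False))  # → false positive
```

The point is that "false positive" only makes sense when a prediction is being compared against a truth it might miss; if the result can simply be declared wrong, it is an error, not a false positive.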
On SOen there is a question on the subject with a context in which the term does seem valid: a compiler warning can be wrong. That is exactly why it is a warning and not an error. This is usually not desirable, but it is acceptable: the compiler does its best to give you correct information, but since it is known that it cannot always get it right, it issues an alert that may turn out to be wrong.
The same is true of spam filters and countless other domains that depend on statistics, including all of artificial intelligence. So we use the term more in the third-party domains we work on than in our own, unless we are building something for our own domain, which is the case of a compiler.
Generally we accept this possibility when having a false positive brings more benefit than having zero information. The only way to eliminate false positives entirely is to consider everything negative. When we work with statistics rather than exact mathematics, every result may turn out to be a false negative or a false positive; it may even be accurate, and for some problems it is possible to tell whether it is or not.
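As a toy illustration of that trade-off (the scores and thresholds here are invented, not from any real filter), a score-based classifier such as a spam filter turns everything negative once the threshold is set too high, eliminating false positives along with all useful information:

```python
# Toy score-based classifier: flag an item as positive (e.g. spam)
# when its score reaches the threshold.
def classify(scores, threshold):
    return [s >= threshold for s in scores]

scores = [0.2, 0.7, 0.95]
print(classify(scores, 0.5))  # some positives, any of which may be wrong
print(classify(scores, 1.1))  # everything negative: no false positives,
                              # but also no information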
I do not remember the details now, but there is a data structure used in indexes, the Bloom filter, that can guarantee a key is *not* in a database but cannot guarantee that it *is*. There is something around, I do not know, a 95% to 99.9% chance of being right; in the cases where it cannot give the guarantee, you have to do another lookup with another, less efficient method. Since in most cases it hits and does give the guarantee, the gain can be enormous, because few lookups force the second algorithm. And it takes up very little space.
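A minimal sketch of the idea (the sizes and the hash scheme are illustrative, not tuned for any real workload): membership tests can yield false positives, but never false negatives.

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: "no" is definitive, "yes" is only probable."""

    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = [False] * size

    def _positions(self, key: str):
        # Derive several bit positions per key from salted SHA-256 digests.
        for i in range(self.hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, key: str):
        for p in self._positions(key):
            self.bits[p] = True

    def might_contain(self, key: str) -> bool:
        # False means "definitely absent"; True means "probably present",
        # so a slower exact lookup is still needed to confirm.
        return all(self.bits[p] for p in self._positions(key))

bf = BloomFilter()
bf.add("alice")
print(bf.might_contain("alice"))  # → True (no false negatives)
```

When `might_contain` returns True for a key that was never added, that is precisely a false positive being accepted in exchange for tiny space usage and fast lookups.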
For all these reasons I would call the cited problem a software development failure. With more information I could be more specific and perhaps call it a programming error.