When I test the API, should I re-test the cases that have already been tested in the service?
No, you should not.
The problem is not redundancy in the sense that the same code ends up being executed by tests at different levels. The problems are the following:
See, throwing an exception when it cannot save a Validacao is a rule of the service, so it is fine to test that rule when testing the service. But you should not test that rule in the layer above (the "API"), because it is not a rule of that layer.
Even if we simplify and replace the term "layers" with "objects": in your test you are checking an object against a rule that is the responsibility of another object, which is why you got that strange feeling of testing the same thing twice.
Your API is not doing much, so it is hard to decide what to test there. In cases like this I generally would not test anything: if I do not know what to test, why would I write a test?
If, on the other hand, the project requires 100% test coverage, you can test whether the API returns a Validacao object with its properties filled in as expected. That way you cover the API's lines without testing anything beyond its responsibility, and you keep the 100% coverage.
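For example, a minimal sketch of such a happy-path test, in JUnit 4 style. The names ValidacaoApi, salvar() and getDado() are hypothetical stand-ins for the code in the question, not its actual API:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNotNull;
import org.junit.Test;

public class ValidacaoApiTest {

    @Test
    public void salvarDeveRetornarValidacaoPreenchida() {
        ValidacaoApi api = new ValidacaoApi();

        // Exercise only the API's own responsibility: it returns the
        // Validacao with the expected properties filled in. The service's
        // validation rules are not asserted here.
        Validacao resultado = api.salvar("algum dado");

        assertNotNull(resultado);
        assertEquals("algum dado", resultado.getDado());
    }
}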
Other notes about your code - Exceptions
In ValidacaoDAO.save() you are returning null as an error code. Null is a bad error code.
If a method was unable to do its job (it could not accomplish what its name suggests) it should either throw an exception or return an error code (if your design decision is to work with error codes instead of exceptions).
The semantics of null is "unknown value" or "not found", and that by itself does not indicate an error (the consumer may still decide that it is an error, given the context).
It can be useful to return null from a find method, for example, to indicate that what was searched for was not found; the consumer code then decides what to do, such as doing nothing, or throwing an exception if, in that context, the item should have been there.
Anyway, I do not think you are interested in error codes, since you are in fact throwing an exception when you detect a null. In that case, instead of returning null, the ValidacaoDAO.save() method should itself throw the exception if it cannot do its job. Or it should not explicitly throw any exception at all and simply let any exception that prevents it from doing its work propagate.
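A minimal sketch of both ideas, assuming a hypothetical in-memory ValidacaoDAO and a Validacao entity with a getId() method (your real class would talk to the database, and IllegalArgumentException is just an illustrative choice of exception):

import java.util.HashMap;
import java.util.Map;

public class ValidacaoDAO {

    private final Map<Long, Validacao> registros = new HashMap<>();

    // save() either does its job or throws; it never returns null as an "error code".
    public void save(Validacao validacao) {
        if (validacao == null) {
            throw new IllegalArgumentException("Não é possível salvar uma Validacao nula");
        }
        registros.put(validacao.getId(), validacao);
    }

    // A find-style method is where null ("not found") makes sense;
    // the consumer decides whether that is an error in its context.
    public Validacao findById(long id) {
        return registros.get(id);
    }
}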
Conclusion
Inevitably, when testing an upper layer, the rules of the lower layers will come into play. But a test should explicitly check only the rules of the layer or object it is testing, not the specific rules of the layers below, which in principle it does not know about. Example:
API:
void facaAlgo() {
    if (condicaoRuim_X) {
        throw new ExceptionA("As condições estavam desfavoráveis na API");
    }
    Service.facaAlgo();
}
Service:
void facaAlgo() {
    if (condicaoRuim_Y) {
        throw new ExceptionB("As condições estavam desfavoráveis no Service");
    }
}
Now, when testing the Service I check for ExceptionB, and when testing the API I check for ExceptionA.
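In test form, that could look roughly like this (JUnit 4 style; the setup that makes condicaoRuim_X or condicaoRuim_Y true, and the api / service instances, are assumed and omitted):

@Test(expected = ExceptionB.class)
public void facaAlgoDeveLancarExceptionBNoService() {
    // Service test: asserts only the Service's own rule.
    service.facaAlgo();
}

@Test(expected = ExceptionA.class)
public void facaAlgoDeveLancarExceptionANaApi() {
    // API test: asserts only the API's own rule; ExceptionB is
    // the Service's business and is not checked here.
    api.facaAlgo();
}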
If there were no logic at all in the API, then I could:
- either not test this API method;
- or test only the happy path, validating the results when everything works out, ignoring the specific rules of the layers below.
As for mocks, use them as little as possible. They take a lot of work and can make our lives miserable, which is the opposite of the goal of automated testing. Here is a bit more about mocks and other "tricks":