Develop integration tests correctly


We are writing several tests for our application: there are unit tests, and we are now starting on integration tests.
All communication with the DAO is mocked, but when I test the API, should I test the cases that have already been tested in the service?

For example:
API

@RequestMapping(value = "", method = RequestMethod.POST)
public Validacao create() {
    Validacao validacao = new Validacao()
            .setDataAtualizacao(DateTime.nowISODate())
            .setDataCriacao(DateTime.nowISODate())
            .setEmail(activeUser.getUser().getEmail())
            .setInstancia(activeUser.getInstancia().getInstancia());

    validacaoService.create(validacao);

    return validacao;
}

Service

public Validacao create(Validacao validacao) {
    validacao = validacaoDAO.save(validacao);
    if (validacao == null) {
        throw new InternalServerErrorException("An error occurred while creating the validation.");
    }
    return validacao;
}
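A unit test for that null case might look like the following self-contained sketch. It uses a hand-rolled stub DAO instead of a mocking framework, and the class and method names are assumed from the snippets above, so treat it as illustrative rather than the actual project code:

```java
// Minimal stand-ins for the classes in the question (names assumed).
class Validacao {}

class InternalServerErrorException extends RuntimeException {
    InternalServerErrorException(String msg) { super(msg); }
}

interface ValidacaoDAO {
    Validacao save(Validacao v);
}

class ValidacaoService {
    private final ValidacaoDAO dao;
    ValidacaoService(ValidacaoDAO dao) { this.dao = dao; }

    Validacao create(Validacao v) {
        Validacao saved = dao.save(v);
        if (saved == null) {
            throw new InternalServerErrorException("An error occurred while creating the validation.");
        }
        return saved;
    }
}

public class ValidacaoServiceTest {
    // True when the service throws on a null result from the DAO.
    public static boolean throwsWhenDaoReturnsNull() {
        // Stub DAO: simulates a failed save by returning null.
        ValidacaoService service = new ValidacaoService(v -> null);
        try {
            service.create(new Validacao());
            return false; // no exception thrown: the test would fail
        } catch (InternalServerErrorException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(throwsWhenDaoReturnsNull() ? "PASS" : "FAIL");
    }
}
```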

In the Service unit test there is already a test case for when the validation is null. Should this case be repeated in the integration test? Isn't that redundant?
There are also cases where my Service uses another Service: should that dependency be mocked, or should a new instance be created?

asked by anonymous 01.06.2015 / 13:43

2 answers


When I test the API, should I re-test the cases that have already been tested in the service?

No, you should not.

The problem is not redundancy in the sense that the same code is run by tests at different levels. The problems are:

  • You are explicitly testing the same rule twice.

  • You are testing on one system layer the rules of another layer.

Throwing an exception when a Validacao cannot be saved is a service rule, so it is fine to test that rule when testing the service. But you should not test it in the layer above (the "API"), because it is not a rule of that layer.

Even if we simplify and replace "layers" with "objects": in your test you would be checking one object against a rule that is the responsibility of another object, which is why you got that strange feeling of testing the same thing twice.

Your API method does very little, so it is hard to decide what to test there. In cases like this I generally would not test anything: if I do not know what to test, why write a test?

If, on the other hand, the project requires 100% test coverage, you can test whether the API returns a Validacao object with its properties filled in as expected. That way you cover the API's lines without testing anything outside its responsibility, and you keep the coverage at 100%.
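Such a happy-path test for the API layer can be sketched in plain Java (in a real Spring project this would more likely use MockMvc; the names and the simplified controller below are assumptions for illustration). The point is that the stubbed service applies no rules, so only the controller's own responsibility is checked:

```java
// Minimal stand-ins for the classes in the question (names assumed).
class Validacao {
    String email;
    Validacao setEmail(String e) { this.email = e; return this; }
}

interface ValidacaoService {
    Validacao create(Validacao v);
}

// Simplified stand-in for the controller in the question: it only
// fills in the fields and delegates to the service.
class ValidacaoController {
    private final ValidacaoService service;
    private final String activeUserEmail;

    ValidacaoController(ValidacaoService service, String activeUserEmail) {
        this.service = service;
        this.activeUserEmail = activeUserEmail;
    }

    Validacao create() {
        Validacao v = new Validacao().setEmail(activeUserEmail);
        service.create(v); // the service's rules are NOT re-tested here
        return v;
    }
}

public class ValidacaoControllerTest {
    // Happy path: the controller returns a Validacao filled from the active user.
    public static boolean fillsEmailFromActiveUser() {
        // Stub service: returns its argument unchanged, no rules checked.
        ValidacaoController controller =
                new ValidacaoController(v -> v, "user@example.com");
        return "user@example.com".equals(controller.create().email);
    }

    public static void main(String[] args) {
        System.out.println(fillsEmailFromActiveUser() ? "PASS" : "FAIL");
    }
}
```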

Other notes about your code - Exceptions

In ValidacaoDAO.save() you are returning null as an error code. Null is a bad error code.


If a method was unable to do its job (it could not accomplish what its name promises), it should either throw an exception or return an error code (if your design decision is to work with error codes instead of exceptions).

The semantics of null is "unknown value" or "not found", and that by itself does not suggest an error (the consumer may eventually decide that it is an error, given the context).

It may be useful to return null from a find method, for example, to indicate that what was searched for was not found; the consumer code then decides what to do, for example doing nothing, or throwing an exception if, in the given context, the item should have been there.

Anyway, I do not think you are interested in error codes, since you actually throw an exception when you detect the null. In that case, instead of returning null, the ValidacaoDAO.save() method should itself throw the exception when it cannot do its job. Or it should not explicitly throw anything and simply let any exception that prevents it from doing its work propagate.
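The suggested change can be sketched as follows. PersistenceException and the storeAvailable flag are illustrative assumptions (the real exception type depends on your persistence stack); the point is that save() fails loudly instead of returning null, so callers no longer need a null check:

```java
// Minimal stand-ins (names assumed for illustration).
class Validacao {}

class PersistenceException extends RuntimeException {
    PersistenceException(String msg) { super(msg); }
}

class ValidacaoDAO {
    private final boolean storeAvailable; // simulates the underlying store

    ValidacaoDAO(boolean storeAvailable) { this.storeAvailable = storeAvailable; }

    Validacao save(Validacao v) {
        if (!storeAvailable) {
            // Throw here instead of returning null as an error code.
            throw new PersistenceException("Could not save Validacao.");
        }
        return v;
    }
}

public class FailFastDaoDemo {
    // A working store returns the saved entity, never null.
    public static boolean successfulSaveReturnsEntity() {
        return new ValidacaoDAO(true).save(new Validacao()) != null;
    }

    // A broken store makes save() throw instead of returning null.
    public static boolean failingSaveThrows() {
        try {
            new ValidacaoDAO(false).save(new Validacao());
            return false;
        } catch (PersistenceException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        boolean ok = successfulSaveReturnsEntity() && failingSaveThrows();
        System.out.println(ok ? "PASS" : "FAIL");
    }
}
```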

Conclusion

Inevitably, when testing an upper layer, the rules of the lower layers will take effect. But a test should explicitly check only the rules of the layer or object it is testing; it should not check the specific rules of the layers below, which in principle it does not know about. Example:

API:

void facaAlgo() {
    if (condicaoRuim_X) {
        throw new ExceptionA("Conditions were unfavorable in the API");
    }
    Service.facaAlgo();
}

Service:

void facaAlgo() {
    if (condicaoRuim_Y) {
        throw new ExceptionB("Conditions were unfavorable in the Service");
    }
}

Now, when testing the service I check for ExceptionB, and when testing the API I check for ExceptionA.

If there were no logic in the API, then I could:

  • not test this API method at all; or
  • test only the happy path, validating the results when everything works, ignoring the specific rules of the layers below.
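The two layer-specific tests described above can be sketched as a self-contained example (the condicaoRuim flags are modeled as static booleans purely to keep the sketch runnable). Each test triggers and checks only the exception of its own layer:

```java
// One exception per layer, as in the example above.
class ExceptionA extends RuntimeException { ExceptionA(String m) { super(m); } }
class ExceptionB extends RuntimeException { ExceptionB(String m) { super(m); } }

class Service {
    static boolean condicaoRuim_Y = false;

    static void facaAlgo() {
        if (condicaoRuim_Y) {
            throw new ExceptionB("Conditions were unfavorable in the Service");
        }
    }
}

class Api {
    static boolean condicaoRuim_X = false;

    static void facaAlgo() {
        if (condicaoRuim_X) {
            throw new ExceptionA("Conditions were unfavorable in the API");
        }
        Service.facaAlgo();
    }
}

public class LayerTests {
    // API test: checks only the API's own rule (ExceptionA).
    public static boolean apiTestChecksExceptionA() {
        Api.condicaoRuim_X = true;
        try {
            Api.facaAlgo();
            return false;
        } catch (ExceptionA expected) {
            return true;
        } finally {
            Api.condicaoRuim_X = false;
        }
    }

    // Service test: checks only the service's own rule (ExceptionB).
    public static boolean serviceTestChecksExceptionB() {
        Service.condicaoRuim_Y = true;
        try {
            Service.facaAlgo();
            return false;
        } catch (ExceptionB expected) {
            return true;
        } finally {
            Service.condicaoRuim_Y = false;
        }
    }

    public static void main(String[] args) {
        boolean ok = apiTestChecksExceptionA() && serviceTestChecksExceptionB();
        System.out.println(ok ? "PASS" : "FAIL");
    }
}
```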

As for mocks, use as few as possible. They take a lot of work and can make our lives miserable, which is the opposite of the goal of automated testing. The same goes for other kinds of test doubles.

02.06.2015 / 18:47

Do not worry about redundancy. Integration tests exist to test how the various components of your system work together.

Especially the failure cases. In fact, integration tests are the perfect place to see what happens when an exception pops up three or more layers below the service being tested.

Persistence and access to external services (SOAP, REST, JMS, etc.) can be mocked, but in integration tests I prefer to use as much of the real services as I can.

Finally, beware of academic purism. Do what works for your project, without fear of breaking the rules.

PS: If this is an assignment for your college course, your teacher will say that I am wrong and give you a low grade.

01.06.2015 / 14:13