Everyone adopts their own technique for doing a table test. I'll describe it more or less the way I do it.
I create a table with the declared variables, one per column. Whenever a new assignment is made, I write the variable's new value in a new row of its column. Some people like to scratch out the previous value to avoid confusion, leaving only the last one visible, since that is the one that counts.
It is also possible to have columns for subexpressions, which may help to find errors in them.
Then, starting from the third row, I begin filling in the column for `n1`, noting down a value I want to test. Next, in the `n2` column, I write down the result of the value of `n1` (it's always the most recent value available in the column) times 3. Then I do the same for `n3`, taking the last value of `n1`, minus 1, plus the last value of `n2`.
Then I note separately what should appear on the screen (a sheet used only for the screen output); in this case, `O resultado final será n3=` ("the final result will be n3=") followed by the last value of `n3`.
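The walkthrough above can be simulated in code. This is a minimal sketch, not the actual program being desk-checked: I'm assuming, from the description, that it reads `n1`, computes `n2 = n1 * 3`, then `n3 = n1 - 1 + n2`, and prints the message.

```python
def desk_check(n1):
    """Simulate the table test: record each assignment as a new row."""
    rows = []  # each row: (variable name, new value written in its column)

    # Third row onward: the value of n1 chosen for this test run.
    rows.append(("n1", n1))

    # n2 = last value of n1 times 3 (assumed from the description).
    n2 = n1 * 3
    rows.append(("n2", n2))

    # n3 = last value of n1 minus 1 plus last value of n2.
    n3 = n1 - 1 + n2
    rows.append(("n3", n3))

    # What should appear on the screen, noted separately.
    screen = f"O resultado final será n3={n3}"
    return rows, screen

rows, screen = desk_check(5)
for var, value in rows:
    print(f"{var} = {value}")
print("screen:", screen)
```

Running it with `5` produces the rows `n1 = 5`, `n2 = 15`, `n3 = 19` and the screen line `O resultado final será n3=19`, which is exactly what the paper table would show.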
If it were something more complex, I would just keep going the same way. If there were a loop, I would keep annotating a row for each iteration.
I would test with several values: 0, positive numbers, negative numbers, some large numbers; I would try entering text; in short, I would try to create situations that make the algorithm fail.
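Those test cases can be driven mechanically. The sketch below uses the same assumed arithmetic (`n2 = n1 * 3`, `n3 = n1 - 1 + n2`) and runs it over edge-case inputs, including a text value that should blow up, which is exactly the kind of error situation the table test tries to provoke:

```python
def compute(n1):
    # Assumed arithmetic from the walkthrough: n2 = n1 * 3, n3 = n1 - 1 + n2.
    n2 = n1 * 3
    return n1 - 1 + n2

# Values covering zero, positive, negative, and a large number.
for value in [0, 7, -4, 10**9]:
    print(value, "->", compute(value))

# Trying text input: "abc" * 3 is legal in Python ("abcabcabc"),
# but "abc" - 1 is not, so the error surfaces on the second step.
try:
    compute("abc")
except TypeError as e:
    print("text input raised:", type(e).__name__)
```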
For a table test to stay easy, it's important that the algorithm not be too large.
In some cases there may be specific details about how to proceed.
Some people like to create a row of data for every line or subexpression executed, copying the previous data over. I find it excessive, but it can be useful in some cases. I only write something down when the state changes.
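The "one row per executed line" variant can be sketched like this (same assumed arithmetic as above): the whole state is copied forward on every step, so unchanged values repeat from row to row instead of each column only recording changes.

```python
def desk_check_per_line(n1):
    """One snapshot of ALL variables after each executed line."""
    state = {"n1": n1, "n2": None, "n3": None}
    history = [dict(state)]              # row after reading n1

    state["n2"] = state["n1"] * 3        # assumed: n2 = n1 * 3
    history.append(dict(state))          # row after the second line

    state["n3"] = state["n1"] - 1 + state["n2"]  # assumed: n3 = n1 - 1 + n2
    history.append(dict(state))          # row after the third line

    return history

for row in desk_check_per_line(2):
    print(row)
```

Copying `dict(state)` at each step is what makes every row an independent snapshot; appending `state` itself would make all rows show only the final values.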
Powerful debugging tools have by now almost done away with the table test: