The algorithm that @bruno suggested in the comments (Levenshtein distance) is a good way to measure the similarity of two strings. There is a somewhat more robust variant, called Damerau-Levenshtein, which also accounts for the transposition of two adjacent characters - that is, it covers some common typing mistakes.
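For reference, here is a minimal Python sketch of the plain Levenshtein distance (the Damerau-Levenshtein variant adds one more case to the `min` for adjacent transpositions); in practice you would likely use a ready-made library instead:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions and substitutions
    needed to turn string `a` into string `b`."""
    # prev[j] holds the distance between a[:i-1] and b[:j]
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # -> 3
```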
But I suggest rethinking the questionnaire design.
Fuzzy search and string-similarity measures make for a bad user experience in this case. Suppose we use the Levenshtein algorithm and decide that the user's answer may differ from the answer in the database by at most 10 characters.
What if my answer differs by 11 characters? Is it necessarily wrong? Why is an answer with 10 differing characters correct while mine is not?
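To make the problem concrete, here is a hypothetical acceptance rule (the `levenshtein` function is the sketch above, and the cutoff of 10 is an arbitrary assumption for illustration):

```python
MAX_DISTANCE = 10  # hypothetical cutoff - any value we pick is arbitrary

def is_accepted(user_answer: str, expected: str) -> bool:
    # An answer at distance 10 passes; one at distance 11 fails,
    # even though both may be equally right (or equally wrong).
    return levenshtein(user_answer, expected) <= MAX_DISTANCE
```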
Furthermore, these algorithms only tell us how many characters differ - not which ones, or what the difference means. I can add 15 characters to an answer without changing its meaning - but I can also add a single comma and radically change it.
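As a quick illustration (reusing the `levenshtein` sketch above, with example strings of my own), the distance is completely blind to meaning:

```python
# One comma: edit distance 1, but the meaning flips entirely.
print(levenshtein("Let's eat, grandma", "Let's eat grandma"))   # -> 1

# 26 extra characters, same meaning: a large distance.
print(levenshtein("yes", "yes, without any doubt at all"))      # -> 26
```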
It is for these reasons that most computerized questionnaires use multiple choice - and questionnaires with open-ended questions are usually graded by hand.