The varchar data type stores non-Unicode characters; nvarchar, in contrast, works with Unicode characters.
What you have to take into account is the amount of storage each data type uses.
VARCHAR will store the declared length, plus 2 bytes. For example, a VARCHAR(10) field will store a maximum of 10 bytes + 2 bytes. These two extra bytes are the overhead of being a variable-length data type.
NVARCHAR will occupy twice the space, plus the same 2 bytes of overhead. So in the same example, an NVARCHAR(10) field will occupy up to 20 bytes + 2 bytes.
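The arithmetic above can be sketched with a pair of hypothetical helper functions (these are illustrative only, not a SQL Server API; they assume a single-byte collation for VARCHAR and 2 bytes per character for NVARCHAR):

```python
def varchar_storage(s: str, declared: int) -> int:
    # VARCHAR: 1 byte per character (single-byte collation) + 2 bytes of length overhead
    assert len(s) <= declared, "string exceeds declared column length"
    return len(s) + 2

def nvarchar_storage(s: str, declared: int) -> int:
    # NVARCHAR: 2 bytes per character + the same 2 bytes of length overhead
    assert len(s) <= declared, "string exceeds declared column length"
    return 2 * len(s) + 2

print(varchar_storage("hello", 10))   # 7 bytes: 5 data + 2 overhead
print(nvarchar_storage("hello", 10))  # 12 bytes: 10 data + 2 overhead
```

Note that the declared length caps the maximum; actual storage is driven by the data, which is what makes these types "variable" in the first place.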
This can make a big difference to your storage requirements and should be taken into account.
Roughly speaking, in the CHAR and VARCHAR world each character occupies 1 byte. A byte is a set of 8 bits, and considering all the positions those bits can take (on and off) we get 256 combinations (2^8). This means that one byte is capable of representing 256 different values. For the English alphabet this is more than enough, and the same goes for the Latin alphabet in general.
The problem begins when we consider the Arabic, Asian, Greek, and other alphabets. In those cases, if we count all the possible letters and characters, we exceed the 256 combinations that 1 byte can represent. This is the situation NVARCHAR and NCHAR were made for: each character occupies 2 bytes. If one byte can express 256 combinations (2^8), two bytes can store 65,536 combinations (2^16). With that many combinations it is possible to represent virtually any character; the storage cost just becomes larger.
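You can see the 1-byte vs 2-byte difference outside SQL Server using Python's text encodings as a stand-in (latin-1 is a single-byte encoding, and UTF-16 uses 2 bytes per character for the common case, much like NVARCHAR):

```python
text = "Ωμέγα"  # a Greek word, 5 characters

# latin-1 is a single-byte encoding: it has no slots for Greek letters at all
try:
    text.encode("latin-1")
except UnicodeEncodeError as exc:
    print("latin-1 cannot encode this text:", exc.reason)

# UTF-16 spends 2 bytes per character (for characters in the Basic
# Multilingual Plane), so 5 characters become 10 bytes
encoded = text.encode("utf-16-le")
print(len(text), "characters ->", len(encoded), "bytes")  # 5 characters -> 10 bytes
```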
If you use the CHAR and VARCHAR types, the universe of available characters is restricted to the collation you have chosen. If you try to store a character that is not covered by that collation, it will be converted to some approximate character. If you choose NCHAR and NVARCHAR, this limitation does not occur.
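As an analogy for that collation behavior (not SQL Server itself), encoding mixed-script text with a single-byte code page in Python shows the same kind of lossy substitution: characters the code page lacks come out as a placeholder, while covered characters survive:

```python
text = "São Tomé и Κρήτη"  # Latin (with accents), Cyrillic, and Greek mixed

# Forcing the text into a single-byte encoding replaces anything the code
# page cannot represent with '?' -- analogous to what happens when a
# VARCHAR column's collation does not cover a character.
lossy = text.encode("latin-1", errors="replace").decode("latin-1")
print(lossy)  # the Latin accents survive; Cyrillic and Greek become '?'
```

Once the substitution happens, the original characters are gone for good, which is why NVARCHAR is the safer choice for multilingual data despite the extra storage.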