A UUID (or even a GUID - one of its implementations) is essentially a number, between 0 and 2^128 - 1. Ideally, it should always be treated as a number, and above all compared as a number. Convert a UUID to a string only to show it to the user in a convenient way. However, once you start writing UUIDs out as text, things start to get complicated...
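To make the "it's just a number" point concrete, here is a minimal sketch using Python's standard `uuid` module (the specific UUID value is arbitrary, chosen only for illustration):

```python
import uuid

# The same 128-bit value, written two ways: as canonical text and as an integer.
u = uuid.UUID("12345678-1234-5678-1234-567812345678")
print(u.int)   # the underlying number, between 0 and 2^128 - 1
print(str(u))  # the textual form, useful only for display

# Compared as numbers, the case of the textual form is irrelevant:
a = uuid.UUID("ABCDEF00-0000-0000-0000-000000000000")
b = uuid.UUID("abcdef00-0000-0000-0000-000000000000")
assert a == b  # same number, so they are the same UUID
```

Note that equality here is numeric equality of `a.int` and `b.int`, which is exactly the comparison the text recommends.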
I do not know, historically speaking, whether the GUID came first and the UUID came along to standardize it, or whether it was the other way around. But the fact is that Microsoft not only uses uppercase letters in GUIDs but uses them inconsistently. This was probably done at a time when there was no standard or, as is typical of Microsoft, by simply bypassing the standard. And then it kept what it had, for compatibility. I do not know the situation with Apple.
If your system is going to receive a UUID in text format as input - from the user or from another system - I suggest handling all possible variations (all lowercase, all uppercase, mixed case, with or without surrounding braces {}, etc.) and always taking care with the text encoding. From there, work with the number itself internally (if feasible), and always make any comparison using numbers, not text. If you need to provide a UUID as output, use the standard format. That way there is a good chance that the consumer of the UUID will "understand" the format - whereas if you use something non-standard, it may be rejected or, even worse, duplicated (the system does not check for this, and saves the same UUID in two versions, one uppercase and one lowercase...).
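The normalization described above can be sketched with Python's standard `uuid` module, whose parser already tolerates the common textual variations; the helper name and the sample value are my own, hypothetical choices:

```python
import uuid

def parse_uuid(text: str) -> uuid.UUID:
    """Hypothetical helper: accept common textual variations of a UUID.

    uuid.UUID already tolerates mixed case, surrounding {} braces,
    a urn:uuid: prefix, and missing hyphens; it raises ValueError
    for anything it cannot interpret as a UUID.
    """
    return uuid.UUID(text.strip())

# Four spellings of the same 128-bit number:
variants = [
    "8E29B54A-C5F7-4C9A-9D2B-5F7E3A1B0C4D",
    "{8e29b54a-c5f7-4c9a-9d2b-5f7e3a1b0c4d}",
    "8e29b54ac5f74c9a9d2b5f7e3a1b0c4d",
    "urn:uuid:8e29b54a-c5f7-4c9a-9d2b-5f7e3a1b0c4d",
]
parsed = [parse_uuid(v) for v in variants]

# Compared as numbers, all four are equal...
assert len(set(parsed)) == 1
# ...and the output is always the standard lowercase, hyphenated form.
print(str(parsed[0]))  # 8e29b54a-c5f7-4c9a-9d2b-5f7e3a1b0c4d
```

Storing or comparing `parsed[0]` (or its `.int`) instead of the raw input string is what prevents the "same UUID saved twice, once uppercase and once lowercase" duplication mentioned above.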
At the end of the day, the "importance" of following the standard is precisely to avoid interoperability errors. Without knowing whether the systems that will interact with yours "normalize" UUIDs before sending / after receiving, you cannot guarantee anything. Ideally, check with each specific system how this is handled; failing that, following the standard is the path with the best chance of success.