The BOM (byte order mark) was created to solve a UTF-16 problem (it also applies to UTF-32, although that format is rarely used for saving files).
Since each character in UTF-16 is composed of 2 bytes (or, in rarer cases, a pair of 2-byte units), those bytes can be ordered in two ways: byte 1 then byte 2, or byte 2 then byte 1 (at least nobody argues about the order of the bits...). So little-endian architectures prefer UTF-16LE (LE = little-endian), whose "byte 2, byte 1" order is the most natural for the processor, while big-endian architectures prefer UTF-16BE.
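A minimal sketch of the difference, using Python's built-in codecs (the character "A" is just an arbitrary example):

```python
# "A" is U+0041: one 16-bit unit, so two bytes whose order differs per variant.
print("A".encode("utf-16le").hex())  # 4100 -> low byte first (little-endian)
print("A".encode("utf-16be").hex())  # 0041 -> high byte first (big-endian)
```

Same character, same two bytes, opposite order on disk.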
To differentiate the two flavors of UTF-16, the BOM is placed at the beginning of the file. It is a character that cannot be confused with its byte-swapped "inverse", so when you read it you can determine the byte order of the rest of the file.
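This works because the BOM is U+FEFF, and its byte-swapped counterpart U+FFFE is a code point Unicode guarantees will never be assigned to a character. A sketch of the idea in Python (`detect_utf16_order` is a hypothetical helper name, not a standard API):

```python
# The BOM character encoded in each byte order:
print("\ufeff".encode("utf-16le").hex())  # fffe -> LE files start with FF FE
print("\ufeff".encode("utf-16be").hex())  # feff -> BE files start with FE FF

def detect_utf16_order(data: bytes) -> str:
    """Guess the byte order of UTF-16 data from its leading BOM, if any."""
    if data.startswith(b"\xff\xfe"):
        return "little-endian"
    if data.startswith(b"\xfe\xff"):
        return "big-endian"
    return "unknown"

print(detect_utf16_order(b"\xff\xfe" + "hi".encode("utf-16le")))  # little-endian
print(detect_utf16_order(b"\xfe\xff" + "hi".encode("utf-16be")))  # big-endian
```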
UTF-8 was designed differently: its byte order does not depend on the computer's architecture. This is why many consider a BOM unnecessary in UTF-8 files.
The BOM, which occupies 2 bytes in UTF-16, takes the form of 3 bytes when encoded in UTF-8. So some programs, despite the recommendation against using a BOM in UTF-8, have adopted it anyway: when they open a file and find those 3 special bytes, they know it is probably a UTF-8 file (it is very rare for a text to legitimately begin with "", which is how the BOM appears if it is read as cp1252).
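A quick demonstration of those 3 bytes, including Python's real `utf-8-sig` codec, which writes the BOM on encode and strips it on decode:

```python
bom_utf8 = "\ufeff".encode("utf-8")
print(bom_utf8.hex())             # efbbbf: the 3-byte UTF-8 form of the BOM
print(bom_utf8.decode("cp1252"))  # : the same bytes misread as cp1252

# The "utf-8-sig" codec handles the BOM transparently:
with_bom = "hello".encode("utf-8-sig")
print(with_bom.hex())               # efbbbf68656c6c6f
print(with_bom.decode("utf-8-sig")) # hello (BOM stripped on decode)
```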
Now, as to whether or not you should use a BOM in your files, the debate gets a bit philosophical, because there are pros and cons...