It's a rather broad question, but here goes.
The biggest risk is using a low-level programming language like C, because in it you have to manipulate strings and buffers through raw pointers. Sooner or later the programmer makes a mistake and opens a security hole.
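To make that concrete, here is a minimal sketch of the classic mistake (names and sizes are just illustrative):

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    char name[8];   /* fixed-size buffer on the stack */
    const char *input = "a string much longer than eight bytes";

    strcpy(name, input);   /* BUG: no bounds check, writes past the end  */
                           /* of 'name' and corrupts adjacent memory     */
    printf("%s\n", name);
    return 0;
}
```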
Using a higher-level language reduces the risk. Of course, the language's compiler or interpreter may itself have bugs, since it is probably written in C, but as a rule that kind of code gets more scrutiny, and the risk is concentrated in a smaller area.
If C is required, it is best to use a library for string handling; for example, the venerable qmail used a library written by its author. Again, the library may have bugs, but you concentrate the risk in a relatively small piece of code instead of spreading it across every program.
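The point is to keep all the pointer arithmetic in one small, auditable place. A minimal sketch of that style (this is not qmail's actual interface, just an illustration of the idea):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical growable string type; every caller goes through it. */
typedef struct {
    char  *data;
    size_t len;
    size_t cap;
} str_t;

/* All the bounds checking lives in this one small function. */
int str_append(str_t *s, const char *src, size_t n)
{
    if (s->len + n + 1 > s->cap) {
        size_t newcap = (s->len + n + 1) * 2;
        char *p = realloc(s->data, newcap);
        if (p == NULL)
            return -1;          /* out of memory: caller decides what to do */
        s->data = p;
        s->cap  = newcap;
    }
    memcpy(s->data + s->len, src, n);
    s->len += n;
    s->data[s->len] = '\0';
    return 0;
}
```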
The operating system and the toolchain can also do a lot to mitigate buffer overflow risks, with features like these:
The compiler can collaborate by detecting some kinds of buffer overflow through stack checks. I believe every modern compiler has incorporated the ideas of StackGuard, a GCC fork that was popular in the 1990s.
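With GCC or Clang, for example, you can turn the descendant of that idea on explicitly (many distributions already enable it by default):

```c
/* Compile with the stack protector enabled, e.g.:                       */
/*   gcc -O2 -fstack-protector-strong overflow.c -o overflow             */
/* The compiler places a "canary" value before the return address; if a  */
/* write past the end of 'buf' clobbers it, the program aborts with      */
/* "*** stack smashing detected ***" instead of running attacker code.   */
#include <string.h>

void copy_name(const char *src)
{
    char buf[16];
    strcpy(buf, src);   /* an overflow here is caught when the function returns */
}

int main(int argc, char **argv)
{
    if (argc > 1)
        copy_name(argv[1]);
    return 0;
}
```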
The C library (libc) collaborates by checking for errors like double free(), which is not a buffer overflow but is also an attack vector.
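A trivial example of what the allocator catches at runtime:

```c
#include <stdlib.h>

int main(void)
{
    char *p = malloc(32);
    free(p);
    free(p);    /* double free: recent glibc versions typically detect this */
                /* and abort (e.g. "free(): double free detected") instead  */
                /* of silently corrupting the heap                          */
    return 0;
}
```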
In both the compiler and the libc there are even stronger protections, but they carry a performance penalty, so the developer can opt into them when the trade-off is worthwhile. Often you inherit bad C code that you cannot rewrite, and these protections are the way to defend yourself.
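One common example of this kind of opt-in protection, useful precisely when you are stuck with legacy code, is AddressSanitizer (just one option among several; the slowdown and extra memory are the price):

```c
/* Compile with instrumentation enabled:                                 */
/*   gcc -g -fsanitize=address legacy.c -o legacy                        */
/* The instrumented binary detects out-of-bounds reads/writes and        */
/* use-after-free at runtime, at the cost of a noticeable slowdown,      */
/* which is why it is opt-in.                                            */
#include <stdlib.h>

int main(void)
{
    int *v = malloc(10 * sizeof *v);
    v[10] = 42;        /* heap buffer overflow: reported and aborted at runtime */
    free(v);
    return 0;
}
```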
In short, every modern operating system has protections at several levels, and the situation is much better than it was in the 1990s. But the best prophylaxis is certainly to avoid an unnecessarily low-level language.