To make graphics appear on the screen, there are several layers:
The makers of graphics cards, motherboards, and monitors define a way for those parts to communicate: for example, set pin X to bit Y, wait a few nanoseconds, read the bytes at address Z, and so on. These conventions follow standards such as the PCI or PCI Express bus, or connecting a monitor via VGA or HDMI. You will only work at this level if you mess with hardware or operating systems.
The operating system has a driver that controls the functionality above, so that application developers do not (and cannot!) talk directly to the video card. You will only work at this level if you are writing drivers or operating systems, or if you are using a more primitive system such as MS-DOS.
Again, the operating system (OS) will have an application programming interface (API) to access what the driver implements, either via syscalls in assembly or through a library for a higher-level language, such as C.
A higher-level library can abstract the problem further by providing a new API that works the same way across multiple languages and operating systems.
So, how do you create a graphics library from scratch? It depends on what you take as your starting point.
If all you have is a graphics card and a processor, you start from step 2, writing drivers and operating systems.
If you have a card, a processor, and also an operating system, you start from step 3, using the features the operating system offers and building on top of them.
The case I imagine you want to know about is the latter: you already have an operating system, but cannot figure out how libraries such as Qt, GTK+, and SDL manage to draw on the screen. The answer is: they take on the responsibility of speaking to the operating system in its "language", whatever that is (Win32? GDI+? DirectX? OpenGL? Xlib? Wayland? Direct syscalls to the Linux or Windows kernel?).
For historical reasons, the C language provides a unified way of dealing with text files (including the screen and keyboard), but does not provide the same for graphics. Under the hood, the two cases go through similar layers (a library for the application programmer, the OS API, the driver, the bits and bytes of communication with the computer's components, all the way down to the electrical current that makes everything work).
So it is very common to have a library that calls another library, which calls another library, which calls an operating-system-dependent function, which calls the driver, which finally makes the pixel appear on the screen. Check out these examples:
The application uses wxWidgets on Linux, which calls...
GTK+, which uses...
Cairo for the 2D drawing, which...
does whatever is necessary to produce the graphics on that operating system, using Xlib, Quartz, or Win32...
which accesses the OS with a syscall,
and the OS uses the video driver to send the pixels to the screen.
Note that the programmer could also use GTK+ directly if he felt that wxWidgets was not offering any advantage. But if he went all the way down to Xlib, it would certainly be harder to port the app to Windows, or to cope when X becomes obsolete and everyone uses Wayland. And mind you, I may have forgotten a layer or two in the list above!
Or else:
A game using SDL,
which will call the Win32 and Direct3D APIs for graphics on Windows,
or Cocoa and OpenGL for graphics on macOS.
Remember that the programmer can also use Direct3D or OpenGL directly.
All this without even mentioning Android and iOS, which have their own APIs...