According to this Stack Overflow answer, you can use robots.txt to prevent search engines from indexing pages that the webmaster does not want indexed.
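For reference, a robots.txt file lives at the root of the published site and looks roughly like this (a minimal sketch; the /private/ path is a hypothetical example, not something taken from the linked answer):

```
# Hypothetical example: ask all crawlers to skip everything under /private/
User-agent: *
Disallow: /private/
```

Crawlers that honor the protocol will then skip those paths, but the files themselves remain publicly reachable.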
How is this possible? Via a custom domain?
However, if the goal of robots.txt is to delimit a private area of the site (for example), what is the point of trying to "hide" any content with robots.txt if everything can be viewed freely on the GitHub platform anyway?
For the content to be effectively hidden, would it be necessary to pay for GitHub and use the private repositories feature?