Just as a remark: it is not possible to say with certainty how Google works; we can only interpret the information in Google's documentation, so I will just leave a suggestion of what may be causing this.
What we know from the documentation
First of all, robots.txt only tells Google's crawler not to fetch your pages; if your site is already indexed, the documentation does not indicate that this alone is enough to remove that content.
link
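For illustration, this is a minimal sketch of the kind of robots.txt that produces this situation (the blanket Disallow is an assumption about your setup): it stops Google from fetching the pages, but says nothing about pages already in the index:

```
# Blocks crawling of the whole site; it does NOT remove
# pages that Google has already indexed
User-agent: *
Disallow: /
```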
The goal of noindex
A noindex tag will prevent Google from indexing your page, but for that Google needs to be able to see the tag; if you use robots.txt to block the page, the crawler will never see it.
link
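For reference, the directive the documentation describes goes in the page's `<head>` (it can also be sent as an `X-Robots-Tag: noindex` HTTP response header); either way, Google only sees it on pages it is allowed to crawl:

```html
<!-- Google must be able to crawl this page to see the tag -->
<meta name="robots" content="noindex">
```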
So what may be happening
One possibility is that when Google tries to fetch updated content from your site, it hits the robots.txt asking it not to crawl, and since Google respects that request, it never even checks whether the content of your page has changed.
But Google already has a page of yours indexed, and the copy in its cache carries no signal that it should not be there, which may be what is keeping your page in its search results.
What could solve the problem
Perhaps what you need to do is not block Google's crawler from checking your site; instead, temporarily allowing it to do so may resolve your problem.
Removing the robots.txt block so that Google can re-fetch the pages it already has indexed will let it find the noindex tag and update its cache, removing the old pages.
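A minimal sketch of this temporary state (assuming the noindex tags are already in place on the pages themselves): robots.txt allows crawling so Google can finally see them:

```
# Temporary: allow crawling so Google can reach the pages
# and see their noindex tags (empty Disallow = allow all)
User-agent: *
Disallow:
```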
If this works and there are no more results for your pages, you can put the robots.txt block back to reduce the Google crawler's requests to your site.