It depends a lot. Space consumption has more to do with how you model the data than with the database itself, and the way you model for one is quite different from the way you model for the other.
Contrary to what people imagine, MongoDB (or any other tool) does not work miracles. My perception is that it will actually be worse here, because it is optimized for documents: it expects large inputs with no fixed schema, so each document must carry metadata (the field names themselves) to "explain" what it contains. But without testing and measuring your specific case I cannot say for sure.
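To give an idea of what that per-document metadata looks like, here is a minimal sketch using the `bson` module that ships with the PyMongo driver (the field names and values are made up just for illustration):

```python
import bson  # ships with the PyMongo driver

# A hypothetical sensor reading; every document stores its own field names.
reading = {"sensorId": 4711, "temperature": 23.5, "humidity": 0.61}
print(len(bson.encode(reading)), "bytes for one document")

# Shorter keys shrink every single document, because the keys are repeated
# in each document instead of being declared once, as in a table schema.
short = {"s": 4711, "t": 23.5, "h": 0.61}
print(len(bson.encode(short)), "bytes with abbreviated keys")
```

In a relational table the column names exist only once, in the schema, which is part of why a table tends to be more compact than a collection when the data is uniform.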
Relational databases are heavily optimized for space consumption and processing, today's implementations are very flexible, and it is easy to keep track of transactions in them. Their weakness is basically that access through exact relationships is more complicated (which also has its advantages) and that horizontal scaling is harder (though it is needed less often than with other solutions).
You need to look for ways to reduce this data: check whether you are using the right types in the right way, and whether you can apply some kind of compression. MongoDB compresses by default, but only because it would waste too many resources if it did not. If compression helps, enable it in SQL Server too, since it is not turned on by default there.
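As a sketch of that last step, assuming a SQL Server table named `dbo.Measurements` and an ODBC connection string of your own (both are placeholders here), you could first ask SQL Server to estimate the savings and then rebuild the table with PAGE compression:

```python
import pyodbc

# Placeholder connection string and table name; adjust to your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
    "DATABASE=Telemetry;Trusted_Connection=yes;"
)
cur = conn.cursor()

# Estimate how much space PAGE compression would save before rebuilding anything.
cur.execute(
    "EXEC sp_estimate_data_compression_savings "
    "@schema_name = 'dbo', @object_name = 'Measurements', "
    "@index_id = NULL, @partition_number = NULL, @data_compression = 'PAGE';"
)
for row in cur.fetchall():
    print(row)

# If the estimate looks worthwhile, enable it (this rewrites the table).
cur.execute("ALTER TABLE dbo.Measurements REBUILD WITH (DATA_COMPRESSION = PAGE);")
conn.commit()
```

Measure before and after with your real data; compression trades CPU for space, and whether that trade is worth it is exactly the kind of thing only a test can answer.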
If data flows into the database non-stop, it will take up a lot of space in any technology, and there is no miracle solution that changes that. What you can do is serious engineering work to find ways to mitigate it.
In general terms, there is not much more help to give without knowing the problem in more depth.