How to shrink a 2.6 TB database [closed]


Hello,

I have SQL Server 2008 R2 Standard on a server with 32 GB of RAM and five 2 TB disks in RAID 5. The database is 2.6 TB in size, and a single table uses 96% of the MDF file, as can be seen in the image below:

The table "ClientType" consists of two columns:

  • TEXT, which accounts for 27% of the table size and is used for text searches;

  • IMAGE, which accounts for 73% of the size and holds .MHT files containing images and text.

I want to extract the text data into ElasticSearch, move the image binaries to files in a storage service, and then SHRINK the database, and this is where my doubt comes in.

The options I have considered so far to purge the data are:

  • Gradually extract the data, setting the fields to NULL and running SHRINK together with BACKUP LOG every 100,000 rows.

  • Extract all the data, DROP COLUMN the two columns, and then run BACKUP LOG and SHRINK.
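The first option could be sketched roughly as below. This is only a sketch: the column names (`TextData`, `ImageData`), the key-less batching, the database name, and the backup path are all assumptions and would need to be adapted to the real schema.

```sql
-- Sketch: purge LOB data in batches of 100,000 rows, backing up the log
-- after each batch to keep it from growing. Column names, database name,
-- and backup path are hypothetical.
DECLARE @batch INT = 100000;

WHILE EXISTS (SELECT 1 FROM dbo.ClientType
              WHERE TextData IS NOT NULL OR ImageData IS NOT NULL)
BEGIN
    ;WITH batch_cte AS (
        SELECT TOP (@batch) TextData, ImageData
        FROM dbo.ClientType
        WHERE TextData IS NOT NULL OR ImageData IS NOT NULL
    )
    UPDATE batch_cte
    SET TextData = NULL, ImageData = NULL;

    -- keep the transaction log small between batches
    BACKUP LOG MyDatabase TO DISK = N'E:\Backups\MyDatabase.trn';
END
```

Note that setting TEXT/IMAGE values to NULL marks the LOB pages for deallocation but does not by itself compact the file; the space only becomes reusable after ghost cleanup, and returning it to the OS still requires a shrink pass afterwards.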

I lean toward the first option, even though I know SHRINK is an extremely heavy command with high disk usage, because with the second option, however fast the data deletion is, I am afraid of being stuck for more than 12 hours in a single SHRINK.
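One way to avoid a single 12-hour SHRINK, whichever purge option is chosen, is to shrink the data file in small steps with DBCC SHRINKFILE, so that each step finishes quickly and the process can be stopped between steps. A minimal sketch, assuming file_id 1 and a 10 GB step size (both assumptions to adjust):

```sql
-- Sketch: shrink the data file ~10 GB at a time instead of in one pass.
-- file_id and step size are assumptions; check sys.database_files first.
DECLARE @step_mb INT = 10240;               -- ~10 GB per iteration
DECLARE @size_mb INT, @prev_mb INT = 2147483647;

SELECT @size_mb = size / 128                -- size is stored in 8 KB pages
FROM sys.database_files WHERE file_id = 1;

WHILE @size_mb < @prev_mb                   -- stop when a pass frees nothing
BEGIN
    SET @prev_mb = @size_mb;
    DBCC SHRINKFILE (1, @size_mb - @step_mb);   -- target size in MB
    SELECT @size_mb = size / 128
    FROM sys.database_files WHERE file_id = 1;
END
```

Each DBCC SHRINKFILE call releases only a bounded amount of space, so the I/O load is spread out and no single step holds the server for hours.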

I would like to know whether I am on the right track to get rid of this data, whether SHRINK can cause me problems, and whether anyone can think of a different approach.

asked by anonymous 27.12.2017 / 20:12

0 answers