I have a production environment to which I upload HTML, CSS and other files. However, every time I upload a file I first have to back up the existing copy.
I'm trying to build a script that does the following:
- Read the upload directory and, for each file, check whether it already exists in production; if it does, back it up
- Copy the files to production and generate a list (txt) of the copied files
- If a rollback is needed, the script should remove everything that was copied (deleting any files that are new) and copy the backup files back
However, I'm pretty lost, since shell scripting is not my strong suit. Would anyone have an idea of how to put this script together?
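Roughly, what I have in mind for the backup and copy steps is something like the sketch below (just a sketch: the /home/backup path, the timestamped subdirectory and the backup.txt / novos.txt file names are placeholders I chose, nothing that exists yet):

#!/bin/bash
# Sketch of the backup + deploy steps (placeholder paths/names as noted above).
DirUpload=/home/upload/$1
DirProducao=/home/producao
DirBackup=/home/backup/$(date +%Y%m%d%H%M%S)

mkdir -p "$DirBackup"
cd "$DirUpload" || exit 1

# List every file in the upload set, relative to the upload directory.
find . -type f | sed 's|^\./||' > "$DirBackup/upload.txt"

while read -r f
do
    if [ -f "$DirProducao/$f" ]
    then
        # Already in production: back it up, preserving its relative path.
        mkdir -p "$DirBackup/$(dirname "$f")"
        cp -p "$DirProducao/$f" "$DirBackup/$f"
        echo "$f" >> "$DirBackup/backup.txt"
    else
        # New in this upload: record it so a rollback can delete it.
        echo "$f" >> "$DirBackup/novos.txt"
    fi
done < "$DirBackup/upload.txt"

# Deploy: copy the upload set into production.
cp -R "$DirUpload/." "$DirProducao/"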
I currently have this snippet of code:
#!/bin/bash
# Abort if no upload directory name was given.
if [ -z "$1" ]
then
    echo "Provide the name of the upload directory"
    exit 1
fi

DirUpload=/home/upload/$1
DirProducao=/home/producao

cd "$DirUpload" || exit 1

# List everything in the upload directory, relative to it.
find . | sed 's|^\./||' > upload.txt

# For each entry, report whether it already exists in production.
while read -r i
do
    if [ -d "$DirProducao/$i" ]
    then
        echo "Directory: ${i}"
    elif [ -f "$DirProducao/$i" ]
    then
        echo "File: ${i}"
    else
        echo "Not found: ${DirProducao}/${i}"
    fi
done < "$DirUpload/upload.txt"
For now I'm using the script above to generate a txt file listing the files to upload; based on that list, I check what currently exists in production so I can copy it to the backup.
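For the rollback, I imagine something along these lines (again just a sketch, assuming the same backup directory layout as in the sketch above):

#!/bin/bash
# Sketch of the rollback step: undo a deploy using a backup directory
# created by the backup sketch above (backup.txt = replaced files,
# novos.txt = files that did not exist before the deploy).
DirProducao=/home/producao
DirBackup=$1   # e.g. a timestamped directory under /home/backup

if [ -z "$DirBackup" ] || [ ! -d "$DirBackup" ]
then
    echo "Usage: $0 <backup directory>"
    exit 1
fi

# Remove files that were new in the deploy.
if [ -f "$DirBackup/novos.txt" ]
then
    while read -r f
    do
        rm -f "$DirProducao/$f"
    done < "$DirBackup/novos.txt"
fi

# Copy the backed-up versions of the replaced files back into production.
if [ -f "$DirBackup/backup.txt" ]
then
    while read -r f
    do
        mkdir -p "$DirProducao/$(dirname "$f")"
        cp -p "$DirBackup/$f" "$DirProducao/$f"
    done < "$DirBackup/backup.txt"
fi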