
I have a shell script to which I pass arguments from a file. The file contains table names.

The script works fine; I am able to execute the command for all the tables in the file.

Shell script:

#!/bin/bash

[ $# -ne 1 ] && { echo "Usage : $0 input file "; exit 1; }
input_file=$1

TIMESTAMP=`date "+%Y-%m-%d"`
touch /home/$USER/logs/${TIMESTAMP}.success_log
touch /home/$USER/logs/${TIMESTAMP}.fail_log 
success_logs=/home/$USER/logs/${TIMESTAMP}.success_log
failed_logs=/home/$USER/logs/${TIMESTAMP}.fail_log

#Function to get the status of the job creation
function log_status
{
       status=$1
       message=$2
       if [ "$status" -ne 0 ]; then
                echo "`date +\"%Y-%m-%d %H:%M:%S\"` [ERROR] $message [Status] $status : failed" | tee -a "${failed_logs}"
                #echo "Please find the attached log file for more details"
                #exit 1
                else
                    echo "`date +\"%Y-%m-%d %H:%M:%S\"` [INFO] $message [Status] $status : success" | tee -a "${success_logs}"
                fi
}

while read table ;do 
  sqoop job --exec $table > /home/$USER/logging/"${table}_log" 2>&1
done < ${input_file}

g_STATUS=$?
log_status $g_STATUS "Sqoop job ${table}"

I am trying to collect the status logs for the script.

I want to collect the status logs for each table individually.

What I want:

2017-04-28 20:36:41 [ERROR] sqoop job table1 EXECUTION [Status] 2 : failed
2017-04-28 20:36:41 [ERROR] sqoop job table2 EXECUTION [Status] 2 : failed

What I am getting

If the script for the last table fails:

2017-04-28 20:38:41 [ERROR] sqoop job EXECUTION [Status] 2 : failed 

If the script for the last table is successful:

2017-04-28 20:40:41 [INFO] sqoop job [Status] 0 : success

What am I doing wrong, and what changes should I make to get the desired results?

  • Move the last two lines in your code to inside the while loop. Otherwise, it will only run for the last iteration of the loop. Commented Apr 28, 2017 at 19:58
  • @Munir Do you mean these two lines: g_STATUS=$? and log_status $g_STATUS "Sqoop job ${table}"? Commented Apr 28, 2017 at 20:03
  • Yes...those two lines Commented Apr 28, 2017 at 20:05
  • @Munir Thank you, it worked fine; I got the desired result. Commented Apr 28, 2017 at 20:10
  • @Munir One quick question: say I want to copy /home/$USER/logging/"${table}_log" to a different location in Linux for each table. How can I achieve that? I have tried cp /home/$USER/logging/"${table}_log" /home/$USER/debug/date "+%Y-%m-%d"/logs/ and it says: cannot find /home/$USER/logging/"_log" No such file or directory. Commented Apr 28, 2017 at 20:10
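The cp in the comment above fails because date "+%Y-%m-%d" is not wrapped in command substitution, so the shell never runs date, and the target directory must exist before copying. A minimal sketch of one way to do it (the /tmp paths here are illustrative stand-ins for /home/$USER/logging and /home/$USER/debug):

```shell
#!/bin/bash
# Create a sample per-table log, as the loop in the question would.
table=table1
mkdir -p /tmp/logging
echo "log contents" > /tmp/logging/"${table}_log"

# $(...) runs date inline so the path gets today's date embedded in it.
dest="/tmp/debug/$(date +%Y-%m-%d)/logs"
mkdir -p "$dest"                 # make sure the target directory exists
cp /tmp/logging/"${table}_log" "$dest/"
```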

1 Answer


Change

while read table ;do 
  sqoop job --exec $table > /home/$USER/logging/"${table}_log" 2>&1
done < ${input_file}

g_STATUS=$?
log_status $g_STATUS "Sqoop job ${table}"

to

while read -r table; do
  sqoop job --exec "$table" > /home/$USER/logging/"${table}_log" 2>&1
  g_STATUS=$?
  log_status $g_STATUS "Sqoop job ${table}"
  # Any other command you want to run using $table should be placed here
done < "${input_file}"

The while loop only runs the code between the while and done lines, so to log the status for every table you need to call the logging function inside the loop. Outside the loop, $? no longer holds the exit status of sqoop; it holds the status of the loop itself.

Also, $table changes on each iteration of the loop, so any command that you want to run for every table must be placed inside the loop.
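The point about $? can be demonstrated without sqoop. A minimal runnable sketch, with the tables file and a failing job simulated (table2 is forced to fail so both outcomes appear in the log):

```shell
#!/bin/bash
# Two simulated table names; in the real script this is $input_file.
printf 'table1\ntable2\n' > /tmp/tables.txt

log=/tmp/status.log
: > "$log"                       # truncate the log

while read -r table; do
  # Stand-in for: sqoop job --exec "$table"
  if [ "$table" = "table2" ]; then false; else true; fi
  status=$?                      # capture immediately; the next command overwrites $?
  if [ "$status" -ne 0 ]; then
    echo "[ERROR] job $table [Status] $status : failed" >> "$log"
  else
    echo "[INFO] job $table [Status] $status : success" >> "$log"
  fi
done < /tmp/tables.txt

cat "$log"
```

Run this and the log contains one status line per table, which is exactly what moving the two lines inside the loop achieves in the original script.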


1 Comment

One quick question: say I want one email sent out for each failed job in the above script, or one email for all the failed jobs in the input_file. What changes do I need to make?
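One way to handle the question in this comment, sketched without a real mail client: collect the failing table names during the loop, then send a single summary afterwards. The mail invocations are shown commented out because a mail/mailx-style client may not be installed, and the recipient address is a placeholder:

```shell
#!/bin/bash
# Simulated input file; table2 is forced to fail.
printf 'table1\ntable2\n' > /tmp/tables.txt
failed_tables=""

while read -r table; do
  # Stand-in for: sqoop job --exec "$table"
  if [ "$table" = "table2" ]; then false; else true; fi
  status=$?
  if [ "$status" -ne 0 ]; then
    failed_tables="$failed_tables $table"
    # For one email per failed job, send it here instead:
    # echo "job $table failed" | mail -s "sqoop failure: $table" you@example.com
  fi
done < /tmp/tables.txt

# One summary email for all failures of the input file.
if [ -n "$failed_tables" ]; then
  echo "Failed jobs:$failed_tables" > /tmp/failure_summary.txt
  # mail -s "sqoop failures" you@example.com < /tmp/failure_summary.txt
fi
cat /tmp/failure_summary.txt
```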
