ksh: shell script to search for a string in all files present in a directory at a regular interval

StackOverflow https://stackoverflow.com/questions/21449699

  •  04-10-2022

Question

I have a directory (output) on a Unix (Sun) system. Two types of files are created there, with a timestamp prefixed to the file name. These files are created at a regular interval of 10 minutes, e.g.:

1.  20140129_170343_fail.csv (some lines are there)
2.  20140129_170343_success.csv (some lines are there)

Now I have to search for a particular string in all the files present in the output directory. If the string is found in the fail and success files, I have to count the number of lines in those files and save the counts in the cnt_succ and cnt_fail variables. If the string is not found, I search the same directory again after a 20-second sleep.

Here is my code:

#!/usr/bin/ksh

for i in 1 2
do
  grep -l 0140127_123933_part_hg_log_status.csv /osp/local/var/log/tool2/final_logs/* >log_t.txt;  ###  log_t.txt will contain all the matching file list
  while read line   ### reading the log_t.txt
  do
    echo "$line has following count"
    CNT=`wc -l $line|tr -s " "|cut -d" " -f2`
    CNT=`expr $CNT - 1`
    echo $CNT
  done <log_t.txt
  if [ $CNT > 0 ]
  then
    exit
  fi

  echo "waiitng"
  sleep 20
done

The problem I'm facing is that I'm not able to tell the _success and _fail files apart in $line and check their counts separately.


Solution 3

Finally I'm able to find the solution. Here is the complete code:

#!/usr/bin/ksh

file_name="0140127_123933.csv"
CNT_SUCC=0
CNT_FAIL=0

for i in 1 2
do
  grep -l "$file_name" /osp/local/var/log/tool2/final_logs/* > log_t.txt

  while read line
  do
    if echo "$line" | grep -q success        ## the success file
    then
      CNT_SUCC=`wc -l < "$line"`
      CNT_SUCC=`expr $CNT_SUCC - 1`
    fi

    if echo "$line" | grep -q fail           ## the fail file
    then
      CNT_FAIL=`wc -l < "$line"`
      CNT_FAIL=`expr $CNT_FAIL - 1`
    fi
  done < log_t.txt

  ## use -gt, not >: inside [ ] an unquoted > is an output redirection
  if [ "$CNT_SUCC" -gt 0 ] && [ "$CNT_FAIL" -gt 0 ]
  then
    echo " Fail count = $CNT_FAIL"
    echo " Success count = $CNT_SUCC"
    exit
  fi

  echo "waiting for next search..."
  sleep 10
done
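A slightly tidier way to branch on the file name is a case statement on $line itself, with no echo | awk (or echo | grep) subshell per file. This is only a sketch: the sample file names and contents below are hypothetical, made up to mirror the timestamped CSV files described in the question.

```shell
#!/bin/sh
# Sketch with hypothetical sample data mirroring the files above.
tmp=$(mktemp -d)
printf 'header\nrow1\nrow2\n' > "$tmp/20140129_170343_success.csv"
printf 'header\nrow1\n'       > "$tmp/20140129_170343_fail.csv"
ls "$tmp"/*.csv > "$tmp/log_t.txt"

CNT_SUCC=0
CNT_FAIL=0
while read line
do
  # Branch on the filename pattern directly.
  case "$line" in
    *success*) CNT_SUCC=$(( $(wc -l < "$line") - 1 )) ;;  # minus header line
    *fail*)    CNT_FAIL=$(( $(wc -l < "$line") - 1 )) ;;
  esac
done < "$tmp/log_t.txt"

echo "success=$CNT_SUCC fail=$CNT_FAIL"   # success=2 fail=1 for this sample data
```

case globbing is built into every POSIX shell, so this also avoids one fork per line of log_t.txt.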

Thanks everyone for your help.

OTHER TIPS

I'm not sure about ksh, but in bash, piping into while ... do; ... done is notorious for running the loop in a subshell, so any variables you set inside it are lost afterwards. ksh might be similar.
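A quick illustration of that pitfall (behavior as in bash/dash; ksh93 runs the last stage of a pipeline in the current shell, so the pipe form may actually work there):

```shell
#!/bin/sh
# Pipe form: the while loop runs in a subshell, so count is lost (bash/dash).
count=0
printf 'a\nb\nc\n' | while read x; do count=$((count + 1)); done
echo "after pipe: $count"        # 0 in bash/dash: the increments happened in a subshell

# Redirection form: the loop runs in the current shell, so count survives.
count=0
while read x; do count=$((count + 1)); done <<EOF
a
b
c
EOF
echo "after redirect: $count"    # 3
```

This is exactly why the scripts above feed the loop with done < log_t.txt instead of cat log_t.txt | while ....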

If I've understood your question right: SunOS has grep, uniq and sort AFAIK, so a possible alternative might be...

First of all:

$ cat fail.txt
W34523TERG
ADFLKJ
W34523TERG
WER
ASDTQ34T
DBVSER6
W34523TERG
ASDTQ34T
DBVSER6

$ cat success.txt
abcde
defgh
234523452
vxczvzxc
jkl
vxczvzxc
asdf
234523452
vxczvzxc
dlkjhgl
jkl
wer
234523452
vxczvzxc

And now:

egrep "W34523TERG|ASDTQ34T" fail.txt | sort | uniq -c
    2 ASDTQ34T
    3 W34523TERG

egrep "234523452|vxczvzxc|jkl" success.txt | sort | uniq -c
    3 234523452
    2 jkl
    4 vxczvzxc

Depending on the input data, you may want to check what options sort has on your system. Examining uniq's options may prove useful too (it can do more than just count duplicates).
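For instance, piping the uniq -c output through sort -rn ranks the matched strings by frequency. A small sketch that rebuilds the success.txt shown above (sort -rn and grep -E are assumed available, as they are on SunOS and GNU systems):

```shell
#!/bin/sh
# Recreate the sample success.txt from above, then rank matched strings by count.
printf '%s\n' abcde defgh 234523452 vxczvzxc jkl vxczvzxc asdf \
  234523452 vxczvzxc dlkjhgl jkl wer 234523452 vxczvzxc > success.txt

# uniq -c prefixes each line with its count; sort -rn puts the biggest first.
grep -E "234523452|vxczvzxc|jkl" success.txt | sort | uniq -c | sort -rn
```

For this data the first line of output is the most frequent string, vxczvzxc, with a count of 4.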

I think you want something like this (will work in both ksh and bash):

#!/bin/ksh

sum=0
while read -r file; do
  lines=$(wc -l < "$file")
  (( sum += lines ))
done < <(grep -Rl --include="[12]*_fail.csv" "somestring" .)
echo "$sum"

Note this will match files starting with 1 or 2 and ending in _fail.csv, not exactly clear if that's what you want or not.

e.g. let's say I have two files, one starting with 1 (containing 4 lines) and one starting with 2 (containing 3 lines), both ending in _fail.csv, somewhere under my current working directory:

> abovescript
7

It's important to understand the grep options used here:

   -R, --dereference-recursive
          Read all files under each directory,  recursively.   Follow  all
          symbolic links, unlike -r.

and

   -l, --files-with-matches
          Suppress  normal  output;  instead  print the name of each input
          file from which output would normally have  been  printed.   The
          scanning  will  stop  on  the  first match.  (-l is specified by
          POSIX.)

I don't think I'm getting it right, but you can't differentiate the files?

maybe try:

#...
CNT=`expr $CNT - 1`
if echo "$line" | grep -q fail
then
    #do something with fail count
else
    #do something with success count
fi
Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow