grep multiple lines with unique count and string
Sample.csv
DSN1,abc,FAILURE,12,24,45
DSN1,def,FAILURE,12,78,65
DSN1,abc,FAILURE,12,24,45
DSN1,abc,FAILURE,12,24,45
DSN1,abc,FAILURE,12,24,45
DSN1,def,FAILURE,12,78,65
DSN1,abc,FAILURE,12,24,45

I need the count of failures in the above Sample.csv, with the response as
abc 5
def 2

But I don't want to hard-code abc/def in the script, because this is only a sample scenario; in my real case there are many strings like abc, and I need each such string together with its FAILURE count.
Please suggest a solution.
Thanks in advance
23 Answers
A simple solution is to use the following pipe:
<Sample.csv grep '^[^,]*,[^,]*,FAILURE' | cut -d, -f2 | sort | uniq -c

grep will extract the lines with FAILURE in the third column
cut will extract the second column (using , as the separator)
sort will sort the extracted column (the same values will be next to each other)
uniq will remove repeated values; the -c option will show the count of every unique value
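For the sample above, this pipeline prints something like the following (uniq -c puts the count before the value):

      5 abc
      2 def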
You can also insert other filters into the pipe as needed (for example, another grep at the beginning).
Ricky's comment is how I would do it, but if you want a solution specific to grep, you could do the following:
$ for i in {abc,def}; do echo -n "$i: "; grep -c "$i" input.txt; done

This will output the expected:

abc: 5
def: 2

Update
If you do not want to include the search keys in the for loop, I don't see how to do it simply with just grep; see the sketch below for a workaround.
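One option, though no longer grep-only, is to derive the keys dynamically first. A minimal sketch, reusing the Sample.csv layout from the question:

for i in $(cut -d, -f2 Sample.csv | sort -u); do   # collect the unique second-column keys
  echo -n "$i: "
  grep -c ",$i,FAILURE," Sample.csv                # count FAILURE lines for this key
done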
You could do it with awk though.
awk 'BEGIN{FS=","} $3=="FAILURE"{a[$2]++} END{for(x in a) print x,a[x]}' test.txt

Explanation:
FS="," -- set the field separator to a comma
$3=="FAILURE" -- only process lines whose third field is FAILURE
We create an associative array called 'a'
a[$2]++ -- for each matching line, take the 2nd field as the key and increment its count
END { .. } -- this block runs after all lines have been processed; we iterate over all keys, printing each key and its count
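Running this against the sample (saved here as test.txt) produces the requested counts; note that the iteration order of for (x in a) is not guaranteed in awk:

$ awk 'BEGIN{FS=","} $3=="FAILURE"{a[$2]++} END{for(x in a) print x,a[x]}' test.txt
abc 5
def 2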
Here is the version I ended up with:
grep "FAILURE" Sample.csv | awk -F',' '{print $2}' | sort | uniq -c

Thanks for your answers :)
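This is essentially the first answer's pipeline with the FAILURE filter done by grep up front. Since uniq -c prints the count before the string, you can append | awk '{print $2, $1}' to swap the two columns and get the exact "abc 5" format from the question.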