Print only the first match once

I have a code snippet that I am using to parse through a log file and print information I need.

for i in $(cat ~/jlog/"$2"); do
        grep "$1" ~/jlog/"$2" |
        awk '/\([a-zA-Z0-9.]+/ {print $7}' 
 done;

The problem is when I enter input in, it displays the answer multiple times:

(1.3.51.0.1.1.10.10.30.48.2084865.2084839/1.2.840.113619.2.284.3.17454802.933.1401109176.280.1)
(1.3.51.0.1.1.10.10.30.48.2084865.2084839/1.2.840.113619.2.284.3.17454802.933.1401109176.283.1)
(1.3.51.0.1.1.10.10.30.48.2084865.2084839/1.2.840.113619.2.80.977011700.14346.1401109696.2)
(1.3.51.0.1.1.10.10.30.48.2084865.2084839/1.2.840.113619.2.80.977011700.14346.1401109706.51)
(1.3.51.0.1.1.10.10.30.48.2084865.2084839/1.2.840.113619.2.80.977011700.14346.1401109758.100)
(1.3.51.0.1.1.10.10.30.48.2084865.2084839/1.2.840.113619.2.80.977011700.14346.1401109773.149)
(1.3.51.0.1.1.10.10.30.48.2084865.2084839/1.2.840.113619.2.80.977011700.14346.1401109810.198)
(1.3.51.0.1.1.10.10.30.48.2084865.2084839/1.2.840.113619.2.80.977011700.14346.1401109818.247)

Is there any way I can trim this so that only the first series of data displays once? I only need 1.3.51.0.1.1.10.10.30.48.2084865.2084839 to print out once.

I tried to change it to this as well, but Bash does not like it:

for i in $(cat ~/jlog/"$2"); do
        grep "$1" ~/jlog/"$2" |
        awk '/\([a-zA-Z0-9.]+/' |
        awk -F'[(/]' ' {print $2, exit}'
done;

Then tried this:

for i in $(cat ~/jlog/"$2"); do
        grep "$1" ~/jlog/"$2" |
        awk -F'[(/]' '/\([a-zA-Z0-9.]+/ {print $2, exit }'
done;

Try this,

for i in $(cat ~/jlog/"$2"); do
        grep "$1" ~/jlog/"$2" |
        awk '/\([a-zA-Z0-9.]+/ {print $7; exit}' 
done;

The exit in the awk command stops processing after the first match is printed.
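As a quick self-contained check of that behavior (the two sample lines below are fabricated to mimic the log format shown in the question), exit makes awk quit after the first print:

    # Two made-up log lines; only the first matching one should print.
    printf '%s\n' \
      'a b c d e f (1.3.51.0.1.1.10.10.30.48.2084865.2084839/1.2.840.1)' \
      'a b c d e f (1.3.51.0.1.1.10.10.30.48.2084865.2084839/1.2.840.2)' |
    awk '/\([a-zA-Z0-9.]+/ {print $7; exit}'
    # prints only: (1.3.51.0.1.1.10.10.30.48.2084865.2084839/1.2.840.1)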

OR

Just pipe the output of the for loop to the awk command below:

for .... | awk -F'[(/]' '{print $2;exit}'
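With -F'[(/]' the line is split on both ( and /, so $2 is exactly the part before the slash — the series the question asks for. A minimal sketch on one fabricated line:

    # Splitting "(A/B)" on "(" and "/" gives: $1="", $2="A", $3="B)".
    echo '(1.3.51.0.1.1.10.10.30.48.2084865.2084839/1.2.840.113619.2.80.977011700.14346.1401109818.247)' |
    awk -F'[(/]' '{print $2; exit}'
    # prints: 1.3.51.0.1.1.10.10.30.48.2084865.2084839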

You don't need the for loop — a single grep call will output all the matching lines from the file, so you're just repeating the same operation over and over, once for each line in the file.

Technically, you don't need both awk and grep either since both can do textual matching. If you want a more specific answer then post an extract of the log file and an example of what output you want.


You haven't explained what you're actually doing, so I'll make a couple of assumptions. I assume you're running a script called foo.sh and giving it a string and a file name as arguments. These then become $1 and $2 respectively. Presumably, you are running it with something similar to

foo.sh SearchPattern LogFileName

In any case, the for loop is i) completely useless, since you're not using the i variable created by the for i in ... loop; ii) very wrong, since i takes the value of each word, not the entire line, which is probably what you intended; and iii) the cause of all your problems: you're getting the same results multiple times because you are running the exact same command multiple times — once for each line of your file.

Anyway, what you want can be done with something as simple as

grep "$1" ~/jlog/"$2" | awk '/\([a-zA-Z0-9.]+/ {print $7}' 

Or, simpler:

awk '/'"$1"'/ && /\([a-zA-Z0-9.]+/ {print $7}' ~/jlog/"$2"

Note that in the awk command, the "$1" is not within the single quotes. This makes bash expand it to whatever is currently held in $1 before the script is passed to awk.
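A self-contained sketch of expanding a shell variable into an awk pattern — here pat stands in for the script's "$1", and the data lines are fabricated:

    pat='2084839'   # stand-in for the script's "$1" argument
    printf '%s\n' \
      'x x x x x x (9.9.9/8.8)' \
      'x x x x x x (1.3.51.2084839/7.7)' |
    awk '/'"$pat"'/ && /\([a-zA-Z0-9.]+/ {print $7; exit}'
    # prints: (1.3.51.2084839/7.7)

The single quotes are closed before "$pat" and reopened after it, so bash splices the variable's value into the awk program text before awk ever sees it.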