Find unique lines
How can I find the unique lines and remove all duplicates from a file? My input file is:
1
1
2
3
5
5
7
7
I would like the result to be:
2
3
sort file | uniq
will not do the job, because it shows every value once, including the duplicated ones.
Solution 1:
uniq has the option you need:
-u, --unique
      only print unique lines
$ cat file.txt
1
1
2
3
5
5
7
7
$ uniq -u file.txt
2
3
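Note that uniq only compares adjacent lines; the sample file above happens to be sorted already, so this works, but for unsorted input sort it first:
$ sort file.txt | uniq -u
2
3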
Solution 2:
Use it as follows (the -u is needed, since plain uniq would keep one copy of each duplicated line):
sort < filea | uniq -u > fileb
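For example, with the question's sample data saved as filea:
$ sort < filea | uniq -u > fileb
$ cat fileb
2
3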
Solution 3:
You can also print the unique values in "file" by piping the output of cat through sort and uniq -u:
cat file | sort | uniq -u
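The cat is not strictly needed here; sort can read the file directly, so this is equivalent:
sort file | uniq -u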
Solution 4:
While sort takes O(n log(n)) time, I prefer using
awk '!seen[$0]++'
awk '!seen[$0]++' is an abbreviation for awk '!seen[$0]++ {print}': it prints the line ($0) only if seen[$0] is zero, i.e. if the line has not been seen before, and then increments the counter. It takes more space, but only O(n) time.
Note that this prints the first occurrence of every line and drops later duplicates; it does not limit the output to lines that occur exactly once.
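If you want only the lines that occur exactly once (what uniq -u produces), one possible sketch is a two-pass awk that reads the file twice: the first pass counts occurrences, the second prints the lines whose count is 1:
awk 'NR==FNR {count[$0]++; next} count[$0] == 1' file file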
Solution 5:
uniq -u has been driving me crazy because it did not work.
So instead, if you have Python (most Linux distros and servers already have it):
Assuming you have the data file in notUnique.txt
#Python
#Assuming the file has one value per line;
#otherwise adjust split() accordingly.
uniqueData = []
fileData = open('notUnique.txt').read().split('\n')
for i in fileData:
    if i.strip() != '' and i not in uniqueData:
        uniqueData.append(i)
print(uniqueData)
#Another option (fewer keystrokes):
print(set(open('notUnique.txt').read().split('\n')))
Note that due to empty lines, the final set may contain '' or whitespace-only strings; you can remove those afterwards, or just leave them out when copying from the terminal ;)
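Both snippets above remove duplicates but keep one copy of each line, like sort | uniq. If you want only the lines that appear exactly once, as the question asks, a sketch using collections.Counter on the same notUnique.txt file:

from collections import Counter

# Count every non-empty line, then keep the lines seen exactly once.
with open('notUnique.txt') as f:
    counts = Counter(line.strip() for line in f if line.strip())
print([line for line, n in counts.items() if n == 1])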
#Just FYI, from the uniq man page:
"Note: 'uniq' does not detect repeated lines unless they are adjacent. You may want to sort the input first, or use 'sort -u' without 'uniq'. Also, comparisons honor the rules specified by 'LC_COLLATE'."
One of the correct ways to invoke it: sort notUnique.txt | uniq
Example run:
$ cat x
3
1
2
2
2
3
1
3
$ uniq x
3
1
2
3
1
3
$ uniq -u x
3
1
3
1
3
$ sort x | uniq
1
2
3
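Applying -u after sorting, as in the solutions above, prints only the lines that occur exactly once; in this sample every value is repeated, so nothing is printed:
$ sort x | uniq -u
(no output: 1, 2, and 3 each appear more than once in x)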