Remove duplicates from text file based on second text file
How can I remove all lines from a text file (main.txt) that appear in a second text file (removethese.txt)? What is an efficient approach if the files are larger than 10-100 MB? [Using Mac]
Example:

main.txt:
3
1
2
5

removethese.txt (the lines to remove):
3
2
9

Output:

output.txt:
1
5
Example Lines (these are the actual lines I'm working with - order does not matter):
ChIJW3p7Xz8YyIkRBD_TjKGJRS0
ChIJ08x-0kMayIkR5CcrF-xT6ZA
ChIJIxbjOykFyIkRzugZZ6tio1U
ChIJiaF4aOoEyIkR2c9WYapWDxM
ChIJ39HoPKDix4kRcfdIrxIVrqs
ChIJk5nEV8cHyIkRIhmxieR5ak8
ChIJs9INbrcfyIkRf0zLkA1NJEg
ChIJRycysg0cyIkRArqaCTwZ-E8
ChIJC8haxlUDyIkRfSfJOqwe698
ChIJxRVp80zpcEARAVmzvlCwA24
ChIJw8_LAaEEyIkR68nb8cpalSU
ChIJs35yqObit4kR05F4CXSHd_8
ChIJoRmgSdwGyIkRvLbhOE7xAHQ
ChIJaTtWBAWyVogRcpPDYK42-Nc
ChIJTUjGAqunVogR90Kc8hriW8c
ChIJN7P2NF8eVIgRwXdZeCjL5EQ
ChIJizGc0lsbVIgRDlIs85M5dBs
ChIJc8h6ZqccVIgR7u5aefJxjjc
ChIJ6YMOvOeYVogRjjCMCL6oQco
ChIJ54HcCsaeVogRIy9___RGZ6o
ChIJif92qn2YVogR87n0-9R5tLA
ChIJ0T5e1YaYVogRifrl7S_oeM8
ChIJwWGce4eYVogRcrfC5pvzNd4
Solution 1:
There are two standard ways to do this:
With `grep`:

grep -vxFf removethese main

This uses:

- `-v` to invert the match.
- `-x` to match the whole line, preventing, for example, `he` from matching lines like `hello` or `highway to hell`.
- `-F` to use fixed strings, so that the patterns are taken literally, not interpreted as regular expressions.
- `-f` to read the patterns from another file, in this case from `removethese`.
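For example, applied to the sample files from the question (same file names), this produces an output.txt containing only the surviving lines:

```shell
# Recreate the sample files from the question
printf '3\n1\n2\n5\n' > main.txt
printf '3\n2\n9\n' > removethese.txt

# Keep only the lines of main.txt that do not appear verbatim in removethese.txt
grep -vxFf removethese.txt main.txt > output.txt

cat output.txt   # prints 1, then 5
```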
With `awk`:
$ awk 'FNR==NR {a[$0];next} !($0 in a)' removethese main
1
5
This stores every line of `removethese` in the array `a[]`. Then it reads the `main` file and prints only the lines that are not present in the array.
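One caveat with the FNR==NR idiom (an edge case not covered above): if removethese is empty, FNR==NR stays true while reading main, so nothing is printed. A defensive variant, sketched here on the assumption that the two file names differ, keys on the file name instead:

```shell
# FILENAME==ARGV[1] identifies the pattern file even when it is empty,
# so an empty removethese correctly leaves main untouched
awk 'FILENAME==ARGV[1] { a[$0]; next } !($0 in a)' removethese main
```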
Solution 2:
With `grep`:

grep -vxFf removethese.txt main.txt >output.txt

With `fgrep`:

fgrep -vxf removethese.txt main.txt >output.txt

`fgrep` is deprecated; `fgrep --help` says:

Invocation as 'fgrep' is deprecated; use 'grep -F' instead.

With `awk` (from @fedorqui):

awk 'FNR==NR {a[$0];next} !($0 in a)' removethese.txt main.txt >output.txt

With `sed`:

sed "s=^=/^=;s=$=$/d=" removethese.txt | sed -f- main.txt >output.txt

This will fail if removethese.txt contains regex special characters. To handle that, you can escape them first:

sed 's/[^^]/[&]/g; s/\^/\\^/g' removethese.txt >newremovethese.txt

and use newremovethese.txt in the `sed` command. But it is not worth the effort: it is far too slow compared to the other methods.
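To see the failure mode concretely: a pattern line such as 1. is treated as a regular expression by the plain version, so it also deletes 12, while the escaping pass makes the pattern literal. A small demonstration (GNU sed assumed for -f-):

```shell
printf '12\n1.\n' > main.txt
printf '1.\n' > removethese.txt

# Plain version: the generated script /^1.$/d deletes both "12" and "1."
sed "s=^=/^=;s=$=$/d=" removethese.txt | sed -f- main.txt
# (prints nothing)

# Escaped version: the pattern becomes /^[1][.]$/d, matching only the literal "1."
sed 's/[^^]/[&]/g; s/\^/\\^/g' removethese.txt |
  sed "s=^=/^=;s=$=$/d=" | sed -f- main.txt
# prints: 12
```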
Test performed on the above methods:

The `sed` method takes too much time and is not worth testing.

Files used:

removethese.txt : Size: 15191908 bytes (15 MB), Blocks: 29672, Lines: 100233
main.txt : Size: 27640864 bytes (27.6 MB), Blocks: 53992, Lines: 180034

Times taken:

| Run | grep -vxFf | fgrep -vxf | awk      |
|-----|------------|------------|----------|
| 1   | 0m7.966s   | 0m7.823s   | 0m0.237s |
| 2   | 0m7.877s   | 0m7.889s   | 0m0.241s |
| 3   | 0m7.971s   | 0m7.844s   | 0m0.234s |
| 4   | 0m7.864s   | 0m7.840s   | 0m0.251s |
| 5   | 0m7.798s   | 0m7.672s   | 0m0.238s |
| 6   | 0m7.793s   | 0m8.013s   | 0m0.241s |
| AVG | 0m7.8782s  | 0m7.8468s  | 0m0.2403s |
This test result implies that `fgrep` is slightly faster than `grep`.
The `awk` method (from @fedorqui) passes the test with flying colors (only 0.2403 seconds!).
Test Environment:

- HP ProBook 440 G1 laptop
- 8 GB RAM
- 2.5 GHz processor with turbo boost up to 3.1 GHz
- RAM in use: 2.1 GB
- Swap in use: 588 MB
- RAM in use while the grep/fgrep command runs: 3.5 GB
- RAM in use while the awk command runs: 2.2 GB or less
- Swap in use while the commands run: 588 MB (no change)

Test Result: use the `awk` method.
Solution 3:
Here are a lot of simple and effective solutions I've found: http://www.catonmat.net/blog/set-operations-in-unix-shell-simplified/
You need one of the Set Complement commands below. 100 MB files can be processed in seconds or minutes.
Set Membership
$ grep -xc 'element' set # outputs 1 if element is in set
# outputs >1 if set is a multi-set
# outputs 0 if element is not in set
$ grep -xq 'element' set # returns 0 (true) if element is in set
# returns 1 (false) if element is not in set
$ awk '$0 == "element" { s=1; exit } END { exit !s }' set
# returns 0 if element is in set, 1 otherwise.
$ awk -v e='element' '$0 == e { s=1; exit } END { exit !s }'
Set Equality
$ diff -q <(sort set1) <(sort set2) # returns 0 if set1 is equal to set2
# returns 1 if set1 != set2
$ diff -q <(sort set1 | uniq) <(sort set2 | uniq)
# collapses multi-sets into sets and does the same as previous
$ awk '{ if (!($0 in a)) c++; a[$0] } END{ exit !(c==NR/2) }' set1 set2
# returns 0 if set1 == set2
# returns 1 if set1 != set2
$ awk '{ a[$0] } END{ exit !(length(a)==NR/2) }' set1 set2
# same as previous, requires >= gnu awk 3.1.5
Set Cardinality
$ wc -l set | cut -d' ' -f1 # outputs number of elements in set
$ wc -l < set
$ awk 'END { print NR }' set
Subset Test
$ comm -23 <(sort subset | uniq) <(sort set | uniq) | head -1
                                      # outputs something if subset is not a subset of set
                                      # outputs nothing if subset is a subset of set
$ awk 'NR==FNR { a[$0]; next } !($0 in a) { exit 1 }' set subset
# returns 0 if subset is a subset of set
# returns 1 if subset is not a subset of set
Set Union
$ cat set1 set2 # outputs union of set1 and set2
# assumes they are disjoint
$ awk 1 set1 set2 # ditto
$ cat set1 set2 ... setn # union over n sets
$ cat set1 set2 | sort -u # same, but assumes they are not disjoint
$ sort set1 set2 | uniq
# sort -u set1 set2
$ awk '!a[$0]++' # ditto
Set Intersection
$ comm -12 <(sort set1) <(sort set2) # outputs intersection of set1 and set2
$ grep -xF -f set1 set2
$ sort set1 set2 | uniq -d
$ join <(sort -n A) <(sort -n B)
$ awk 'NR==FNR { a[$0]; next } $0 in a' set1 set2
Set Complement
$ comm -23 <(sort set1) <(sort set2)
# outputs elements in set1 that are not in set2
$ grep -vxF -f set2 set1 # ditto
$ sort set2 set2 set1 | uniq -u # ditto
$ awk 'NR==FNR { a[$0]; next } !($0 in a)' set2 set1
Set Symmetric Difference
$ comm -3 <(sort set1) <(sort set2) | sed 's/\t//g'
# outputs elements that are in set1 or in set2 but not both
$ comm -3 <(sort set1) <(sort set2) | tr -d '\t'
$ sort set1 set2 | uniq -u
$ cat <(grep -vxF -f set1 set2) <(grep -vxF -f set2 set1)
$ grep -vxF -f set1 set2; grep -vxF -f set2 set1
$ awk 'NR==FNR { a[$0]; next } $0 in a { delete a[$0]; next } 1;
END { for (b in a) print b }' set1 set2
Power Set
$ p() { [ $# -eq 0 ] && echo || (shift; p "$@") |
while read r ; do echo -e "$1 $r\n$r"; done }
$ p `cat set`
# no nice awk solution, you are welcome to email me one:
# [email protected]
Set Cartesian Product
$ while read a; do while read b; do echo "$a, $b"; done < set1; done < set2
$ awk 'NR==FNR { a[$0]; next } { for (i in a) print i, $0 }' set1 set2
Disjoint Set Test
$ comm -12 <(sort set1) <(sort set2) # does not output anything if disjoint
$ awk '++seen[$0] == 2 { exit 1 }' set1 set2 # returns 0 if disjoint
# returns 1 if not
Empty Set Test
$ wc -l < set # outputs 0 if the set is empty
# outputs >0 if the set is not empty
$ awk '{ exit 1 }' set # returns 0 if set is empty, 1 otherwise
Minimum
$ head -1 <(sort set) # outputs the minimum element in the set
$ awk 'NR == 1 { min = $0 } $0 < min { min = $0 } END { print min }'
Maximum
$ tail -1 <(sort set) # outputs the maximum element in the set
$ awk '$0 > max { max = $0 } END { print max }'
Solution 4:
I like @fedorqui's use of awk for setups where one has enough memory to fit all the "remove these" lines: a concise expression of an in-memory approach.
But for a scenario where the size of the lines to remove is large relative to current memory, and reading that data into an in-memory data structure is an invitation to fail or thrash, consider an ancient approach: sort/join
sort main.txt > main_sorted.txt
sort removethese.txt > removethese_sorted.txt
join -t '' -v 1 main_sorted.txt removethese_sorted.txt > output.txt
Notes:
- this does not preserve the order from main.txt: lines in output.txt will be sorted
- it requires enough disk space for sort's temporary files and for same-size sorted copies of the input files
- having join's -v option do exactly what we want here (print the "unpairable" lines from file 1, drop the matches) is a bit of serendipity
- it does not directly address locales, collating, keys, etc.: it relies on the defaults of sort and join (-t with an empty argument) agreeing on sort order, which happens to work on my current machine
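On the sample data from the question the lines contain no whitespace, so the default join field separator behaves the same as -t '' there. A runnable sketch of the pipeline under that assumption:

```shell
printf '3\n1\n2\n5\n' > main.txt
printf '3\n2\n9\n' > removethese.txt

sort main.txt > main_sorted.txt
sort removethese.txt > removethese_sorted.txt

# -v 1: print only the lines of file 1 that have no match in file 2
join -v 1 main_sorted.txt removethese_sorted.txt > output.txt

cat output.txt   # prints 1, then 5 (in sorted order, not main.txt order)
```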