How might I remove duplicate lines from a file?
Solution 1:
On Unix/Linux, use the uniq command, as per David Locke's answer, or sort, as per William Pursell's comment.
If you need a Python script:
lines_seen = set()  # holds lines already seen
outfile = open(outfilename, "w")
for line in open(infilename, "r"):
    if line not in lines_seen:  # not a duplicate
        outfile.write(line)
        lines_seen.add(line)
outfile.close()
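The script assumes infilename and outfilename are already bound to file names. A self-contained variant might read them from the command line; note that the script name dedupe.py and the use of sys.argv here are assumptions for illustration, not part of the original answer:

import sys

# Assumed invocation: python dedupe.py input.txt output.txt
infilename, outfilename = sys.argv[1], sys.argv[2]

lines_seen = set()  # holds lines already seen
with open(infilename, "r") as infile, open(outfilename, "w") as outfile:
    for line in infile:
        if line not in lines_seen:  # not a duplicate
            outfile.write(line)
            lines_seen.add(line)

Using with also guarantees both files are closed even if an error occurs mid-copy.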
Update: The sort/uniq combination will remove duplicates, but it returns a file with the lines sorted, which may or may not be what you want. The Python script above won't reorder lines; it just drops duplicates. Of course, to get the script above to sort as well, just leave out the outfile.write(line) and instead, immediately after the loop, do outfile.writelines(sorted(lines_seen)).
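In full, that sorted variant looks like this (a minimal sketch, using the same assumed infilename and outfilename variables as above):

lines_seen = set()  # holds lines already seen
outfile = open(outfilename, "w")
for line in open(infilename, "r"):
    if line not in lines_seen:
        lines_seen.add(line)
outfile.writelines(sorted(lines_seen))  # write unique lines in sorted order
outfile.close()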
Solution 2:
If you're on *nix, try running the following command:
sort <file name> | uniq
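For example, to save the result to a new file (the file names here are just placeholders):

sort input.txt | uniq > deduped.txt

On most implementations you can also write sort -u input.txt, which sorts and removes duplicates in one step.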