Find Duplicate Lines in a File in Linux Command Line

Find the duplicate lines in a file using the sort and uniq commands.

sort file.txt | uniq -cd

This prints the lines that are duplicated, along with the number of occurrences of each.
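As a quick sketch of how this behaves (the file name and contents are just an example):

```shell
# Create a sample file containing some repeated lines.
printf 'apple\nbanana\napple\ncherry\nbanana\napple\n' > /tmp/fruits.txt

# sort groups identical lines together; uniq -c prefixes each line with
# its count, and -d keeps only the lines that appear more than once.
sort /tmp/fruits.txt | uniq -cd
```

This prints counts for apple (3) and banana (2); cherry appears only once, so it is omitted.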


Another use for this command:

You can save all your Bash command history to a file, sorted and with duplicate entries removed. Try this command:

history | sed 's/^[ ]*[0-9]\+[ ]*//' | sort | uniq > file.txt

Now file.txt will contain a neat, deduplicated list of your commands.
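The sed expression strips the leading history line number (and the spaces around it) so that identical commands compare equal before sorting. A small sketch of just that step, run on fake history-style input rather than the real history builtin:

```shell
# Simulate a few lines of `history` output: leading spaces, a line
# number, more spaces, then the command (with one command repeated).
# The sed call removes the "  NNN  " prefix so duplicates can match.
printf '  101  ls -la\n  102  cd /tmp\n  103  ls -la\n' \
    | sed 's/^[ ]*[0-9]\+[ ]*//' | sort | uniq
```

Each distinct command is printed once, with the line numbers gone.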

It needs | less after -cd to page through the output one screen at a time. Regards, wally


Yes, that’s the beauty of Linux :smile:

You can pipe output through as many commands as you want…
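For instance (a sketch, using a throwaway sample file), the duplicate-finding pipeline can keep growing: sort -nr re-sorts uniq's counted output numerically in reverse, ranking the duplicates most-frequent-first:

```shell
# Build a small sample file (the name is just an example).
printf 'a\nb\na\nc\nb\na\n' > /tmp/sample.txt

# Count duplicate lines, then rank them by count, highest first.
sort /tmp/sample.txt | uniq -cd | sort -nr
```

Append | less to the end of the pipeline to page through long output, as suggested above.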