Shell Programming Blog

Friday 28 September 2012

How to remove duplicate entries from a file without sorting

Usually, when we want to remove duplicate entries or lines from a file, we sort the entries first and then eliminate the duplicates with the "uniq" command.

But if we want to remove the duplicates while preserving the original order of the entries, here is the way to do it:



Sample file:
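For illustration, assume a small file named sample.txt (a made-up example, since no particular file contents are given here) with the following entries:

    $ cat sample.txt
    apple
    apple
    orange
    banana
    orange
    apple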


Using the "uniq" command without sorting the file will not remove all the duplicates, because "uniq" only removes duplicate lines that are adjacent to each other.
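On the sample file above, for example, "uniq" collapses only the two adjacent "apple" lines; the duplicates that appear further apart survive:

    $ uniq sample.txt
    apple
    orange
    banana
    orange
    apple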


The "sort" command has an option (-u) to sort and remove duplicates in a single step, but we lose the original sequence of the entries.
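With the same sample file, "sort -u" removes every duplicate, but the result comes back in sorted order instead of the order of first appearance:

    $ sort -u sample.txt
    apple
    banana
    orange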


Piping the output of "sort" into "uniq" also removes the duplicates, but again the original sequence is lost (the same problem as above).
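On the sample file, the result is identical to "sort -u": no duplicates, but the original order is gone:

    $ sort sample.txt | uniq
    apple
    banana
    orange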


Finally, here is the solution using AWK:
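A widely used awk one-liner for this is shown below, run against the made-up sample.txt from above (the array name "seen" is arbitrary). awk counts how many times each whole line has appeared in the associative array "seen" and prints a line only while its count is still zero, i.e. the first time it appears, so duplicates are dropped and the order of first appearance is preserved:

    $ awk '!seen[$0]++' sample.txt
    apple
    orange
    banana

To keep the de-duplicated output, simply redirect it to a new file, for example: awk '!seen[$0]++' sample.txt > sample_nodup.txt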

