Select unique or distinct values from a list in UNIX shell script

Tags: bash, unique, distinct, ksh, sh

Bash Problem Overview


I have a ksh script that returns a long list of newline-separated values, and I want to see only the unique/distinct values. Is it possible to do this?

For example, say my output is file suffixes in a directory:

> tar
> gz
> java
> gz
> java
> tar
> class
> class

I want to see a list like:

> tar
> gz
> java
> class

Bash Solutions


Solution 1 - Bash

You might want to look at the uniq and sort applications.

./yourscript.ksh | sort | uniq

(FYI: yes, the sort is necessary in this pipeline; uniq only strips duplicate lines that are immediately adjacent to each other.)
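A quick way to see why the sort matters (the sample input below reuses the hypothetical file suffixes from the question):

```shell
# Sample input: file suffixes with non-adjacent duplicates.
input='tar
gz
java
gz
java
tar
class
class'

# uniq alone only collapses adjacent duplicates, so it misses most of them:
printf '%s\n' "$input" | uniq

# Sorting first makes every duplicate adjacent, so uniq removes them all:
printf '%s\n' "$input" | sort | uniq
```

The first pipeline still prints `tar`, `gz`, and `java` twice; only the adjacent `class` pair is collapsed.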

EDIT:

Contrary to what has been posted by Aaron Digulla regarding uniq's command-line options:

Given the following input:

class
jar
jar
jar
bin
bin
java

uniq will output all lines exactly once:

class
jar
bin
java

uniq -d will output all lines that appear more than once, and it will print them once:

jar
bin

uniq -u will output all lines that appear exactly once, and it will print them once:

class
java
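The three behaviours above can be verified directly on the same (already-grouped) input:

```shell
input='class
jar
jar
jar
bin
bin
java'

printf '%s\n' "$input" | uniq      # one copy of every line: class jar bin java
printf '%s\n' "$input" | uniq -d   # only the repeated lines: jar bin
printf '%s\n' "$input" | uniq -u   # only the unrepeated lines: class java
```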

Solution 2 - Bash

./script.sh | sort -u

This is the same as monoxide's answer, but a bit more concise.
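For example, with the suffix list from the question, sort -u deduplicates while sorting, in a single process:

```shell
# sort -u is equivalent to sort | uniq:
printf 'tar\ngz\njava\ngz\njava\ntar\nclass\nclass\n' | sort -u
# prints: class gz java tar (one per line)
```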

Solution 3 - Bash

With zsh you can do this:

% cat infile 
tar
more than one word
gz
java
gz
java
tar
class
class
zsh-5.0.0[t]% print -l "${(fu)$(<infile)}"
tar
more than one word
gz
java
class

Or you can use AWK:

% awk '!_[$0]++' infile    
tar
more than one word
gz
java
class

Solution 4 - Bash

With AWK you can do:

 ./yourscript.ksh | awk '!a[$0]++'

I find it faster than sort and uniq.
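The idiom works because awk creates a[$0] with value 0 on first reference: the post-increment returns 0 (falsy) the first time a line is seen, so !a[$0]++ is true exactly once per distinct line, and awk's default action prints it. Unlike sort | uniq, the original input order is preserved:

```shell
# First occurrence of each line passes the !a[$0]++ test; later ones fail it.
printf 'tar\ngz\njava\ngz\njava\ntar\nclass\nclass\n' | awk '!a[$0]++'
# prints: tar gz java class (one per line, input order kept)
```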

Solution 5 - Bash

For larger data sets where sorting may not be desirable, you can also use the following perl script:

./yourscript.ksh | perl -ne 'if (!defined $x{$_}) { print $_; $x{$_} = 1; }'

This simply remembers every line it has already printed so that it doesn't print it again.

It has the advantage over the "sort | uniq" solution in that there's no sorting required up front.

Solution 6 - Bash

Pipe them through sort and uniq. This removes all duplicates.

uniq -d prints only the lines that appear more than once; uniq -u prints only the lines that appear exactly once (it drops anything that has a duplicate).

Solution 7 - Bash

Unique, as requested (but not sorted); uses fewer system resources for fewer than ~70 elements (as tested with time); written to take input from stdin, or modify it for inclusion in another script (Bash):

bag2set () {
    # Reduce a_bag to a_set: keep the first occurrence of each element
    # and blank out later duplicates.
    local -i i j n=${#a_bag[@]}
    for ((i=0; i < n; i++)); do
        if [[ -n ${a_bag[i]} ]]; then
            a_set[i]=${a_bag[i]}
            a_bag[i]=''
            for ((j=i+1; j < n; j++)); do
                # Quote the right-hand side so elements containing glob
                # characters are compared literally, not as patterns.
                [[ ${a_set[i]} == "${a_bag[j]}" ]] && a_bag[j]=''
            done
        fi
    done
}
declare -a a_bag=() a_set=()
stdin="$(</dev/stdin)"
declare -i i=0
for e in $stdin; do   # relies on word splitting: one element per whitespace-separated word
    a_bag[i]=$e
    i+=1
done
bag2set
echo "${a_set[@]}"

Solution 8 - Bash

Another tip for getting the non-duplicated entries in a file:

awk '$0 != x ":FOO" && NR>1 {print x} {x=$0} END {print}' file_name | uniq -f1 -u

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | brabster | View Question on Stackoverflow |
| Solution 1 - Bash | Matthew Scharley | View Answer on Stackoverflow |
| Solution 2 - Bash | gpojd | View Answer on Stackoverflow |
| Solution 3 - Bash | Dimitre Radoulov | View Answer on Stackoverflow |
| Solution 4 - Bash | Ajak6 | View Answer on Stackoverflow |
| Solution 5 - Bash | paxdiablo | View Answer on Stackoverflow |
| Solution 6 - Bash | Aaron Digulla | View Answer on Stackoverflow |
| Solution 7 - Bash | FGrose | View Answer on Stackoverflow |
| Solution 8 - Bash | Mary Marty | View Answer on Stackoverflow |