Question

The question:
How can you limit the input data to achieve a better Big-O complexity? Describe an algorithm for handling this limited data to determine whether there are any duplicates. What is the Big-O complexity? (By limiting the data, we mean the size of the data/array.)

Got the solutions I needed to achieve the task. I've removed the answers I posted since they weren't necessary. Thanks for your help, guys :)


Solution

You can achieve a better Big-O complexity if you know the maximum value the integers in your array can take; let's call it m. The algorithm is a variant of bucket sort: mark each value in a boolean bucket array and report a duplicate as soon as a value is seen twice. It runs in O(n) time, plus O(m) to initialize the bucket array, and uses O(m) extra space. Source code of the algorithm:

public boolean hasDuplicates(int[] arr, int m)
{
    // One flag per possible value; the values are assumed to lie in the range 0..m,
    // so the bucket array needs m + 1 slots.
    boolean[] bucket = new boolean[m + 1];

    for (int elem : arr)
    {
        if (bucket[elem])
        {
            return true; // this value was already seen: a duplicate
        }
        bucket[elem] = true; // mark the value as seen
    }
    return false; // every value occurred at most once
}
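
A minimal usage sketch (the sample array and bound are made up for illustration, and it assumes hasDuplicates above is in scope):

int[] values = {3, 7, 1, 7, 2}; // the value 7 appears twice
int maxValue = 10;              // assumed upper bound on the values in the array

System.out.println(hasDuplicates(values, maxValue));                 // prints "true"
System.out.println(hasDuplicates(new int[] {0, 1, 2, 3}, maxValue)); // prints "false"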

OTHER TIPS

Assume that sorting is our problem.

We know that sorting with comparisons alone requires Ω(n*log(n)) time, and we can achieve O(n*log(n)) with, for example, merge sort.

However, if we limit n to some constant, say n < 10^6, then we can sort any such input in O(10^6 * log(10^6)) time, which is O(1) in Big-O terms.

The bottom line is: if you want to measure performance in terms of Big-O notation, you cannot assume any size limit on the input.
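
Tying this back to the duplicate question: without any bound on the values, a comparison-based approach still gives an O(n*log(n)) solution. A minimal sketch (not from the original answers) sorts a copy of the array and scans adjacent elements:

import java.util.Arrays;

public class SortedDuplicateCheck
{
    // O(n*log(n)) duplicate check: sort a copy, then compare neighbouring elements.
    public static boolean hasDuplicates(int[] arr)
    {
        int[] copy = Arrays.copyOf(arr, arr.length); // leave the caller's array untouched
        Arrays.sort(copy);                           // comparison sort, O(n*log(n))

        for (int i = 1; i < copy.length; i++)
        {
            if (copy[i] == copy[i - 1])
            {
                return true; // equal neighbours in sorted order => duplicate
            }
        }
        return false; // all elements are distinct
    }
}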

Licensed under: CC-BY-SA with attribution
Not affiliated with StackOverflow