You have the solution right in front of you. As you state, if the array is unsorted, finding duplicates is O(N^2). But if the array is sorted, you can do it in O(N). So sort the array first, which can be done in O(N.Log(N)).
An algorithm that first sorts and then finds the duplicates can thus be done in O(N.Log(N) + N), which is O(N.Log(N)).
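As a sketch (in Python, since the question doesn't specify a language), the sort-then-scan approach might look like this — the function name is my own invention:

```python
def find_duplicates_sorted_first(arr):
    """Sort a copy (O(N log N)), then compare adjacent elements (O(N))."""
    dupes = set()
    s = sorted(arr)  # sorting brings equal elements next to each other
    for i in range(len(s) - 1):
        if s[i] == s[i + 1]:
            dupes.add(s[i])
    return dupes
```

The single pass afterwards works because sorting guarantees that any equal elements end up adjacent.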
UPDATE:
As Amir points out: you can use a hash table. Since inserting and searching in a hash table is O(1), you can do this in a single loop, yielding O(N) time complexity.
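A minimal sketch of the hash-table variant (again using a Python set, which is hash-based; average-case O(1) per insert/lookup):

```python
def find_duplicates_hashed(arr):
    """Single pass with a hash set: O(1) average insert/lookup, O(N) total."""
    seen = set()
    dupes = set()
    for x in arr:
        if x in seen:   # already encountered -> it's a duplicate
            dupes.add(x)
        else:
            seen.add(x)
    return dupes
```

Note this trades the sort for O(N) extra memory, and O(N) here is average-case; hash collisions can degrade it in the worst case.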
However, your question is about "limited input data". So if you limit your input data to sorted arrays, you can reduce the complexity to O(N). To be precise: if you know that the array is sorted (this is your limitation), then a single loop that compares each element to its successor will find the duplicates.
If the array is not sorted, you need an outer loop over all elements but the last, and an inner loop over all the remaining elements. If the array is sorted, you don't need the inner loop, just compare to the next element. That is the "reduction" of the algorithm, resulting in a "reduction" of O(N^2) to O(N).
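To make the "reduction" concrete, here is a sketch of both versions side by side (function names are mine):

```python
def find_duplicates_nested(arr):
    """Unsorted input: outer loop over all elements but the last,
    inner loop over the remaining elements -> O(N^2)."""
    dupes = set()
    for i in range(len(arr) - 1):
        for j in range(i + 1, len(arr)):
            if arr[i] == arr[j]:
                dupes.add(arr[i])
    return dupes

def find_duplicates_presorted(arr):
    """Input known to be sorted: the inner loop collapses to a single
    comparison with the next element -> O(N)."""
    return {arr[i] for i in range(len(arr) - 1) if arr[i] == arr[i + 1]}
```

The second function is the first one with the inner loop replaced by one comparison, which is exactly the reduction described above.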