To follow up on @Rob's point:
There is a theoretical limit on the efficiency of comparison-based sorting algorithms, of which heapsort is one: no comparison-based sort can have a runtime better than Ω(n log n) in the average case. Since heapsort runs in Θ(n log n), it is asymptotically optimal, and there can't be an O(n) average-time variant (at least, not a comparison-based one). The proof is information-theoretic: without doing at least Ω(n log n) comparisons, there is no way to reliably distinguish the actual input permutation from any of the other n! − 1 possible permutations.
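To see where the number comes from, here's a quick Python sketch of the counting argument: a comparison sort's decision tree must have at least n! leaves (one per permutation), so its depth is at least log2(n!), which grows like n log2 n by Stirling's approximation.

```python
import math

def min_comparisons(n):
    """Information-theoretic lower bound on worst-case comparisons:
    the decision tree needs at least n! leaves, one per permutation,
    so its depth must be at least ceil(log2(n!))."""
    return math.ceil(math.log2(math.factorial(n)))

# log2(n!) tracks n*log2(n), which is why the bound is Ω(n log n):
for n in (10, 100, 1000):
    print(n, min_comparisons(n), round(n * math.log2(n)))
```

For example, `min_comparisons(4)` is 5, matching the known fact that sorting four elements requires five comparisons in the worst case.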
The soft heap works by starting with a binomial heap and deliberately corrupting some fraction of its keys, so that inserting and then dequeuing n elements does not necessarily produce them in sorted order. (The original paper on soft heaps mentions in its abstract that the ingenuity of the structure lies in artificially decreasing the "entropy" of the values stored in order to beat the Ω(n log n) barrier.) This is why the soft heap can support O(1)-time operations: unlike a normal heap, it doesn't always sort its contents, and therefore isn't bound by the barrier given above. Consequently, the very fact that n objects can be enqueued and dequeued from a soft heap in O(n) time immediately tells you that you cannot use a soft heap to speed up heapsort.
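For contrast, here's what an *exact* heap does, sketched with Python's `heapq`: push everything on, pop everything off, and the output comes back sorted. That sorting behavior is precisely what chains an ordinary heap to the Ω(n log n) bound, and precisely what a soft heap's corrupted keys give up.

```python
import heapq

def heap_sort(items):
    """Heapsort via an exact binary heap: heapify is O(n), but each of
    the n pops costs O(log n), for Θ(n log n) total. An exact heap has
    to pay this, because its pops come out in sorted order."""
    heap = list(items)
    heapq.heapify(heap)                                     # O(n)
    return [heapq.heappop(heap) for _ in range(len(heap))]  # n pops, O(log n) each

print(heap_sort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```

A soft heap supports the same push/pop interface in O(n) total time only because the corresponding sequence of pops is *not* guaranteed to be sorted.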
More generally, there is no way to build a sorting algorithm out of any comparison-based data structure unless the total work done with that data structure is at least Ω(n log n) on average. For example, this earlier question explains why you can't convert a binary heap to a BST in O(n) time: doing so would let you sort in O(n) time using only comparisons (build the heap in O(n) time, convert it to a BST in O(n) time, then do an inorder traversal in O(n) time to recover the sorted sequence).
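The three-step pipeline from that argument can be sketched as follows. Note the middle step here uses ordinary BST insertion, which is O(n log n) on average (and O(n²) worst case for an unbalanced tree); the whole point is that a *hypothetical* O(n) heap-to-BST conversion would make this entire pipeline an O(n) comparison sort, contradicting the lower bound. The `Node`/`bst_insert` names are just illustrative.

```python
import heapq

class Node:
    """Minimal unbalanced BST node."""
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def bst_insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    else:
        root.right = bst_insert(root.right, key)
    return root

def inorder(root, out):
    if root is not None:
        inorder(root.left, out)
        out.append(root.key)
        inorder(root.right, out)
    return out

def sort_via_heap_and_bst(items):
    heap = list(items)
    heapq.heapify(heap)              # step 1: build heap, O(n)
    root = None
    for key in heap:                 # step 2: heap -> BST. Real cost: O(n log n) avg.
        root = bst_insert(root, key) #   An O(n) conversion here would break the bound.
    return inorder(root, [])         # step 3: inorder traversal, O(n)
```

Steps 1 and 3 really are linear; only step 2 carries the superlinear cost, which is exactly where the lower bound says it must live.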
Hope this helps!