To have any hope, you would need to run your shuffling program over and over, always starting with the same initial deck, and compare the results.
If it is completely fair, you expect that any given card has a 1-in-52 chance of ending up in any spot after shuffling.
If there is bias, you'd see that it ends up in some spots more frequently than others.
Of course, some variation from the ideal 1-in-52 is expected.
How much variation is acceptable can be quantified with standard statistical tests, e.g. a chi-squared test against the uniform distribution at a chosen confidence level (95%? 99%?).
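As a sketch of that check in Python (using `random.shuffle` as a stand-in for the shuffle under test; the names and trial count are made up):

```python
import random
from collections import Counter

# Stand-in for the shuffle you actually want to test.
def shuffle_under_test(deck):
    deck = list(deck)
    random.shuffle(deck)
    return deck

TRIALS = 52_000
DECK = list(range(52))  # card 0 plays the role of a chosen card

# Count how often card 0 lands in each of the 52 positions.
position_counts = Counter()
for _ in range(TRIALS):
    shuffled = shuffle_under_test(DECK)
    position_counts[shuffled.index(0)] += 1

expected = TRIALS / 52  # the ideal 1-in-52 rate per position
# Pearson chi-squared statistic against the uniform expectation.
chi2 = sum((position_counts[p] - expected) ** 2 / expected for p in range(52))
# With 51 degrees of freedom, a value far above ~69 (the 95% critical
# value) would suggest the shuffle is biased.
print(f"chi-squared: {chi2:.1f}")
```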
And even if you got a perfect distribution, that doesn't mean your shuffling is random.
(Imagine a shuffle algorithm that simply rotates the deck by one card on each successive shuffle... over 52 shuffles it would produce perfect 1-in-52 results, but it wouldn't be random at all.)
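That counterexample is easy to demonstrate concretely. A sketch, with made-up names:

```python
# A deliberately bad "shuffle" that just rotates the deck one position.
# Across 52 successive calls every card visits every position exactly
# once, so the marginal 1-in-52 distribution is perfect -- yet the
# output is entirely predictable.
def rotate_shuffle(deck):
    return deck[1:] + deck[:1]

deck = list(range(52))
positions_of_card_0 = set()
for _ in range(52):
    deck = rotate_shuffle(deck)
    positions_of_card_0.add(deck.index(0))

print(len(positions_of_card_0))  # → 52: card 0 occupied every position
```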
Another aspect to look for is correlation between cards.
For example, the final location of the Ace-of-Spades and the King-of-Spades should be completely uncorrelated.
A bad shuffling algorithm could move those two cards around together, resulting in high correlation.
Checking the correlation of every card against every other card means examining 52·51/2 = 1,326 pairs, which is more work but still a simple algorithm.
I think the end result is that you cannot prove your algorithm is a fair/good shuffle.
You can only set up various tests that look for "bad" shuffles (non-uniform distribution, high correlation between card positions). Passing all the bad-shuffle tests doesn't mean your algorithm isn't flawed in some other way the tests didn't cover.
But it does give you increasing confidence.
The ideal test would be to enumerate every possible outcome directly, but for a 52-card deck (52! orderings, roughly 8×10^67) that is computationally infeasible.
As the Coding Horror blog entry points out, a far better way may be to run your algorithm against very small decks of cards (the article uses a 3-card deck), and carefully evaluate those results. With only 3 cards there are just 3! = 6 possible orderings, so it's easy to trace every path through the algorithm and see whether all outcomes are equally likely.
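The small-deck idea can be sketched by enumerating every sequence of random choices the algorithm could make, rather than sampling. Here a correct Fisher-Yates shuffle is compared against the classic buggy "swap every position with any position" variant (the function names are mine, not from the article):

```python
from itertools import product
from collections import Counter

N = 3  # small deck: all 3! = 6 orderings are easy to enumerate

def fisher_yates(deck, choices):
    # choices supplies j for i = N-1 down to 1, with j in range(i+1)
    deck = list(deck)
    for i in range(N - 1, 0, -1):
        j = choices[N - 1 - i]
        deck[i], deck[j] = deck[j], deck[i]
    return tuple(deck)

def naive_shuffle(deck, choices):
    # buggy variant: swaps every position with *any* position
    deck = list(deck)
    for i in range(N):
        j = choices[i]
        deck[i], deck[j] = deck[j], deck[i]
    return tuple(deck)

# Enumerate every possible sequence of random choices for each algorithm.
fy_counts = Counter(
    fisher_yates(range(N), c)
    for c in product(range(3), range(2))  # 3 * 2 = 6 equally likely paths
)
naive_counts = Counter(
    naive_shuffle(range(N), c)
    for c in product(range(3), repeat=3)  # 3^3 = 27 equally likely paths
)

print(fy_counts)     # each of the 6 orderings reached by exactly 1 path
print(naive_counts)  # 27 paths spread over 6 orderings: cannot be uniform
```

The naive version has 27 equally likely paths landing on 6 outcomes, and since 27 isn't divisible by 6, no amount of luck can make it fair, which is exactly the kind of structural flaw the small-deck analysis exposes.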