What makes a naive variance calculation go unstable is that you separately sum the x values (to get mean(x)) and the x^2 values, and then take the difference
var = mean(x^2) - (mean(x))^2
When the variance is small relative to the mean, those two terms are nearly equal, and the subtraction cancels away most of the significant digits.
But since the definition of variance is
var = mean((x - mean(x))^2)
You can evaluate that directly, and it is about as fast as it can get. When you don't know the mean, you have to compute it first (a second pass over the data) for stability, or fall back on the "naive" formulation that goes through the data only once at the expense of numerical accuracy.
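To see the cancellation in action, here is a small sketch (the data values are made up for illustration): shifting a tiny data set by a large offset leaves the true sample variance at 30.0, but the naive formula subtracts two numbers of order 10^18 and the result is dominated by rounding error.

```python
# Four points with true sample variance 30.0, shifted by a large offset.
data = [1e9 + x for x in (4.0, 7.0, 13.0, 16.0)]
n = len(data)
mean = sum(data) / n  # exactly 1e9 + 10 in double precision

# Naive one-pass formula: difference of two huge, nearly equal sums.
naive = (sum(x * x for x in data) - n * mean * mean) / (n - 1)

# Stable formula: subtract the mean before squaring.
stable = sum((x - mean) ** 2 for x in data) / (n - 1)

print(stable)  # 30.0
print(naive)   # far from 30.0 -- it can even come out negative
```

With IEEE-754 doubles the squared values near 10^18 are only representable to within about 128, so the naive difference is off by hundreds, while the stable formula works with small deviations and stays exact here.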
EDIT Now that you have given the "original" code, it's easy to do better (faster). As you correctly point out, the division in the inner loop is slowing you down. Try this one for comparison:
def newVariance(data, mean):
    n = 0
    M2 = 0
    for x in data:
        n = n + 1
        delta = x - mean
        M2 = M2 + delta * delta
    variance = M2 / (n - 1)
    return variance
Note - this looks a lot like the two_pass_variance algorithm from Wikipedia, except that you don't need the first pass to compute the mean since you say it is already known.
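As a quick sanity check (this example is mine, not part of the original answer), the function agrees with the standard library's statistics.variance, which also uses the n - 1 denominator and accepts a precomputed mean:

```python
import statistics

def newVariance(data, mean):
    n = 0
    M2 = 0
    for x in data:
        n = n + 1
        delta = x - mean
        M2 = M2 + delta * delta
    variance = M2 / (n - 1)
    return variance

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
mean = sum(data) / len(data)  # 5.0

print(newVariance(data, mean))          # 32/7 ~= 4.5714
print(statistics.variance(data, mean))  # stdlib gives the same value
```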