Why can I not simply set a to '001'?
x and a are of length k, so
x = { x(k-1), ..., x0 }
a = { a(k-1), ..., a0 }
If k = 3, this would be
x = { x2, x1, x0 }
a = { a2, a1, a0 }
I.e. x and a would be one of '000', '001', '010', '011', '100', '101', '110', or '111'.
So the scalar product x • a results in
x • a = (x2 AND a2) XOR (x1 AND a1) XOR (x0 AND a0)
Consequently using a = '001' results in
z = x • '001' = (x2 AND '0') XOR (x1 AND '0') XOR (x0 AND '1') = x0
So you would not get the remaining digits of x (i.e. x2 and x1) in that case. Similarly, if you use an a with more than one bit set, e.g. a = '111', you would get
z = x • '111' = (x2 AND '1') XOR (x1 AND '1') XOR (x0 AND '1') = x2 XOR x1 XOR x0
and therefore could not derive the individual digits of x from that single value. Thus, you need to perform the protocol with a = '001', a = '010', and a = '100' in order to get each digit of x.
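As a small sanity check, the scalar product and the three unit-vector queries above can be sketched in Python (the helper name dot_mod2 is mine, not part of the protocol):

```python
def dot_mod2(x: int, a: int) -> int:
    """Scalar product x . a over GF(2): bitwise AND, then XOR of all bits (parity)."""
    return bin(x & a).count("1") % 2

x = 0b101  # example secret with k = 3, i.e. x2 = 1, x1 = 0, x0 = 1

# a = '111' only reveals the parity x2 XOR x1 XOR x0, not the individual digits:
parity = dot_mod2(x, 0b111)  # 1 XOR 0 XOR 1 = 0

# The unit vectors a = '001', '010', '100' recover one digit per run:
digits = [dot_mod2(x, 1 << i) for i in range(3)]  # [x0, x1, x2] = [1, 0, 1]
```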
I feel like I would only have to run this, say, 10 times rather than running every a multiple times.
Well, for every round, you will get a correct result with a probability v. So the expected value would be
E[X] = v, if the correct digit is a '1', and
E[X] = 1 - v, if the correct digit is a '0'.
Hence, the mean value over all rounds (i.e., over all samples you take) approximates v for a '1' and 1 - v for a '0' as the number of rounds goes to infinity. This does not mean you already reach the expected value after 1 round or after 10 rounds; however, every additional round increases your confidence that the observed mean is close to the expected value.
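To see how the confidence grows with the number of rounds, here is a small simulation sketch (the names noisy_round and estimate_bit are mine, and it assumes v > 1/2 so that thresholding the mean at 1/2 makes sense):

```python
import random

def noisy_round(bit: int, v: float) -> int:
    """One round of the protocol: returns the correct bit with probability v."""
    return bit if random.random() < v else 1 - bit

def estimate_bit(bit: int, v: float, rounds: int) -> int:
    """Average many rounds; the mean approaches v for bit = 1 and 1 - v for bit = 0,
    so thresholding at 1/2 recovers the bit (assuming v > 1/2)."""
    mean = sum(noisy_round(bit, v) for _ in range(rounds)) / rounds
    return 1 if mean > 0.5 else 0

random.seed(0)
# With v = 0.9, a handful of rounds already guesses right most of the time,
# but ~1000 rounds make a wrong majority vote extremely unlikely.
guess = estimate_bit(1, 0.9, 1001)
```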