Question

I ask this question because some statements in the question "What is the 'continuity' as a term in computable analysis?" made me suspicious.

I'm an engineer, not a computer scientist, so I have logic gates rather than Turing machines in mind when I think about algebraic operations performed by devices.

I read the answer to the question "Why are computable functions continuous?" and understood it the following way:

Because the device's input is of infinite length (a decimal number with an infinite number of digits after the decimal point), the device (e.g. a Turing machine or a computer) cannot read the entire number before writing the $n$-th digit of output.

Instead, the device can only have read $m(n)$ digits of the input when it writes the $n$-th digit of output.

If the first $n$ digits of the output of some function only depend on the first $m(n)$ digits of the input, the function is continuous.

However, if I understand this argument correctly, the word "continuous" in computability theory does not mean the same thing as the word "continuous" in mathematics:

Rounding towards zero would only require reading the input until the decimal point (so $m(n)=\text{const.}$); however, the mathematical function being calculated is not "continuous" according to the mathematical definition of that term.

We could also perform a digit-wise operation ($m(n)=n$) and exchange certain digits after the decimal point; for example replace all 4s by 9s and all 9s by 4s. As far as I understand, the function being calculated is not continuous on any interval of $\mathbb{R}$ (however, it would be right-continuous on $[0,\infty)$ and left-continuous on $(-\infty,0]$).
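To make this concrete, here is a small Python sketch (my own illustration, not part of the original question) of the digit swap applied to finite decimal strings; it shows the jump at $x = 0.5$, where inputs approaching from below are mapped near $0.944\ldots$ while $0.5$ itself is fixed:

```python
# Toy illustration of the digit swap 4 <-> 9 after the decimal point.
swap = str.maketrans("49", "94")  # replace every 4 by 9 and every 9 by 4

def digit_swap(decimal_string: str) -> str:
    integer_part, _, fractional_part = decimal_string.partition(".")
    return integer_part + "." + fractional_part.translate(swap)

print(digit_swap("0.5000000"))  # 0.5000000
print(digit_swap("0.4999999"))  # 0.9444444 -- inputs just below 0.5 land near 0.944...
```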

And if I haven't made a conceptual mistake: if we use a balanced numeral system (such as the balanced ternary of the Soviet Setun computer in the 1960s) instead of the decimal system, a similar algorithm (exchanging 0s and 1s instead of 4s and 9s) would even represent a mathematical function that is not even directionally continuous (neither left- nor right-continuous) on any interval of $\mathbb{R}$.

Questions:

Does computability depend on the numeral system being used (as the example with the balanced numeral system suggests), or does the term "computable" assume a particular numeral system?

Is the observation correct that the term "continuous" does not have the same meaning in maths and CS?


Solution

If we were to use the decimal expansion to represent real numbers, your reasoning would work. But that gives us a very badly behaved notion of computability:

Proposition: Multiplication by 3 is not computable relative to the decimal representation.

Proof: Assume the input starts 0.3333333... At some point, our computation needs to start outputting something. The only sensible choices for the start of the output are "0." and "1.". In the first case we are wrong if the next digit of the input we have not yet read turns out to be a 4; in the second case, a 2 makes us wrong. Thus, we can never output a guaranteed prefix of the solution.
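To see the obstruction concretely, here is a short Python sketch (my addition, not part of the original answer) exhibiting two inputs that agree on a long decimal prefix but whose images under multiplication by 3 already require different first output digits:

```python
# After reading only the prefix 0.3333333333, both continuations below are
# still possible, yet they force different integer parts of the output.
from decimal import Decimal, getcontext

getcontext().prec = 30

prefix = "0." + "3" * 10        # the digits the machine has read so far
low  = Decimal(prefix + "2")    # the next, unread digit turns out to be a 2
high = Decimal(prefix + "4")    # ... or it turns out to be a 4

print(3 * low)   # 0.99999999996  -> the output must start with "0."
print(3 * high)  # 1.00000000002  -> the output must start with "1."
```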

Using a different base would yield a different notion of computability, but none of these is suitable. Some approaches that all yield the same, well-behaved notion of computability are:

  1. Code a real $x$ as a sequence of rationals $(q_n)_{n \in \mathbb{N}}$ such that $|x - q_n| < 2^{-n}$ (a sketch using this representation follows the list).
  2. Code a real via a signed digit representation, using $\{-1,0,1\}$.
  3. Code a real $x$ as a sequence of rational intervals $(I_n)_{n \in \mathbb{N}}$ with $\bigcap_{n \in \mathbb{N}} I_n = \{x\}$.
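As a concrete, hedged illustration of representation 1, here is a short Python sketch (my own, not from the original answer) in which a real $x$ is a function $n \mapsto q_n$ with $|x - q_n| < 2^{-n}$; multiplication by 3 then becomes computable simply by requesting a bit more precision from the input than the output promises:

```python
from fractions import Fraction

def one_third(n: int) -> Fraction:
    """A code for x = 1/3: a decimal approximation with error < 10**-(n+1) < 2**-n."""
    scale = 10 ** (n + 1)
    return Fraction(scale // 3, scale)

def times_three(code):
    """Turn a code for x into a code for 3*x:
    |3*x - 3*q_{n+2}| < 3 * 2**-(n+2) < 2**-n."""
    return lambda n: 3 * code(n + 2)

y = times_three(one_third)
print(y(10), "~", float(y(10)))   # a rational within 2**-10 of 3 * (1/3) = 1
```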

When we speak about computability of a function on the reals without specifying what kind of representation we are using, we mean one of these (or another equivalent one). This is just like how we do not always point out that we are using the Euclidean topology on the reals; that is simply the standard case. We can now state:

Theorem: The functions on the reals which are computable (wrt the standard representation) relative to some oracle are exactly the continuous functions (wrt the Euclidean topology).

Coming back to rounding, this shows that perfectly exact rounding cannot work. However, we can circumvent this by not restricting ourselves to single-valued functions. For example, the following task is computable:

Given a real number $x \in [0,1]$, output either $0$ or $1$. If $x < 0.501$, then $0$ is an acceptable solution and if $x > 0.499$, then $1$ is an acceptable solution.

If the input to the task above is from $[0.499,0.501]$, then the answer we get depends not only on the real we are looking at, but also on the particular code for that real that our algorithm reads. That can make reasoning about algorithms slightly more cumbersome, but we really cannot avoid it.
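A minimal Python sketch of this task (my own illustration, again assuming representation 1 from the list above), which also exhibits the code-dependence just described:

```python
from fractions import Fraction

def soft_round(code) -> int:
    """Return 0 or 1; 0 is acceptable whenever x < 0.501, 1 whenever x > 0.499."""
    q = code(10)                     # 2**-10 < 0.001, so |x - q| < 0.001
    if q <= Fraction(1, 2):
        return 0                     # then x < q + 0.001 <= 0.501, so 0 is acceptable
    return 1                         # then x > q - 0.001 > 0.499, so 1 is acceptable

# Two different codes for the same real x = 0.5 ...
code_below = lambda n: Fraction(1, 2) - Fraction(1, 2 ** (n + 1))
code_above = lambda n: Fraction(1, 2) + Fraction(1, 2 ** (n + 1))

print(soft_round(code_below), soft_round(code_above))   # 0 1 -- the answer depends on the code
```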
