Question

Let's take as an example the 3d → 2d reduction: What's the cost of simulating a 3d cellular automaton by a 2d cellular automaton?

Here are a few more specific questions:

  1. Which kinds of algorithms will have their time complexity changed, and by how much?

  2. What would be the basic idea for the encoding; how is a 3d grid efficiently (or not efficiently…) mapped to a 2d grid? (The challenge seems to be achieving communication between two cells that were originally neighbors on the 3d grid but are no longer neighbors on the 2d grid.)

  3. In particular, I'm interested in the complexity drift for exponential-complexity algorithms (which I guess remain exponential whatever the dimension; is that the case?).

Note: I'm not interested in low complexity classes for which the chosen I/O method has an influence on complexities. (Maybe it's best to assume that the I/O method is dimensionless: done locally on one specific cell during a variable number of time steps.)


Some context: I'm interested in parallel local graph rewriting, but those graphs are closer to 3d (or maybe ωd…) grids than to 2d grids, so I'd like to know what to expect from a hardware implementation on a 2-dimensional silicon chip.


Solution

I will explain pieces of this article: Simulating 3D Cellular Automata with 2D Cellular Automata.

Let's start with the encoding of the grid, a function $t:\mathbb Z^3 → \mathbb Z^2$. Intuitively, $t$ can't preserve distances, because the number of cells at distance less than $R$ from the origin is not the same in the two grids: you need to embed these roughly $R^3$ cells into a region of $\mathbb Z^2$ containing the same number of cells, which is roughly of the form $r^2$, and then you must have $r > R$ (in fact $r$ of the order of $R^{3/2}$). These $r$ and $R$ play a role a bit like the radius of the neighborhood you find in any cellular automaton.
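To make that counting argument concrete (this is just a back-of-the-envelope sketch, not the article's argument): the ball of radius $R$ in $\mathbb Z^3$ contains on the order of $R^3$ cells, while the ball of radius $r$ in $\mathbb Z^2$ contains on the order of $r^2$ cells, so if $t$ is injective and sends the first ball into the second,

$$ r^2 \gtrsim R^3 \quad\Longrightarrow\quad r \gtrsim R^{3/2}. $$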

So the transformation of the article makes things bigger by a power of at least $3/2$: in the worst case, points at distance $d$ in the 3D grid must end up at distance of the order of $d^{3/2}$ in the 2D grid, and you cannot do better than that. Unfortunately the embedding given in the article only achieves $O(d^3)$.
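If you want to play with this, here is a minimal Python sketch of a toy flattening of a finite $n\times n\times n$ block (stacking the $z$-slices side by side). It is not the article's encoding $t$ and its distortion behaves differently; it is only meant to show how cells that are neighbors in $\mathbb Z^3$ can end up far apart once flattened to $\mathbb Z^2$:

```python
# Toy layout of a finite n*n*n block of Z^3 into Z^2, obtained by laying the
# z-slices side by side along the x axis.  This is NOT the encoding from the
# article; it only illustrates that a flattening sends some 3D-neighbors far
# apart in 2D.

def slice_stack(x, y, z, n):
    """Map cell (x, y, z) of an n*n*n block to a cell of Z^2."""
    return (x + n * z, y)

def manhattan(p, q):
    """Manhattan distance between two cells (works for any dimension)."""
    return sum(abs(a - b) for a, b in zip(p, q))

n = 10
a, b = (3, 4, 5), (3, 4, 6)                               # z-neighbors in Z^3
print(manhattan(a, b))                                    # 1 in Z^3
print(manhattan(slice_stack(*a, n), slice_stack(*b, n)))  # n = 10 in Z^2
```

Here the worst stretch between neighbors is only $n$, but that is only possible because the block is finite and fixed in advance, which an encoding of all of $\mathbb Z^3$ cannot assume.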

However, and this is a very important remark, you don't obtain the same neighborhood as in the first automaton, which is why I said "a bit like" above. To quote the article:

it is obvious that there will be cells that are very close in $\mathbb Z^3$ and such that [their encoding will be] arbitrarily far in $\mathbb Z^2$

The same goes for time: simulating one step of the 3D automaton can take arbitrarily long in the 2D one. Note that the encoding is really a full simulation: in the author's construction, the 2D CA even simulates the computation of the function $t:\mathbb Z^3 → \mathbb Z^2$ itself.
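A rough way to quantify the per-step cost (my own hedged estimate, assuming information in the 2D CA travels at most one cell per time step): if two cells $c, c'$ that are neighbors in $\mathbb Z^3$ have images $t(c), t(c')$ at distance $D$ in $\mathbb Z^2$, then the 2D automaton needs at least $D$ steps before those images can influence each other. So simulating one 3D step over a region $S$ costs at least

$$ \max\{\, d_{\mathbb Z^2}(t(c), t(c')) : c, c' \in S \text{ neighbors in } \mathbb Z^3 \,\} $$

2D steps, and by the quoted remark this quantity is unbounded as $S$ grows.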

It is a safe bet to say that the time complexity of any algorithm run on a 3D CA will explode when it is run on the 2D encoding of that 3D CA. The author says that in his simulation the slowdown cannot be bounded by any function. And I would say the explosion is at least exponential in general, because the propagation time of information depends on the position.

The idea of running algorithms on cellular automata already seems a bit weird to me, but that's personal. Still, that's nothing compared to the idea of implementing a cellular automaton on a silicon chip, or is it just me?

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange