Question

Briefly

Can a neural network emulate factorial decomposition (or some other method) to produce a list permutation given the permutation's unique index?

Application

I have a list of 10 things, and what they are is irrelevant. What I care about is that my 10 things can be placed into 3628800 (or 10!) unique orders, because then I can express any list order of my 10 things using an unsigned integer and factorial decomposition:

Order 0: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9
Order 1: 0, 1, 2, 3, 4, 5, 6, 7, 9, 8
Order ....
Order 3628799: 9, 8, 7, 6, 5, 4, 3, 2, 1, 0

This allows for the parallel distribution of analysis on different list orders of my 10 things.
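
For instance, with n = 4 elements and index i = 23 (the same case the JavaScript further down prints), the decomposition can be worked by hand:

23 = 3*3! + 2*2! + 1*1! + 0*0!   ->  factorial digits [3, 2, 1, 0]

digit 3: take index 3 of [0, 1, 2, 3]  ->  3, leaving [0, 1, 2]
digit 2: take index 2 of [0, 1, 2]     ->  2, leaving [0, 1]
digit 1: take index 1 of [0, 1]        ->  1, leaving [0]
digit 0: take index 0 of [0]           ->  0

giving permutation [3, 2, 1, 0].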

A common example being the travelling salesman problem:

1. I give 500 different computers each a range of unsigned integers:
   0    -> 7258  for computer 0, 
   7258 -> 14516 for computer 1, 
   etc.

2. Each computer first calculates the list order from its unsigned integer 
   index by using factorial decomposition.
   e.g. Order 1 -> 0, 1, 2, 3, 4, 5, 6, 7, 9, 8

3. The total distance of the tour that visits the cities in that order is calculated.

4. The shortest distance found by each computer is collected, and the shortest 
   of those is taken, leaving us with a single unsigned integer index that 
   describes the shortest possible permutation of cities.

The same process can be used to brute-force virtually any boundable error surface, though it often demands far more computational power than is feasible.
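
As a minimal sketch of the range assignment in step 1 (in JavaScript, assuming 500 computers and 10! orders; the last range simply comes up slightly short):

function rangeFor(computer) {
   var total = 3628800;                      // 10!
   var workers = 500;
   var chunk = Math.ceil(total / workers);   // 7258 indices per computer

   var start = computer * chunk;
   var end = Math.min(start + chunk, total); // exclusive upper bound
   return [start, end];
}

console.log(rangeFor(0));   // [ 0, 7258 ]
console.log(rangeFor(1));   // [ 7258, 14516 ]
console.log(rangeFor(499)); // [ 3621742, 3628800 ]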

Recursive Algorithmic Solution

We can calculate the N-th permutation of any fixed-size list using factorial decomposition (granted, we will need big-integer support for longer lists). The algorithm is outlined here in PHP, and provided below in JavaScript for clarity:

function ithPermutationOfNElements (n, i)
{
   var j, k = 0;
   var fact = [];
   var perm = [];

   // compute factorial numbers
   fact[k] = 1;
   while (++k < n)
      fact[k] = fact[k - 1] * k;

   // compute factorial code
   for (k = 0; k < n; ++k)
   {
      perm[k] = Math.floor(i / fact[n - 1 - k]);
      i = i % fact[n - 1 - k];
   }

   // readjust values to obtain the permutation
   // start from the end and check if preceding values are lower
   for (k = n - 1; k > 0; --k)
      for (j = k - 1; j >= 0; --j)
         if (perm[j] <= perm[k])
            perm[k]++;

   return perm;
}
console.log(ithPermutationOfNElements(4, 23)); // [ 3, 2, 1, 0 ]
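
For the 10-element case from the question (no big-integer support needed yet, since 10! fits comfortably in a double):

console.log(ithPermutationOfNElements(10, 3628799)); // [ 9, 8, 7, 6, 5, 4, 3, 2, 1, 0 ]

which matches Order 3628799 above.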

Neural Network Solution?

Can any neural network architecture and training combination emulate this function, given i as its only input neuron and n output neurons representing each element of the permutation?


Solution

A neuron can operate as a logic gate, and thus a neural network can perform any calculation a computer can. However, in that sense it simply emulates logic gates inefficiently, with the weights set by hand rather than learned, so it is not a good solution for this problem.

In general, neural networks are good with 'real' or 'natural' data. They also generally operate on floats, not integers. So if there is a pattern to be learnt, a NN might learn it, but the answer you get out will be something like 0.783267. You could then denormalize this to 89743, but it is unlikely to be exactly right. For your requirement, an answer that is one integer off is completely wrong.
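
To illustrate with a made-up network output:

// hypothetical raw network output in [0, 1]
var raw = 0.783267;

// denormalize to an index in [0, 10! - 1]
var index = Math.round(raw * 3628799);

// an error of just 1 in `index` already selects a different
// permutation, so 'close' is worth nothing here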

By contrast, for a face recognition NN returning 0.787 or 0.786 for a particular image, both could be considered correct.

Your problem is better suited to a traditional, procedural code solution, with only one correct answer for each input. Generally in AI, you are looking for a correct answer within a certain range or probability distribution.

Regarding implementing algorithms with NNs:
You can have many neurons acting as logic gates: neuron NAND gates, flip-flops, adders, multipliers, latches, and so on, until you have essentially built a Turing machine out of explicitly hand-wired neurons. It will in no way resemble a normal NN as used by the majority of the AI world. Besides, you already have a perfectly good Turing machine right in front of you.

Here is the code for a neural network AND gate in MATLAB. No training is required: I've used configure instead of train, and simply set the weights manually. Building the other logic gate types the same way, you could assemble an entire Turing machine.

% feed-forward net with a single hidden neuron
and = feedforwardnet(1);

% all four input combinations and the AND truth table
truthTable = [0 0 1 1; 0 1 0 1];
and_out = [0 0 0 1];

% configure sizes the net for this data (no training performed)
and = configure(and, truthTable, and_out);

% hand-picked weights and biases
vals = [-2 -2 -1  2 0];

and.IW{1} = vals(1:2);  % input1 to hidden, input2 to hidden
and.LW{2,1} = vals(3);  % hidden to output
and.b{1} = vals(4);     % bias to hidden
and.b{2} = vals(5);     % bias to output

y = sim(and, truthTable)
round(y)
mse = mean((y - and_out) .^ 2)


y =
    0.0000    0.0180    0.0180    0.9820
ans =
     0     0     0     1
mse =
   2.4263e-04

OTHER TIPS

A little-known fact is that recurrent neural nets are Turing complete, and can thus perform any computation a computer can (see the result by Siegelmann).

This means neither (a) that you can easily find the necessary weights with a learning algorithm, nor (b) that a feed-forward net, which is probably what you are looking at, can do it.

Nevertheless, this seems like a task you don't want to do with a neural net.

Generally, neural networks are universal function approximators, so in theory yes.

More specifically, notice that for a particular (fixed) value of i, the neural network that solves it is trivial; in fact, it requires no hidden nodes or activation functions, since it is a linear problem.

As a brute-force, naive solution to the general problem with unfixed i: a neural network large enough to encode all 10! possible linear encodings, with a hidden layer essentially acting as a mux driven by the i input, would solve the problem. More efficient networks probably exist, and it would be interesting to try a recurrent architecture on this problem.
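
To make that mux idea concrete, here is a hand-weighted toy sketch in JavaScript for n = 3 (so 3! = 6 permutations), assuming an integer-valued input i. Triangular ReLU bumps play the role of the hidden layer, and the stored permutations are the output-layer weights:

function relu(x) { return Math.max(0, x); }

// fires 1.0 exactly when i === j and 0.0 at every other integer,
// since relu(i - j) + relu(j - i) equals |i - j|
function bump(i, j) { return relu(1 - relu(i - j) - relu(j - i)); }

// the 6 permutations of [0, 1, 2] in factorial-decomposition order;
// these constants are the output-layer weights
var perms = [
   [0, 1, 2], [0, 2, 1], [1, 0, 2],
   [1, 2, 0], [2, 0, 1], [2, 1, 0]
];

function muxNet(i) {
   var out = [0, 0, 0];
   for (var j = 0; j < perms.length; ++j) {
      var h = bump(i, j);            // hidden activation: 1 only when i === j
      for (var k = 0; k < 3; ++k)
         out[k] += h * perms[j][k];  // linear readout of the stored permutation
   }
   return out;
}

console.log(muxNet(5)); // [ 2, 1, 0 ]

Scaling this up to n = 10 would mean 10! hidden units, which is exactly why a more efficient (e.g. recurrent) architecture would be worth exploring.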

In any case, while a solution certainly exists, the better question is whether this is a good way to solve it. If a problem boils down to some simple pseudocode, I would avoid a neural network implementation unless it is for academic purposes.

I think it might be possible. Input your data and a single number (starting at 0 or 1), and have the network produce a single number representing the element number (round it). Add that element to your list. Then do it again, except increase the number you feed to the network by 1 (i.e. the number that represents the position in the list that you want to fill).

A recurrent neural network would be ideal here. But I'm still not certain whether the underlying function can be learned or approximated effectively; I think it could be.
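
A rough sketch of that loop in JavaScript: the pickIndex stub below is the piece the network would have to learn; here it is filled in with the exact factorial arithmetic so the loop actually runs.

function factorial(m) {
   var f = 1;
   for (var x = 2; x <= m; ++x) f *= x;
   return f;
}

// the network's job, one step at a time; stubbed with exact arithmetic
function pickIndex(i, remainingCount) {
   return Math.floor(i / factorial(remainingCount - 1));
}

function decodeStepwise(n, i) {
   var remaining = [];
   for (var k = 0; k < n; ++k) remaining.push(k);

   var perm = [];
   while (remaining.length > 0) {
      var d = pickIndex(i, remaining.length);
      i = i % factorial(remaining.length - 1);
      perm.push(remaining.splice(d, 1)[0]); // remove the chosen element
   }
   return perm;
}

console.log(decodeStepwise(4, 23)); // [ 3, 2, 1, 0 ]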

Licensed under: CC-BY-SA with attribution