Problem

After writing a simple feedforward neural-network class with backpropagation in Java, I'm trying a shadow-caster NN that checks whether a quad (or triangle) casts a shadow on a vertex.

Inputs (normalized, 8 + 2*targetNumber total):

  • Point-light coordinates xL and yL
  • Coordinates of the triangle (or quad) occluder: xt1, xt2, xt3, yt1, yt2, yt3
  • Target vertex coordinates xT(i), yT(i)

Outputs (normalized, targetNumber total):

  • For point i: shadowed (1.0f) or not shadowed (0.0f)
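
For concreteness, here is a minimal sketch of how one such normalized input row could be assembled (names like buildInput are illustrative placeholders, not my actual class), assuming every coordinate is already scaled to [0, 1]:

```java
// Illustrative sketch only: builds one input row (and notes the expected
// output) for target vertex i, matching the layout described above.
public class ShadowInputEncoder {

    /**
     * Input layout per target vertex:
     * [xL, yL, xt1, xt2, xt3, yt1, yt2, yt3, xT(i), yT(i)]
     * All values are assumed to be normalized to [0, 1] beforehand.
     */
    public static float[] buildInput(float xL, float yL,
                                     float[] xt, float[] yt,  // 3 occluder vertices
                                     float xTi, float yTi) {
        return new float[] {
            xL, yL,
            xt[0], xt[1], xt[2],
            yt[0], yt[1], yt[2],
            xTi, yTi
        };
    }

    // Expected output for this row: 1.0f if vertex i is shadowed, 0.0f otherwise.
}
```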

Question: How many neurons would it need to "think"? Do I have to try all combinations of neurons per hidden layer, number of hidden layers, minimum training iterations, ...? Is there a way to foresee that?

Question: What about the performance of this approach versus a usual raytracer for millions of vertices? (A NN seems to be more embarrassingly parallel than a raytracer.)



Solution

Question: What about the performance of this approach versus a usual raytracer for millions of vertices? (A NN seems to be more embarrassingly parallel than a raytracer.)

The problem you are trying to solve does not seem to be a problem for a machine learning model. Such methods should be applied to complex, statistical data for which finding a good algorithmic solution is too hard for a human being. An easy problem like this one (easy in the sense that a highly efficient algorithm exists), which you can analyze deeply (as it is just 2- or 3-dimensional data), should be approached with classical methods, not neural networks (nor any other machine learning model).

Even if you tried, your representation of the problem is rather poorly prepared: the network won't learn the "idea of a shadow" from such data, because there are too many models representable by a neural network that are consistent with it. And even the efficiency of the trained network does not seem comparable with the "algorithmic" alternatives.

To sum up, there is no reason to use such methods; in fact, using them:

  • won't work well, due to the bad representation of the problem (and I do not see a good representation off the top of my head)
  • even if it worked, it would not be efficient
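
For reference, the kind of classical check meant above is tiny in 2D: a vertex is in shadow exactly when the segment from the light to the vertex crosses an edge of the occluding triangle. A rough sketch (all names illustrative, degenerate collinear cases ignored):

```java
// Sketch of the classical alternative: segment-vs-triangle-edge intersection.
public class ClassicShadowTest {

    // 2D cross product of vectors (ax, ay) and (bx, by).
    private static float cross(float ax, float ay, float bx, float by) {
        return ax * by - ay * bx;
    }

    // True if segments AB and CD properly intersect (collinear overlaps ignored).
    private static boolean segmentsIntersect(float ax, float ay, float bx, float by,
                                             float cx, float cy, float dx, float dy) {
        float d1 = cross(bx - ax, by - ay, cx - ax, cy - ay);
        float d2 = cross(bx - ax, by - ay, dx - ax, dy - ay);
        float d3 = cross(dx - cx, dy - cy, ax - cx, ay - cy);
        float d4 = cross(dx - cx, dy - cy, bx - cx, by - cy);
        return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
    }

    /** True if the segment light -> target vertex crosses any edge of the triangle. */
    public static boolean isShadowed(float xL, float yL, float xT, float yT,
                                     float[] xt, float[] yt) {
        for (int i = 0; i < 3; i++) {
            int j = (i + 1) % 3;
            if (segmentsIntersect(xL, yL, xT, yT, xt[i], yt[i], xt[j], yt[j])) {
                return true;
            }
        }
        return false;
    }
}
```

A loop over vertices calling such a test is already embarrassingly parallel per vertex, which is one more reason the trained network is unlikely to win on efficiency.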

Question: How many neurons would it need to "think"? Do I have to try all combinations of neurons per hidden layer, number of hidden layers, minimum training iterations, ...? Is there a way to foresee that?

As I said before, it probably won't learn this kind of data well, whatever parameters you use. But for future reference: for "simple" neural networks you practically always need exactly one hidden layer. More hidden layers won't actually help in most cases, due to the vanishing-gradient phenomenon (for which deep learning is a successful fix).

There are some rules of thumb for the hidden layer size, but no truly mathematical answers. One good option is to use a large number of hidden units and add strong regularization, which prevents the network from overfitting as a result of a too-large hidden layer.

Regarding the number of iterations: you should never use it as a parameter. The network should be trained until it meets some well-defined stopping criterion, and a fixed number of iterations is not one of them. The most classic and well-working criterion is to measure the generalization error (the error on an independent validation set) and stop the learning process when it starts to rise.
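
A rough sketch of that stopping criterion, with a hypothetical Network interface standing in for your own class (trainOneEpoch and error are assumed names, not a real API):

```java
// Early stopping: keep training while the validation error keeps improving.
interface Network {
    void trainOneEpoch(float[][] inputs, float[][] targets);
    float error(float[][] inputs, float[][] targets);
}

public class EarlyStoppingTrainer {

    public static void train(Network net,
                             float[][] trainX, float[][] trainY,
                             float[][] valX, float[][] valY,
                             int patience) {
        float bestValError = Float.MAX_VALUE;
        int epochsWithoutImprovement = 0;

        while (epochsWithoutImprovement < patience) {
            net.trainOneEpoch(trainX, trainY);        // one pass over the training set
            float valError = net.error(valX, valY);   // generalization estimate

            if (valError < bestValError) {
                bestValError = valError;              // still improving: reset the counter
                epochsWithoutImprovement = 0;
            } else {
                epochsWithoutImprovement++;           // flat or rising: move toward stopping
            }
        }
    }
}
```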
