Question

I have created some unique shapes, so-called "letters", for a custom alphabet, all of which fit into 9x9 pixels. Instead of drawing countless more by hand, I tried to combine two solutions I saw in a relevant subreddit and decided to let a neural network generate some additional examples.

Deconstructing the problem: letters are formed on a graph of 25 nodes (always arranged in a 5x5 square) by connecting only adjacent nodes; no diagonal or non-adjacent edges are present.

For the neural network input, I drew these runes into 9x9 blocks, where each row has 5 "pixels", one for each node, and 4 more for indicating connections.
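To make the encoding concrete, here is a small sketch of how I understand it (the `rasterize` helper and the exact pixel convention are my own assumptions for illustration): node (r, c) of the 5x5 grid maps to pixel (2r, 2c) of the 9x9 block, and an edge between two adjacent nodes lights the pixel midway between them.

```python
def rasterize(edges):
    """Turn edges between adjacent 5x5 grid nodes into a 9x9 bitmap.

    Assumed convention: node (r, c) -> pixel (2r, 2c); the connection
    pixel between two adjacent nodes sits at (r1 + r2, c1 + c2).
    """
    grid = [[0] * 9 for _ in range(9)]
    for (r1, c1), (r2, c2) in edges:
        # only horizontally/vertically adjacent nodes may be connected
        assert abs(r1 - r2) + abs(c1 - c2) == 1
        grid[2 * r1][2 * c1] = 1      # first endpoint node
        grid[2 * r2][2 * c2] = 1      # second endpoint node
        grid[r1 + r2][c1 + c2] = 1    # connection pixel between them
    return grid

# example: a horizontal stroke across the top-left of the grid
letter = rasterize([((0, 0), (0, 1)), ((0, 1), (0, 2))])
```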

Below is the current letter set, with an example of an empty graph and the only example generated by my network in the last line.


I've made a perceptron (I tried 1, 2 and even 3 hidden layers) whose input layer had 6 neurons, used as a binary code (a zero bit maps to an activation of -1, a one bit to an activation of 1), and whose output layer had 81 neurons (9x9, to plot out the desired shapes).

My aim was to produce additional letters by using the defined letters as training samples and making the network learn them. I assumed that by activating the trained network with undefined input codes, I could find new letter shapes.
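For reference, here is a minimal NumPy sketch of the topology I describe (6 binary-coded inputs, one hidden layer, 81 sigmoid outputs, plain squared-error backpropagation); the hidden-layer size, learning rate, and the random placeholder targets are assumptions of mine and not taken from JavaNNS:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(index):
    """6-bit binary code; a 0 bit becomes -1 activation, a 1 bit becomes +1."""
    return np.array([1.0 if (index >> k) & 1 else -1.0 for k in range(6)])

class MLP:
    """6 inputs -> one tanh hidden layer -> 81 sigmoid outputs (9x9 bitmap)."""
    def __init__(self, hidden=32):
        self.W1 = rng.normal(0.0, 0.5, (6, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.5, (hidden, 81))
        self.b2 = np.zeros(81)

    def forward(self, x):
        self.h = np.tanh(x @ self.W1 + self.b1)
        self.y = 1.0 / (1.0 + np.exp(-(self.h @ self.W2 + self.b2)))
        return self.y

    def train_step(self, x, target, lr=0.5):
        """One squared-error backpropagation update on a single pattern."""
        y = self.forward(x)
        dy = (y - target) * y * (1.0 - y)           # output-layer delta
        dh = (self.W2 @ dy) * (1.0 - self.h ** 2)   # hidden-layer delta
        self.W2 -= lr * np.outer(self.h, dy)
        self.b2 -= lr * dy
        self.W1 -= lr * np.outer(x, dh)
        self.b1 -= lr * dh
        return float(np.mean((y - target) ** 2))

# placeholder training data: 4 random 81-pixel "letters" standing in for mine
targets = {i: (rng.random(81) < 0.25).astype(float) for i in range(4)}
net = MLP()
for _ in range(2000):
    for i, t in targets.items():
        net.train_step(encode(i), t)

# query an input code that was never trained, hoping for a "new" letter
new_shape = np.round(net.forward(encode(5))).reshape(9, 9)
```

The network happily memorizes the trained codes, but nothing in this setup constrains the output for an untrained code to respect the node-and-edge structure of a valid letter, which I suspect is part of my problem.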

I used built-in functions from JavaNNS, with backpropagation as the learning method.

In which parts of my topology could I have gone wrong? What is the most suitable solution for this task?

No correct solution

Licensed under: CC-BY-SA with attribution
Not affiliated with cs.stackexchange