Specifications


    Pretrained:                  No
    Inputs:                      Linear              (not logarithmic, not differentiated)
    Normalization:               Mean and std dev    (distribution not otherwise modified)
    Transition function:         arctan              (not sigmoid, not tanh)
    Levels:                      Two, three, or four
    Max neurons per level:       256
    Bias (reference input):      Yes
    Seed:                        random
    Reseed:                      user reset button
    Verification:                intuitive/graphical



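As a concrete reading of the normalization entry above, here is a minimal sketch in plain Python/NumPy (the helper name normalize is ours, not GoldenGem's): each input series is shifted to mean zero and scaled to unit standard deviation, and the shape of the distribution is left alone.

    import numpy as np

    def normalize(prices):
        # Shift to mean zero and scale to unit standard deviation;
        # the shape of the distribution itself is not modified.
        prices = np.asarray(prices, dtype=float)
        return (prices - prices.mean()) / prices.std()

    # Illustrative daily closes (made-up numbers).
    print(normalize([101.2, 99.8, 100.5, 102.1, 98.9]))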

----------------------------------------------------------------------------------------------------
----------------------------------------------------------------------------------------------------

                                  Notes


 The two most important choices in developing this network are the transition function
and the learning method. For GoldenGem these choices were based on mathematics and common
sense, but they are also very well justified in the literature; see points 1 and 2 below.

1. Our choice of the arctan transition function is supported by many sources, for instance
comp.ai.neural-nets: "The arctan function is usually better than the tanh function."
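
As a sketch of what this choice looks like in code (plain Python; the 2/pi scaling, which
keeps outputs in (-1, 1) as tanh does, is our own illustrative convention rather than
anything documented for GoldenGem):

    import math

    def arctan_activation(x):
        # Squash a weighted sum into (-1, 1); arctan saturates more
        # slowly than tanh, which keeps gradients alive for large inputs.
        return math.atan(x) * 2.0 / math.pi

    def arctan_derivative(x):
        # Derivative of the scaled arctan, needed by the gradient pass.
        return (2.0 / math.pi) / (1.0 + x * x)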

2. Our choice of the gradient (backprop) method is supported by many sources. Karl Nygren,
in a recent Master's thesis at the Royal Institute of Technology in Sweden, describes it as
"unchallenged as the most influential learning algorithm for multilayer perceptrons." The
thesis goes on (as most scholarly articles do) to suggest a minor improvement to the
algorithm, one disjoint from the improvements suggested elsewhere.
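
For concreteness, here is a minimal sketch of the gradient (backprop) method for a single
hidden level with the arctan transition function, in plain Python/NumPy. All names and
sizes are illustrative, not taken from GoldenGem, and the loss is half the squared error:

    import numpy as np

    rng = np.random.default_rng()      # "seed: random", per the specification

    def arctan(x):
        return np.arctan(x) * 2.0 / np.pi

    def arctan_prime(x):
        return (2.0 / np.pi) / (1.0 + x * x)

    n_in, n_hid = 4, 8                 # illustrative sizes, well under 256
    W1 = rng.normal(scale=0.1, size=(n_hid, n_in + 1))  # extra column: bias
    W2 = rng.normal(scale=0.1, size=n_hid)

    def train_step(x, target, lr=0.01):
        global W1, W2
        x = np.append(x, 1.0)          # bias (reference) input for layer one
        z1 = W1 @ x                    # hidden pre-activations
        h = arctan(z1)                 # hidden activations
        y = W2 @ h                     # linear output neuron
        err = y - target
        # Backprop pass: apply the chain rule from output back to input.
        grad_W2 = err * h
        grad_z1 = err * W2 * arctan_prime(z1)
        W1 = W1 - lr * np.outer(grad_z1, x)
        W2 = W2 - lr * grad_W2
        return 0.5 * err * err         # loss, for monitoring convergence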

3. Since the inputs are normalized to mean zero, a bias neuron is needed to break the
symmetry in layer one; subsequent layers do not need a bias neuron.
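
The symmetry in question is easy to exhibit: arctan is an odd function, so a first layer
with no bias sends a zero-mean input and its negation to exact mirror images. A short
NumPy demonstration with made-up numbers:

    import numpy as np

    x = np.array([0.7, -0.3, -0.4])    # a zero-mean input vector
    W = np.random.default_rng(0).normal(size=(2, 3))

    # With no bias, the layer's response to -x is exactly -response(x);
    # the constant reference input of the bias neuron breaks this tie.
    print(np.arctan(W @ x))
    print(np.arctan(W @ -x))           # the negation of the line above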

4. Adjustable sensitivity is crucial to the functioning of the neural network.
Human cognition must guide the essentially unintelligent network, which nevertheless has
greater accuracy and capacity for numerical calculation than the unaided human mind.

5. User choice of a set of related tickers provides another crucial channel of
user interaction.

6. Our choice not to include filtering, or any successive-difference inputs, was a hard one.
It is justified by the fact that filtering discards some of the most recent information and
introduces a time delay. If we were to include successive differences, we would need to
choose how far back in time to take the second (negative) input; the result, as the sketch
below makes explicit, would be precisely a rudimentary digital filter. One of the two data
points would become unavailable before the last data point had been reached, so the last
valid differenced value would refer to a moment in the past rather than to the current day.
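
A sketch of that equivalence in plain Python/NumPy, with made-up closing prices and an
illustrative lag of k = 3: the lag-k successive difference is exactly convolution with
the kernel [1, 0, ..., 0, -1], i.e. a rudimentary digital filter whose output lags the
current day.

    import numpy as np

    closes = np.array([100.0, 101.5, 99.8, 102.3, 103.0, 101.1])
    k = 3                              # how far back to take the negative input

    diff = closes[k:] - closes[:-k]    # successive differences at lag k
    # The same values, viewed as a digital filter with kernel [1, 0, 0, -1]:
    kernel = np.zeros(k + 1)
    kernel[0], kernel[-1] = 1.0, -1.0
    assert np.allclose(diff, np.convolve(closes, kernel, mode="valid"))

    # Each differenced value summarizes the interval [t-k, t], so its
    # effective timestamp sits roughly k/2 days behind the latest close.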




     To conclude this brief list of notes: the most natural choice of a neural network
configuration from elementary mathematical considerations is precisely the configuration
that the artificial intelligence community has settled on as the basic standard.

