
ABOUT THIS DOCUMENT

Joone is a free Neural Network Engine for Java

You can find this information and the source code at sourceforge.net

The three PDF manuals were made into a single file in February 2003

Java Object Oriented Neural Engine

Joone Editor User Guide

Contents

1 Introduction
  1.1 Intended Audience
2 Download and Installation
  2.1 Java
  2.2 Joone Files
  2.3 Other Files
    2.3.1 Note
  2.4 Running the Joone Editor
3 Menu
  3.1 File
    3.1.1 New
    3.1.2 Open
    3.1.3 Save
    3.1.4 Save As
    3.1.5 Export Neural Net
    3.1.6 Print
    3.1.7 Exit
  3.2 Edit
    3.2.1 Cut
    3.2.2 Copy
    3.2.3 Paste
    3.2.4 Duplicate
    3.2.5 Delete
    3.2.6 Group
    3.2.7 Ungroup
    3.2.8 Send to Back
    3.2.9 Bring to Front
  3.3 Align
    3.3.1 Toggle Snap to Grid
    3.3.2 Left
    3.3.3 Center
    3.3.4 Right
    3.3.5 Top
    3.3.6 Middle
    3.3.7 Bottom
  3.4 Attributes
    3.4.1 Figure Attributes
    3.4.2 Text Attributes
  3.5 Control
    3.5.1 Control Panel
    3.5.2 Add Noise
    3.5.3 Randomize
    3.5.4 Reset Input Streams
    3.5.5 Macro Editor
  3.6 Look'n'Feel
    3.6.1 Metal
    3.6.2 CDE/Motif
    3.6.3 Windows
  3.7 Help
    3.7.1 About Joone
    3.7.2 Help Contents
4 Toolbar
  4.1 Selection Tool
  4.2 Label and Connected Label
  4.3 Drawing Tools
  4.4 Input Layers
  4.5 Layers
  4.6 Output Layers
  4.7 Charting Component
  4.8 Synapses
    4.8.1 The Pop-Up Menu for all Connection Types
  4.9 Plugins
5 Layers
  5.1 Processing Layers
    5.1.1 Linear
    5.1.2 Sigmoid
    5.1.3 Tanh
    5.1.4 Logarithmic
    5.1.5 Context
    5.1.6 Nested ANN
    5.1.7 Delay
  5.2 I/O Layers
    5.2.1 File Input
    5.2.2 URL Input
    5.2.3 Excel Input
    5.2.4 Switch Input
    5.2.5 Learning Switch
    5.2.6 File Output
    5.2.7 Excel Output
    5.2.8 Switch Output
    5.2.9 Teacher
6 Plugins
  6.1 Pre-processing Plugins
  6.2 Monitor Plugins
    6.2.1 The Annealing Concept
7 Basic Tutorial
8 An Advanced Example: the XOR Problem
  8.1 Testing the Trained Net
9 The XML Parameter File
  9.1 The <buttons> </buttons> Section
  9.2 The <options> </options> Section
  9.3 Separators
  9.4 Temporarily Removing Items
10 Online Resources
  10.1 Joone
  10.2 Artificial Neural Networks Technology
  10.3 Java
  10.4 JAXP
  10.5 JHotDraw
  10.6 Source Forge
  10.7 Sun Microsystems
11 Glossary
  11.1 ANN / NN
  11.2 Classpath
  11.3 GUI
  11.4 JAR file
  11.5 JAXP
  11.6 Layer
  11.7 Neuron
  11.8 Neural Network
  11.9 PE
  11.10 Swing
  11.11 XML
  11.12 ZIP file

Revision

Revision  Date                Author         Comments
0.1.0     October 12, 2001    Harry Glasgow  Pre-release draft
0.1.5     October 15, 2001    Paolo Marrone
0.1.6     October 21, 2001    Paolo Marrone
0.2.0     November 6, 2001    Paolo Marrone
0.5.7     January 8, 2002     Paolo Marrone
0.5.8     January 13, 2002    Paolo Marrone
0.5.9     January 22, 2002    Harry Glasgow
0.6       February 12, 2002   Harry Glasgow
0.6.5     April 9, 2002       Paolo Marrone
0.6.6     May 15, 2002        Paolo Marrone
0.7.0     September 02, 2002  Harry Glasgow

Changes noted across these revisions:
- Added a parameter file on the command line to reflect the last change of the editor package
- Added the description of the XML parameter file
- Added the description of the Teacher component and of the Control Panel
- Added the advanced example based on the XOR problem
- Added page numbers
- Added the Export Neural Net menu item
- Added the <option> section of the XML parameter file
- Added the drawing tool buttons
- Added the official URL of Joone (www.joone.org)
- Added plugin sections
- Added the use of the Monitor plugins
- Added the Add Noise and Reset Input Streams menu items
- Added the new layers (XL I/O, nested ANN, Switch I/O)
- Updated the list of libraries needed to run the editor
- Enhanced the explanation of the Delay Layer
- Added the Help Contents menu item
- Added the Learning Switch component and the validation parameter in the Control Panel
- Added the use of the Scripting Plugin
- Added details of new sections for the two-part menu, the Macro Editor pane and the charting module
- Added the description of the Context and Logarithmic layers


1 Introduction

The Java Object Oriented Neural Engine (Joone) is a system of Java modules that provide a framework for developing, teaching and testing neural networks. Joone comprises two elements, a core engine and a GUI editor. The core engine is used by the editor to process neural networks. This guide focuses chiefly upon the editor component.

1.1 Intended Audience

This document is intended for non-programmers: persons interested in developing neural networks with the Joone editor.


2 Download and Installation

2.1 Java

To run Joone, Java 2 (release 1.2 or above) needs to be installed on your computer. This is available from Sun Microsystems at http://java.sun.com/j2se/. Installation of Java falls outside the scope of this document but is relatively straightforward.

2.2 Joone Files

The compiled Joone project files are available from SourceForge at http://sourceforge.net/projects/joone/. The Joone engine and editor zip files are available from the project's Summary page. Both files are required to run the Joone editor and should be included in your computer's Java classpath.

2.3 Other Files

The Joone editor makes use of the following libraries:
- jhotdraw.jar from JHotDraw 5.2, a Java-based drawing package
- xalan.jar and crimson.jar from JAXP, the Java XML processing package
- jh.jar from JavaHelp 1.1 or above
- poi-hssf.jar, poi-poifs.jar, poi-util.jar and hssf-serializer.jar from the Jakarta HSSF project, the library used to read/write Excel files
- bsh-core.jar from BeanShell (www.beanshell.org), a Java scripting engine

The joone-ext.zip package contains a version of the above libraries.

2.3.1 Note

JoonEdit does not work with the new JHotDraw 5.3, only with the 5.2 version. A new version of JoonEdit that works with JHotDraw 5.3 will be released soon. Sun Microsystems has released Java Standard Edition 1.4, which includes JAXP, so it may not be necessary to include JAXP separately for Java editions 1.4 and above.

2.4 Running the Joone Editor

To run the Joone editor, put all the above packages on the classpath, then invoke the JoonEdit class with the following command:
java org.joone.edit.JoonEdit <parameters.xml>

where <parameters.xml> is the optional XML parameters file, given with its complete path (see /org/joone/data/layers.xml for an example). If the parameter is not specified, the editor uses a default set of parameters contained in the joone-edit.jar file. On a Windows platform, click Start, then Run, type in the above command, and click OK. Alternatively, you can use the RunEditor.bat file contained in the executable distribution package.
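On Unix-like systems, a rough equivalent of RunEditor.bat might look like the sketch below. The jar file names here are illustrative assumptions, not the guaranteed contents of the download: check the names actually shipped in the Joone and joone-ext packages you unpacked.

```shell
# Hypothetical Unix launcher sketch. Jar names are assumptions; adjust
# them to match the files you actually downloaded (section 2.3).
CP="joone-engine.jar:joone-edit.jar:jhotdraw.jar:xalan.jar:crimson.jar:jh.jar"
CP="$CP:poi-hssf.jar:poi-poifs.jar:poi-util.jar:hssf-serializer.jar:bsh-core.jar"
# Echo the command instead of running it, so the sketch is safe to try anywhere:
echo java -cp "$CP" org.joone.edit.JoonEdit mylayers.xml
```

Here mylayers.xml stands for your own copy of the optional XML parameters file; omit it to use the defaults.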


3 Menu

The menu of the Joone editor accesses standard operations.

3.1 File

Projects built with the Joone editor can be saved and reopened at a later date. Projects are saved as serialized files with the file extension .ser. Only one project can be edited at a time. Joone will prompt to save an edited project before closing it or opening a new project.

3.1.1 New
Creates a new Joone project. Any existing project changes are lost.

3.1.2 Open
Opens an existing Joone project. This will replace any existing project.

3.1.3 Save
Allows the current Joone project to be saved as a serializable file.

3.1.4 Save As
Allows a Joone project to be saved as a serializable file with a different name or path.

3.1.5 Export Neural Net


Allows the exporting of a neural net in a serialized form (.snet). This is provided for use in the distributed environment (or for other future uses).

3.1.6 Print
Prints a graphical representation of the current project.

3.1.7 Exit
Exits the Joone editor.

3.2 Edit

Joone allows various editing and positional actions to be performed on a component. Edit options may not be available if the required number of components is not selected. Copy and paste operations are only available with drawing tools and not with Joone components.

3.2.1 Cut
Deletes a selected component from the screen but keeps it in memory for Paste operations.

3.2.2 Copy
Copies a selected component from the screen to memory.

3.2.3 Paste
Copies a selected component from memory to the screen.

3.2.4 Duplicate
Duplicates a selected component.

3.2.5 Delete
Deletes a selected component from the screen.

3.2.6 Group
This menu item groups a number of components together so that they can be manipulated as a single component.

3.2.7 Ungroup
This ungroups a previously grouped set of components.

3.2.8 Send to Back


Positions a selected component so that other components that overlap it are drawn over this one.

3.2.9 Bring to Front


Positions a selected component so that this component is drawn over other components that it overlaps.

3.3 Align

Several components can be selected concurrently by either clicking on each required component while holding down the Shift key, or by dragging a rectangle around a group of components. Once a number of components are selected, alignment menu options can be applied to align the controls relative to each other.

3.3.1 Toggle Snap to Grid


Turns on/off the alignment of the components on a fixed grid, facilitating the arrangement of the objects on the drawing area.


3.3.2 Left
Aligns selected components vertically along their left hand edge.

3.3.3 Center
Aligns selected components vertically along their center line.

3.3.4 Right
Aligns selected components vertically along their right hand edge.

3.3.5 Top
Aligns selected components horizontally along their top edge.

3.3.6 Middle
Aligns selected components horizontally along their middle.

3.3.7 Bottom
Aligns selected components horizontally along their bottom edge.

3.4 Attributes

The following attributes of the drawing tools (see below) can be changed with these commands:

3.4.1 Figure Attributes
- Fill Color
- Pen Color
- Lines
- Arrows

3.4.2 Text Attributes
- Font
- Font Size
- Font Style
- Text Color

These commands do not affect the attributes of the neural network components (layers and connections).


3.5 Control

3.5.1 Control Panel

The Control Panel is the tool that controls the behaviour of the neural net. It contains three buttons:
- Run: Starts the neural net, beginning from the first pattern of the input data set.
- Continue: Restarts the neural net from the last pattern processed.
- Stop: Stops the neural net.

The Control Panel parameters are:
- Epochs: The total number of cycles for which the net is to be trained.
- Input Patterns: The total number of input rows on which the net is to be trained. This can be different from the number of rows read from the FileInput component (lastRow - firstRow + 1).
- Momentum: The value of the momentum (see the literature on the backpropagation algorithm).
- Learning Rate: The value of the learning rate (see the literature on the backpropagation algorithm).
- Learning: True if the net is to be trained, otherwise false.
- Validation: True if the net is to be tested on a validation data set. Used ONLY in conjunction with a Learning Switch component inserted in the net.
- Pre-Learning: The number of initial cycles excluded from the learning algorithm. Normally this parameter is zero; it is used when there is a DelayLayer component in the net. In this case Pre-Learning must be set equal to the number of taps of that component, allowing its buffer to fill before the learning cycle starts.
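For readers who want the formula behind the Momentum and Learning Rate parameters, the standard backpropagation update rule can be sketched as follows. This is the generic textbook form, not Joone engine code, and the updateDelta method name is invented for this illustration.

```java
// Textbook backpropagation weight update with momentum (generic
// illustration, NOT Joone source):
//   delta = -learningRate * gradient + momentum * previousDelta
public class MomentumSketch {
    static double updateDelta(double gradient, double prevDelta,
                              double learningRate, double momentum) {
        return -learningRate * gradient + momentum * prevDelta;
    }

    public static void main(String[] args) {
        double weight = 0.5;
        double learningRate = 0.8, momentum = 0.3;
        // Two steps with the same gradient: the momentum term reuses the
        // previous delta, so the second step is larger than the first.
        double d1 = updateDelta(0.1, 0.0, learningRate, momentum);
        double d2 = updateDelta(0.1, d1, learningRate, momentum);
        weight += d1 + d2;
        System.out.println(d1 + " " + d2);
    }
}
```

A larger momentum accelerates descent when successive gradients point the same way, which is why both parameters are usually tuned together.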


To better explain the use of the Pre-Learning parameter: it serves to avoid making changes to the values of the biases and synapses in the presence of a delay layer. If these values were altered from the first cycle of the learning process, the net would be adjusted using a wrong input pattern, obtained before the full input temporal window has been presented to the network. For instance, suppose an input pattern is composed of one column as follows:

0.2
0.5
0.1
...

and an input delay layer with taps = 3 is present. When the network has read only the first two input values (0.2 and 0.5), the delay layer's temporal window is still partly filled with its initial zeros, so the network would learn the wrong {0, 0, 0.2, 0.5} pattern. Thus the Pre-Learning parameter must be set equal to the taps parameter, so that the network starts to learn only when all the taps values have been read.
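The buffering effect described above can be sketched in a few lines of plain Java. This is a conceptual illustration only, not Joone's DelayLayer code; the class and method names are invented for the example.

```java
// Sketch of a delay line with taps = 3 (conceptual illustration, NOT
// Joone's DelayLayer source). The window holds taps + 1 values, oldest
// first, and starts filled with zeros.
import java.util.Arrays;

public class DelaySketch {
    // Shift the window one step and append the new input at the end.
    static double[] step(double[] window, double input) {
        double[] next = new double[window.length];
        System.arraycopy(window, 1, next, 0, window.length - 1);
        next[window.length - 1] = input;
        return next;
    }

    public static void main(String[] args) {
        double[] window = new double[4]; // taps = 3 -> window of 4 values
        window = step(window, 0.2);
        window = step(window, 0.5);
        // After only two inputs the window still contains the initial
        // zeros: exactly the wrong {0, 0, 0.2, 0.5} pattern from the text.
        System.out.println(Arrays.toString(window)); // prints [0.0, 0.0, 0.2, 0.5]
    }
}
```

Setting Pre-Learning equal to taps simply tells the engine to skip learning on these first, partially zero windows.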

3.5.2 Add Noise


This adds a random noise component to the net and is useful for allowing the net to exit from a local minimum. At the end of a round of training, adding noise may jolt the network out of a local minimum so that further training produces a better network.
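Conceptually, adding noise amounts to perturbing every weight by a small random amount. The sketch below is illustrative only, not Joone's implementation; the addNoise helper and the amplitude parameter are invented for this example.

```java
// Sketch of what "Add Noise" does conceptually (illustrative only, NOT
// Joone's implementation): perturb each weight by a uniform random
// value in [-amplitude, +amplitude].
import java.util.Arrays;
import java.util.Random;

public class NoiseSketch {
    static double[] addNoise(double[] weights, double amplitude, Random rnd) {
        double[] out = weights.clone();
        for (int i = 0; i < out.length; i++) {
            // nextDouble() is in [0, 1); scale it to [-amplitude, +amplitude)
            out[i] += (rnd.nextDouble() * 2.0 - 1.0) * amplitude;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] weights = {0.5, -0.2, 0.1};
        double[] noisy = addNoise(weights, 0.05, new Random(42));
        // Each weight moves by at most 0.05: enough to nudge the net out
        // of a shallow local minimum without destroying training so far.
        System.out.println(Arrays.toString(noisy));
    }
}
```

The small amplitude is the point: the perturbation should move the net off a flat spot, not reset it to a random state (that is what Randomize does).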

3.5.3 Randomize
This resets the weights of the neural network, initializing it to a random state.


3.5.4 Reset Input Streams


This command resets all the buffered input streams in input and teacher layers, permitting the reloading of their buffers with the input data. This is useful after the contents of some files have changed and it is necessary to reload the data.

3.5.5 Macro Editor


A powerful scripting management engine is provided in the Joone Editor. Before describing this feature, it is important to understand the concepts underlying the scripting engine. There are two types of script in Joone:
- Event-driven scripts: executed when a neural network event is raised.
- User-driven scripts: executed manually by the user.

Both of the above types of script are contained within the neural network, and are serialized, stored and transported along with the neural network that contains them (in the same way that macros defined in an MS Word or Excel document are). In this manner the defined scripts can be executed even when the net is running on remote machines outside the GUI editor. It is possible to load, edit, save and run any script (in this document also referred to as a Macro) using a user-friendly graphical interface. The Macro Editor can be opened with the Macro Editor item of the Control menu. A window will be shown, as depicted in the following figure:


In the Macro Editor, the user can define both of the two script types. On the left side of the editor is the list of the available scripts for the neural network. For a new network, the list already contains the definitions of all the available event-driven scripts. To edit a script, the user simply clicks on the desired script in the list and inserts the code in the text area on the right. The text area has some useful features to help with the writing of BeanShell code:
- Coloured code based on the Java syntax (comments, keywords, literals, etc.)
- Highlighting of matching opening/closing brackets and parentheses
- Highlighting of the currently edited row of code
- Different colours used to emphasize unclosed strings

Note in the above figure the bold Java keywords new and for, the small box indicating the open bracket matching the one near the cursor, the greyed row indicating the currently edited row, and the red colour used to indicate an unterminated string. The event-driven scripts cannot be renamed or deleted from the list. If the user does not want to define an event-driven script, s/he can simply remove or comment out the corresponding code in the text area. The user-driven scripts can be added, renamed or deleted by choosing the corresponding menu item in the Macro menu.

3.5.5.1 The Macro Editor Menu


Here are details of all the menu items of the Macro Editor frame.

File
- Import Macro: The content of a text file can be imported into the text area of the selected script. The old text will be replaced.
- Save Macro as: The content of the selected script can be exported into a text file.
- Close: Closes the Macro Editor window.

Edit
- Cut: Copies the selected text into the clipboard and deletes it from the text area (you can also use Ctrl-X).
- Copy: Copies the selected text into the clipboard (you can also use Ctrl-C).
- Paste: Pastes the content of the clipboard into the text area starting at the cursor position (you can also use Ctrl-V).
- Select All: Selects all the content of the text area.

Macro
- Enable Scripting: If checked, this enables the execution of the event-driven scripts for the edited neural network. If not checked, all the events of the net will be ignored.
- Add: Adds a new user-driven script (the user cannot insert new event-driven scripts).
- Remove: Removes the selected user-driven script. Disabled for event-driven scripts.
- Rename: Permits renaming of the selected user-driven script. Disabled for event-driven scripts.
- Run: Runs the selected script.
- Set Rate: Sets the execution rate (the number of free training cycles between two execution calls) for the cyclic event-driven scripts, i.e. the cycleTerminated and errorChanged scripts. Warning: the default rate for a new network is 1 (one), but it is recommended that the value be set to between 10 and 100 (or even more) to ensure that there is sufficient processing time available for the running of the neural network.


3.5.5.2 Macro Scripting Features


The following section describes some characteristics of the scripting feature added to Joone's engine.

How to use internal Joone objects

To obtain a reference to the internal neural network objects, use:
- jNet to access the edited org.joone.net.NeuralNet object
- jMon to access the contained org.joone.engine.Monitor object

For example, jNet.getLayers().size() returns the number of layers contained in the neural network, and jMon.getGlobalError() returns the last RMSE value of the net. A list of the callable public methods for the above two objects is available in the project's javadoc HTML files. To use the objects from the Joone libraries, it is not necessary to import the corresponding packages. The following packages are imported automatically by the script engine:
org.joone.engine.*
org.joone.engine.learning.*
org.joone.edit.*
org.joone.util.*
org.joone.net.*
org.joone.io.*
org.joone.script.*

For instance, to create a new sigmoid layer, simply write:


newLayer = new SigmoidLayer();

How to call another script from within a macro Within a macro it is possible to call another script contained in the neural network (both user and event-driven scripts). To do this, use the following code:
name = "NameOfTheMacroToCall";
macro = jNet.getMacroPlugin().getMacroManager().getMacro(name);
eval(macro);

The scope of the script variables

All the scripts defined in a neural network (both user- and event-driven scripts) share the same namespace and actual-values storage, so a global variable declared and initialised in script_A can be accessed in script_B. For example:

1. Add a macro named macro1 and insert into it the code: myVariable = 5;


2. Add a macro named macro2 and insert into it the code: print("The value is: " + myVariable);

Run first macro1 and then macro2; you will see in the standard output console the result: The value is: 5

For further details about scope and the use of variables, see the BeanShell manual.

3.6 Look'n'Feel

The default look and feel of the Joone editor is Metal. Currently there is no way to make another style the default.

3.6.1 Metal
Selecting this menu option displays the editor in the Metal GUI style.

3.6.2 CDE/Motif
Selecting this menu option displays the editor in the CDE/Motif GUI style.

3.6.3 Windows
Selecting this menu option displays the editor in the Windows GUI style.

3.7 Help

3.7.1 About Joone


This option displays the Joone Editor About screen. The current versions of the Editor and of the Engine being used are also displayed. Version numbers are of the form major-release.minor-release.build, e.g. 1.2.5. If an older, incompatible engine version is detected, a warning will be displayed in the About screen, as backward compatibility is not guaranteed between editor and engine versions.

3.7.2 Help Contents


This option displays the online help of the editor.


4 Toolbar

The tool bar buttons are divided into two palettes. One contains all the drawing buttons, while the other contains all the construction components, as shown in the following figure:

The content of the drawing palette is determined by the Joone application while the component panel is configurable by modifying the layers.xml file. Please note that not all of the following components may appear by default in the Joone Editor application due to limited space on the tool bars. See the chapter on the XML parameter file for details on how to alter the items that appear in the toolbar.

4.1 Selection Tool

The Selection Tool allows Layers to be selected and dragged around the screen. It is also used to create links between Layers.

4.2 Label and Connected Label

The Label tool allows text comments to be placed on the screen. The Connected Label tool allows text to be attached to each drawing tool; the attached text will follow the figure's movements.

4.3 Drawing Tools

These tools permit the addition of several figures to the drawing panel of the GUI editor. They will be saved along with the neural network (Save menu item) and then restored on the drawing panel (Open menu item).

4.4 Input Layers

The New File Input Layer, New URL Layer, New Excel Input Layer, the Switch Input Layer and the Learning Switch tools allow new input layers to be added to the screen.

4.5 Layers

The New Linear Layer, New Sigmoid Layer, New Tanh Layer, New Logarithmic Layer, New Context Layer, New Nested ANN Layer and New Delay Layer tools allow new processing layers to be added to the screen.


4.6 Output Layers

The New Switch Output Layer, New File Output Layer, New Excel Output Layer and New Teacher Layer tools allow new output layers to be added to the screen.

4.7 Charting Component

This component belongs to the output components family, so it can be used anywhere it makes sense to insert an output component. For instance, it can be attached to the output of a Layer, or to the output of a Teacher component. The charting component has the following properties:

maxXaxis: The maximum value of the X axis. Set this value to the maximum number of samples required to visualise in the chart.
maxYaxis: The maximum value of the Y axis. Set this value to the maximum value you expect to display in the chart.
name: The name of the chart component.
resizable: If true, the window containing the chart will be resizable. Resizing the window will rescale the entire chart, adapting it to the new size of the frame.
show: Used to show or hide the window containing the chart.
title: The text shown in the title bar of the window containing the chart.
serie: Indicates which series (column) of multicolumn output data is to be displayed.

All the above properties can be updated at any time, including while the network is running, making it possible to show the chart at several sizes or resolutions. This is an example of the use of the charting component:


In the above example, the charting component is attached to the output of a teacher component to capture and display the RMSE values during the training phase. The maxXaxis property is set to the number of the training epochs, while the maxYaxis is set to the max value we want to show in the chart. The user can change either of these values at any time to show a zoomed view of the chart.

4.8 Synapses

These components allow the user to choose the type of synapse that connects two layers. In the toolbar there are three buttons as above, representing respectively a Full Synapse, a Direct Synapse and a Delayed Synapse. More types will be added in future versions of Joone.



A Full Synapse will connect every output connection of one layer to the input of every neuron in another layer. A Direct Synapse will connect each output connection of one layer to exactly one neuron in the other layer. The number of outputs of the first layer must match the number of neurons in the second layer, or an exception will be generated when the net is started. A Delayed Synapse behaves as a Full Synapse where each connection is implemented with a FIRFilter object. This connection implements the temporal backpropagation algorithm by Eric A. Wan, as described in 'Time Series Prediction by Using a Connectionist Network with Internal Delay Lines', in Time Series Prediction: Forecasting the Future and Understanding the Past, A. Weigend and N. Gershenfeld (eds.), Addison-Wesley, 1994.

To use these components, the user first selects the tool button corresponding to the required synapse, then drags a line from one layer to another in the drawing area. The newly inserted synapse will be shown as an arrow containing a small box at its centre, as in the following figure:

The box contains a label indicating the type and state of the synapse. The available types are shown in the following table:

F = Full Synapse
D = Direct Synapse
-1 = Delayed Synapse

If the little box is greyed, the synapse is disabled, indicating that this branch of the net is interrupted. To disable a synapse, right-click it and set the Enabled property in the property panel to false. This feature is very useful when designing a neural network, to try several architectures on the fly. The neural network contained in the file org/joone/samples/synapses/synapses.ser contains an example using the synapses currently available in JoonEdit.

Warning: Use the above tool buttons ONLY to connect two layers and NOT to connect any other component such as input or output components, plugins, etc. Doing otherwise will cause an exception. The basic method of connecting two layers, simply dragging an arrow from the right handle of one layer to the other, defaults to a Full Synapse.


Neural networks saved with a previous version of JoonEdit will continue to work, but their synapses will be shown unlabelled. This should not create confusion, because previous releases of the Joone editor created only full synapses. The pop-up menu will also work on nets saved with older versions.

4.8.1 The Pop-Up Menu for all Connection Types


Right-clicking on any connection displays a pop-up menu as per other components.

The menu contains two items:

Properties: Shows the property panel for the selected synapse.
Delete: Deletes the connection.

NB: The Properties item is not shown if the selected line does not contain a synapse, for instance the arrow connecting an input component to a layer. The Delete item is included with all pop-up menus.

4.9 Plugins

The New Center Zero Layer, New Normalize Layer and New Turning Point Layer tools, as well as the Linear and Dynamic Annealing Monitor plugins, allow new plugin layers to be added to the screen.


5 Layers

5.1 Processing Layers

There are numerous layer types available in the Joone editor.

These layers contain processing neurons. They consist of a number of neurons as set by the layer's rows parameter. The differences between layer types are described in this section.

5.1.1 Linear
The output of a linear layer neuron is the sum of the input values, scaled by the beta parameter. No transfer function is applied to limit the output value; the layer weights are always unity and are not altered during the learning phase.
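As a rough sketch of the rule above (not Joone's actual LinearLayer code; the class and method names here are illustrative), a linear neuron simply scales the input sum by beta:

```java
// Sketch of a linear layer neuron: output = beta * sum(inputs).
// Illustrative only -- not the actual org.joone.engine code.
public class LinearNeuronDemo {
    /** Returns the input sum scaled by the beta parameter. */
    public static double output(double[] inputs, double beta) {
        double sum = 0.0;
        for (double x : inputs) {
            sum += x;
        }
        return beta * sum;
    }

    public static void main(String[] args) {
        // With beta = 0.5 the input sum 3.0 is scaled to 1.5.
        System.out.println(output(new double[] {1.0, 2.0}, 0.5)); // prints 1.5
    }
}
```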

5.1.2 Sigmoid
The output of a sigmoid layer neuron is the sum of the weighted input values, applied to a sigmoid function. This function is expressed mathematically as:

y = 1 / (1 + e^(-x))

This has the effect of smoothly limiting the output within the range 0 to 1.

5.1.3 Tanh
The tanh layer is similar to the sigmoid layer except that the applied function is the hyperbolic tangent:

y = (e^x - e^(-x)) / (e^x + e^(-x))

This has the effect of smoothly limiting the output within the range -1 to 1. During training, the weights of the Sigmoid and Tanh layers are adjusted to match the teacher layer data to the network output.
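The two transfer functions above can be sketched directly in Java (illustrative only; the real layers apply these functions inside the engine):

```java
// Sketch of the sigmoid and tanh transfer functions described above.
// Illustrative only -- not the actual Joone layer implementations.
public class TransferDemo {
    /** y = 1 / (1 + e^-x), output smoothly limited to (0, 1). */
    public static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    /** y = (e^x - e^-x) / (e^x + e^-x), output smoothly limited to (-1, 1). */
    public static double tanh(double x) {
        return (Math.exp(x) - Math.exp(-x)) / (Math.exp(x) + Math.exp(-x));
    }

    public static void main(String[] args) {
        System.out.println(sigmoid(0.0)); // prints 0.5
        System.out.println(tanh(0.0));    // prints 0.0
    }
}
```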

5.1.4 Logarithmic
The logarithmic layer is similar to the sigmoid layer except that the applied function is logarithmic:

y = log(1 + x)   if x >= 0
y = log(1 - x)   if x < 0

This layer is useful to avoid saturating the input of the layer when input values are near the extreme points 0 and 1.
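Taking the two formulas above literally, a sketch of the logarithmic transfer function (illustrative; the class and method names are assumptions, not the engine's code):

```java
// Sketch of the logarithmic transfer function, following the formulas
// given above: y = log(1 + x) for x >= 0, y = log(1 - x) for x < 0.
// Illustrative only -- not the actual Joone implementation.
public class LogarithmicDemo {
    public static double logarithmic(double x) {
        return (x >= 0) ? Math.log(1.0 + x) : Math.log(1.0 - x);
    }

    public static void main(String[] args) {
        System.out.println(logarithmic(0.0)); // prints 0.0
        // The growth is much slower than linear near the extremes,
        // which is what avoids saturation.
        System.out.println(logarithmic(Math.E - 1.0)); // prints 1.0
    }
}
```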


5.1.5 Context
The context layer is similar to the linear layer except that it has an auto-recurrent connection between its output and input, as depicted in the following figure:

[Figure: a processing element (PE) with a recurrent connection of weight w from its output back to its input.]

Its activation function is expressed as:

y = b * (x + y(t-1) * w)

where:
b = the beta parameter (inherited from the linear layer)
w = the fixed weight of the recurrent connection (not learned)

The w parameter is named timeConstant in the property panel because it back-propagates the past output signals and, as its value is less than one, the contribution of the past signals decays slowly toward zero at each cycle. In this manner the context layer has its own embedded memory mechanism. This layer is used in recurrent neural networks such as the Jordan-Elman NNs.
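A minimal sketch of this recurrence (not the actual org.joone.engine code; names are illustrative) shows how past outputs decay at each cycle when w is less than one:

```java
// Sketch of the context (auto-recurrent) layer activation:
// y(t) = beta * (x(t) + y(t-1) * w), where w is the fixed timeConstant.
// Illustrative only -- not the actual Joone implementation.
public class ContextNeuronDemo {
    private final double beta;
    private final double w; // timeConstant: fixed (not learned), typically < 1
    private double lastOutput = 0.0;

    public ContextNeuronDemo(double beta, double w) {
        this.beta = beta;
        this.w = w;
    }

    /** Processes one input, keeping a decayed memory of past outputs. */
    public double step(double x) {
        lastOutput = beta * (x + lastOutput * w);
        return lastOutput;
    }

    public static void main(String[] args) {
        ContextNeuronDemo pe = new ContextNeuronDemo(1.0, 0.5);
        System.out.println(pe.step(1.0)); // prints 1.0
        System.out.println(pe.step(1.0)); // prints 1.5  (1.0 + 1.0 * 0.5)
        System.out.println(pe.step(0.0)); // prints 0.75 (memory decaying toward zero)
    }
}
```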

5.1.6 Nested ANN


The nested neural network layer permits an entire neural network to be added to the network being edited. Using this component, it is possible to build modular neural networks, constituted of several pre-built neural networks, allowing complex, compound architectures to be created. The Nested ANN parameter must be filled with the name of a serialized neural network (saved with the File->Export neural net menu item). Note: a neural network to be used in a nested ANN component must be composed solely of processing layers, without file I/O and/or Teacher layers.

5.1.7 Delay
The delay layer applies the sum of the input values to a delay line, so that the output of each neuron is delayed by a number of iterations specified by the taps parameter. To understand the meaning of the taps parameter, consider the following picture, which contains two different delay layers: one with 1 row and 3 taps, and another with 2 rows and 3 taps:



[Figure: two delay lines built from unit-delay (z^-1) elements. The left one has Rows = 1 and Taps = 3, producing the outputs X1(t), X1(t-1), X1(t-2), X1(t-3); the right one has Rows = 2 and Taps = 3, producing the same set of delayed outputs for each of its two rows.]

The delay layer has:

- a number of inputs equal to the rows parameter
- a number of outputs equal to rows * (taps + 1)

The taps parameter indicates the number of delayed output cycles for each row of neurons, plus one because the delay layer also presents the current input sum signal Xn(t) at the output. During the training phase, error values are fed backwards through the delay layer as required. This layer is very useful for training a neural network to predict a time series, giving it a temporal window onto the raw input data.
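The input/output counts above can be captured in a one-line helper (illustrative):

```java
// The delay layer's I/O sizes, as described above: `rows` inputs and
// rows * (taps + 1) outputs (the +1 accounts for the undelayed signal Xn(t)).
public class DelaySizeDemo {
    public static int outputCount(int rows, int taps) {
        return rows * (taps + 1);
    }

    public static void main(String[] args) {
        System.out.println(outputCount(1, 3)); // prints 4: X1(t)..X1(t-3)
        System.out.println(outputCount(2, 3)); // prints 8: four outputs per row
    }
}
```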


5.2 I/O Layers

The I/O (Input / Output) layers represent interfaces between the processing layers of a neural network and the external environment, providing the net with the data needed for processing and/or training.

5.2.1 File Input


The file input layer allows data in a file to be applied to a network for processing. Data for processing is expected as a number of rows of semicolon-separated columns of values. For example, the following is a set of three rows of four columns:

0.2;0.5;0.6;0.4
0.3;-0.35;0.23;0.29
0.7;0.99;0.56;0.4

Each value in a row will be made available as an output of the file layer, and the rows will be processed sequentially by successive processing steps of the network. As some files may contain information additional to the required data, the parameters firstRow, lastRow, firstCol and lastCol may be used to define the range of useable data. The filename parameter specifies the file that is to be read from.
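For illustration, a row in this format can be parsed as follows (this is a sketch, not the code used by Joone's file input layer):

```java
import java.util.Arrays;

// Sketch of parsing one semicolon-separated row of input values.
// Illustrative only -- Joone's file input layer does this internally.
public class RowParserDemo {
    public static double[] parseRow(String row) {
        String[] cols = row.split(";");
        double[] values = new double[cols.length];
        for (int i = 0; i < cols.length; i++) {
            values[i] = Double.parseDouble(cols[i].trim());
        }
        return values;
    }

    public static void main(String[] args) {
        // The second example row from the text above.
        System.out.println(Arrays.toString(parseRow("0.3;-0.35;0.23;0.29")));
        // prints [0.3, -0.35, 0.23, 0.29]
    }
}
```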

5.2.2 URL Input


The URL input layer allows data from a remote location to be applied to a network for processing. The allowed protocols are http and ftp. The data format is the same as for the FileInput layer.

5.2.3 Excel Input


The Excel Input layer permits data from an Excel file to be applied to a neural network for processing. Its sheet parameter allows the name of the sheet to be chosen from which the input data is read.

5.2.4 Switch Input


The switch input allows choosing which input component is connected to the neural network, from among all the input components attached to it. The user, after having attached several input components to its input, can set the active input parameter to the name of the component that is to be connected to the net. The default input parameter must be filled with the name of the default component (the one activated when the user selects the Control->Reset Input Streams menu item). The switch input component, along with the output switch layer, permits dynamic changing of the architecture of the neural network, altering the input and/or output data layers attached to the neural network at any time. This is useful to switch the input source, for instance, between the file containing the training data set and the file containing the validation data set, to test the training of the neural network, as depicted in the following screen shot:


5.2.5 Learning Switch


The learning switch is a special implementation of the Switch Input component and can be used to attach both a training data set and a validation data set to the neural net. In this way the user can test the generalization capability of a neural network using a different data set from the one used during the training phase. The training input data set can be attached by dragging an arrow from the input component to the learning switch, while the validation input data set can be attached simply by dragging an arrow from the red square on top of the learning switch to the input component containing the validation data set. To switch between them, simply change the value of the 'validation' parameter shown in the Control Panel. This component has two properties:

trainingPatterns: Must be set to the number of input patterns constituting the training data set.
validationPatterns: Must be set to the number of input patterns constituting the validation data set.

Both of the above parameters are obtained from the formula lastRow - firstRow + 1, where the firstRow and lastRow variables contain the values of the corresponding parameters of the attached input components. The following figure depicts the use of this component:


Warning: Because a validation data set will also be required for the desired data, this component must be inserted both before the input layer of the neural network and between the Teacher layer and the desired input data sets.

5.2.6 File Output


The file output layer is used to convert the results of a processing layer to a text file. The filename parameter specifies the file that the results are to be written to. Results are written in the same semicolon-separated form as file input layers.

5.2.7 Excel Output


The Excel output layer is used to write the results of a processing layer to an Excel formatted file. The filename parameter specifies the file that the results are to be written to. The sheet parameter allows the name of the sheet to be chosen, to which the input data is to be written.

5.2.8 Switch Output


The switch output permits choosing which output component is connected to the neural network, from among all the attached output components. The user, after having attached several components to its output, can set the active output parameter to the name of the component that is to be connected to the net. The default output parameter must be filled with the name of the default component (the one activated when the user selects the Control->Reset Input Streams menu item).

5.2.9 Teacher
The Teacher layer allows the training of a neural net using the back-propagation learning algorithm. It calculates the difference between the actual output of the net and the expected value from an input source, and provides this difference to the output layer for training. To train a net, add a Teacher component, connect it to the output layer of the net, and then connect an input layer component to it, linked to a source containing the expected data (see figure).


6 Plugins

There are three types of pre-processing plugins for the input data, and two monitor plugins. A connection to a plugin can be added by dragging an arrow from the magenta square handle on the bottom side of an input layer, as depicted in the following figure:

6.1 Pre-Processing Plugins

There are three pre-processing plugins implemented, but others can be created by extending the org.joone.util.ConverterPlugIn class:

Centre on Zero: This plugin centres the entire data set around the zero axis by subtracting the average value.
Normalizer: This plugin can normalize an input data stream within a range determined by its min and max parameters.
Turning Points Extractor: This plugin extracts the turning points of a time series, generating a useful input signal for a neural net by emphasising the relative maxima and minima of the series (very useful to extract buy and sell instants for stock forecasting). Its minChangePercentage parameter indicates the minimum change around a turning point required to consider it a real change of direction of the time series. Setting this parameter to a relatively high value helps to reject the noise of the input data.

Every plugin has a common parameter named serie. This indicates which series (column) of a multicolumn input is to be affected (0 = all series). A plugin can be attached to an input layer, or to another plugin, so that pre-processing modules can be cascaded. If both centre-on-zero and normalize processing are required for an input stream, the centre on zero plugin can be connected to a file input layer, and then a normalizer plugin attached to it, as shown in the following figure:


6.2 Monitor Plugins

There are also two Monitor Plugins. These are useful for dynamically controlling the behaviour of control panel parameters (parameters contained in the org.joone.engine.Monitor object).

The Linear Annealing plugin changes the values of the learning rate (LR) and the momentum parameters linearly during training. The values vary linearly from an initial value to a final value, and the step is determined by the following formulas:

step = (FinalValue - InitValue) / numberOfEpochs
LR = LR - step

The Dynamic Annealing plugin controls the change of the learning rate based on the difference between the last two global error (E) values, as follows:

If E(t) > E(t-1) then LR = LR * (1 - step/100%)
If E(t) <= E(t-1) then LR remains unchanged

The rate parameter indicates how many epochs occur between annealing changes. These plugins are useful to implement the annealing (hardening) of a neural network, changing the learning rate during the training process. With the Linear Annealing plugin, the LR starts with a large value, allowing the network to quickly find a good minimum, and then reduces, permitting the found minimum to be fine-tuned toward the best value, with little risk of escaping from a good minimum because of a large LR. The Dynamic Annealing plugin is an enhancement of the Linear concept, reducing the LR only as required, when the global error of the neural net grows larger (worse) than the previous step's error. This may at first appear counter-intuitive, but it allows a good minimum to be found quickly and then helps to prevent its loss.
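The two update rules can be sketched as follows (illustrative; in the Linear case the sign convention below is chosen so that the LR moves from the initial toward the final value, which is the behaviour the text describes):

```java
// Sketch of the two annealing strategies described above. Illustrative
// only -- not the actual Joone monitor plugin code.
public class AnnealingDemo {
    /** Linear annealing: move LR one step toward finalValue per epoch. */
    public static double linearUpdate(double lr, double initValue,
                                      double finalValue, int epochs) {
        double step = (finalValue - initValue) / epochs;
        return lr + step;
    }

    /** Dynamic annealing: shrink LR by `step` percent only if the error grew. */
    public static double dynamicUpdate(double lr, double lastError,
                                       double prevError, double stepPercent) {
        return (lastError > prevError) ? lr * (1.0 - stepPercent / 100.0) : lr;
    }

    public static void main(String[] args) {
        // Linear: LR goes from 1.0 toward 0.0 over 10 epochs.
        System.out.println(linearUpdate(1.0, 1.0, 0.0, 10)); // prints 0.9
        // Dynamic: the error grew (0.5 > 0.4), so LR is cut by 50%.
        System.out.println(dynamicUpdate(0.8, 0.5, 0.4, 50.0)); // prints 0.4
        // Dynamic: the error improved, so LR stays unchanged.
        System.out.println(dynamicUpdate(0.8, 0.3, 0.4, 50.0)); // prints 0.8
    }
}
```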


6.2.1 The Annealing Concept

[Figure: an error surface, with a ball representing the actual error of the NN rolling toward a minimum; both a relative minimum and the absolute minimum are marked.]

To explain why the learning rate has to diminish as the error worsens, look at the above figure. All the weights of a network represent an error surface of n dimensions (for simplicity, the figure shows only two). To train a network means to modify the connection weights so as to find the best group of values that give the minimum error for certain input patterns. In the figure, the red ball represents the actual error. It runs on the error surface during the training process, approaching the minimum error. Its velocity is proportional to the value of the learning rate, so if this velocity is too high, the ball can overstep the absolute minimum and become trapped in a relative minimum. To avoid this side effect, the velocity (learning rate) of the ball needs to be reduced as the error becomes worse (the grey ball).

7 Basic Tutorial

This tutorial creates a simple network connecting a file input layer containing four examples of two input values to a file output layer via a linear layer containing two neurons.

1. Using a text editor, create a new file and add four lines as follows:

0.2;0.3
0.4;0.5
0.6;0.8
0.9;1.0

2. Save the file to disk (e.g. c:\temp\sample.txt).

3. Start JoonEdit and insert a new linear layer. Click on this layer to display the properties page. Set the rows value to 2.
4. Insert a File Input layer to the left of the linear layer, then click on it to display the properties window:
   o Set the firstCol parameter to 1.
   o Set the lastCol parameter to 2.
   o Enter c:\temp\sample.txt in the fileName parameter.
   o Leave the firstRow as 1 and the lastRow as 0 so that the input layer will read all the rows in the file.
5. Connect the input layer to the linear layer by dragging a line from the little circle on the right hand side of the input layer, releasing the mouse button when the arrow is on the linear layer.
6. Now insert a File Output layer to the right of the linear layer, click on it, and enter c:\temp\sampleout.txt in the fileName parameter of the properties window.
7. Connect the linear layer to the file output layer by dragging a line from the little circle on the right hand side of the linear layer, releasing the mouse button when the arrow is on the file output layer.
8. At this stage the screen should look similar to this:

9. Click on the Net->Control Panel menu item to display the control panel. Insert the following:
   o Set the totCicles parameter to 1. This will process the file once.
   o Set the patterns parameter to 4. This sets the number of example rows to read.
   o Leave the learningRate and momentum fields unchanged; these parameters are used only when training a net.
   o Set the learning parameter to FALSE, as the net is not being trained.
10. Click the START button.
11. A file named c:\temp\sampleout.txt will be written with the results.
12. If you want, you can save the net to disk with the File->Save As menu item, and reload it later with File->Open.


8 An Advanced Example: The XOR Problem

This example illustrates a more complete construction of a neural net, teaching it the classical XOR problem. In this example, the net is required to learn the following XOR truth table:

Input 1  Input 2  Output
0        0        0
0        1        1
1        0        1
1        1        0

So, we must create a file containing these values:

0.0;0.0;0.0
0.0;1.0;1.0
1.0;0.0;1.0
1.0;1.0;0.0

A semicolon needs to separate each column. The decimal point is not mandatory if the numbers are integers. Create this file with a text editor and save it on the file system (for instance c:\joone\xor.txt in a Windows environment). Now we'll build a neural net like this:

Run the editor, and execute the following steps:

1. Add a new sigmoid layer, and set its layerName to 'Input' and the rows parameter to 2
2. Add a new sigmoid layer, and set its layerName to 'Hidden' and the rows parameter to 3
3. Add a new sigmoid layer, and set its layerName to 'Output', leaving the rows parameter as 1


4. Connect the input layer to the hidden layer by dragging a line from the little circle on the right hand side of the input layer, releasing the mouse button when the arrow is on the hidden layer
5. Repeat the above step, connecting the hidden layer to the output layer

At this stage the screen should look similar to this:

6. Insert a File Input layer to the left of the input layer, then click on it to display the properties window:
   o Set the firstCol parameter to 1
   o Set the lastCol parameter to 2
   o Enter c:\joone\xor.txt in the fileName parameter
   o Leave the firstRow as 1 and the lastRow as 0 so that the input layer will read all the rows in the file
7. Connect the File Input layer to the input layer
8. Insert a Teacher layer to the right of the output layer
9. Connect the output layer to the Teacher layer

Now we must provide the desired data to the teacher (the last column of the file xor.txt) to train the net:

10. Insert a File Input layer above the Teacher layer, then click on it to display the properties window:
   o Set the firstCol parameter to 3
   o Set the lastCol parameter to 3
   o Enter c:\joone\xor.txt in the fileName parameter
   o Leave the firstRow as 1 and the lastRow as 0 so that the input layer will read all the rows in the file
   o Set the name parameter to 'Desired data'

11. Connect the Teacher layer to that last File Input layer by dragging a line from the little red square on the top side of the Teacher layer, releasing the mouse button when the yellow arrow is on the last inserted File Input layer.

At this stage the screen should look similar to this:


12. Click on the Net->Control Panel menu item to display the control panel. Insert the following:
   o Set the totCicles parameter to 10,000. This will process the file 10,000 times
   o Set the patterns parameter to 4. This sets the number of example rows to read
   o Set the learningRate parameter to 0.8 and the momentum parameter to 0.3
   o Set the learning parameter to TRUE, as the net must be trained
   o Set the validation parameter to FALSE, as the net is not being tested

13. Click the START button, and you'll see the training process starting. The Control Panel shows the cycles remaining and the current error of the net. At the end of the last cycle, the error should be very small (less than 0.1), otherwise click on the Net->Randomize menu item (to add a little noise to the weights of the net) and click the START button again. If required, you can save the net with the 'File->Save As' menu item, so you can reuse the net later by loading it from the file system.


8.1 Testing the Trained Net

To test the trained net:

14. Add a File Output layer on the right of the output layer, click on it, and enter c:\temp\xorout.txt in the fileName parameter of the properties window.

15. Connect the output layer of the net to the File Output layer.
16. Select the line that connects the output layer to the Teacher layer and click on Edit->Delete to disconnect the Teacher from the neural net.
17. On the Control Panel change the following:
   o Set the totCicles parameter to 1. This will process the input file once.
   o Set the learning parameter to FALSE, as the net is not being trained.

18. Click on the START button.
19. Open the xorout.txt file with an editor, and you'll see a result like this (the values can change from one run to another, depending on the initial random values of the weights):

0.02592995527027603
0.9664584492704527
0.9648193164739925
0.03994103766843536

This result shows that the neural net has learned the XOR problem, providing the correct results:

- a value near zero when the input columns are equal to 0;0 and 1;1
- a value near one when the input columns are equal to 0;1 and 1;0


9 The XML Parameter File

This section explains how to modify the tool palette to personalize and extend the editor with new components. The following sections are found in the layers.xml file, provided with the editor package. By default the Joone Editor will look for the file org/joone/data/layers.xml as the parameter file, but this behaviour can be overridden by specifying the parameter file as an argument to the run command:
java org.joone.edit.JoonEdit /some_path/your_parameter_file.xml

The layers.xml file provided with Joone includes all the available component types in the Joone project although some of the less-used ones are commented out.

9.1 The &lt;buttons&gt; &lt;/buttons&gt; Section

This section contains all the buttons that are present in the toolbar.
&lt;layer type="" class="" image="" /&gt;

This section describes a layer. Its parameters are:

type: The name of the layer that will be shown when the mouse passes over it.
class: The name of the class (complete with the package: xxx.yyy.zzz.classname). The class must extend the org.joone.engine.Layer class and must be in the JVM classpath.
image: The name of the image for the toolbar button. The searched file name will be /org/joone/images/image_name.gif, so only the image_name part must be specified (only gif images are allowed in the current version of the editor).

<input_layer type="" class="" image="" />

This section describes an input layer. All parameters are the same as above. The input layer class must extend the org.joone.io.StreamInputSynapse and must be in the classpath.
<input_switch type="" class="" image="" />

This section describes an input switch layer. All parameters are the same as above. The input switch layer class must extend the org.joone.engine.InputSwitchSynapse and must be in the classpath.
<learning_switch type="" class="" image="" />

This section describes a learning switch layer. All parameters are the same as above. The learning switch layer class must extend the org.joone.engine.learning.LearningSwitch and must be in the classpath.
<output_layer type="" class="" image="" />


This section describes an output layer. All parameters are the same as above. The output layer class must extend the org.joone.io.StreamOutputSynapse and must be in the classpath.
<output_switch type="" class="" image="" />

This section describes an output switch layer. All parameters are the same as above. The output switch layer class must extend the org.joone.engine.OutputSwitchSynapse and must be in the classpath.
&lt;teacher_layer type="" class="" image="" /&gt;

This section describes a teacher layer. All parameters are the same as above. The teacher layer class must extend the org.joone.engine.learning.TeachingSynapse and must be in the classpath.
<input_plugin type="" class="" image="" />

This section describes an input plugin for the data pre-processing. All parameters are the same as above. The input plugin class must extend the org.joone.util.ConverterPlugin class and must be in the classpath.
<monitor_plugin type="" class="" image="" />

This section describes a monitor plugin to control the behaviour of the training process. All parameters are the same as above. The monitor plugin class must extend the org.joone.util.MonitorPlugin class and must be in the classpath.
<synapse type="" class="" label="" image="" />

This section describes a synapse to connect two layers together. This tag has the same properties as the other components, plus a label property to set the label shown in the little box. The label is not mandatory but its use is recommended to visually distinguish various types of synapse displayed in the drawing area. If no label is specified, the synapses will be drawn as an arrow. The synapse component must extend the org.joone.engine.Synapse class.
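Putting these tags together, a minimal layers.xml fragment might look like the following. The class names are the standard Joone ones mentioned in this guide, but the type, label and image values here are illustrative assumptions, not the exact contents of the shipped file:

```xml
<buttons>
    <!-- a processing-layer button -->
    <layer type="Linear Layer" class="org.joone.engine.LinearLayer" image="linear" />
    <separator/>
    <!-- a synapse button; the label appears in the little box on the connection -->
    <synapse type="Full Synapse" class="org.joone.engine.FullSynapse" label="F" image="full" />
</buttons>
```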

9.2 The &lt;options&gt; &lt;/options&gt; Section

This section contains all the parameters that control the behaviour of the GUI editor. Currently it contains the following tag:

&lt;refreshing_rate value="nnn" /&gt;

This parameter controls the refreshing rate of the fields shown in the control panel during the training. To speed up the training process, set this to a high value (100 or more).

9.3 Separators

Small spaces can be inserted between toolbar buttons by inserting separator tags at the required position in the layers file thus:
<separator/>


9.4 Temporarily Removing Items

Any item can be temporarily removed from the toolbar by enclosing the item in comment tags <!-- and -->. For example, to temporarily remove a separator from the toolbar, edit the separator tag
<separator/>

to
<!--<separator/>-->

or alternatively
<!--<separator/-->


10 Online Resources

10.1 Joone
Home page: http://www.joone.org/ Summary page: http://sourceforge.net/projects/joone/

10.2 Artificial Neural Networks Technology


http://www.dacs.dtic.mil/techs/neural/neural_ToC.html

10.3 Java
http://java.sun.com/

10.4 JAXP
http://java.sun.com/xml

10.5 JHotDraw
http://sourceforge.net/projects/jhotdraw/

10.6 SourceForge
http://www.sourceforge.net

10.7 Sun Microsystems
http://www.sun.com/


11 Glossary

11.1 ANN / NN
Artificial Neural Network. Synonymous throughout this document with the term neural network, an interconnected network of simulated neurons.

11.2 Classpath
The environment variable listing the files and directories that Java searches for runtime classes. On a Windows system, this variable can be edited by right-clicking the My Computer icon on the desktop, selecting Properties, then the Advanced tab, then Environment Variables. Entries in the classpath are separated by semicolons.
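On Unix-like systems the equivalent setting is made in the shell, with ':' as the separator instead of ';'. A sketch (the jar locations below are hypothetical; adjust them to where the Joone jars actually live):

```shell
# Hypothetical jar locations; adjust to your installation
CLASSPATH="lib/joone-engine.jar:lib/joone-edit.jar:."
export CLASSPATH
echo "$CLASSPATH"
```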

11.3 GUI
Graphical User Interface.

11.4 Jar File
A Java ARchive; similar to a Zip file, it packages the classes and resources of a Java application or library.

11.5 JAXP
Java API for XML Parsing. Java's set of packages for processing XML files.

11.6 Layer
A collection of neurons that make up a Neural Network. The output of each neuron in a layer is linked to the input of every neuron in connected layers.

11.7 Neuron
An independent processing unit, similar to a neuron in the brain. A neuron accepts inputs from the outputs of other neurons and produces a result from these.

11.8 Neural Network


Computer models based on the neural structure of the brain that are able to learn from experience.

11.9 PE
Processing Element, i.e. a node (the neuron) constituting a layer.

11.10 Swing
Java's GUI component set. The Joone editor uses these components as the graphical framework of the GUI.


11.11 XML
Extensible Markup Language. The Joone editor uses this language to process some parameter files.

11.12 Zip File


A compressed set of files. Java is able to read zipped files at run time, allowing Java programs to be deployed as a small number of files.


Java Object Oriented Neural Engine

Joone Core Engine


Developer Guide
Paolo Marrone (pmarrone@users.sourceforge.net)


Summary

Revision
Overview
    Requirements
    Performance
The first neural network
    A simple (but useless) neural network
    A real implementation: the XOR problem
Saving and restoring a neural network
    The simplest way
    Using a NeuralNet object
Using the outcome of a neural network
    Writing the results to an output file
    Getting the results into an array
        Using multiple input patterns
        Using only one input pattern

http://www.joone.org


Revision

Revision  Date            Author         Comments
0.1.0     April 11, 2002  Paolo Marrone  Pre-release draft
0.1.5     April 16, 2002  Paolo Marrone  Added the overview section; added the example about the use of the NeuralNet object
0.1.6     May 6, 2002     Paolo Marrone  Fixed a bug in the restoreNeuralNet example; added the chapter about the use of the outcome of a NN
0.1.7     June 4, 2002    Paolo Marrone  Fixed some bugs in the example code; added the examples about the several methods to interrogate a trained NN


Overview
This manual is a developer guide explaining how to use the Joone core engine API. It illustrates the use of the engine through code examples, as we are confident that this is the best way to learn the mechanisms underlying this powerful library. To better understand the concepts behind Joone, we recommend you read the Technical Overview, available in the Joone SourceForge download area.

Requirements

To try the samples contained in this manual, you need the following packages:
- joone-engine.jar (the Joone core engine library)
- JDK 1.1.8 or above; any JDK implementation (from Sun or IBM, for instance)
The core engine doesn't need any other library and can run on any operating system and hardware platform. We're committed to keeping these two assertions true for all the classes contained in the org.joone.engine package in all future releases of the Joone core engine. Future compatibility with JVM 1.1.x, however, is not guaranteed.

Performance

To improve the performance of the engine, especially with large training data sets, we recommend using a JVM 1.2 or above with the JIT compiler enabled (higher performance is obtained using Sun's HotSpot technology). Because the engine is built on a multi-threaded design, Joone also benefits from multiprocessor hardware architectures.


The first neural network


A simple (but useless) neural network
Consider a feed-forward neural net composed of three layers like this:

To build this net with Joone, three Layer objects and two Synapse objects are required:

SigmoidLayer layer1 = new SigmoidLayer();
SigmoidLayer layer2 = new SigmoidLayer();
SigmoidLayer layer3 = new SigmoidLayer();

FullSynapse synapse1 = new FullSynapse();
FullSynapse synapse2 = new FullSynapse();

The SigmoidLayer objects and the FullSynapse objects are real implementations of the abstract Layer and Synapse objects. Set the dimensions of the layers:
layer1.setRows(3);
layer2.setRows(3);
layer3.setRows(3);

Then complete the net, connecting the three layers with the synapses:
layer1.addOutputSynapse(synapse1);
layer2.addInputSynapse(synapse1);
layer2.addOutputSynapse(synapse2);
layer3.addInputSynapse(synapse2);

As you can see, each synapse is both the output synapse of one layer and the input synapse of the next layer in the net. This simple net is ready, but it can't do any useful work because there are no components to read or


write the data. The next example shows how to build a real net that can be trained and used for a real problem.

A real implementation: the XOR problem.


Suppose a net that learns the classical XOR problem is required. In this example, the net has to learn the following XOR truth table:

Input 1  Input 2  Output
   0        0       0
   0        1       1
   1        0       1
   1        1       0

Firstly, a file containing these values is created:


0.0;0.0;0.0
0.0;1.0;1.0
1.0;0.0;1.0
1.0;1.0;0.0
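If preferred, the same training file can be generated programmatically; a minimal sketch in plain Java (the XorFileBuilder class is a hypothetical helper, not part of Joone):

```java
import java.io.FileWriter;
import java.io.IOException;

// Hypothetical helper, not part of Joone: writes the XOR training file.
public class XorFileBuilder {

    // One semicolon-separated row per pattern: input1;input2;desired output
    static String buildContents() {
        return "0.0;0.0;0.0\n"
             + "0.0;1.0;1.0\n"
             + "1.0;0.0;1.0\n"
             + "1.0;1.0;0.0\n";
    }

    public static void main(String[] args) throws IOException {
        // Adapt the path to your environment (e.g. c:\joone\xor.txt on Windows)
        FileWriter out = new FileWriter("xor.txt");
        out.write(buildContents());
        out.close();
    }
}
```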

Each column must be separated by a semicolon. The decimal point is not mandatory for integer values. Write this file with a text editor and save it on the file system (for instance, c:\joone\xor.txt in a Windows environment). Now build a neural net that has the following three layers:
- An input layer with 2 neurons, to map the two inputs of the XOR function
- A hidden layer with 3 neurons, a good value to assure fast convergence
- An output layer with 1 neuron, to represent the XOR function's output
as shown by the following figure:

First, create the three layers (using the sigmoid transfer function):
SigmoidLayer input = new SigmoidLayer();
SigmoidLayer hidden = new SigmoidLayer();
SigmoidLayer output = new SigmoidLayer();

set their dimensions:


input.setRows(2);
hidden.setRows(3);
output.setRows(1);


Now build the neural net, connecting the layers by creating the two synapses. Use the FullSynapse class, which connects all the neurons of its input layer with all the neurons of its output layer (see the above figure):
FullSynapse synapse_IH = new FullSynapse(); /* Input -> Hidden conn. */
FullSynapse synapse_HO = new FullSynapse(); /* Hidden -> Output conn. */

Next connect the input layer with the hidden layer:


input.addOutputSynapse(synapse_IH);
hidden.addInputSynapse(synapse_IH);

and then, the hidden layer with the output layer:


hidden.addOutputSynapse(synapse_HO);
output.addInputSynapse(synapse_HO);

Now create a Monitor object to provide the net with all the parameters needed for it to work:
Monitor monitor = new Monitor();
monitor.setLearningRate(0.8);
monitor.setMomentum(0.3);

Give the layers a reference to that Monitor:


input.setMonitor(monitor);
hidden.setMonitor(monitor);
output.setMonitor(monitor);

The application registers itself as a monitor's listener, so it can receive the notifications of termination from the net. To do this, the application must implement the org.joone.engine.NeuralNetListener interface.
monitor.addNeuralNetListener(this);

Now define an input for the net: create an org.joone.io.FileInputSynapse and set all its parameters:
FileInputSynapse inputStream = new FileInputSynapse();
/* The first two columns contain the input values */
inputStream.setFirstCol(1);
inputStream.setLastCol(2);
/* This is the file that contains the input data */
inputStream.setFileName("c:\\joone\\XOR.txt");

Next add the input synapse to the first layer. The input synapse extends the Synapse object, so it can be attached to a layer like a synapse.
input.addInputSynapse(inputStream);

A neural net learns from examples, so it must be provided with the right responses. For each input pattern, the net must be given the difference between the desired response and the actual response produced by the net. The org.joone.engine.learning.TeachingSynapse is the object that performs this task:
TeachingSynapse trainer = new TeachingSynapse();

/* Setting of the file containing the desired responses,
   provided by a FileInputSynapse */
FileInputSynapse samples = new FileInputSynapse();
samples.setFileName("c:\\joone\\XOR.txt");
trainer.setDesired(samples);
/* The output values are on the third column of the file */
samples.setFirstCol(3);
samples.setLastCol(3);
/* We give it the monitor's reference */
trainer.setMonitor(monitor);


The TeachingSynapse object extends the Synapse object, so it can be added as the output of the last layer of the net.
output.addOutputSynapse(trainer);

Now all the layers must be activated by invoking their start method. The layers implement the java.lang.Runnable interface, so that each runs in its own thread.
input.start();
hidden.start();
output.start();
trainer.start();

Set all the training parameters of the net:


monitor.setPatterns(4);      /* # of rows contained in the input file */
monitor.setTotCicles(20000); /* How many times the net must be trained */
monitor.setLearning(true);   /* The net must be trained */
monitor.Go();                /* The net starts the training phase */

Here is an example describing how to handle the netStopped and cicleTerminated events. Remember: to be notified, the main application must implement the org.joone.engine.NeuralNetListener interface and must be registered with the Monitor object by calling the Monitor.addNeuralNetListener(this) method.
public void netStopped(NeuralNetEvent e) {
    System.out.println("Training finished");
    System.exit(0);
}

public void cicleTerminated(NeuralNetEvent e) {
    Monitor mon = (Monitor)e.getSource();
    long c = mon.getCurrentCicle();
    long cl = c / 1000;
    /* We want to print the results every 1000 cycles */
    if ((cl * 1000) == c)
        System.out.println(c + " cycles remaining - Error = "
                           + mon.getGlobalError());
}

(The source code can be found in the CVS repository in the org.joone.samples.xor package)


Saving and restoring a neural network


To be able to reuse a neural network built with Joone, we need to save it in a serialized format. To accomplish this goal, all the core elements of the engine implement the Serializable interface, permitting a neural network to be saved as a byte stream, stored on a file system or database, or transported to remote machines using any wired or wireless protocol.

The simplest way


A simple way to save a neural network is to serialize each layer using an ObjectOutputStream object, as illustrated in the following example, which extends the XOR Java class:
public void saveNeuralNet(String fileName) {
    try {
        FileOutputStream stream = new FileOutputStream(fileName);
        ObjectOutputStream out = new ObjectOutputStream(stream);
        out.writeObject(input);
        out.writeObject(hidden);
        out.writeObject(output);
        out.writeObject(trainer);
        out.close();
    } catch (Exception excp) {
        excp.printStackTrace();
    }
}

We don't need to explicitly save the synapses constituting the neural network, because they are linked by the layers. The writeObject method recursively saves all the objects contained in the non-transient variables of the serialized class, and it also avoids storing the same object instance twice when it is referenced by two separate objects (for instance, a synapse connecting two layers). We can later restore the above neural network using the following code:
public void restoreNeuralNet(String fileName) {
    Layer input = null;
    Layer hidden = null;
    Layer output = null;
    TeachingSynapse trainer = null;
    try {
        FileInputStream stream = new FileInputStream(fileName);
        ObjectInputStream inp = new ObjectInputStream(stream);
        input = (Layer)inp.readObject();
        hidden = (Layer)inp.readObject();
        output = (Layer)inp.readObject();
        trainer = (TeachingSynapse)inp.readObject();
    } catch (Exception excp) {
        excp.printStackTrace();
        return;
    }
    /*
     * After that, we can restore all the internal variables to manage
     * the neural network and, finally, we can run it.
     */
    /* We restore the monitor of the NN.
     * It's indifferent which layer we use to do this */
    Monitor monitor = input.getMonitor();
    /* The main application registers itself as a NN's listener */



    monitor.addNeuralNetListener(this);
    /* Now we can run the restored net */
    input.start();
    hidden.start();
    output.start();
    trainer.start();
    monitor.Go();
}
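Incidentally, the claim that writeObject stores a shared instance only once can be checked with a small self-contained experiment; the classes below are hypothetical and unrelated to Joone:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Demonstrates that Java serialization writes a shared object only once:
// after a round trip, the two fields still reference the same instance.
public class SharedRefDemo {

    static class Node implements Serializable { int id = 42; }

    static class Holder implements Serializable {
        Node a;
        Node b; // will reference the same Node as 'a',
                // like a synapse shared by two layers
    }

    static Holder roundTrip(Holder h) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(h);
        out.close();
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()));
        return (Holder) in.readObject();
    }

    public static void main(String[] args) throws Exception {
        Holder h = new Holder();
        h.a = new Node();
        h.b = h.a; // same instance referenced twice
        Holder copy = roundTrip(h);
        System.out.println("Still one instance: " + (copy.a == copy.b));
    }
}
```

Running it prints "Still one instance: true", confirming that an object referenced twice is not duplicated in the saved stream.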


The method illustrated in this chapter is very simple and works well, but it's not flexible enough, because we have to write a different piece of code for each saved neural network: the number and the order of the saved layers are hard-coded in the program. We now consider a quicker and more flexible method to save and restore a neural network.

Using a NeuralNet object


The org.joone.net.NeuralNet object comes to our aid by offering a simple but powerful mechanism to manage a neural network built with Joone. We will now rewrite the XOR sample using this new component. We must still create all the necessary components of the neural network, repeating all the instructions already written for the previous example:
/* The Layers */
SigmoidLayer input = new SigmoidLayer();
SigmoidLayer hidden = new SigmoidLayer();
SigmoidLayer output = new SigmoidLayer();
input.setRows(2);
hidden.setRows(3);
output.setRows(1);

/* The Synapses */
FullSynapse synapse_IH = new FullSynapse(); /* Input -> Hidden conn. */
FullSynapse synapse_HO = new FullSynapse(); /* Hidden -> Output conn. */
input.addOutputSynapse(synapse_IH);
hidden.addInputSynapse(synapse_IH);
hidden.addOutputSynapse(synapse_HO);
output.addInputSynapse(synapse_HO);

/* The I/O components */
FileInputSynapse inputStream = new FileInputSynapse();
inputStream.setFirstCol(1);
inputStream.setLastCol(2);
inputStream.setFileName("c:\\joone\\XOR.txt");
input.addInputSynapse(inputStream);

/* The Trainer and its desired file */
TeachingSynapse trainer = new TeachingSynapse();
FileInputSynapse samples = new FileInputSynapse();
samples.setFileName("c:\\joone\\XOR.txt");
trainer.setDesired(samples);
samples.setFirstCol(3);
samples.setLastCol(3);
output.addOutputSynapse(trainer);

Now we add this structure to a NeuralNet object:


NeuralNet nnet = new NeuralNet();
nnet.addLayer(input, NeuralNet.INPUT_LAYER);



nnet.addLayer(hidden, NeuralNet.HIDDEN_LAYER);
nnet.addLayer(output, NeuralNet.OUTPUT_LAYER);
nnet.addTeacher(trainer);


and we use the Monitor object contained in it instead of creating a new one:
Monitor monitor = nnet.getMonitor();
monitor.setLearningRate(0.8);
monitor.setMomentum(0.3);
monitor.setPatterns(4);      /* # of rows contained in the input file */
monitor.setTotCicles(20000); /* How many times the net must be trained */
monitor.setLearning(true);   /* The net must be trained */
monitor.addNeuralNetListener(this);

and now we can run the neural network by simply writing:


nnet.start();
nnet.getMonitor().Go();

Where are the differences?
1. We no longer need to set the Monitor object for each component, as the NeuralNet does this task for us;
2. We don't need to invoke the start method for all the layers, but only on the NeuralNet object.
But the main support provided by the NeuralNet object is the ability to easily store and restore a neural network with a few generalized lines of code:
public void saveNeuralNet(String fileName) {
    try {
        FileOutputStream stream = new FileOutputStream(fileName);
        ObjectOutputStream out = new ObjectOutputStream(stream);
        out.writeObject(nnet);
        out.close();
    } catch (Exception excp) {
        excp.printStackTrace();
    }
}

public void restoreNeuralNet(String fileName) {
    try {
        FileInputStream stream = new FileInputStream(fileName);
        ObjectInputStream inp = new ObjectInputStream(stream);
        nnet = (NeuralNet)inp.readObject();
    } catch (Exception excp) {
        excp.printStackTrace();
        return;
    }
    /*
     * After that, we can restore all the internal variables to manage
     * the neural network and, finally, we can run it.
     */
    /* The main application registers itself as a NN's listener */
    nnet.getMonitor().addNeuralNetListener(this);
    /* Now we can run the restored net */
    nnet.start();
    nnet.getMonitor().Go();
}


Using the outcome of a neural network


Having learned how to train and save/restore a neural network, we will now see how to use the resulting patterns of a trained neural network. To do this, we must use an object inherited from the OutputStreamSynapse class, so that we can manage the output patterns of a neural network in both of the following two cases:
1. User needs: to permit a user to read the results of a neural network, we must be able to write them to a file in some useful format, for instance ASCII.
2. Application needs: to permit an embedding application to read the results of a neural network, we must be able to write them to a memory buffer (a 2D array of type double, for instance) and to read them automatically at the end of the processing.
Note: the examples shown in the following two chapters use the serialized form of the XOR neural network. To obtain that file, you must first create the XOR neural network with the editor, as illustrated in the GUI Editor User Guide, and export it using the File->Export menu item.

Writing the results to an output file


The first example shows how to write the results of a neural network to an ASCII file, so that a user can read and use them in practice. To do this, we will use a FileOutputSynapse object, attaching it as the output of the last layer of the neural network. Assume that we have saved the XOR neural net from the previous example in a serialized form named xor.snet, so we can use it simply by loading it from the file system and attaching the output synapse to its last layer. First of all, we write the code necessary to read a serialized NeuralNet object from an external application:
NeuralNet restoreNeuralNet(String fileName) {
    NeuralNet nnet = null;
    try {
        FileInputStream stream = new FileInputStream(fileName);
        ObjectInputStream inp = new ObjectInputStream(stream);
        nnet = (NeuralNet)inp.readObject();
    } catch (Exception excp) {
        excp.printStackTrace();
    }
    return nnet;
}

then we write the code to use the restored neural network:


NeuralNet xorNNet = this.restoreNeuralNet("/somepath/xor.snet");
if (xorNNet != null) {
    Vector layers = xorNNet.getLayers();
    // we get the third layer
    Layer output = (Layer)layers.elementAt(2);
    // we create an output synapse
    FileOutputSynapse fileOutput = new FileOutputSynapse();
    fileOutput.setFileName("/somepath/xor_out.txt");
    // we attach the output synapse to the last layer of the NN
    output.addOutputSynapse(fileOutput);
    // we run the neural network for only one cycle in recall mode
    xorNNet.getMonitor().setTotCicles(1);



    xorNNet.getMonitor().setLearning(false);
    xorNNet.start();
    xorNNet.getMonitor().Go();
}


After the above execution, we can print out the obtained file and, if the net has been trained correctly, we will see content like this:
0.016968769233825207
0.9798790621933134
0.9797402885436198
0.024205151360285334

This demonstrates that the previous training cycles converged correctly.
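A quick way to verify such an output file is to round each value to the nearest integer and compare it with the XOR truth table; a sketch (the XorOutputCheck class is a hypothetical helper, not part of Joone, with the four values above hard-coded):

```java
// Hypothetical check, not part of Joone: rounds the net's outputs and
// compares them with the expected XOR truth-table values.
public class XorOutputCheck {

    static boolean matchesXor(double[] outputs, int[] expected) {
        if (outputs.length != expected.length) return false;
        for (int i = 0; i < outputs.length; i++) {
            // Map each continuous sigmoid output to a crisp 0/1 decision
            if (Math.round(outputs[i]) != expected[i]) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // The four values printed in xor_out.txt above
        double[] outputs = { 0.016968769233825207, 0.9798790621933134,
                             0.9797402885436198, 0.024205151360285334 };
        int[] expected = { 0, 1, 1, 0 };
        System.out.println("Net learned XOR: "
                + matchesXor(outputs, expected)); // prints "Net learned XOR: true"
    }
}
```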

Getting the results into an array


We will now see the use of a neural network from an embedding application that needs to use its results. The obvious approach in this case is to obtain the result of the recall phase as an array of doubles, so the external application can use it as needed. We will see two usages of a trained neural network:
1. Testing the net with a set of predefined patterns; in this case we interrogate the net with several patterns, all collected before querying the net.
2. Testing the net with only one input pattern; in this case we interrogate the net with a pattern provided by an external asynchronous source of data.
We will see an example of both of the above methods.

Using multiple input patterns


To accomplish this goal we will use the org.joone.io.MemoryOutputSynapse object, as illustrated in the following code:
// The input array used for this example
private double[][] inputArray = {
    {0, 0},
    {0, 1},
    {1, 0},
    {1, 1}
};

private void Go(String fileName) {
    // We load the serialized XOR neural net
    NeuralNet xor = restoreNeuralNet(fileName);
    if (xor != null) {
        Vector layers = xor.getLayers();
        /* We get the first layer of the net (the input layer),
         * then remove all the input synapses attached to it
         * and attach a MemoryInputSynapse */
        Layer input = (Layer)layers.elementAt(0);
        input.removeAllInputs();
        MemoryInputSynapse memInp = new MemoryInputSynapse();
        memInp.setFirstRow(1);
        memInp.setFirstCol(1);
        memInp.setLastCol(2);
        input.addInputSynapse(memInp);
        memInp.setInputArray(inputArray);


        /* We get the last layer of the net (the output layer),
         * then remove all the output synapses attached to it
         * and attach a MemoryOutputSynapse */
        Layer output = (Layer)layers.elementAt(2);
        // Remove all the output synapses attached to it...
        output.removeAllOutputs();
        // ...and attach a MemoryOutputSynapse
        MemoryOutputSynapse memOut = new MemoryOutputSynapse();
        output.addOutputSynapse(memOut);
        // Now we interrogate the net
        xor.getMonitor().setTotCicles(1);
        xor.getMonitor().setPatterns(4);
        xor.getMonitor().setLearning(false);
        xor.start();
        xor.getMonitor().Go();
        for (int i = 0; i < 4; ++i) {
            // Read the next pattern and print it out
            double[] pattern = memOut.getNextPattern();
            System.out.println("Output Pattern #" + (i+1) + " = " + pattern[0]);
        }
        xor.stop();
        System.out.println("Finished");
    }
}

As illustrated in the above code, we load the serialized neural net (using the same restoreNeuralNet method used in the previous chapter), and then we attach a MemoryInputSynapse to its input layer and a MemoryOutputSynapse to its output layer. Before that, we remove all the I/O components of the neural network, so that we do not depend on the I/O components used in the editor to train the net. This is a valid example of how to dynamically modify a serialized neural network so it can be used in an environment different from the one used for its design and training. To provide the neural network with the input patterns, we call the MemoryInputSynapse.setInputArray method, passing a predefined 2D array of double. To get the resulting patterns of the recall phase we call the MemoryOutputSynapse.getNextPattern method; this synchronized method waits for the next output pattern from the net, returning an array of doubles containing the response of the neural network. This call is made for each input pattern provided to the net. The above code must be written in the embedding application; to simulate this situation, we can call it from a main() method:
public static void main(String[] args) {
    if (args.length < 1) {
        System.out.println("Usage: EmbeddedXOR XOR.snet");
    } else {
        EmbeddedXOR xor = new EmbeddedXOR();
        xor.Go(args[0]);
    }
}

The complete source code of this example is contained in the EmbeddedXOR.java file in the org.joone.samples.xor package.


Using only one input pattern


We will now see how to interrogate the net using only one input pattern, showing only the differences with respect to the previous example:
private void Go(String fileName) {
    // We load the serialized XOR neural net
    NeuralNet xor = restoreNeuralNet(fileName);
    if (xor != null) {
        Vector layers = xor.getLayers();
        /* We get the first layer of the net (the input layer),
         * then remove all the input synapses attached to it
         * and attach a DirectSynapse */
        Layer input = (Layer)layers.elementAt(0);
        input.removeAllInputs();
        DirectSynapse memInp = new DirectSynapse();
        input.addInputSynapse(memInp);
        ...

As you can see, we now use a DirectSynapse as input instead of the MemoryInputSynapse object. What are the differences?
1. The DirectSynapse object is not an I/O component, as it doesn't inherit from the StreamInputSynapse class.
2. Consequently, it doesn't call the Monitor.nextStep method, so the neural network is no longer controlled by the Monitor's parameters (see the Technical Overview to better understand these concepts). The embedding application is now responsible for controlling the neural network (it must know when to start and stop it), whereas during the training phase the start and stop actions were determined by the parameters of the Monitor object, since that process is not supervised (remember that a neural network can be trained on remote machines without a central control).
3. For the same reasons, we don't need to call the Monitor.Go method, nor set its TotCicles and Patterns parameters.
Thus, to interrogate the net we can just write, after having invoked the NeuralNet.start method:
for (int i = 0; i < 4; ++i) {
    // Prepare the next input pattern
    Pattern iPattern = new Pattern(inputArray[i]);
    iPattern.setCount(1);
    // Interrogate the net
    memInp.fwdPut(iPattern);
    // Read the output pattern and print it out
    double[] pattern = memOut.getNextPattern();
    System.out.println("Output Pattern #" + (i+1) + " = " + pattern[0]);
}

In the above code we give the net only one pattern for each query, using the DirectSynapse.fwdPut method (note that this method accepts a Pattern object). As in the previous example, to retrieve the output pattern we call the MemoryOutputSynapse.getNextPattern method. The complete source code of this example is contained in the ImmediateEmbeddedXOR.java file in the org.joone.samples.xor package.


Java Object Oriented Neural Engine

Joone Core Engine


Technical Overview
Paolo Marrone (pmarrone@users.sourceforge.net)


Summary

Revision
Introduction
Overview
The Architecture
The Core Engine
    The Layer
        The Recall Phase
        The Learning Phase
        Connecting a Synapse to a Layer
    The Synapse
    The Pattern
    The Matrix
    The Monitor
        The NN Parameters
        The NN control
        Managing the events
    I/O components
        The StreamInputSynapse
        The StreamOutputSynapse
    The Supervised Learning components
        The TeacherSynapse
        The TeachingSynapse
    Using the Neural Network as a Whole
        The NeuralNet

http://www.joone.org

Joone Core Engine

Technical Overview

Revision
Revision  Date            Author          Comments
0.1.0     March 6, 2002   Paolo Marrone   Pre-release draft
0.2.0     March 7, 2002   Harry Glasgow
0.3.0     March 26, 2002  Paolo Marrone   Updated the I/O components chapter to reflect the new object model; added the Supervised Learning components section; added the NeuralNet description section
0.3.5     April 14, 2002  Paolo Marrone   Split the document into two papers named Technical Overview and Developer Guide; added the introduction
0.3.6     April 16, 2002  Paolo Marrone   Added the description of the mechanism to manage the Monitor events
0.3.7     May 8, 2002     Paolo Marrone   Added the schema to better understand the input model; expanded the introduction


I would like to present the objectives I had in mind when I started to write the first lines of code of Joone early in 1996. My dream was (and still is) to create a framework to implement a new approach to the use of neural networks. I felt this necessity because the biggest (and still unresolved) problem is to find the network that best fits a given problem without falling into local minima, thus finding the best architecture. Okay - you'll say 'this is what we can do simply by training some randomly initialised neural network (NN) with a supervised or unsupervised algorithm'. Yes, it's true, but this is just scholastic theory, because training only one neural network, especially for hard real-world problems, is not enough. Finding the best neural network is a really hard task because we need to determine many parameters of the net, such as the number of layers, how many neurons for each layer, the transfer function, the value of the learning rate, the momentum, etc., often leading to frustrating failures.

The basic idea is to have an environment that can easily train many neural networks in parallel, initialised with different weights, parameters or architectures, so the user can find the best NN simply by selecting the fittest neural network after the training process. Not only that: this process can continue, retraining the selected NNs until some final goal is reached (e.g. a low RMSE value), like a distillation process. The best architecture is discovered by Joone, not by the user! Many programs exist today that permit selection of the fittest neural network by applying a genetic algorithm. I want to go beyond this, because my goal is to build a flexible environment programmable by the end user, so any existing or newly discovered global optimisation algorithm can be implemented. This is why Joone has its own distributed training environment and why it is based on a multithreaded engine.
My dreams don't end there, because another one was to make a trained NN easily usable and distributable by the end user. For example, imagine an insurance company that continuously trains many neural networks on risk evaluation, distributing the best distilled resulting network to its sales force so that they can use it on their mobile devices. This is why Joone is serializable and remotely transportable using any wired or wireless protocol, and why it can be run by a simple, small and generalized program. Moreover, this dream has become a more solid reality thanks to the advent of handheld devices such as mobile phones and PDAs containing a Java virtual machine. Joone is ready to run on them, too. Hoping you'll find Joone interesting and useful, I thank you for your interest in it. Paolo Marrone


Introduction
This paper describes the technical concepts underlying the core engine of Joone, explaining in detail the architectural design that is at its foundation. This guide is intended to provide the developer - or anyone interested in using Joone - with the knowledge of the basic mechanisms of the core engine, so that anyone can understand how to use it and extend it to meet their own needs. To see some examples of how to use the engine from within a Java program to embed Joone in various applications, please read the Developer Guide, downloadable from Joone's SourceForge download area.


Overview
Each neural network (NN) is composed of a number of components (layers) connected together by connections (synapses). Depending on how these components are connected, several neural network architectures can be created (feed-forward NN, recurrent NN, etc). This document deals with feed-forward neural networks (FFNN) for simplicity's sake, but it is possible to build whatever neural network architecture is required with Joone. A FFNN is composed of a number of consecutive layers, each one connected to the next by a synapse. Recurrent connections from a layer to a previous one are not permitted. Consider the following figure:
[Figure: two layers connected by a synapse]
This is a sample FFNN with two layers and one synapse. Each layer is composed of a certain number of neurons, each of which has the same characteristics (transfer function, learning rate, etc). A neural net can be composed of several layers of different kinds. Each layer processes its input signal by applying a transfer function and sending the resulting pattern to the synapses that connect it to the next layer. In this way a neural network can process an input pattern, transferring it from its input layer to its output layer.

The Architecture
To ensure that it is possible to build whatever neural network architecture is required with Joone, a method to transfer the patterns through the net is required without the need of a central point of control. To accomplish this goal, each layer of Joone is implemented as a Runnable object, so each layer runs independently from the other layers (getting the input pattern, applying the transfer function to it and putting the resulting pattern on the output synapses so that the next layers can receive it, processing it and so on) as depicted by the following basic scheme:
[Figure: a neuron N receiving the inputs I1 ... IP through the weights wN1 ... wNP and producing the output YN]

Where, for each neuron N:

    XN = (I1 * wN1) + ... + (IP * wNP)
    f(X) = the transfer function (depending on the kind of the layer)
    YN = f(XN)
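As a hedged illustration of the formulas above, the following self-contained Java sketch computes the net input XN and the output YN of a single neuron using a sigmoid transfer function. The class and method names are invented for this example; this is not part of the Joone API.

```java
// Illustrative only: computes XN = (I1 * wN1) + ... + (IP * wNP) and YN = f(XN)
// for one neuron, using a sigmoid transfer function f.
public class NeuronDemo {

    // Weighted sum of the inputs: XN
    static double netInput(double[] inputs, double[] weights) {
        double x = 0.0;
        for (int i = 0; i < inputs.length; i++)
            x += inputs[i] * weights[i];
        return x;
    }

    // Transfer function f(X): here a sigmoid
    static double sigmoid(double x) {
        return 1.0 / (1.0 + Math.exp(-x));
    }

    // YN = f(XN)
    static double output(double[] inputs, double[] weights) {
        return sigmoid(netInput(inputs, weights));
    }

    public static void main(String[] args) {
        double y = output(new double[]{1.0, 1.0}, new double[]{0.5, 0.5});
        System.out.println(y); // sigmoid(1.0)
    }
}
```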

Complex neural network architectures can be easily built, either linear or recursive, because there is no necessity for a global controller of the net. Look at the following figure (the arrows represent the synapses):
[Figure: a network with an input layer (Layer 1), hidden layers (Layers 2, 3 and 4) and an output layer (Layer 5); the arrows represent the synapses]
In this manner any form of modular neural network can be built. Modular neural networks are a more generalized kind of multi-layered NN that process their input using several parallel sub-NNs and then recombine the results from each module. This tends to create structures within the topology that can promote the specialization of functions in each sub-module. From this point of view, a standard multi-layered neural network is just the simplest kind of modular neural network. Joone allows this kind of net to be built through its modular architecture, like a LEGO brick system: to build a neural network, simply connect each layer to the next as required using a synapse, and the net will run without problems. Each layer (running in its own thread) reads its input, applies the transfer function, and writes the result to its output synapses, to which other layers, running on separate threads, are connected, and so on. This transport mechanism is also used to bring the error from the output layers to the input layers during the training phases, allowing the weights and biases to be changed according to the chosen learning algorithm (for example the backprop algorithm). To accomplish this, each layer has two opposing transport mechanisms: one from input to output, to transfer the input pattern during the recall phase, and another from output to input, to transfer the learning error during the training phase, as depicted in the following figure:


[Figure: Layer - Synapse - Layer; the forward transfer carries the input signal from the input to the output during the recall phase, while the backward transfer carries the error from the output to the input during the training phase, driving the weights adjustment]

Each Joone component (both layers and synapses) has its own pre-built mechanisms to adjust the weights and biases according to the chosen learning algorithm. By this means:

- The engine is flexible: you can build any architecture you want simply by connecting each layer to another with a synapse, without being concerned about the architecture. Each layer will run independently, processing the signal on its input and writing the results to its output, where the connected synapses will transfer the signal to the next layers, and so on.
- The engine is scalable: if you need more computation power, simply add more CPUs to the system. Each layer, running on a separate thread, can be processed by a different CPU, enhancing the speed of the computation.
- The engine closely mirrors reality: conceptually, the net is not far from a real system (the brain), where each neuron works independently of the others without a global control system.
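The LEGO-style wiring described above can be sketched with a self-contained toy example. This is not the Joone API - the ToyLayer class and its methods are invented for this illustration: each "layer" applies its transfer function and pushes the result to every layer connected to it, which is the essence of the synapse-based transport mechanism.

```java
import java.util.ArrayList;
import java.util.List;

// Toy sketch (NOT the Joone API): each "layer" applies its transfer function
// and pushes the result to every layer attached to its output.
public class ModularNetDemo {

    interface Transfer { double apply(double x); }

    static class ToyLayer {
        final Transfer f;
        final List<ToyLayer> next = new ArrayList<>(); // the output "synapses"
        double lastOutput;

        ToyLayer(Transfer f) { this.f = f; }

        void connectTo(ToyLayer other) { next.add(other); }

        void propagate(double input) {
            lastOutput = f.apply(input);   // apply the transfer function
            for (ToyLayer l : next)        // write the result to the next layers
                l.propagate(lastOutput);
        }
    }

    public static void main(String[] args) {
        ToyLayer input  = new ToyLayer(x -> x);        // linear input layer
        ToyLayer hidden = new ToyLayer(x -> 2.0 * x);  // toy "hidden" layer
        ToyLayer output = new ToyLayer(x -> x + 1.0);  // toy "output" layer
        input.connectTo(hidden);
        hidden.connectTo(output);
        input.propagate(3.0);
        System.out.println(output.lastOutput); // (3 * 2) + 1 = 7.0
    }
}
```

In the real engine each layer runs on its own thread and the synapse mediates the hand-off; the direct method call here is only a simplification of that data flow.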

Having seen the overall behaviour of the components of Joone, the following sections look at both the object model and the implementation code.

http://www.joone.org

Joone Core Engine

Technical Overview

The Core Engine


The core engine of Joone is composed of a small number of interfaces and abstract classes forming a nucleus of objects that implement the basic behaviours of a neural network illustrated in the previous chapter. The following UML class diagram contains the main objects constituting the model of the core engine of Joone:

To simplify the model, only the relevant properties and methods are shown for each object. As depicted, all the objects implement the java.io.Serializable interface, so each neural network built with Joone can be saved as a byte stream to be stored in a file system or database, or be transported to other machines to be used remotely. The two main components are represented by two abstract classes (both contained in the org.joone.engine package): the Layer and the Synapse objects.

http://www.joone.org

Joone Core Engine

Technical Overview

The Layer
The Layer object is the basic element that forms the neural net. It is composed of neurons, all having the same characteristics. This component transfers the input pattern to the output pattern by executing a transfer function. The output pattern is sent to a vector of Synapse objects attached to the layer's output. It is the active element of a neural net in Joone: it runs in a separate thread (it implements the java.lang.Runnable interface) so that it can run independently from the other layers of the neural net. Its heart is represented by the run method:
public void run() {
    while (running) {
        int dimI = getRows();
        int dimO = getDimension();
        // Recall phase
        inps = new double[dimI];
        this.fireFwdGet();
        if (m_pattern != null) {
            forward(inps);
            m_pattern.setArray(outs);
            fireFwdPut(m_pattern);
        }
        if (step != -1)
            // Checks if the next step is a learning step
            m_learning = monitor.isLearningCicle(step);
        else
            // Stops the net
            running = false;
        // Learning phase
        if ((m_learning) && (running)) {
            gradientInps = new double[dimO];
            this.fireRevGet();
            backward(gradientInps);
            m_pattern = new Pattern(gradientOuts);
            m_pattern.setCount(step);
            fireRevPut(m_pattern);
        }
    } // END while (running)
    myThread = null;
}

The end of the cycle is controlled by the running variable, so the code loops until some ending event occurs. The two main sections of the code are described below:

The Recall Phase


The code in the first block reads all the input patterns from the input synapses (fireFwdGet), where each input pattern is added to the others to produce the inps vector of doubles. It then calls the forward method, which is abstract in the Layer object. In the forward method the inherited classes must implement the required formulas of the transfer function, reading the input values from the inps vector and returning the result in the outs vector of doubles. By using this mechanism, based on the Template pattern, new kinds of layers can easily be built by extending the Layer object. After this, the code calls the fireFwdPut method to write the calculated pattern to the output synapses, from which subsequent layers can process the results in the same manner.


In simpler terms, the Layer object behaves like a pump that decants the liquid (the pattern) from one container (the synapse) to another.

The Learning Phase


After the recall phase, if the neural net is in a training cycle, the code calls the fireRevGet method to read the error obtained on the last pattern from the output synapses, then calls the abstract backward method where, as in the forward method, the inherited classes must implement the processing of the error to modify the biases of the neurons constituting the layer. The code does this task by reading the error pattern from the gradientInps vector and writing the result to the gradientOuts vector. After this, the code writes the error pattern contained in the gradientOuts vector to the input synapses (fireRevPut), from which other layers can subsequently process the back-propagated error signal. To summarise, the Layer object alternately pumps the input signal from the input synapses to the output synapses, and the error pattern from the output synapses to the input synapses, as depicted in the following figure (the numbers indicate the sequence of execution):

[Figure: Input Synapse - Layer - Output Synapse; (1) fwdGet reads the input signal from the input synapse, forward() processes it, (2) fwdPut writes it to the output synapse; (3) revGet reads the error from the output synapse, backward() processes it, (4) revPut writes it back to the input synapse]

Connecting a Synapse to a Layer


To connect a synapse to a layer, the program must call the Layer.addInputSynapse method for an input synapse, or the Layer.addOutputSynapse method for an output synapse. These two methods, inherited from the NeuralLayer interface, are implemented in the Layer object as follows:
/** Adds a new input synapse to the layer
 *  @param newListener neural.engine.InputPatternListener
 */
public synchronized void addInputSynapse(InputPatternListener newListener) {
    if (aInputPatternListener == null) {
        aInputPatternListener = new java.util.Vector();
    }
    aInputPatternListener.addElement(newListener);
    if (newListener.getMonitor() == null)
        newListener.setMonitor(getMonitor());
    this.setInputDimension(newListener);
    notifyAll();
}


The Layer object has two vectors containing the list of the input synapses and the list of the output synapses connected to it. In the fireFwdGet and fireRevPut methods the Layer scans the input vector and, for each input synapse found, calls its fwdGet and revPut methods respectively (implemented by the input synapse from the InputPatternListener interface). Look at the following code that implements the fireFwdGet method:
/**
 * Calls all the fwdGet methods on the input synapses to get the input patterns
 */
protected synchronized void fireFwdGet() {
    double[] patt;
    int currentSize = aInputPatternListener.size();
    InputPatternListener tempListener = null;
    for (int index = 0; index < currentSize; index++) {
        tempListener = (InputPatternListener)aInputPatternListener.elementAt(index);
        if (tempListener != null) {
            m_pattern = tempListener.fwdGet();
            if (m_pattern != null) {
                patt = m_pattern.getArray();
                if (patt.length != inps.length)
                    inps = new double[patt.length];
                sumInput(patt);
                step = m_pattern.getCount();
            }
        }
    }
}

The loop in the code above scans the vector of input synapses. The same mechanism exists in the fireFwdPut and fireRevGet methods, applied to the vector of output synapses implementing the OutputPatternListener interface. This mechanism is derived from the Observer pattern, where the Layer is the Subject and the Synapse is the Observer. Using these two vectors, it is possible to connect many synapses (both input and output) to a Layer, permitting complex neural net architectures to be built.

The Synapse
The Synapse object represents the connection between two layers, permitting a pattern to be passed from one layer to another. The Synapse is also the memory of a neural network: during the training process the weights of the synapse (contained in the Matrix object) are modified according to the implemented learning algorithm. As described above, a synapse is both the output synapse of one layer and the input synapse of the next connected layer in the NN. To do this, the Synapse object implements the InputPatternListener and OutputPatternListener interfaces. These interfaces contain respectively the described methods fwdGet, revPut, fwdPut and revGet. The following code describes how they are implemented in the Synapse object:
public synchronized void fwdPut(Pattern pattern) {
    if (isEnabled()) {
        count = pattern.getCount();
        if ((count > ignoreBefore) || (count == -1)) {
            while (items > 0) {
                try {
                    wait();
                } catch (InterruptedException e) {
                    return;
                }
            }
            m_pattern = pattern;
            inps = (double[])pattern.getArray();
            forward(inps);
            ++items;
            notifyAll();
        }
    }
}

public synchronized Pattern fwdGet() {
    if (!isEnabled())
        return null;
    while (items == 0) {
        try {
            wait();
        } catch (InterruptedException e) {
            return null;
        }
    }
    --items;
    notifyAll();
    m_pattern.setArray(outs);
    return m_pattern;
}

The Synapse is a shared resource of two Layers that, as already mentioned, run on two separate threads. To avoid one layer trying to read the pattern from its input synapse before the other layer has written it, the shared synapse is synchronized. Looking at the code, the variable called items represents the semaphore of this synchronization mechanism. After the first Layer calls the fwdPut method, the items variable is incremented to indicate that the synapse is full. Conversely, after the subsequent Layer calls the fwdGet method, this variable is decremented, indicating that the synapse is empty. Both of the above methods check the items variable when they are invoked: if a layer calls the fwdPut method when items is greater than zero, its thread falls into the wait state, because the synapse is already full; in the fwdGet method, if a Layer tries to get a pattern when items is equal to zero (meaning that the synapse does not contain a pattern), its corresponding thread falls into the wait state. The notifyAll call at the end of the two methods awakens the other waiting layer, signalling that the synapse is ready to be read or written. After the notifyAll, at the end of the method, the running thread releases the lock on the object, permitting another waiting thread to take ownership. Note that although all waiting threads are notified by notifyAll, only one will acquire the lock and the other threads will return to the wait state. The synchronizing mechanism is the same in the corresponding revGet and revPut methods used during the training phase of the neural network. The fwdPut method calls the abstract forward method (just as revPut calls the abstract backward method) to permit the inherited classes to implement respectively the recall and the learning formulas, as already described for the Layer object (according to the Template pattern).
By writing the appropriate code in these two methods, the engine can be extended with new synapses and layers implementing whatever learning algorithm and architecture is required.
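The items-based hand-off described above can be reduced to a minimal, self-contained sketch. This is not the real Synapse class - ToySynapse and its fields are invented for this example - but it shows the same one-slot wait/notify mechanism between a producer layer and a consumer layer:

```java
// Minimal sketch (not the real Synapse class) of the one-slot wait/notify
// hand-off: "items" acts as the semaphore between two threads.
public class ToySynapse {
    private double[] pattern;
    private int items = 0; // 0 = empty, 1 = full

    public synchronized void fwdPut(double[] p) throws InterruptedException {
        while (items > 0)   // synapse already full: wait for the reader
            wait();
        pattern = p;
        ++items;
        notifyAll();        // wake up a waiting reader
    }

    public synchronized double[] fwdGet() throws InterruptedException {
        while (items == 0)  // synapse empty: wait for the writer
            wait();
        --items;
        notifyAll();        // wake up a waiting writer
        return pattern;
    }

    public static void main(String[] args) throws Exception {
        ToySynapse s = new ToySynapse();
        Thread producer = new Thread(() -> {
            try { s.fwdPut(new double[]{1.0, 2.0}); }
            catch (InterruptedException ignored) { }
        });
        producer.start();
        double[] got = s.fwdGet(); // blocks until the producer has put a pattern
        producer.join();
        System.out.println(got[0] + " " + got[1]);
    }
}
```

Note the while loop around wait(): as in the real engine, the condition is re-checked after every wakeup because notifyAll may awaken threads whose condition is not yet satisfied.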


The Pattern
The Pattern object is the container of the data used to interrogate or train a neural network. It is composed of two parameters: an array of doubles to contain the values of the transported pattern, and an integer to contain the sequence number of that pattern. The dimensions of the array are set according to the dimensions of the transported pattern. The Pattern object is also used to stop all the Layers in the neural network: when its count parameter contains the value -1, all the layers that receive that pattern exit from their running state and stop (the only safe way to stop a thread in Java is to exit from its run method). Using this simple mechanism the threads within which the Layer objects run can easily be controlled. The Pattern object is also cloneable, permitting a duplicate of a pattern to be passed to any layer during its transfer from the first to the last layer of a neural network.
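The stop-pattern convention can be sketched as follows. This is an illustrative toy, not the real org.joone.engine.Pattern class: a consumer processes patterns until one whose count is -1 arrives, mirroring how a Layer exits its run loop.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Illustrative sketch of the stop-pattern convention: a pattern whose count
// is -1 tells the receiving layer to exit its run loop.
public class PatternDemo {

    static class ToyPattern {
        final double[] array;
        final int count; // sequence number, or -1 to signal "stop"
        ToyPattern(double[] array, int count) { this.array = array; this.count = count; }
    }

    // Consume patterns until the stop pattern (count == -1) arrives;
    // returns how many data patterns were processed.
    static int consume(Queue<ToyPattern> queue) {
        int processed = 0;
        while (true) {
            ToyPattern p = queue.poll();
            if (p == null || p.count == -1)
                break; // the safe way to stop: leave the loop
            processed++;
        }
        return processed;
    }

    public static void main(String[] args) {
        Queue<ToyPattern> q = new ArrayDeque<>();
        q.add(new ToyPattern(new double[]{0.1}, 0));
        q.add(new ToyPattern(new double[]{0.2}, 1));
        q.add(new ToyPattern(null, -1)); // stop pattern
        System.out.println(consume(q)); // 2
    }
}
```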

The Matrix
The Matrix object simply contains a matrix of doubles to store the values of the weights of the connections and the biases. An instance of a Matrix object is contained within both the Synapse and Layer objects. Each element of a matrix contains two values: the actual value of the represented weight, and the corresponding delta value. The delta value is the difference between the actual value and the value of the previous cycle. The delta value is useful during the learning phase, permitting the application of momentum to quickly find a good minimum of the error surface: the momentum algorithm adds the previous variation to the newly calculated weight value. See the literature for more information about the algorithm.
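The momentum update just described can be written as a small sketch, consistent with the update performed in the SigmoidLayer code shown later (dw = learningRate * gradient + momentum * previousDelta). The class and method names here are invented for illustration:

```java
// Sketch of the momentum weight update: the new variation dw adds a fraction
// (momentum) of the previous variation (delta) to the gradient step.
public class MomentumDemo {

    // Returns the new weight; delta[0] is updated to the variation applied,
    // so it can be reused as "previous delta" in the next cycle.
    static double update(double weight, double gradient,
                         double learningRate, double momentum, double[] delta) {
        double dw = learningRate * gradient + momentum * delta[0];
        delta[0] = dw; // remember the variation for the next cycle
        return weight + dw;
    }

    public static void main(String[] args) {
        double[] delta = {0.1}; // variation applied in the previous cycle
        double w = update(0.5, 0.2, 0.25, 0.9, delta);
        System.out.println(w); // 0.5 + (0.25*0.2 + 0.9*0.1) ≈ 0.64
    }
}
```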

The Monitor
The Monitor object is the container of all the parameters that control the behaviour of the neural net. It controls the start/stop actions and permits net parameters to be set, e.g. the learning rate, the momentum, etc. Each component of the neural net (Layers and Synapses) is connected to a Monitor object so that it can read the parameters that control its work. The monitor can be different for each component, though normally it is useful to create only one Monitor per neural net, the reference being set by each component's setMonitor method. The Monitor can also notify a listener when certain events occur: through an event-handling mechanism based on the JavaBeans model (the Observer pattern), a listener object that implements the org.joone.engine.NeuralNetListener interface can register itself to receive all the events of the net. The following is a list of the Monitor object's features.

The NN Parameters
The Monitor contains all the parameters needed during the training phases, e.g. the learning rate, the momentum, etc. Each parameter has its own getter and setter method, conforming to the JavaBeans specification. These parameters can be used by an external application, for example, to display them in a user interface, and are used to calculate the formulas written in the backward() methods of the neural network components, as shown in the following code extracted from the org.joone.engine.SigmoidLayer class:
public void backward(double[] pattern) {
    super.backward(pattern);
    double dw, absv;
    int x;
    int n = getRows();
    for (x = 0; x < n; ++x) {
        gradientOuts[x] = pattern[x] * outs[x] * (1 - outs[x]);
        // bias adjustment
        if (monitor.getMomentum() < 0) {
            if (gradientOuts[x] < 0)
                absv = -gradientOuts[x];
            else
                absv = gradientOuts[x];
            dw = monitor.getLearningRate() * gradientOuts[x]
               + absv * bias.delta[x][0];
        } else
            dw = monitor.getLearningRate() * gradientOuts[x]
               + monitor.getMomentum() * bias.delta[x][0];
        bias.value[x][0] += dw;
        bias.delta[x][0] = dw;
    }
}

In this way each component has a standard mechanism for getting the parameters needed for its work.

The NN control
The Monitor object is also the central point for controlling the start/stop times of a neural network. It has some parameters that are useful for controlling the behaviour of the NN, e.g. the total number of epochs, the total number of input patterns, etc. Before explaining how this works, an explanation is required of how the input components of a neural network work. To provide an input pattern to a neural net, a component must inherit from the org.joone.io.StreamInputSynapse class. This abstract class extends the Synapse object, so it can be connected to the input of a Layer like any other Synapse. When the Layer calls the fwdGet method on the StreamInputSynapse (see the Layer object explained in a previous chapter), this object calls the Monitor.nextStep() method to advise the Monitor that a new cycle must be processed. Look at the implementation of the nextStep method:
public synchronized boolean nextStep() {
    while (run == 0) {
        try {
            if (!firstTime) {
                if (currentCicle > 0) {
                    --currentCicle;
                    fireCicleTerminated();
                    run = patterns;
                }
                if (currentCicle == 0) {
                    fireNetStopped();
                    if (saveRun == 0) {
                        saveRun = patterns;
                        saveCurrentCicle = totCicles;
                    }
                    firstTime = true;
                    return false;
                    //wait();
                }
            } else
                /* If it gets here, it means that this method
                 * was called before calling Go() or runAgain() */
                wait();
        } catch (InterruptedException e) {
            //e.printStackTrace();
            return false;
        }
    }
    if (run > 0)
        --run;
    return true;
}

In the code above, the variable run counts the remaining input patterns of the current epoch, while currentCicle counts the remaining epochs (both descending from their maximum initial value to zero while the neural net works). When run reaches zero and currentCicle is greater than zero, the Monitor decrements currentCicle, calls the fireCicleTerminated method to advise the registered observers that the current epoch has finished, and resets run to the number of patterns. If currentCicle reaches zero, it calls the fireNetStopped method to indicate that the last epoch has terminated, and returns false to the calling object. Otherwise, the Monitor simply decrements run and returns true.

This notification mechanism is obtained by implementing the Observer pattern: the observer objects register themselves with the Monitor by calling the Monitor.addNeuralNetListener method, passing themselves as a parameter. To receive these notifications, the observer objects must implement the org.joone.engine.NeuralNetListener interface. In this manner the following services are made available by the Monitor object:

1. The StreamInputSynapse knows whether it can read and process the next input pattern (otherwise it stops), being advised by the returned Boolean value.
2. An external application can start or stop a neural network simply by setting the run parameter to a value greater than zero (to start) or equal to zero (to stop). To simplify these actions, the methods Go (to start), Stop (to stop) and runAgain (to restart a previously stopped network) have been added to the Monitor.
3. The observer objects (e.g. the main application) connected to the Monitor can be advised when a particular event is raised, such as when an epoch or the entire training process has finished (for example, to show the user the current epoch number or the current training error).

To see how to manage the events of the Monitor to read the parameters of the neural network, read the following paragraph.
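The counter logic of nextStep can be simulated in isolation with the following simplified sketch. This is not the real Monitor - the class omits the threading, the firstTime/wait handling and the event objects, replacing the fire* calls with simple counters - but it reproduces the run/currentCicle countdown described above:

```java
// Simplified sketch of the Monitor counters: "run" counts down the patterns
// of the current epoch, "currentCicle" counts down the remaining epochs.
public class CounterDemo {

    int patterns, run, currentCicle;
    int cycleEvents = 0, stopEvents = 0;

    CounterDemo(int patterns, int totCicles) {
        this.patterns = patterns;
        this.run = patterns;
        this.currentCicle = totCicles;
    }

    // Returns true if the caller may process the next pattern.
    boolean nextStep() {
        if (run == 0) {
            if (currentCicle > 0) {
                --currentCicle;
                cycleEvents++;   // stands in for fireCicleTerminated()
                run = patterns;
            }
            if (currentCicle == 0) {
                stopEvents++;    // stands in for fireNetStopped()
                return false;
            }
        }
        --run;
        return true;
    }

    public static void main(String[] args) {
        CounterDemo m = new CounterDemo(3, 2); // 3 patterns, 2 epochs
        int steps = 0;
        while (m.nextStep())
            steps++;
        // 3 patterns * 2 epochs = 6 processed steps, 2 epoch-end events
        System.out.println(steps + " steps, " + m.cycleEvents + " epochs ended");
    }
}
```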


Managing the events


To explain how the events of the Monitor object can be used by an external application, the following describes in detail what happens when a neural network is trained and the last epoch is reached.
[Figure: a network composed of xxxInputSynapse, Input Layer, Hidden Layer, Output Layer and TeacherSynapse; the Training Data feeds the xxxInputSynapse, the Desired Data feeds the TeacherSynapse, and every component holds a reference (dotted lines) to the Monitor]
Suppose we have a neural network composed, as depicted in the above figure, of three layers, an xxxInputSynapse to read the training data, a TeacherSynapse to calculate the error for the backprop algorithm, and a Monitor object that controls the overall training process. As already mentioned, all the components of a neural network built with Joone obtain a reference to the Monitor object, represented in the figure by the dotted lines. Supposing the net is started in training mode, the following figures show all the phases of the process when the end of the last epoch is reached. The numbers in the label boxes indicate the sequence of the processing:
[Figure: (1) the input layer calls the fwdGet method on the xxxInputSynapse; (2) the xxxInputSynapse calls the Monitor's nextStep method]

When the input layer calls the xxxInputSynapse.fwdGet method (1), the called object calls the Monitor.nextStep method to see if the next pattern must be processed (2).


[Figure: (3) the Monitor raises the netStopped event; (4) the Monitor returns a false Boolean value; (5) the xxxInputSynapse creates a stop pattern and injects it into the net]

Since the last epoch is finished, the Monitor object raises a netStopped event (3) and returns a false Boolean value to the xxxInputSynapse (4). The xxxInputSynapse, because it receives a false value, creates a stop pattern composed of a Pattern object with the counter set to -1, and injects it into the neural network (5).

[Figure: (6) all the layers stop their running threads when they receive the stop pattern]

All the layers of the net stop their threads by simply exiting from the run() method when they receive a stop pattern (6).


[Figure: (7) the TeacherSynapse calculates and sets the global error contained in the Monitor; (8) the Monitor raises the errorChanged event]

The TeacherSynapse calculates the global error and communicates this value to the Monitor object (7), which raises an errorChanged event to its listeners (8).

Warning: as the above process shows, the netStopped event raised by the Monitor cannot be used to read the last error value of the net, nor to read the resulting output pattern of a recall phase, because this event may be raised while the last input pattern is still travelling across the layers, before it reaches the last output layer of the neural network. To be sure of reading the right values from the net, the following rules must be observed:

- Reading the error: to read the error of the neural network, wait for the errorChanged event. Build a listener that implements the NeuralNetListener interface and write the code that manages the error in the inherited errorChanged method.
- Reading the outcome: to be sure of having received all the resulting patterns of a cycle in a recall phase, wait for a stop pattern from the output layer of the net. To do this, build an object that extends StreamOutputSynapse and write the code that manages the output patterns by implementing the fwdPut method of this class, taking the appropriate action when the count parameter of the received Pattern indicates a stop. Some pre-built output synapse classes are provided with Joone, and many others will be released in future versions.
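The register-and-notify mechanism behind the errorChanged rule can be sketched with a minimal observer. The interface and classes below are simplified stand-ins invented for this example, not the real org.joone.engine.NeuralNetListener or Monitor:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the Monitor/NeuralNetListener observer mechanism:
// listeners register themselves and are notified when the error changes.
public class ListenerDemo {

    interface ToyNetListener {
        void errorChanged(double newError);
        void netStopped();
    }

    static class ToyMonitor {
        private final List<ToyNetListener> listeners = new ArrayList<>();

        void addNeuralNetListener(ToyNetListener l) { listeners.add(l); }

        void fireErrorChanged(double error) {
            for (ToyNetListener l : listeners) l.errorChanged(error);
        }

        void fireNetStopped() {
            for (ToyNetListener l : listeners) l.netStopped();
        }
    }

    public static void main(String[] args) {
        ToyMonitor monitor = new ToyMonitor();
        final double[] lastError = {Double.NaN};
        monitor.addNeuralNetListener(new ToyNetListener() {
            // errorChanged is the safe place to read the error value
            public void errorChanged(double e) { lastError[0] = e; }
            public void netStopped() { System.out.println("stopped"); }
        });
        monitor.fireErrorChanged(0.05);
        monitor.fireNetStopped();
        System.out.println(lastError[0]); // 0.05
    }
}
```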


I/O components
The I/O components of the core engine are stored in the org.joone.io package. They permit both the connection of a neural network to external sources of data and the storage of the results of the network to whatever output device is required. The object model is shown in the following figure:


The abstract StreamInputSynapse and StreamOutputSynapse classes represent the core elements of the I/O package. They extend the abstract Synapse class, so they can be attached to the input or the output of a generic Layer object, since they expose the same interface required of any I/O listener of a Layer. Thanks to this simple mechanism, the Layer is not affected by the category of the synapses connected to it: as they all have the same interface, the Layer simply calls the xxxGet and xxxPut methods without needing to know anything about their specialization.

The StreamInputSynapse
The StreamInputSynapse object is designed to provide a neural network with input data, offering a simple way to manage data organized as rows and columns, for instance semicolon-separated ASCII input data streams. Each value in a row is made available as an output of the input synapse, and the rows are processed sequentially by successive calls to the fwdGet method. As some files may contain information additional to the required data, the parameters firstRow, lastRow, firstCol and lastCol, derived from the InputSynapse interface, can be used to define the range of usable data.

The Boolean parameter stepCounter indicates whether the object is to call the Monitor.nextStep() method for each pattern read (see the NN control paragraph). By default it is set to TRUE, but in some cases it must be set to FALSE, for the following reason: a neural network that is to be trained needs at least two StreamInputSynapse objects, one to feed the sample input patterns to the neural network and another to provide the net with the desired output patterns required by a supervised learning algorithm. Since the Monitor object is shared by all the components of a neural network built with Joone, only one input component may call the Monitor.nextStep() method; otherwise the counters of the Monitor object would be advanced twice (or more) for each cycle. To avoid this side effect, the stepCounter parameter of the StreamInputSynapse that provides the desired output data to the neural network is set to FALSE.

A StreamInputSynapse can store its input data permanently by setting the buffered parameter to TRUE (the default). An input component can thus be saved or transported along with its input data, permitting a neural network to be used without the initial input file. This feature is very useful for remotely training a neural network in a distributed environment, as provided by the Joone framework.
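The row/column range selection performed through firstRow, lastRow, firstCol and lastCol can be illustrated with a small self-contained sketch over semicolon-separated rows. RangeSelectSketch is a hypothetical helper, not part of the Joone API, and the indices are assumed to be 1-based and inclusive, as the parameter names suggest.

```java
// Toy illustration of the firstRow/lastRow/firstCol/lastCol selection over
// semicolon-separated rows; indices assumed 1-based and inclusive.
public class RangeSelectSketch {
    public static double[][] select(String[] rows, int firstRow, int lastRow,
                                    int firstCol, int lastCol) {
        double[][] out = new double[lastRow - firstRow + 1][lastCol - firstCol + 1];
        for (int r = firstRow; r <= lastRow; r++) {
            String[] tokens = rows[r - 1].split(";");
            for (int c = firstCol; c <= lastCol; c++)
                out[r - firstRow][c - firstCol] = Double.parseDouble(tokens[c - 1]);
        }
        return out;
    }

    public static void main(String[] args) {
        String[] rows = {"1;2;3", "4;5;6", "7;8;9"};
        double[][] sel = select(rows, 2, 3, 1, 2);
        System.out.println(java.util.Arrays.deepToString(sel)); // [[4.0, 5.0], [7.0, 8.0]]
    }
}
```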
The FileInputSynapse and URLInputSynapse objects are concrete implementations of the abstract StreamInputSynapse class which read input patterns from files and from http/ftp sockets respectively. To extract the values from a semicolon-separated input stream, these two classes use the StreamInputTokenizer object, which is able to parse each line of the input data stream, extract the single values from it and return them through the getTokenAt and getTokensArray methods. To add a new xxxInputSynapse that reads patterns from a kind of input data other than semicolon-separated values, you must:

1. Create a new class implementing the PatternTokenizer interface (e.g. xxxInputTokenizer).
2. Write the code necessary to implement all the public methods of the inherited interface.
3. Create a new class inherited from StreamInputSynapse (e.g. xxxInputSynapse).
4. Override the abstract method initInputStream, writing the code necessary to initialise the token parameter of the inherited class. To do this, call the method super.setToken from within initInputStream, passing the newly created xxxInputTokenizer after having initialised it. For more details see the implementation built into FileInputSynapse.

To better understand the concepts underlying the I/O model of Joone, consider that the I/O component package is based on two distinct tiers that logically separate the neural network from its input data. Since a neural network can natively process only floating point values, the I/O of Joone is based on this assumption; consequently, if the nature of the input data is already numeric (integer or float/double), the user doesn't need to make further format transformations on it. The I/O object model is based on two separate levels of abstraction, as depicted in the following figure:
[Figure: the two-tier I/O abstraction. Input data, in any format depending on the input device, is read by a device-specific driver, xxxInputTokenizer, which implements the PatternTokenizer interface and emits data in numeric (double) format only; the xxxInputSynapse, which extends the StreamInputSynapse class, consumes that numeric data and feeds the NN input layer.]

The two colored blocks in the figure represent the objects that must be written to add a new input data format and/or device to the neural network. The first is the driver, which knows how to read the input data from the specific input device: it converts the device-specific input data format into the numeric double format accepted by the neural network. The existing StreamInputTokenizer is an object that transforms semicolon-separated ASCII values into numeric double values; it was the first implementation made because the most common format of data is text files, so if the input data is already in this ASCII format you can just use it, without implementing any transformation. For data contained in arrays of doubles (i.e. for input provided by another application), the MemoryInputTokenizer and MemoryInputSynapse classes implement the above two layers to provide the neural network with data contained in a 2D array of doubles. To use them, simply create a new instance of MemoryInputSynapse, set the input array by calling its setInputArray method, then connect it to the input layer of the neural network.
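The split between the two tiers can be sketched in plain Java. The interface below is a hypothetical, reduced analogue of PatternTokenizer, and MemoryTokenizer plays the role of the memory-backed driver; the consumer sees only rows of doubles, regardless of the device behind them.

```java
// Sketch of the two-tier split: a device-specific "tokenizer" (driver) that
// yields rows of doubles, consumed by a device-agnostic reader. Hypothetical
// names, loosely mirroring PatternTokenizer/MemoryInputTokenizer.
public class TwoTierSketch {
    interface RowTokenizer { boolean nextLine(); double getTokenAt(int i); int size(); }

    // Memory-backed driver over a 2D array of doubles
    static class MemoryTokenizer implements RowTokenizer {
        private final double[][] data; private int row = -1;
        MemoryTokenizer(double[][] data) { this.data = data; }
        public boolean nextLine() { return ++row < data.length; }
        public double getTokenAt(int i) { return data[row][i]; }
        public int size() { return data[row].length; }
    }

    // Device-agnostic consumer: sums every value it is fed
    public static double sumAll(RowTokenizer t) {
        double sum = 0;
        while (t.nextLine())
            for (int i = 0; i < t.size(); i++) sum += t.getTokenAt(i);
        return sum;
    }

    public static double sumOfArray(double[][] data) {
        return sumAll(new MemoryTokenizer(data));
    }

    public static void main(String[] args) {
        System.out.println(sumOfArray(new double[][]{{1, 2}, {3, 4}})); // 10.0
    }
}
```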


The StreamOutputSynapse
The StreamOutputSynapse object allows a neural network to write its output patterns. It writes all the values of the pattern passed by each call of the fwdPut method to an output stream. The values are separated by the character contained in the separator parameter (the default is the semicolon), and each row is terminated by a carriage return. Extending this class allows output patterns to be written to any output device: for example ASCII files, FTP sites, spreadsheets, charting visual components, etc. Joone has three concrete implementations of this abstract class:

FileOutputSynapse, to write the output to an ASCII file in separated-values format;
XLSOutputSynapse, to write the output to a file in Excel format;
MemoryOutputSynapse, to write the output into a 2D array of doubles, so that the output of a neural network can be used by an embedding or external application.

Many others can be added simply by extending the StreamOutputSynapse abstract class; in this manner Joone could be used to drive several physical devices, such as robot-arm servomotors, regulator valves, servomechanisms, etc.
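The formatting behaviour described above (values joined by the separator character, one row per line) can be sketched as follows. OutputFormatSketch is a hypothetical helper, not part of the Joone API; it only reproduces the row/separator layout a StreamOutputSynapse produces.

```java
// Minimal stand-in for StreamOutputSynapse's formatting: each pattern row is
// written as separator-joined values, one row per line. Hypothetical helper.
public class OutputFormatSketch {
    public static String format(double[][] patterns, char separator) {
        StringBuilder sb = new StringBuilder();
        for (double[] row : patterns) {
            for (int i = 0; i < row.length; i++) {
                if (i > 0) sb.append(separator);
                sb.append(row[i]);
            }
            sb.append('\n'); // one output row per pattern
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(format(new double[][]{{1.0, 2.5}, {3.0, 4.0}}, ';'));
    }
}
```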


The Supervised Learning components


To implement supervised learning techniques, some mechanism is needed to provide the neural network with the error for each input pattern, expressed as the difference between the output generated for the current pattern and the desired output value for that pattern. All the learning components are in the org.joone.engine.learning package, and its object model is represented in the following figure:

The TeacherSynapse
In this package the core component is the TeacherSynapse object. Its function is to calculate the difference between the output of the neural network and a desired value obtained from some external data source. The calculated difference is injected backward into the neural network, starting from the output layer, so that each component can process the error pattern and modify its internal connections by applying some learning algorithm. The TeacherSynapse object, as its name suggests, extends the Synapse object so that it can be attached as the output synapse of the last layer in the neural network, as depicted in the following figure:



[Figure: data flow through the TeacherSynapse. The Output Layer delivers its pattern by calling fwdPut; the teacher calls fwdGet on the attached StreamInputSynapse to obtain the desired pattern; the difference (the error pattern) is returned to the Output Layer through revGet/revPut for the backward pass; the cycle RMSE is pushed into a FIFO, from which it can be read via fwdGet.]

The TeacherSynapse object receives, as does any other Synapse, the pattern from the preceding Layer through the call to its fwdPut method. In the code contained in this method the teacher calls the fwdGet method on the internally attached StreamInputSynapse (the desired parameter) to get the desired pattern, calculates the difference between the two patterns and makes the result available to the connected Layer, which can get it simply by calling the TeacherSynapse.revGet method. So the training cycle is complete: the error pattern can be transported from the last to the first layer of the neural network using the mechanism illustrated in the previous chapters of this paper. In this simple manner the output layer doesn't concern itself with the nature of the attached output synapse, since it continues to call the methods fwdPut and revGet on its output Synapse.

To give an external application the RMSE (root mean squared error) calculated over the last cycle, at the end of each cycle the TeacherSynapse pushes this value into a FIFO (First-In-First-Out) structure (see the org.joone.engine.fifo class). From there any external application can get the resulting RMSE value by calling the fwdGet method on the teacher object. The use of a FIFO structure permits loose coupling between the neural network and the external thread that reads and processes the RMSE value, so the training cycles do not have to wait for the RMSE pattern to be processed. In fact, to get the RMSE values, simply connect another Layer that runs on a separate Thread to the output of the TeacherSynapse object, and connect to the output of this Layer, for instance, a FileOutputSynapse object to write the RMSE values to an ASCII file.
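The arithmetic the teacher performs can be shown in isolation. This sketch assumes the error pattern is the element-wise difference between the desired and the actual output, and that the cycle RMSE is computed over all elements of the cycle's patterns; TeacherMathSketch is a hypothetical name, and the sign convention is an assumption for illustration.

```java
// Isolated sketch of the teacher's arithmetic: element-wise difference for
// the error pattern, and root mean squared error over a whole cycle.
public class TeacherMathSketch {
    // Assumed convention: error = desired - output
    public static double[] errorPattern(double[] output, double[] desired) {
        double[] err = new double[output.length];
        for (int i = 0; i < output.length; i++) err[i] = desired[i] - output[i];
        return err;
    }

    // RMSE over all elements of all patterns in the cycle
    public static double rmse(double[][] outputs, double[][] desired) {
        double sum = 0; int n = 0;
        for (int p = 0; p < outputs.length; p++)
            for (int i = 0; i < outputs[p].length; i++, n++) {
                double d = desired[p][i] - outputs[p][i];
                sum += d * d;
            }
        return Math.sqrt(sum / n);
    }

    public static void main(String[] args) {
        System.out.println(rmse(new double[][]{{0.0}}, new double[][]{{1.0}})); // 1.0
    }
}
```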

The TeachingSynapse
To simplify the construction of the whole chain described above (teacher -> fifo -> layer), the TeachingSynapse has been built as an implementation of this chain. The teaching object is composed of a TeacherSynapse object plus a LinearLayer connected to the Fifo structure, to which any OutputStreamSynapse can be connected to write the RMSE patterns to any output stream. The internal structure of the TeachingSynapse object is depicted in the following figure:



[Figure: internal structure of the TeachingSynapse object. The wrapped TeacherSynapse reads the desired patterns from a StreamInputSynapse via fwdGet, computes the difference and returns the error pattern for the backward pass; the RMSE values it pushes into the FIFO are pulled, via fwdGet, by an internal LinearLayer, whose output can be connected to an external OutputStreamSynapse object.]
In this manner a TeachingSynapse object can be connected to the output layer of a neural network, and to this object can in turn be connected (for instance) a FileOutputSynapse, to obtain the RMSE values written to an ASCII file. This compound object is a fundamental example of how the basic components of Joone can be used to build more complex components implementing more sophisticated features. In other words, it is an additional example of the simplicity of the LEGO-bricks philosophy on which Joone is based.
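The loose coupling obtained through the FIFO, described earlier for the TeacherSynapse, can be sketched with a standard BlockingQueue standing in for Joone's internal Fifo structure; the producer thread plays the role of the training loop pushing one RMSE value per cycle, while the reader drains the queue at its own pace.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Illustration of FIFO decoupling: the "teacher" thread pushes RMSE values
// without waiting for the reader. BlockingQueue is a stdlib stand-in for
// Joone's internal FIFO, not the engine's actual class.
public class FifoDecouplingSketch {
    public static double lastDrained(double[] rmsePerCycle) {
        BlockingQueue<Double> fifo = new ArrayBlockingQueue<>(1024);
        Thread teacher = new Thread(() -> {
            for (double e : rmsePerCycle) fifo.offer(e); // push, don't block training
        });
        teacher.start();
        try { teacher.join(); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        double last = Double.NaN;
        while (!fifo.isEmpty()) last = fifo.poll(); // reader drains independently
        return last;
    }

    public static void main(String[] args) {
        System.out.println(lastDrained(new double[]{0.8, 0.3, 0.05}));
    }
}
```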


Using the Neural Network as a Whole


After a neural network has been built with Joone, the next problem is how to store and reload a trained neural network so that it can be used again later. We have seen that every component of Joone's core engine implements the Serializable interface, permitting a neural network to be saved to whatever device is required, or transported using any wired or wireless protocol. To do this, we can simply write all the constituent Layers of a neural network to an ObjectOutputStream object, using code like this:
/* We assume three layers named layer1, layer2 and layer3 */
FileOutputStream stream = new FileOutputStream(fileName);
ObjectOutput output = new ObjectOutputStream(stream);
output.writeObject(layer1);
output.writeObject(layer2);
output.writeObject(layer3);
output.close();

We don't need to explicitly save the synapses, because they are linked by the layers (each layer has two vectors containing all its input and output synapses), nor the Monitor object, since it too is held in a non-transient variable inside the layers. The problem arises when we reload a serialized neural network using an ObjectInputStream. To do this, we could write:
FileInputStream stream = new FileInputStream(fileName);
ObjectInput input = new ObjectInputStream(stream);
Layer layer1 = (Layer)input.readObject();
Layer layer2 = (Layer)input.readObject();
Layer layer3 = (Layer)input.readObject();
input.close();

This piece of code works, but it can only be used if we know exactly how many layers compose the neural network to be restored, because the reading of each layer is hard-coded in the program. A better technique is to serialize a container that encloses all the layers, for instance a Vector. In this case we reload the whole container, without being concerned about the dimensions of the neural network. This solution also works well, but a Vector (or any Collection object) doesn't give any other useful service to our network, leaving it to the developer to manage all its components. To resolve these needs elegantly, we have built an object that can contain a neural network and at the same time provides developers with a set of useful features. This object is the NeuralNet object, and it resides in the org.joone.net package.
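The container-based approach can be demonstrated with plain Java serialization. DummyLayer below is a made-up Serializable stand-in for a Joone Layer, so that the example is self-contained; the point is that one writeObject/readObject pair handles the whole container regardless of how many layers it holds.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.Vector;

// Round-trip of a container of serializable components. DummyLayer is a
// hypothetical stand-in for a Joone Layer.
public class ContainerSerializationSketch {
    static class DummyLayer implements Serializable {
        final String name;
        DummyLayer(String name) { this.name = name; }
    }

    @SuppressWarnings("unchecked")
    public static int roundTripSize(int layers) {
        try {
            Vector<DummyLayer> net = new Vector<>();
            for (int i = 0; i < layers; i++) net.add(new DummyLayer("layer" + (i + 1)));
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
                out.writeObject(net); // one call saves the whole container
            }
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bytes.toByteArray()));
            Vector<DummyLayer> restored = (Vector<DummyLayer>) in.readObject();
            return restored.size(); // no per-layer reads needed
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTripSize(3)); // 3
    }
}
```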

The NeuralNet
The NeuralNet object represents a container of a neural network, giving the developer the possibility of managing a neural network as a whole. With this component a neural network can be saved and restored using single writeObject and readObject calls, without being concerned about its internal composition. By using a NeuralNet object, we can also easily transport a neural network to remote machines and run it there with a little generalized Java code. We will now look at its internals, to see what features it implements.


The following figure depicts the object model of the org.joone.net package, showing the NeuralNet and its link with other classes and interfaces of the engine:

The NeuralNet provides the following services:

A neural network container
The main purpose of the NeuralNet object is to contain a whole neural network. It exposes several methods to add, remove and get the layers constituting the contained neural network: with the addLayer and removeLayer methods we can add and remove layers; with the getLayer method we can obtain a contained layer by its name.

A neural network helper
The NeuralNet object provides the contained neural network with some components useful to its work. Starting from the assumption that a neural network built with Joone must be connected to both a Monitor and a TeachingSynapse object (see the above chapters), the NeuralNet already contains these two objects internally. It creates an instance of the Monitor object and connects it automatically to any layer added to it. It also holds a pointer to a TeachingSynapse object, which can be set through its setTeacher method.

A neural network manager


The NeuralNet object also manages the behaviour of the contained neural network, exposing methods like start, addNoise, randomize, resetInput, etc., and taking care to apply them to all its contained components: the NeuralNet.start() method, for instance, starts all the Layers of the net, so the user does not have to invoke this method on each separate layer.

The NeuralNet object also provides the user with some useful features to manage feed-forward neural networks. Its method addLayer(Layer layer, int tier) indicates to which tier the newly inserted layer belongs. The tier parameter can contain one of three values obtained from static constants of the object: NeuralNet.INPUT_LAYER, NeuralNet.HIDDEN_LAYER and NeuralNet.OUTPUT_LAYER. The alternative method addLayer(Layer layer) instead adds a HIDDEN layer, and it is useful for building non-feedforward neural networks or when it is unimportant to discriminate between the different types of tier.

To complete the management of feed-forward neural networks, the NeuralNet has two methods named getInputLayer and getOutputLayer. They return the first and the last tier of a neural network, giving either the declared input/output layers, or searching for them following these rules:

1. A layer is an input layer if:
a. It has no input synapses connected, or
b. It has an input synapse belonging to the StreamInputSynapse or InputSwitchSynapse classes

2. A layer is an output layer if:
a. It has no output synapses connected, or
b. It has an output synapse belonging to the StreamOutputSynapse, OutputSwitchSynapse, TeacherSynapse or TeachingSynapse classes

These two methods are very important for managing the input/output of a neural network when, for instance, we want to dynamically change the connected I/O devices.
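The two rule sets above can be expressed schematically as predicates over the class names of the attached synapses. This is a simplification for illustration only: the real getInputLayer/getOutputLayer methods inspect the synapse objects themselves, not strings.

```java
import java.util.List;

// Schematic encoding of the tier-detection rules, keyed on synapse class
// names purely for illustration; hypothetical helper, not the Joone API.
public class TierRulesSketch {
    public static boolean isInputLayer(List<String> inputSynapseTypes) {
        return inputSynapseTypes.isEmpty()
            || inputSynapseTypes.contains("StreamInputSynapse")
            || inputSynapseTypes.contains("InputSwitchSynapse");
    }

    public static boolean isOutputLayer(List<String> outputSynapseTypes) {
        return outputSynapseTypes.isEmpty()
            || outputSynapseTypes.contains("StreamOutputSynapse")
            || outputSynapseTypes.contains("OutputSwitchSynapse")
            || outputSynapseTypes.contains("TeacherSynapse")
            || outputSynapseTypes.contains("TeachingSynapse");
    }

    public static void main(String[] args) {
        // A hidden layer fed and drained only by FullSynapse objects is neither tier
        System.out.println(isInputLayer(List.of("FullSynapse")));      // false
        System.out.println(isOutputLayer(List.of("TeachingSynapse"))); // true
    }
}
```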


