Saturday, January 31, 2015

Searching for food using a LambdaNet neural network

From time to time I have a fit and start doing a bit of AI. I saw that a new version of LambdaNet was released, so I thought I would take it for a spin and try something (a little) bit more complicated than their XOR example.

The problem is simple. In a rectangular world, there is food in one place. The food "smells", so each position in the world has a smell associated with it; the higher the smell, the closer the food. Can we have a neural network that can navigate to the food?

A few definitions:

-- | Direction to go to
data Direction = TopLeft | Left | BottomLeft | Top | Center | Bottom | TopRight | Right | BottomRight
  deriving (Show,Read,Eq,Ord,Bounded,Enum)

-- | Strength of the smell
type Smell = Int
-- | Input information
type Input = [(Direction,Smell)]
-- | Position in world
type Position = (Int,Int)
-- | Size of the world
type Size = (Int,Int)
-- | Maximum number of steps to take
type Max = Int

-- | Number of directions
dirLength :: Int
dirLength = 1 + fromEnum (maxBound :: Direction)

-- | The world
data World = World
    { wSize   :: Size -- ^ size
    , wSmell  :: Smell -- ^ Smell of the food position
    , wSmells :: DM.Map Position Smell -- ^ All smell strengths by position
    } deriving (Show,Read,Eq,Ord)

-- | Function deciding in which direction to move
type StepFunction = Position -> Input -> Direction

Fundamental is the concept of Direction, since we want to move. From a given position in the world, there are nine directions we can take, each with an associated smell (staying in the same place, Center, is one of them). The function deciding what to do in a given position, given the smells of all the neighbouring positions, is called a StepFunction.

The algorithm is easy to write for a human brain:

-- | The base algorithm: just go toward the highest smell
baseAlg :: StepFunction
baseAlg _ = fst . maximumBy (comparing snd)

Note that we ignore the current position: we only work with the input structure.
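For instance, here is a self-contained sketch reusing the definitions above, with hypothetical smell values (strongest toward the top right):

```haskell
import Prelude hiding (Left, Right)
import Data.List (maximumBy)
import Data.Ord (comparing)

data Direction = TopLeft | Left | BottomLeft | Top | Center | Bottom | TopRight | Right | BottomRight
  deriving (Show, Read, Eq, Ord, Bounded, Enum)

type Smell = Int
type Input = [(Direction, Smell)]
type Position = (Int, Int)
type StepFunction = Position -> Input -> Direction

-- The base algorithm: just go toward the highest smell
baseAlg :: StepFunction
baseAlg _ = fst . maximumBy (comparing snd)

main :: IO ()
main = do
  -- Hypothetical smells around a position, strongest toward the top right
  let smells = zip [minBound .. maxBound] [3, 2, 1, 4, 5, 2, 6, 5, 4] :: Input
  print (baseAlg (0, 0) smells)  -- TopRight
```

Note the `import Prelude hiding (Left, Right)`: the Direction constructors would otherwise clash with Either's.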

On top of that, we need functions to build the world with the proper smell indicators, run the algorithm until we find the food, etc. All this code can be found in the GitHub project but is not really critical to our understanding of neural networks. One function of interest runs one step of the algorithm, showing the intermediate structures generated:

-- | Perform one step and return the information generated: direction/smell input, direction output
algStepExplain :: World -> StepFunction -> Position -> (Position,([(Direction,Smell)],Direction))

We get the new position back, and the second element of the tuple contains the input and the output of the StepFunction.
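The world-building helpers themselves aren't shown in the post. As a rough sketch of what the smell map could look like (my own assumption, not the project's actual code), the smell could decrease by one per "ring" around the food, using the Chebyshev distance since diagonal moves are allowed:

```haskell
import qualified Data.Map as DM

type Position = (Int, Int)
type Size = (Int, Int)
type Smell = Int

-- Hypothetical smell map: maximum smell on the food itself, decreasing
-- by 1 per ring around it (Chebyshev distance), never below 0
buildSmells :: Size -> Position -> Smell -> DM.Map Position Smell
buildSmells (w, h) food maxSmell = DM.fromList
  [ ((x, y), max 0 (maxSmell - dist (x, y) food))
  | x <- [0 .. w - 1], y <- [0 .. h - 1] ]
  where
    dist (x1, y1) (x2, y2) = max (abs (x1 - x2)) (abs (y1 - y2))

main :: IO ()
main = do
  let smells = buildSmells (5, 5) (2, 2) 10
  print (DM.lookup (2, 2) smells)  -- Just 10: on the food
  print (DM.lookup (0, 0) smells)  -- Just 8: two rings away
```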

What we want to do is train a neural network, which should be easy since we have an algorithm that we know finds the best position to move to, and then use that network as an implementation of StepFunction.

The hardest part of neural network programming is designing the input and output structures, so that they adequately represent the information about your problem in a format the network can deal with. Here we have a fixed input size: the smells of the 9 neighbouring positions. The StepFunction returns a Direction, and Direction is an enum of nine values, so the output of the network can also be 9 values, the highest of them indicating the direction chosen by the network.

The networks in LambdaNet require Vectors as their input and output data, so let's format the inputs:

-- | Format the inputs suitable for the network
formatInputs :: World -> [(Direction,Smell)] -> Vector Float
formatInputs w = fromList . map (\i -> fromIntegral (snd i) / fromIntegral (wSmell w))

So an input of 1 means we're on the food itself, and the input value decreases as we move further from the food, while staying between 0 and 1.
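As a quick sanity check of the normalization, on plain lists rather than Vectors, and assuming a hypothetical world whose food smell (wSmell) is 10:

```haskell
-- The normalization used by formatInputs: divide each smell by the
-- maximum smell of the (hypothetical) world
normalize :: Int -> Int -> Float
normalize maxSmell s = fromIntegral s / fromIntegral maxSmell

main :: IO ()
main = print (map (normalize 10) [10, 8, 5, 0])  -- [1.0,0.8,0.5,0.0]
```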

If we have a network, the implementation of StepFunction is straightforward:

-- | Use the network to give the answer 
neuralAlg ::  World -> Network Float -> StepFunction
neuralAlg w n _ is = toEnum $ maxIndex $ predict (formatInputs w is) n 

We format the input, run predict, retrieve the index of the maximum value in the output vector, and use that as the index in the Direction enum. We just need a trained network!
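This decoding step can be illustrated without LambdaNet. Here `decode` stands in for `toEnum . maxIndex` applied to a plain list of network outputs (a sketch of the idea, not the library code):

```haskell
import Prelude hiding (Left, Right)
import Data.List (maximumBy)
import Data.Ord (comparing)

data Direction = TopLeft | Left | BottomLeft | Top | Center | Bottom | TopRight | Right | BottomRight
  deriving (Show, Eq, Bounded, Enum)

-- Pick the index of the largest output value and reinterpret it
-- as a Direction, mimicking toEnum . maxIndex on a Vector
decode :: [Float] -> Direction
decode = toEnum . fst . maximumBy (comparing snd) . zip [0 ..]

main :: IO ()
main = print (decode [0.1, 0.2, 0.05, 0.1, 0.1, 0.1, 0.9, 0.3, 0.1])  -- TopRight
```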

To get one, we generate the training data from a given world: we list all possible positions in the world, calculate the corresponding inputs, and run the basic algorithm on each input to get the optimal answer. For the resulting direction we set the output value to 1, and to 0 for all the others:

-- | Training data: for each position in the world, use the base algorithm to get the training answer
trainData ::  World -> [(Vector Float, Vector Float)]
trainData w = map onePos $ allPositions w
  where
    onePos p =
      let (_,(is,dir)) = algStepExplain w baseAlg p
          os = map (\(d,_)->if dir==d then 1 else 0) is
      in (formatInputs w is,fromList os)

From here, we unimaginatively reuse the LambdaNet tutorial code to build a network...

-- | Create the network
buildNetwork :: RandomGen g => g -> Network Float
buildNetwork g = createNetwork normals g $ replicate 3 $ LayerDefinition sigmoidNeuron dirLength connectFully

And train it:

-- | Train a network on several given worlds
train :: Network Float -> [World] -> Network Float
train n ws = 
  let t = BackpropTrainer (3 :: Float) quadraticCost quadraticCost'
      dat = concatMap trainData ws
  in trainUntilErrorLessThan n t online dat 0.01

What is critical here is that we train the network on several different worlds. I tried training on only one world; the resulting network would perform well on worlds of the same size or smaller, but not on bigger worlds, because it was overfitted to the actual smell values. Training on even only two quite different worlds brought big improvements in the intelligence of the network, at the cost of a longer learning time.

Once the network is trained, you can run it on several different worlds and see how it finds the food. There is a simple visualization module that lets you see the moves clearly, for example:

(Visualization screenshots of iterations 1, 3 and 6, X being the food and @ the current position.)


If you're interested, the full source code, with tasty unit tests, is on GitHub.

This is of course very basic, and only begs to be enhanced with more complicated worlds (maybe with walls, several sources of food, places with no smell at all, etc.). And what do you do when you don't know the best algorithm yourself? Maybe I'll come back later to find out!

Thursday, January 29, 2015

EclipseFP 2.6.4 released!

Hello all. I've just released EclipseFP 2.6.4, to provide a fix for people who could not create projects due to a NullPointerException. Unfortunately I am at the moment the only contributor to EclipseFP, which means nobody but myself tests it before a release, so regressions happen. I would love to see more people contribute and build from source!

A couple of fixes and small enhancements have been made; the release notes are here.

Just update by pointing your Eclipse update feature to

Happy Haskell Hacking!

Friday, January 23, 2015

Writing a low level graph database

I've been interested in graph databases for a long time. I've developed several applications that offer an API close enough to a graph API but with relational storage, and I've also played with off-the-shelf graph databases. But I thought it would be fun to try to implement my own, in Haskell of course.
I found that in general the literature is quite succinct on how database products manage their physical storage, so I've used some of the ideas behind the Neo4j database, as explained in the Graph Databases book and in a few slideshows online.
So I've written the start of a very low-level graph database, writing directly to disk via Handles and some Binary instances. I try to use fixed-length records so that their IDs translate easily into offsets in the file. Mostly everything ends up looking like linked lists on disk: vertices have a pointer to their first property and their first edge, and in turn these have pointers to the next property or edge. Vertices also have pointers to the edges linking to and from them.
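As an illustration of what such a fixed-length record might look like (hypothetical, not the project's actual layout): two 8-byte pointers give a 16-byte record, so record ID n sits at byte offset n * 16 in the file.

```haskell
import Data.Binary (Binary (..), decode, encode)
import Data.Binary.Get (getWord64be)
import Data.Binary.Put (putWord64be)
import qualified Data.ByteString.Lazy as BL
import Data.Word (Word64)

-- Hypothetical fixed-length vertex record: two 8-byte file offsets,
-- each the head of an on-disk linked list (0 meaning "none")
data VertexRecord = VertexRecord
  { vFirstProperty :: Word64  -- offset of the first property record
  , vFirstEdge     :: Word64  -- offset of the first edge record
  } deriving (Show, Eq)

instance Binary VertexRecord where
  put (VertexRecord p e) = putWord64be p >> putWord64be e
  get = VertexRecord <$> getWord64be <*> getWord64be

main :: IO ()
main = do
  let v = VertexRecord 128 256
  -- every record is exactly 16 bytes, so IDs map directly to offsets
  print (BL.length (encode v))    -- 16
  print (decode (encode v) == v)  -- True
```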
I've also had some fun trying to implement an index trie on disk.
All in all it's quite fun, even though I realize my implementations are quite naive, and I just hope that the OS disk caching is enough to make performance acceptable. I've written a small benchmark using the Hackage graph of packages as sample data, but I would need to write the same with a relational backend to compare.

If anybody is interested in looking at the code or even participate, everything is of course on Github!

Monday, January 12, 2015

EclipseFP 2.6.3 released

A new version of EclipseFP, the Eclipse plugins for Haskell development, has been released. The release notes can be found here.

As usual install or update from Eclipse by using the update site

Happy Haskell Hacking!

Wednesday, December 10, 2014

Using Ansi console for nicer console output in EclipseFP

A little tip from the trenches (-:. I use the Ansi Console plugin in EclipseFP to be able to see a nice color output from my Haskell executable runs.

For example, the Tasty example:

It also works for Yesod applications, etc.

Note that some support for Tasty is being added in EclipseFP 2.6.3, to at least provide some templates for the Cabal test-suite section and test modules. I'm not sure it's worth developing a runner for it, since contrary to HTF we won't have a link to the source, so the standard output is probably nice enough.

Happy Haskell Hacking!

Wednesday, December 03, 2014

Circling through ideas

I can't make my mind up about what to do in my free time (I have some, yes yes). I keep circling between the same ideas, thinking "that would be good", then jumping to the next one.

  • Keep on working on EclipseFP and other Haskell IDE related ideas. I would like for example to unify the storage of metadata between buildwrapper, scion-browser and eclipsefp: one database that would keep library information (definitions, documentation) like scion-browser, ASTs with types like buildwrapper, and usage references like the usage DB in EclipseFP. It could be interesting to try to use that database to drive an IDE and have a clear repository of metadata. But then I'm tired of working on my own on EclipseFP, and when I see that Leksah also has only one active maintainer, I think people are really not interested in advancing Haskell IDEs; they must be happy with Emacs/vi and ghc-mod at a push, so why bother?
  • Then I think working on games is fun! I had fun writing Mazes of Monad and other little games, and I do enjoy playing role-playing or adventure games (I can really recommend the last one I completed, The Longest Journey), so maybe writing a game the reactive way, or even for Android like the guys at Keera Studios do, would be a good use of my time. Then I remember I'm a programmer who sucks at graphics design and would probably suck as much at game design.
  • Games are too trivial; let's do something that will change the world, like working on AI! A few books I've read, like Kurzweil's and Hawkins', have been truly inspirational, so maybe I could write some HMM neural temporal gizmo that would become sentient overnight!! Then I wake up. People smarter than me, with more time than me, are already working on that, so I would not contribute anything anyway. Why don't I help these people by providing a better programming experience, for example a Haskell IDE? Back to idea 1!
Ah well, I'll go back to browsing Reddit!

Sunday, November 23, 2014

EclipseFP 2.6.2 released!

I've just released EclipseFP 2.6.2. This is mainly a bug-fixing release, with better handling of Cabal sandboxes and related functionality. It also wraps an important change in scion-browser, which should now mean that Hoogle uses all the packages present in your sandbox, so it should give better results.

Browse the release notes!

Install by pointing your eclipse to

This release is brought to you by me, myself and I. I would love to see more people contribute, even if only to install the development versions to provide some testing before the release, or write some documentation. There's work that can be done on the Haskell side, on the Java side, etc. Contact me if you're not sure, or send me pull requests! The code is at and

Happy Haskell Hacking!!