One topic I have never covered in this blog is the development of data-mining and business analytics applications. The analytics part in particular (which gained popularity under terms like Artificial Intelligence or Neural Networks) is one of the most curious implementation fields in software development.
If you are unfamiliar with the subject but genuinely interested in it, I recommend having a look at the SharpNEAT implementation. The C# architecture of the solution is far from an efficient development approach (an inseparable feature of scientific projects), but some of the concepts behind it are really nice.
In short, the project leverages neural networks (quite a hype not so long ago) with augmenting topologies to solve forecasting and machine-learning tasks. The network “training” is done via an evolutionary (genetic) algorithm.
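To give a feel for how such training works, here is a minimal sketch of a generic evolutionary loop in C#. This is not SharpNEAT’s actual API: the genome is reduced to a plain weight vector and the fitness function is invented for the example, but the evaluate-select-mutate cycle is the same idea.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class EvolutionSketch
{
    static readonly Random Rng = new Random();

    // perturb each weight with the given probability
    static double[] Mutate(double[] genome, double rate)
    {
        var child = (double[])genome.Clone();
        for (int i = 0; i < child.Length; i++)
            if (Rng.NextDouble() < rate)
                child[i] += Rng.NextDouble() * 2 - 1;
        return child;
    }

    // evaluate -> select -> mutate, repeated for a number of generations
    static double[] Evolve(Func<double[], double> fitness,
                           int genomeSize, int populationSize, int generations)
    {
        var population = Enumerable.Range(0, populationSize)
            .Select(i => Mutate(new double[genomeSize], 1.0))
            .ToList();

        for (int g = 0; g < generations; g++)
        {
            // keep the better half, refill with mutated copies of the survivors
            var survivors = population.OrderByDescending(fitness)
                .Take(populationSize / 2).ToList();
            population = survivors
                .Concat(survivors.Select(s => Mutate(s, 0.2)))
                .ToList();
        }
        return population.OrderByDescending(fitness).First();
    }

    static void Main()
    {
        // toy fitness invented for the example: weights should approach {1, 2, 3}
        var target = new[] { 1.0, 2.0, 3.0 };
        Func<double[], double> fitness =
            w => -w.Zip(target, (a, b) => (a - b) * (a - b)).Sum();

        var best = Evolve(fitness, target.Length, 40, 200);
        Console.WriteLine(string.Join(", ", best.Select(x => x.ToString("F2"))));
    }
}
```

Replacing the weight vector with a full network genome (and the toy fitness with, say, a forecasting error on historical data) gives the skeleton that NEAT-style systems build upon.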
The augmenting topologies part is one of the key features of the project: the networks are allowed to “grow” more complex during the training process. This works much better than merely adjusting synapse weights, since the system can adapt its structure to the complexity of the problem (just do not forget about the over-training problem).
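The structural mutations themselves are conceptually simple. Here is a rough sketch (my own simplification, not SharpNEAT’s code) of the two classic NEAT mutations: splitting an existing link with a new node, and wiring up two previously unconnected nodes.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// connection gene: a weighted link between two nodes
class Connection
{
    public int From, To;
    public double Weight;
    public bool Enabled = true;
}

class Genome
{
    static readonly Random Rng = new Random();

    public List<Connection> Connections = new List<Connection>();
    public int NodeCount; // inputs + outputs + hidden nodes grown so far

    // "add node": split an existing link A->B into A->N and N->B
    public void AddNodeMutation()
    {
        if (Connections.Count == 0) return;
        var conn = Connections[Rng.Next(Connections.Count)];
        conn.Enabled = false; // the old link is disabled, not deleted
        int node = NodeCount++;
        Connections.Add(new Connection { From = conn.From, To = node, Weight = 1.0 });
        Connections.Add(new Connection { From = node, To = conn.To, Weight = conn.Weight });
    }

    // "add connection": wire up two nodes that are not linked yet
    public void AddConnectionMutation()
    {
        if (NodeCount == 0) return;
        int from = Rng.Next(NodeCount), to = Rng.Next(NodeCount);
        if (Connections.Any(c => c.From == from && c.To == to)) return;
        Connections.Add(new Connection
        {
            From = from,
            To = to,
            Weight = Rng.NextDouble() * 2 - 1
        });
    }
}
```

Note the detail in AddNodeMutation: the incoming link gets weight 1.0 and the outgoing link inherits the old weight, so the child network initially behaves almost like its parent and complexity is added gently.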
Obviously, such an approach requires adding new heuristics to the evolutionary algorithm: segmenting the population into species pools (mixing networks with different topologies together would just be a waste of CPU) and introducing a pruning phase to “compact” networks that have grown too large.
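Continuing with the Genome sketch above, speciation can be illustrated as clustering by a structural distance. The distance metric below is a crude stand-in made up for the example; real NEAT also weighs the differences of matching connection weights and tracks genes by innovation numbers.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class Speciation
{
    // group genomes into "pools" of structurally similar networks
    public static List<List<Genome>> Speciate(List<Genome> population, double threshold)
    {
        var species = new List<List<Genome>>();
        foreach (var genome in population)
        {
            // join the first species whose representative is close enough
            var home = species.FirstOrDefault(s => Distance(s[0], genome) < threshold);
            if (home != null) home.Add(genome);
            else species.Add(new List<Genome> { genome });
        }
        return species;
    }

    // crude structural distance: share of links present in one genome but not the other
    static double Distance(Genome a, Genome b)
    {
        var linksA = new HashSet<Tuple<int, int>>(
            a.Connections.Select(c => Tuple.Create(c.From, c.To)));
        var linksB = new HashSet<Tuple<int, int>>(
            b.Connections.Select(c => Tuple.Create(c.From, c.To)));
        int union = linksA.Union(linksB).Count();
        if (union == 0) return 0.0;
        return 1.0 - (double)linksA.Intersect(linksB).Count() / union;
    }
}
```

Pruning can then be as trivial as dropping disabled links and near-zero weights from genomes that exceed some size budget.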
Compared to traditional statistical learning, this approach:
- works better, although nobody can really tell how the resulting models work (a black-box situation);
- captures really complex, nonlinear multi-factor dependencies;
- copes with incomplete or noisy data;
- requires fewer person-hours to create and update analysis and forecasting models;
- makes development more fun.
One of the major problems with this neuro-evolutionary approach lies in the analogy itself. Scientists get too attached to the neural-network concept borrowed from Mother Nature to see the development opportunities offered by flexibility and technology. At least they like to think they are close to the real world, because these networks do solve real-world problems (although they really are just non-deterministic brute-force searches with some slight optimization of the search space).
However, building a full feature set for a proper real-world analogy would require at least:
- modeling the actual electro-chemical reactions with neurotransmitters in an asynchronous and continuous manner;
- introducing glia (networks are not just about neurons and synapses, you know);
- getting quantum computers into mass production.
Fortunately, we do not have to stick to these concepts and can play directly with development-pure things like mathematical interpreters, model ensembles or evolving functions. New technologies like Windows Azure (we are interested in the cloud-computing part) or the Microsoft Accelerator research project (a .NET library for offloading some computations to your GPU) make this more affordable.
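As a toy illustration of the “evolving functions” idea, here is a sketch where the model is a readable expression tree rather than a network (it has nothing to do with the Accelerator API, and the target function is invented for the example). The evolution here is reduced to its crudest form, generating many random trees and keeping the best; a real run would mutate and recombine the winners.

```csharp
using System;
using System.Linq;

// a model that is an expression tree rather than a network: it stays readable
abstract class Expr
{
    public abstract double Eval(double x);
}

class Const : Expr
{
    public double Value;
    public override double Eval(double x) { return Value; }
}

class Var : Expr
{
    public override double Eval(double x) { return x; }
}

class BinOp : Expr
{
    public Expr Left, Right;
    public Func<double, double, double> Op;
    public override double Eval(double x) { return Op(Left.Eval(x), Right.Eval(x)); }
}

class EvolvingFunctions
{
    static readonly Random Rng = new Random();
    static readonly Func<double, double, double>[] Ops =
    {
        (a, b) => a + b, (a, b) => a - b, (a, b) => a * b
    };

    // grow a random expression tree of limited depth
    static Expr RandomExpr(int depth)
    {
        if (depth == 0 || Rng.NextDouble() < 0.3)
            return Rng.NextDouble() < 0.5
                ? (Expr)new Var()
                : new Const { Value = Rng.NextDouble() * 4 - 2 };
        return new BinOp
        {
            Left = RandomExpr(depth - 1),
            Right = RandomExpr(depth - 1),
            Op = Ops[Rng.Next(Ops.Length)]
        };
    }

    static void Main()
    {
        // invented target for the example: f(x) = x * x + 1, sampled on [-5, 5]
        var xs = Enumerable.Range(-5, 11).Select(i => (double)i).ToArray();
        Func<Expr, double> error =
            e => xs.Sum(x => Math.Pow(e.Eval(x) - (x * x + 1), 2));

        // crudest possible "evolution": best of many random trees
        var best = Enumerable.Range(0, 5000)
            .Select(i => RandomExpr(4))
            .OrderBy(error)
            .First();
        Console.WriteLine("best error: " + error(best));
    }
}
```

The nice part is that the winning model can be printed and inspected, which is exactly what the neural black box denies us.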
A side effect is that such models are more flexible, use the CPU more efficiently and are more fun to develop.
We’ll see where this research topic on non-statistical learning goes. Stay tuned.