Neural network

A neural network (also called an ANN or an artificial neural network) is an artificial system made of artificial neuron cells. It is modelled on the way the human brain works, imitating how the brain's neurons fire or are activated. Several computing cells work in parallel to produce a result. This is usually seen as one of the possible ways artificial intelligence can work. Most neural networks can still work even if one or more of the processing cells fail.
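
As a rough illustration of how one artificial neuron cell works, the small Python sketch below combines its inputs using weights and then "fires" through an activation function. The numbers and the function name are made up for the example.

```python
import numpy as np

def neuron(inputs, weights, bias):
    # One artificial neuron: add up the inputs, each multiplied by a
    # weight, then pass the total through a sigmoid activation function
    # so the output "fires" smoothly between 0 and 1.
    total = np.dot(inputs, weights) + bias
    return 1.0 / (1.0 + np.exp(-total))

# Three example inputs and hand-picked example weights.
print(neuron(np.array([0.5, 0.1, 0.9]),
             np.array([0.4, -0.2, 0.7]),
             bias=0.1))
```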

An important feature of neural networks is that they are able to learn by themselves. This sets them apart from ordinary computers, which cannot do anything they have not been programmed to do.

Learning methods

There are three ways a neural network can learn: supervised learning, unsupervised learning and reinforcement learning. All of these methods work by finding the smallest or largest possible value of a cost function. Each method uses different kinds of input, so each is better at certain tasks.
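
As a rough sketch of what "finding the smallest value of a cost function" means, the Python example below uses gradient descent to make a mean squared error cost as small as possible. The data and learning rate are made up for the example.

```python
import numpy as np

# Made-up training data where the true relationship is y = 2x.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])

w = 0.0                                   # start with a poor guess
for step in range(100):
    predictions = w * x
    # Gradient of the mean squared error cost with respect to w.
    gradient = np.mean(2 * (predictions - y) * x)
    w -= 0.01 * gradient                  # step "downhill" on the cost

print(w)                                  # ends up close to 2.0
```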

Supervised Learning

In supervised learning, the neural network is trained using example inputs together with the correct outputs. The network can then work out the relationship between the inputs and the outputs. For example, a network could be trained by showing it details about houses and their sale prices. Once it has finished training, it could estimate the sale price of another house from details such as the number of bedrooms and the local crime rate.
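
A small sketch of the house price example, using the scikit-learn library in Python. The houses, prices and network size are made up for the example.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Made-up training examples: [number of bedrooms, local crime rate]
# together with the known sale price (in thousands).
X_train = np.array([[2, 5.0], [3, 3.0], [4, 1.5], [5, 0.5]])
y_train = np.array([150, 220, 310, 400])

# A small multi-layer perceptron learns the relationship between the
# house details (inputs) and the sale price (correct output).
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# Estimate the sale price of a house the network has never seen before.
print(model.predict(np.array([[3, 2.0]])))
```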

Another example is the ALV (Autonomous Land Vehicle), a project funded by DARPA in the 1980s. In a demonstration in 1987 it travelled 600 metres at 3 km/h over difficult ground with sharp rocks, vegetation and steep ravines, and it could drive itself at speeds of up to 30 km/h. The network learned by watching a 'teacher' drive while it viewed the road using laser radar, and the learning process was repeated for different road types. ALV used a kind of neural network called a multi-layer perceptron.[1]

Unsupervised Learning

In unsupervised learning, the network is trained using only inputs, and it has to work out how they relate to each other. Clustering problems, estimation problems and self-organising maps work in this way. For example, a self-organising map can be used to group iris flowers by stem size and colour.[2]
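
The Python sketch below is a very small self-organising map written from scratch with made-up data (it is not the method from the cited study). Similar inputs end up grouped onto nearby nodes of the map, without any labels being given.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up measurements, e.g. [stem size, colour value] for 100 flowers.
data = rng.random((100, 2))

# A 5 x 5 grid of map nodes, each with its own weight vector.
grid_w, grid_h = 5, 5
weights = rng.random((grid_w, grid_h, 2))

steps = 1000
for step in range(steps):
    x = data[rng.integers(len(data))]
    # Find the "winning" node whose weights are closest to the input.
    dists = np.linalg.norm(weights - x, axis=2)
    wi, wj = np.unravel_index(np.argmin(dists), dists.shape)
    # Pull the winner and its neighbours a little towards the input.
    lr = 0.5 * (1 - step / steps)          # learning rate decays over time
    for i in range(grid_w):
        for j in range(grid_h):
            grid_dist = (i - wi) ** 2 + (j - wj) ** 2
            influence = np.exp(-grid_dist / 2.0)
            weights[i, j] += lr * influence * (x - weights[i, j])

# Each flower can now be assigned to its nearest node, which groups
# similar flowers together.
```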

Reinforcement Learning

In reinforcement learning, a neural network learns from its own actions or from a teacher's actions. Each action has a cost (or reward), and the network tries to work out how to choose actions that keep the total cost as small as possible in the future. The problem can be thought of as a Markov decision process. Another simple way to think of this is as "carrot and stick" learning (learning that rewards good behaviour and punishes bad behaviour).
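
A small "carrot and stick" sketch in Python, using tabular Q-learning (a common reinforcement learning method) on a made-up world of five squares in a row. It is only an illustration of the reward-and-cost idea, not a neural network.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 5, 2           # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))  # expected future reward for each action

for episode in range(500):
    state = 0                        # start on the leftmost square
    while state != n_states - 1:     # the rightmost square ends the episode
        # Mostly take the best-known action, sometimes explore at random.
        if rng.random() < 0.1:
            action = rng.integers(n_actions)
        else:
            action = int(np.argmax(Q[state]))
        next_state = max(0, state - 1) if action == 0 else state + 1
        # Carrot: reaching the goal square. Stick: a small cost per step.
        reward = 10.0 if next_state == n_states - 1 else -1.0
        # Q-learning update: move the estimate towards the reward plus the
        # discounted value of the best action from the next state.
        Q[state, action] += 0.1 * (reward + 0.9 * np.max(Q[next_state])
                                   - Q[state, action])
        state = next_state

# Best action learned for each non-goal square (1 means "move right").
print(np.argmax(Q[:-1], axis=1))
```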

Recently, a research team from the University of Hertfordshire in the UK used reinforcement learning to make an iCub humanoid robot learn to say simple words by babbling.[3]

Notes