From Simple English Wikipedia, the free encyclopedia

In machine learning, the perceptron (or McCulloch-Pitts neuron) is an algorithm that separates two categories of data with a straight boundary. A list of numbers called the weights describes the boundary.

History

Warren McCulloch and Walter Pitts thought of the perceptron in 1943.[1] Frank Rosenblatt built the first perceptron in 1958.[2]

Definition

The algorithm calculates the inner product of a data point and a list of numbers called the weights, and then adds another number called the bias. Points where the result is positive go into one group, and points where it is negative go into the other. The algorithm only works if the two groups can be divided by a straight boundary, with each group on an opposite side of it.[3] It can be written as f(x) = sign(w · x + b), where w is the weights, x is the data point, and b is the bias.[4]
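The rule above can be sketched in a few lines of Python. This is a minimal illustration, not a full perceptron implementation; the weights, bias, and example points are made up for demonstration.

```python
import numpy as np

def predict(weights, bias, x):
    # Inner product of the weights and the data point, plus the bias.
    # The sign of the result decides which side of the boundary x is on.
    return 1 if np.dot(weights, x) + bias >= 0 else -1

# Hypothetical 2-D example: the boundary is the line x1 + x2 - 1 = 0.
w = np.array([1.0, 1.0])
b = -1.0

print(predict(w, b, np.array([2.0, 2.0])))  # positive side: prints 1
print(predict(w, b, np.array([0.0, 0.0])))  # negative side: prints -1
```

A point is classified by which side of the boundary it falls on, so only the sign of w · x + b matters, not its size.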

References

  1. McCulloch, Warren S.; Pitts, Walter (December 1943). "A logical calculus of the ideas immanent in nervous activity". The Bulletin of Mathematical Biophysics. 5 (4): 115–133. doi:10.1007/BF02478259. ISSN 0007-4985.
  2. Marcus, Gary (2013-12-31). "Hyping Artificial Intelligence, Yet Again". The New Yorker. Retrieved 2023-04-14.
  3. Murty, M. N.; Raghava, Rashmi (2016), "Perceptron", Support Vector Machines and Perceptrons, Cham: Springer International Publishing, pp. 27–40, doi:10.1007/978-3-319-41063-0_3, ISBN 978-3-319-41062-3, retrieved 2023-04-15
  4. Vazirani; Rao, U.C. Berkeley – CS 270: Algorithms – Lecture 8 (PDF)