Machine learning

From Simple English Wikipedia, the free encyclopedia

Machine learning gives computers the ability to learn without being explicitly programmed (Arthur Samuel, 1959).[1][2] It is a subfield of computer science.[3]

The idea came from work in artificial intelligence.[4] Machine learning explores the study and construction of algorithms which can learn and make predictions on data.[5] Such algorithms follow programmed instructions, but can also make predictions or decisions based on data.[6]:2 They build a model from sample inputs.

Machine learning is used where designing and programming explicit algorithms is infeasible. Examples include spam filtering, detection of network intruders or of malicious insiders working towards a data breach,[7] optical character recognition (OCR),[8] search engines and computer vision.

Using machine learning has risks. Some algorithms create a final model which is a black box.[9] Models have been criticized for biases in hiring,[10] criminal justice,[11] and recognizing faces.[12]

Overview

Learning algorithms use patterns from the past to predict what will happen in the future. These predictions can be obvious: for example, if the sun has risen for the past 10,000 days, it will probably rise again. Predictions can also be more complex. An example of a complex prediction is facial recognition (recognizing who someone is by looking at their face).

Machine learning programs can do things they haven't been explicitly told to do by a programmer. A machine learning program is shown example patterns. Each pattern has an input (such as a question) and an output (the answer to the question). Then, the program predicts the output for a new input. Machine learning isn't always necessary. Computers can do simple tasks by following instructions. However, sometimes many factors control the output. Then it is hard for a human to give the computer all of the instructions, and it is easier to tell the computer how to teach itself.[13]
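The idea of learning from input and output pairs can be sketched in a few lines of Python. This is a hypothetical example (the numbers and the use of the scikit-learn library are assumptions, not part of the article): the program is shown inputs (hours studied) and outputs (test scores), and it learns a rule it can use on a new input.

```python
# A minimal sketch of learning from input/output pairs,
# assuming scikit-learn is installed.
from sklearn.linear_model import LinearRegression

inputs = [[1], [2], [3], [4]]   # example inputs: hours studied
outputs = [52, 61, 70, 79]      # example outputs: test scores
model = LinearRegression().fit(inputs, outputs)

# The program was never told the rule "score = 43 + 9 * hours";
# it found a pattern from the examples and can predict a new output.
prediction = model.predict([[5]])  # → about 88
```

No one wrote an instruction for what to answer when the input is 5; the model predicts it from the pattern in the examples.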

There are a lot of different ways to tell the computer to teach itself. One common way is to give the computer many examples where the correct answer is already marked. These marked examples form the data the computer is trained with. One example of training data is the MNIST dataset. MNIST contains images of handwritten digits, each labeled with the number it shows. A computer can learn to identify handwritten digits by training on MNIST.

References

  1. McCarthy, John & Feigenbaum, Edward 1990. "In Memoriam: Arthur Samuel, pioneer in machine learning". AI Magazine (AAAI) 11 (3). Archived 2018-01-22 at the Wayback Machine.
  2. Phil Simon (2013). Too big to ignore: the business case for big data. Wiley. p. 89. ISBN 978-1-118-63817-0.
  3. "Machine Learning | Data Basecamp". 2021-11-26. Retrieved 2022-08-14.
  4. "Machine learning | artificial intelligence | Britannica".
  5. Ron Kohavi; Foster Provost (1998). "Glossary of terms". Machine Learning. 30: 271–274. doi:10.1023/A:1007411609915. S2CID 36227423.
  6. Christopher Bishop 1995. Neural networks for pattern recognition. Oxford University Press. ISBN 0-19-853864-2
  7. "TechCrunch".
  8. Wernick et al. 2010. Machine learning in medical imaging. IEEE Signal Processing Magazine 27 (4): 25–38.
  9. "Government aims to make its 'black box' algorithms more transparent". Sky News. Retrieved 2021-12-02.
  10. "Amazon scraps secret AI recruiting tool that showed bias against women". Reuters. 2018-10-10. Retrieved 2021-12-02.
  11. Larson, Jeff; Mattu, Surya; Kirchner, Lauren; Angwin, Julia. "How We Analyzed the COMPAS Recidivism Algorithm". ProPublica. Retrieved 2021-12-02.
  12. "The Problem of Bias in Facial Recognition". www.csis.org. Retrieved 2021-12-02.
  13. Ethem Alpaydin (2020). Introduction to Machine Learning (Fourth ed.). MIT. pp. xix, 1–3, 13–18. ISBN 978-0262043793.