Mathematical optimization, used in many branches of science, is the branch of mathematics concerned with finding the element that gives the best solution to a problem according to some criterion. In the simplest case, this means that a function needs to be minimized or maximized.
The first step is usually to take the derivative of the function. Maxima and minima within a domain can then be located by finding the critical points: the domain (x) values where the derivative equals 0 or does not exist. Finding critical points alone is not enough, however; one of two tests is then used to classify each critical point as a maximum, a minimum, or neither.
(1) Values of x slightly less than and slightly greater than the critical point are substituted into the derivative to see whether its sign changes. If the sign changes from negative to positive, the critical point is a minimum. If it changes from positive to negative, the critical point is a maximum. If the sign does not change, the point is neither. This is known as the first derivative test.
(2) The derivative of the derivative can be taken, giving the second derivative. The critical point is substituted into the second derivative. If the output is positive, the critical point is a minimum; if it is negative, the critical point is a maximum; if it is zero, the test is inconclusive. This is known as the second derivative test.
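As a minimal sketch (assuming Python with the SymPy library; the function f(x) = x^3 - 3x is chosen purely as an example), the two steps look like this: the critical points come from solving f'(x) = 0, and the second derivative test classifies each one.

```python
# Minimal sketch: find and classify critical points with SymPy.
# The example function is an assumption for illustration only.
import sympy as sp

x = sp.symbols('x')
f = x**3 - 3*x                   # example function

f_prime = sp.diff(f, x)          # first derivative: 3*x**2 - 3
f_second = sp.diff(f, x, 2)      # second derivative: 6*x

critical_points = sp.solve(f_prime, x)   # x values where f'(x) = 0

for c in critical_points:
    curvature = f_second.subs(x, c)
    if curvature > 0:
        kind = "minimum"         # positive second derivative: minimum
    elif curvature < 0:
        kind = "maximum"         # negative second derivative: maximum
    else:
        kind = "inconclusive"    # zero: the test gives no answer
    print(f"x = {c} is a {kind}")
```

Running this prints that x = -1 is a maximum and x = 1 is a minimum, which matches what the two tests predict for this function.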
Applications
The applications of optimization span many fields. A simple example is finding the smallest possible distance between two objects in two-dimensional (x and y) space; here the derivative of the function that gives the distance is taken in order to find the minimum. A more complicated example is in machine learning, where an optimization algorithm searches for the global minimum of the loss function, minimizing the difference (the loss) between the algorithm's predictions and the actual values. This case is more difficult because machine learning algorithms often work with multidimensional data, usually in the form of tensors, which yields more complicated functions.
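As an illustration of the machine learning case, the sketch below shows gradient descent, one common method for moving toward a minimum of a loss function. The article does not name a specific method; the one-parameter quadratic loss, the starting point, and the learning rate here are illustrative assumptions.

```python
# Minimal sketch of gradient descent on a toy one-parameter loss.
# The loss, starting point, and learning rate are illustrative choices.

def loss(w):
    return (w - 3.0) ** 2          # toy loss, smallest when w = 3

def grad(w):
    return 2.0 * (w - 3.0)         # derivative of the loss with respect to w

w = 0.0                            # arbitrary starting value
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * grad(w)   # step downhill along the gradient

print(w)                           # ends very close to 3.0, the minimizer
```

In real machine learning systems, w is not a single number but a large tensor of parameters and the gradient is computed automatically, but the idea of repeatedly stepping against the gradient is the same.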
Related software
Today, many software tools support optimization studies; examples include MATLAB (Venkataraman, 2009) and Mathematica (Bhatti, 2012).
References
- Snyman, J. A. (2005). Practical mathematical optimization (pp. 97-148). Springer Science & Business Media.
- Intriligator, M. D. (2002). Mathematical optimization and economic theory. Society for Industrial and Applied Mathematics.
- Luptacik, M. (2010). Mathematical optimization and economic analysis (p. 307). New York, NY: Springer.
- Nocedal, J., & Wright, S. (2006). Numerical optimization. Springer Science & Business Media.
- Bonnans, J. F., Gilbert, J. C., Lemaréchal, C., & Sagastizábal, C. A. (2006). Numerical optimization: theoretical and practical aspects. Springer Science & Business Media.
- Sra, S., Nowozin, S., & Wright, S. J. (Eds.). (2012). Optimization for machine learning. MIT Press.
- Bottou, L., Curtis, F. E., & Nocedal, J. (2018). Optimization methods for large-scale machine learning. SIAM Review, 60(2), 223-311.
- Venkataraman, P. (2009). Applied optimization with MATLAB programming. John Wiley & Sons.
- Bhatti, M. A. (2012). Practical Optimization Methods: With Mathematica® Applications. Springer Science & Business Media.