Dot product
In linear algebra and vector calculus, the dot product is an operation that takes two vectors as input and returns a scalar number as output. The number returned depends on the lengths of the two vectors and on the angle between them.
The name dot product comes from the centered dot • that is often used to denote this operation.[1] The alternative name scalar product emphasizes that the result is a scalar rather than a vector.
In three-dimensional space, the dot product contrasts with the cross product, whose result is a vector (typically written in the form ai + bj + ck) rather than a scalar.
Definition
The dot product of two vectors a = [a1, a2, ..., an] and b = [b1, b2, ..., bn] is defined as:[2]

a • b = Σ aibi = a1b1 + a2b2 + ... + anbn,
where Σ denotes summation notation (the sum of all the terms) and n is the dimension of the vector space.
In dimension 2, the dot product of vectors [a,b] and [c,d] is ac + bd. In the same way, in dimension 3, the dot product of vectors [a,b,c] and [d,e,f] is ad + be + cf. For example, the dot product of the two three-dimensional vectors [1, 3, −5] and [4, −2, −1] is

[1, 3, −5] • [4, −2, −1] = (1)(4) + (3)(−2) + (−5)(−1) = 4 − 6 + 5 = 3.
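As a small illustration of the definition above, the sum can be computed directly in code. The sketch below is a minimal plain-Python version (the helper name dot is ours, not a standard function); it reproduces the worked example and its result of 3.

```python
# Minimal sketch of the component-wise dot product using plain Python lists.
def dot(a, b):
    """Return the dot product of two equal-length sequences of numbers."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same dimension")
    return sum(x * y for x, y in zip(a, b))

# The worked example from the text: 1*4 + 3*(-2) + (-5)*(-1) = 3
print(dot([1, 3, -5], [4, -2, -1]))  # 3
```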
Geometric interpretation
Vector projection
The dot product of two vectors a and b can be interpreted as the product of two lengths: the length of a orthogonally projected onto b, and the length of b itself. This can be written as

a • b = |a| |b| cos θ,

where θ (theta) is the angle between the two vectors. The quantity |a| cos θ is the length of a orthogonally projected onto b, found using trigonometry.
The formula can be rearranged to find, for example, the angle between two vectors from their dot product and their lengths: cos θ = (a • b) / (|a| |b|).
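As a hedged illustration (assuming NumPy), the sketch below recovers the angle and the projected length from the dot product, using the vectors from the earlier example.

```python
import numpy as np

a = np.array([1.0, 3.0, -5.0])
b = np.array([4.0, -2.0, -1.0])

# cos(theta) = (a . b) / (|a| |b|); clip guards against tiny rounding errors
cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))

# |a| cos(theta) is the length of a orthogonally projected onto b
projection_length = np.linalg.norm(a) * np.cos(theta)
print(np.degrees(theta), projection_length)
```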
Rotation
A rotation of the orthonormal basis in terms of which vector a is represented is obtained by multiplying a by a rotation matrix R. This matrix multiplication is just a compact representation of a sequence of dot products.
For instance, let
- B1 = {x, y, z} and B2 = {u, v, w} be two different orthonormal bases of the same space R3, with B2 obtained by just rotating B1,
- a1 = (ax, ay, az) represent vector a in terms of B1,
- a2 = (au, av, aw) represent the same vector in terms of the rotated basis B2,
- u1, v1, w1 be the rotated basis vectors u, v, w represented in terms of B1.
Then the rotation from B1 to B2 is performed as follows:

a2 = R a1 = (u1 • a1, v1 • a1, w1 • a1) = (au, av, aw)
Notice that the rotation matrix R is assembled by using the rotated basis vectors u1, v1, w1 as its rows, and these vectors are unit vectors. By definition, Ra1 consists of a sequence of dot products between each of the three rows of R and vector a1. Each of these dot products determines a scalar component of a in the direction of a rotated basis vector (see previous section).
If a1 is a row vector, rather than a column vector, then R must contain the rotated basis vectors in its columns, and must post-multiply a1:

a2 = a1 R = (a1 • u1, a1 • v1, a1 • w1) = (au, av, aw)
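The sketch below (assuming NumPy; the 30-degree rotation about the z axis is an arbitrary example of our choosing) shows that applying R is nothing more than taking three dot products.

```python
import numpy as np

angle = np.pi / 6  # example: rotate the basis 30 degrees about the z axis
u1 = np.array([np.cos(angle), np.sin(angle), 0.0])   # rotated x axis, in terms of B1
v1 = np.array([-np.sin(angle), np.cos(angle), 0.0])  # rotated y axis, in terms of B1
w1 = np.array([0.0, 0.0, 1.0])                       # rotated z axis, in terms of B1

R = np.vstack([u1, v1, w1])      # rotated basis vectors as the rows of R
a1 = np.array([2.0, 1.0, 3.0])   # vector a expressed in the old basis B1

a2 = R @ a1                      # vector a expressed in the rotated basis B2
# R @ a1 is exactly the three dot products (u1 . a1, v1 . a1, w1 . a1)
assert np.allclose(a2, [np.dot(u1, a1), np.dot(v1, a1), np.dot(w1, a1)])
print(a2)
```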
Physics
In physics, magnitude is a scalar in the physical sense, in that it is a physical quantity independent of the coordinate system, expressed as the product of a numerical value and a physical unit, not just a number. The dot product is also a scalar in this sense, given by the formula, independent of the coordinate system. For example:
- Mechanical work is the dot product of force and displacement vectors.
- Magnetic flux is the dot product of the magnetic field and the area vectors.
- Volumetric flow rate is the dot product of the fluid velocity and the area vectors.
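As a small numeric illustration of the first of these, here is a hedged sketch (assuming NumPy, with made-up values) computing mechanical work as a dot product:

```python
import numpy as np

force = np.array([3.0, 0.0, 4.0])         # force in newtons (made-up values)
displacement = np.array([2.0, 1.0, 0.0])  # displacement in metres (made-up values)

work = np.dot(force, displacement)        # work in joules
print(work)  # 3*2 + 0*1 + 4*0 = 6.0
```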
Properties
The following properties hold if a, b, and c are real vectors and r is a scalar.
The dot product is commutative:[3]

a • b = b • a

The dot product is distributive over vector addition:

a • (b + c) = a • b + a • c

The dot product is bilinear:

a • (rb + c) = r(a • b) + (a • c)

When multiplied by a scalar value, the dot product satisfies:

(c1a) • (c2b) = (c1c2)(a • b)

(these last two properties follow from the first two).
Two non-zero vectors a and b are perpendicular if and only if a • b = 0.
Unlike multiplication of ordinary numbers, where if ab = ac, then b always equals c unless a is zero, the dot product does not obey the cancellation law:
- If a • b = a • c and a ≠ 0, then we can write: a • (b − c) = 0 by the distributive law; the result above says this just means that a is perpendicular to (b − c), which still allows (b − c) ≠ 0, and therefore b ≠ c.
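A minimal sketch (assuming NumPy, with made-up vectors) of this failure of cancellation: a • b and a • c agree even though b ≠ c, because a is perpendicular to b − c.

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([5.0, 2.0, 0.0])
c = np.array([5.0, -7.0, 3.0])   # differs from b only in directions perpendicular to a

print(np.dot(a, b), np.dot(a, c))  # both 5.0, yet b != c
print(np.dot(a, b - c))            # 0.0: a is perpendicular to (b - c)
```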
Provided that the basis is orthonormal, the dot product is invariant under isometric changes of the basis: rotations, reflections, and combinations of the two, keeping the origin fixed. The above-mentioned geometric interpretation relies on this property. In other words, for an orthonormal space with any number of dimensions, the dot product is invariant under a coordinate transformation based on an orthogonal matrix. This corresponds to the following two conditions:
- The new basis is again orthonormal (that is, orthonormal expressed in the old one).
- The new base vectors have the same length as the old ones (that is, unit length in terms of the old basis).
If a and b are functions, then the derivative of a • b is a' • b + a • b'.
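This product rule can be checked numerically. The sketch below (assuming NumPy; the curves a(t) and b(t) are made up for illustration) compares a central finite difference of a • b with a' • b + a • b'.

```python
import numpy as np

def a(t):
    return np.array([np.cos(t), np.sin(t), t])

def b(t):
    return np.array([t**2, 1.0, np.exp(-t)])

def a_prime(t):
    return np.array([-np.sin(t), np.cos(t), 1.0])

def b_prime(t):
    return np.array([2 * t, 0.0, -np.exp(-t)])

t, h = 0.7, 1e-6
numeric = (np.dot(a(t + h), b(t + h)) - np.dot(a(t - h), b(t - h))) / (2 * h)
analytic = np.dot(a_prime(t), b(t)) + np.dot(a(t), b_prime(t))
print(numeric, analytic)  # the two values agree to several decimal places
```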
Triple product expansion
This is a very useful identity (also known as Lagrange's formula) involving the dot and cross products. It is written as

a × (b × c) = b(a • c) − c(a • b),
which is easier to remember as "BAC minus CAB", keeping in mind which vectors are dotted together. This formula is commonly used to simplify vector calculations in physics.
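The identity is easy to check numerically; a minimal sketch (assuming NumPy, with arbitrary example vectors):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([-4.0, 0.0, 5.0])
c = np.array([2.0, -1.0, 1.0])

left = np.cross(a, np.cross(b, c))            # a x (b x c)
right = b * np.dot(a, c) - c * np.dot(a, b)   # "BAC minus CAB"
print(np.allclose(left, right))               # True
```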
Proof of the geometric interpretation
Consider the element of Rn

v = v1e1 + v2e2 + ... + vnen.

Repeated application of the Pythagorean theorem yields for its length |v|

|v|² = v1² + v2² + ... + vn².

But this is the same as

v • v = v1² + v2² + ... + vn²,

so we conclude that taking the dot product of a vector v with itself yields the squared length of the vector.

- Lemma 1: v • v = |v|²
Now consider two vectors a and b extending from the origin, separated by an angle θ. A third vector c may be defined as

c = a − b,

creating a triangle with sides a, b, and c. According to the law of cosines, we have

|c|² = |a|² + |b|² − 2 |a| |b| cos θ.

Substituting dot products for the squared lengths according to Lemma 1, we get

c • c = a • a + b • b − 2 |a| |b| cos θ.     (1)

But as c ≡ a − b, we also have

c • c = (a − b) • (a − b),

which, according to the distributive law, expands to

c • c = a • a − 2(a • b) + b • b.     (2)

Merging the two c • c equations, (1) and (2), we obtain

a • a + b • b − 2 |a| |b| cos θ = a • a − 2(a • b) + b • b.

Subtracting a • a + b • b from both sides and dividing by −2 leaves

a • b = |a| |b| cos θ.
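The two facts used above can also be spot-checked numerically. A minimal sketch (assuming NumPy; the example vectors are arbitrary, and in the 2D case the angle is measured independently with arctan2):

```python
import numpy as np

v = np.array([3.0, -4.0, 12.0])
print(np.isclose(np.dot(v, v), np.linalg.norm(v) ** 2))  # True (Lemma 1)

a = np.array([3.0, 1.0])
b = np.array([1.0, 2.0])
theta = np.arctan2(b[1], b[0]) - np.arctan2(a[1], a[0])  # angle between a and b
lhs = np.dot(a, b)
rhs = np.linalg.norm(a) * np.linalg.norm(b) * np.cos(theta)
print(np.isclose(lhs, rhs))  # True: a . b = |a| |b| cos(theta)
```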
Generalization
The inner product generalizes the dot product to abstract vector spaces and is usually denoted by ⟨a, b⟩.[1] Due to the geometric interpretation of the dot product, the norm ||a|| of a vector a in such an inner product space is defined as

||a|| = √⟨a, a⟩,

such that it generalizes length, and the angle θ between two vectors a and b by

cos θ = ⟨a, b⟩ / (||a|| ||b||).

In particular, two vectors are considered orthogonal if their inner product is zero:

⟨a, b⟩ = 0.
For vectors with complex entries, using the given definition of the dot product would lead to quite different geometric properties. For instance, the dot product of a vector with itself can be an arbitrary complex number, and can be zero without the vector being the zero vector; this in turn would have severe consequences for notions like length and angle. Many geometric properties can be salvaged, at the cost of giving up the symmetric and bilinear properties of the scalar product, by alternatively defining
a • b = a1b̄1 + a2b̄2 + ... + anb̄n,

where b̄i is the complex conjugate of bi. Then the scalar product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However, this scalar product is not linear in b (but rather conjugate linear), and it is not symmetric either, since a • b is the complex conjugate of b • a.
This type of scalar product is nevertheless quite useful, and leads to the notions of Hermitian form and of general inner product spaces.
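A minimal sketch (assuming NumPy; the helper name cdot and the example vectors are ours) of this conjugated scalar product and its properties:

```python
import numpy as np

def cdot(a, b):
    """Scalar product sum(a_i * conj(b_i)), conjugating the second argument as in the text."""
    return np.sum(a * np.conj(b))

a = np.array([1 + 2j, 3 - 1j])
b = np.array([2 - 1j, 0 + 1j])

print(cdot(a, a))                                   # (15+0j): a non-negative real number
print(cdot(a, b), cdot(b, a))                       # not equal: the product is not symmetric
print(np.isclose(cdot(a, b), np.conj(cdot(b, a))))  # True: the two results are conjugates
```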
The Frobenius inner product generalizes the dot product to matrices. It is defined as the sum of the products of the corresponding components of two matrices having the same size.
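A minimal sketch (assuming NumPy, with made-up matrices) of the Frobenius inner product, together with its equivalent trace formulation:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

frobenius = np.sum(A * B)   # 1*5 + 2*6 + 3*7 + 4*8 = 70.0
print(frobenius)
print(np.isclose(frobenius, np.trace(A.T @ B)))  # True: equivalent trace form
```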
Generalization to tensors
The dot product between a tensor of order n and a tensor of order m is a tensor of order n + m − 2. The dot product is worked out by multiplying and summing across a single shared index in both tensors. If A and B are two tensors with elements A[i1, ..., in−1, k] and B[k, j2, ..., jm], the elements of the dot product are given by

(A • B)[i1, ..., in−1, j2, ..., jm] = Σk A[i1, ..., in−1, k] B[k, j2, ..., jm],

where Σk denotes the sum over the shared index k.
This definition naturally reduces to the standard vector dot product when applied to vectors, and matrix multiplication when applied to matrices.
Occasionally, a double dot product is used to represent multiplying and summing across two indices. The double dot product between two 2nd order tensors is a scalar.
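A minimal sketch (assuming NumPy) of both operations: np.tensordot with axes=1 contracts a single shared index, and axes=2 contracts two indices, giving the double dot product of two 2nd order tensors. The example shapes and values are arbitrary.

```python
import numpy as np

A = np.arange(24.0).reshape(2, 3, 4)   # an order-3 tensor
B = np.arange(20.0).reshape(4, 5)      # an order-2 tensor

single = np.tensordot(A, B, axes=1)    # contract A's last index with B's first
print(single.shape)                    # (2, 3, 5): order 3 + 2 - 2 = 3

S = np.arange(9.0).reshape(3, 3)
T = np.ones((3, 3))
double = np.tensordot(S, T, axes=2)    # double dot product of two 2nd order tensors
print(double)                          # a single scalar: 0 + 1 + ... + 8 = 36.0
```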
References
[change | change source]- ↑ 1.0 1.1 "Comprehensive List of Algebra Symbols". Math Vault. 2020-03-25. Retrieved 2020-09-06.
- ↑ Weisstein, Eric W. "Dot Product". mathworld.wolfram.com. Retrieved 2020-09-06.
- ↑ Nykamp, Duane. "The dot product". Math Insight. Retrieved September 6, 2020.
Other websites
- A quick geometrical derivation and interpretation of dot product
- Interactive GeoGebra Applet
- Java demonstration of dot product
- Another Java demonstration of dot product
- Explanation of dot product including with complex vectors
- "Dot Product" by Bruce Torrence, Wolfram Demonstrations Project, 2007.
- Intuitive explanation video 1 and video 2 from online Interactive 3D graphics course