## Convex Geometry

Csaba Vincze (2013)

University of Debrecen

Chapter 1 - Elements

## 1.1 Linear Algebra

In what follows

 ${E}^{n}=\left\{\left({v}^{1},\mathrm{\dots },{v}^{n}\right)|{v}^{1},\mathrm{\dots },{v}^{n}\in R\right\}$ (1.1)

is the standard real coordinate space of dimension n equipped with the canonical inner product

 $〈v,w〉:={v}^{1}{w}^{1}+\mathrm{\dots }+{v}^{n}{w}^{n},$ (1.2)

where $v=\left({v}^{1},\mathrm{\dots },{v}^{n}\right)$ and $w=\left({w}^{1},\mathrm{\dots },{w}^{n}\right)$.

The elements of the coordinate space are called both vectors and points; they are denoted in general by symbols of the Latin alphabet, and the context determines both the terminology and the notation. We speak about the norm

 $\parallel v\parallel :=\sqrt{〈v,v〉}=\sqrt{\left({v}^{1}{\right)}^{2}+\mathrm{\dots }+\left({v}^{n}{\right)}^{2}}$ (1.3)

of vectors and the distance

 $d\left(p,q\right):=\parallel p-q\parallel =\sqrt{\left({p}^{1}-{q}^{1}{\right)}^{2}+\mathrm{\dots }+\left({p}^{n}-{q}^{n}{\right)}^{2}}$ (1.4)

between points. Mathematical objects labelled by indices will appear as v(1), ..., v(k) in text mode. The notation refers to the one-to-one correspondence between the set of indices and the set of objects labelled by them. Otherwise (in displayed mathematical formulas) we use

 ${v}_{1},\mathrm{\dots },{v}_{k}$ (1.5)

as usual. Let v and w be non-zero vectors in the coordinate space of dimension n and consider the auxiliary function

 $f\left(t\right):=〈v+tw,v+tw〉$ (1.6)

as t runs through the set of real numbers. Using the basic properties of the inner product it can be easily seen that the function (1.6) is the quadratic polynomial $f\left(t\right)=\parallel v{\parallel }^{2}+2t〈v,w〉+{t}^{2}\parallel w{\parallel }^{2}$. Since the inner product is positive definite, f is non-negative for every t, so its discriminant $4〈v,w{〉}^{2}-4\parallel v{\parallel }^{2}\parallel w{\parallel }^{2}$ must be less than or equal to zero. This leads to the so-called Cauchy-Bunyakovsky-Schwarz inequality

 $|〈v,w〉|\le \parallel v\parallel \cdot \parallel w\parallel .$ (1.7)
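The inequality can be checked numerically. The following sketch (assuming Python with numpy; the sample vectors are not part of the original text) computes the inner product (1.2), the norms (1.3) and the distance (1.4) for a pair of vectors and verifies (1.7):

```python
import numpy as np

# Two sample vectors in E^3; any non-zero vectors would do.
v = np.array([1.0, 2.0, 2.0])
w = np.array([3.0, 0.0, 4.0])

inner = np.dot(v, w)             # <v, w> as in (1.2)
norm_v = np.sqrt(np.dot(v, v))   # ||v|| as in (1.3)
norm_w = np.sqrt(np.dot(w, w))
dist = np.linalg.norm(v - w)     # d(v, w) as in (1.4)

# Cauchy-Bunyakovsky-Schwarz inequality (1.7):
assert abs(inner) <= norm_v * norm_w
print(inner, norm_v, norm_w, dist)
```

Here |⟨v,w⟩| = 11 while ‖v‖·‖w‖ = 3·5 = 15, so the inequality holds strictly; equality would occur only for parallel vectors.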

The angle between non-zero vectors v and w can be defined as

 $\mathrm{\angle }\left(v,w\right):=\mathrm{arccos}\frac{〈v,w〉}{\parallel v\parallel \cdot \parallel w\parallel }$ (1.8)

in the usual way. According to inequality (1.7) the absolute value of the ratio between the inner product and the product of the norms is less than or equal to one, so the arccosine is defined. The system (1.5)

(i) generates the vector space if each vector w can be written as a linear combination

 ${\mu }_{1}{v}_{1}+\mathrm{\dots }+{\mu }_{k}{v}_{k}=w.$

(ii) is linearly independent if

 ${\lambda }_{1}{v}_{1}+\mathrm{\dots }+{\lambda }_{k}{v}_{k}=0$

implies that all of the coefficients are zero: λ(1)= ... =λ(k)=0.

Otherwise it is linearly dependent. Geometrically, linear dependence means that there is a non-trivial closed polygonal chain whose sides are parallel to the vectors of the given system. Minimal generating systems (equivalently: maximal linearly independent systems) are called bases of the vector space. The common number of members in the minimal generating systems (maximal linearly independent systems) is the dimension of the space. With respect to a basis each vector has exactly one expression as a linear combination of the members of the system; the coefficients are called coordinates (with respect to the given basis). The canonical basis consists of the vectors

 ${e}_{i}=\left(0,\mathrm{\dots },0,1,0,\mathrm{\dots },0\right),$ (1.9)

where the number 1 stands at the ith position and i=1, 2, ..., n. Recall that a non-empty subset of the space is a linear subspace if it is closed under vector addition and scalar multiplication. In particular, the dimension of the subspace

 $\mathcal{L}\left({v}_{1},\mathrm{\dots },{v}_{k}\right)$ (1.10)

consisting of all linear combinations of the vectors in the argument is the rank of the system. It is clear that the rank is less than or equal to k. Suppose that the vectors

 ${w}_{1}={w}_{1}^{1}{v}_{1}+\mathrm{\dots }+{w}_{1}^{n}{v}_{n},$

 ${w}_{2}={w}_{2}^{1}{v}_{1}+\mathrm{\dots }+{w}_{2}^{n}{v}_{n},$

 $⋮$

 ${w}_{k}={w}_{k}^{1}{v}_{1}+\mathrm{\dots }+{w}_{k}^{n}{v}_{n}$

are given in terms of the coordinates with respect to a basis

 ${v}_{1},\mathrm{\dots },{v}_{n}.$ (1.11)

To decide the linear dependence (or independence) we have the following standard methods:

(i) The vanishing of a linear combination of the vectors

 ${w}_{1},\mathrm{\dots },{w}_{k}$ (1.12)

can be written as a system of linear equations

 ${\left(\begin{array}{cccc}{w}_{1}^{1}& {w}_{2}^{1}& \cdots & {w}_{k}^{1}\\ {w}_{1}^{2}& {w}_{2}^{2}& \cdots & {w}_{k}^{2}\\ ⋮& ⋮& & ⋮\\ {w}_{1}^{n}& {w}_{2}^{n}& \cdots & {w}_{k}^{n}\end{array}\right)}_{n×k}\left(\begin{array}{c}{\lambda }_{1}\\ {\lambda }_{2}\\ ⋮\\ {\lambda }_{k}\end{array}\right)=\left(\begin{array}{c}0\\ 0\\ ⋮\\ 0\end{array}\right)$

for the unknown coefficients

 ${\lambda }_{1},\mathrm{\dots },{\lambda }_{k}.$ (1.13)

(ii) If the coordinates of the given vectors form the rows (or the columns) of a matrix, we can determine its rank, which is just the dimension of the generated subspace. If it is less than k, then the system is linearly dependent; otherwise (in case of rank k) the system is linearly independent.

(iii) In case of a square matrix we can calculate the determinant of the matrix to check linear dependence (the determinant vanishes) or independence (the determinant is different from zero). In particular, any linearly independent (or generating) system containing exactly n vectors forms a basis of the coordinate space of dimension n.
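The three methods can be illustrated in a few lines. This is a sketch (assuming Python with numpy; the sample matrix is not from the original text), with the coordinates of w(1), ..., w(k) collected into the columns of an n×k matrix as in method (i):

```python
import numpy as np

# Columns are the coordinates of w(1), w(2), w(3) with respect to a basis.
# Here w(3) = w(1) + w(2), so the system is linearly dependent.
W = np.array([[1.0, 0.0, 1.0],
              [2.0, 1.0, 3.0],
              [0.0, 4.0, 4.0]])
n, k = W.shape

# (i) Non-trivial solutions of W @ lam = 0: a vanishing singular value
# signals a non-zero null space, i.e. non-trivial coefficients exist.
singular_values = np.linalg.svd(W, compute_uv=False)
dependent_i = np.any(singular_values < 1e-10)

# (ii) Rank of the coordinate matrix = dimension of the generated subspace;
# rank < k means linear dependence.
rank = np.linalg.matrix_rank(W)
dependent_ii = rank < k

# (iii) For a square matrix: a vanishing determinant means dependence.
dependent_iii = abs(np.linalg.det(W)) < 1e-10

print(dependent_i, rank, dependent_iii)
```

All three tests agree: the rank is 2 < 3, one singular value and the determinant vanish, so the sample system is linearly dependent; replacing the third column by any vector outside the plane of the first two would make all three tests report independence.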