
Convex Geometry

Csaba Vincze (2013)

University of Debrecen

11.2 On the curvature of polyellipses

In what follows we are going to investigate polyellipses from the viewpoint of differential geometry. By the convexity of the function F they are convex curves in the plane. Let

$$w \colon t \mapsto \big(w^1(t), w^2(t)\big)$$

(11.10)

be a twice continuously differentiable parameterized curve in the plane and consider the normalized tangent vector field

$$T := \frac{1}{\|w'\|}\, w'.$$

Differentiating equations

$$\cos(\theta) = T^1$$

and

$$\sin(\theta) = T^2$$

we have that

$$\theta' = T^1 (T^2)' - T^2 (T^1)'.$$

The derivative of the angle function θ is called the (signed) curvature in the case of curves with unit speed. Otherwise we divide it by the length of the velocity vector w' to provide invariance under orientation-preserving re-parametrizations. As can be seen, the derivative of the angle function is just the scalar product of T' and the unit normal vector field N, provided that they form a positively oriented (like the canonical) basis at each parameter t:

$$N := \frac{1}{\|w'\|}\,\big(-(w^2)', (w^1)'\big).$$

Since

$$T' = (\textrm{tangential term}) + \frac{1}{\|w'\|}\, w''$$

and N is orthogonal to the tangential term we have that

$$\kappa_s = \frac{\theta'}{\|w'\|} = \frac{1}{\|w'\|}\,\langle T', N\rangle = \frac{\langle w'', N\rangle}{\|w'\|^2} = \frac{(w^1)'(w^2)'' - (w^2)'(w^1)''}{\|w'\|^3}.$$

(11.11)
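Formula (11.11) is easy to check numerically. The sketch below (the circle and the parameter values are illustrative choices, not part of the text) evaluates the formula on a counter-clockwise circle of radius 2, for which the signed curvature must be the constant 1/2:

```python
import math

def signed_curvature(wp, wpp):
    """Signed curvature from velocity wp and acceleration wpp, cf. formula (11.11)."""
    speed = math.hypot(wp[0], wp[1])
    return (wp[0] * wpp[1] - wp[1] * wpp[0]) / speed ** 3

# illustrative test curve: circle of radius r = 2, counter-clockwise
r, t = 2.0, 0.7
wp = (-r * math.sin(t), r * math.cos(t))    # w'(t)
wpp = (-r * math.cos(t), -r * math.sin(t))  # w''(t)
kappa = signed_curvature(wp, wpp)
print(kappa)  # 1/r = 0.5
```

The numerator equals r² and the speed equals r, so the formula returns 1/r independently of the parameter t.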

Exercise 11.2.1 Prove that the vanishing of the curvature characterizes line segments in the plane. What about the curvature of circles?

Remark The curvature at the point p belonging to the parameter $t_0$ can be characterized in the following (geometric) way: consider the circle passing through the points $w(s_1)$, $w(t_0)$ and $w(s_2)$. If the limit circle exists as $s_1$ and $s_2$ tend to $t_0$, then the curvature is just the reciprocal of its radius. In general, the linear independence of the velocity and the acceleration vectors at $t_0$ provides the existence of such a limit circle in the case of twice continuously differentiable parameterized curves.

Figure 71: The geometric description of the curvature.

In terms of the function F the gradient vector field represents the normal directions along the curve, which means that

$$N := \pm\frac{1}{\|\operatorname{grad} F\|}\operatorname{grad} F,$$

where the sign refers to the orientation. Therefore

$$\kappa_s = \pm\frac{(w^1)'' D_1F(w) + (w^2)'' D_2F(w)}{\|w'\|^2\,\|\operatorname{grad} F \circ w\|}.$$

On the other hand F is constant along w and, consequently,

$$0 = (F \circ w)' = (w^1)' D_1F(w) + (w^2)' D_2F(w).$$

(11.12)

Differentiating equation 11.12 again

$$0 = (w^1)'' D_1F(w) + (w^2)'' D_2F(w) + (w^1)'\big((w^1)' D_1D_1F(w) + (w^2)' D_2D_1F(w)\big) + (w^2)'\big((w^1)' D_1D_2F(w) + (w^2)' D_2D_2F(w)\big).$$

In terms of the Hessian matrix formed by the second order partial derivatives

$$0 = (w^1)'' D_1F(w) + (w^2)'' D_2F(w) + \operatorname{Hess} F_w(w', w')$$

and we have that

$$\kappa_s = \frac{1}{\|\operatorname{grad} F \circ w\|}\operatorname{Hess} F_w(T, T).$$

(11.13)

Figure 72: Ludwig Otto Hesse, 1811-1874.

In case of convex functions the curvature (the absolute value of the signed curvature) is

$$\kappa = \frac{1}{\|\operatorname{grad} F \circ w\|}\operatorname{Hess} F_w(T, T)$$

(11.14)

because the calculus of convex functions states that if F is convex then its Hessian matrix is positive semidefinite. Using the Laplacian (the trace of the Hessian matrix)

$$\Delta F := D_1D_1F + D_2D_2F$$

formula 11.14 can also be written in the form

$$\kappa = \frac{1}{\|\operatorname{grad} F \circ w\|}\big(\Delta F(w) - \operatorname{Hess} F_w(N, N)\big).$$

(11.15)
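The trace identity behind formula 11.15 — for any symmetric bilinear form and any orthonormal pair (T, N) the two diagonal values sum to the trace — can be verified directly. The coefficients and the angle below are illustrative choices:

```python
import math

# a symmetric 2x2 bilinear form (stand-in for Hess F_w) and an orthonormal basis (T, N)
h11, h12, h22 = 3.0, 1.0, 2.0
theta = 0.9
T = (math.cos(theta), math.sin(theta))
N = (-math.sin(theta), math.cos(theta))

def form(v):
    """Value of the bilinear form on the pair (v, v)."""
    return h11 * v[0] * v[0] + 2 * h12 * v[0] * v[1] + h22 * v[1] * v[1]

trace = h11 + h22  # the Laplacian is the trace of the Hessian
print(form(T), trace - form(N))  # the two values coincide
```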

Figure 73: Pierre-Simon Laplace, 1749-1827.

Exercise 11.2.2 Find the Hessian matrix of the second order polynomial function

$$f(x, y) = a_{11}x^2 + 2a_{12}xy + a_{22}y^2 + 2a_{13}x + 2a_{23}y + a_{33}.$$
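A numerical check of the exercise (with illustrative coefficient values of my own choosing): a central-difference approximation of the Hessian of the quadratic polynomial recovers the constant matrix with entries 2a₁₁, 2a₁₂, 2a₂₂ at any point.

```python
def f(x, y):
    # illustrative coefficients a11..a33
    a11, a12, a22, a13, a23, a33 = 3.0, 1.0, 2.0, 0.5, -1.0, 4.0
    return a11*x*x + 2*a12*x*y + a22*y*y + 2*a13*x + 2*a23*y + a33

def hessian_fd(f, x, y, h=1e-4):
    """Central-difference approximation of the 2x2 Hessian of f at (x, y)."""
    d11 = (f(x + h, y) - 2*f(x, y) + f(x - h, y)) / h**2
    d22 = (f(x, y + h) - 2*f(x, y) + f(x, y - h)) / h**2
    d12 = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return [[d11, d12], [d12, d22]]

H = hessian_fd(f, 0.3, -0.8)
print(H)  # close to the constant Hessian [[2*a11, 2*a12], [2*a12, 2*a22]]
```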

Since the derivative of F does not exist at the focuses in the usual sense, we shall suppose that the polyellipses under consideration do not pass through any of them.

Remark In their paper [41] the authors proved that if c is large enough then the polyellipse is contained between two concentric circles whose radii differ by an arbitrarily small amount (Proposition 6, p. 247). In other words, the curvature function tends to the identically zero function as c tends to infinity.

Here we give not only a limit, but lower and upper bounds for the curvature involving explicit constants: the number of the focuses, the value c of the level and the global minimum of the function F. In what follows w denotes the parameterization of the polyellipse

$$F(w) = c$$

(11.16)

with focuses $p_1, \ldots, p_n$,

$$c_1 = F(p_1), \ \ldots, \ c_n = F(p_n),$$

and the minimum $c^*$ of the function F is attained at the point $p^*$ in the plane.

Lemma 11.2.3 For the Euclidean distance from the minimizer along the curve we have the estimations

$$\frac{c - c^*}{n} \leq d(w, p^*) \leq \frac{c + c^*}{n},$$

(11.17)

which means that the polyellipse is contained in the ring centered at the minimizer with the radii

$$r_1 := \frac{c - c^*}{n} \quad\textrm{and}\quad r_2 := \frac{c + c^*}{n}.$$

Proof Taking the sum as i runs from 1 to n, the triangle inequalities

$$d(w, p_i) - d(p_i, p^*) \leq d(w, p^*) \leq d(w, p_i) + d(p_i, p^*)$$

give both the upper and the lower bound. ▮
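As a numerical illustration of Lemma 11.2.3 (the three focal points, the level c = 12 and the ray direction are my own illustrative choices): the minimizer p* is located by plain gradient descent, a point of the polyellipse is found by bisection along a ray from p*, and its distance from p* must lie in the interval [r₁, r₂].

```python
import math

focuses = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]  # illustrative focal points
n = len(focuses)

def F(x, y):
    return sum(math.hypot(x - a, y - b) for a, b in focuses)

# locate the minimizer p* (the Fermat point of the focuses) by gradient descent
px, py = 1.0, 1.0
for _ in range(20000):
    gx = sum((px - a) / math.hypot(px - a, py - b) for a, b in focuses)
    gy = sum((py - b) / math.hypot(px - a, py - b) for a, b in focuses)
    px, py = px - 1e-3 * gx, py - 1e-3 * gy
c_star = F(px, py)

# a point of the polyellipse F = c along a ray from p*, found by bisection
c = 12.0
ux, uy = math.cos(0.8), math.sin(0.8)
lo, hi = 0.0, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if F(px + mid * ux, py + mid * uy) < c else (lo, mid)
d = (lo + hi) / 2  # distance of the curve point from p*
print((c - c_star) / n, d, (c + c_star) / n)  # r1 <= d <= r2
```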

Remark As a direct consequence of the previous result it follows that the convex hull of any polyellipse is a compact set; compactness and further convexity-topological properties in terms of the general notion of the norm are investigated in [30].

Corollary 11.2.4 For the area of the domain bounded by a polyellipse we have the estimations

$$\left(\frac{c - c^*}{n}\right)^2 \pi \leq A \leq \left(\frac{c + c^*}{n}\right)^2 \pi.$$

Lemma 11.2.5 For the length of the gradient vector along the curve we have the estimations

$$n\,\frac{c - c^*}{c + c^*} \leq \|\operatorname{grad} F_w\| \leq n.$$

(11.18)

Proof From the definition of the subgradient it follows that if a convex function is differentiable at w then

$$\langle \operatorname{grad} F_w, q - w\rangle \leq F(q) - F(w).$$

In case of q=p* the relation

$$c - c^* \leq \|\operatorname{grad} F_w\|\cdot\|w - p^*\|$$

can be derived by using the Cauchy-Schwarz-Buniakowski inequality. By inequalities 11.17

$$c - c^* \leq \frac{c + c^*}{n}\,\|\operatorname{grad} F_w\|,$$

which gives the lower bound for the norm of the gradient. On the other hand, the gradient is just the sum of the unit vectors pointing from the focal points to w. This means that the norm of this vector cannot be greater than the number of the focuses, as was to be stated. ▮

Remark A straightforward calculation shows that

$$\|\operatorname{grad} F_w\|^2 = n + 2\sum_{i < j} \cos \alpha_{ij},$$

where the double index refers to the angle of the position vectors

$$v_i := p_i - w \quad\textrm{and}\quad v_j := p_j - w.$$
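The identity in the remark, together with the upper bound of Lemma 11.2.5, can be confirmed at a sample point (the focal points and the point w below are illustrative choices):

```python
import math

focuses = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]  # illustrative focal points
n = len(focuses)
w = (5.0, 2.0)  # an arbitrary point away from the focal set

units = []
for a, b in focuses:
    d = math.hypot(w[0] - a, w[1] - b)
    units.append(((w[0] - a) / d, (w[1] - b) / d))  # gradient of d(., p_i) at w

gx = sum(u[0] for u in units)
gy = sum(u[1] for u in units)
grad_sq = gx * gx + gy * gy

cos_sum = sum(units[i][0] * units[j][0] + units[i][1] * units[j][1]
              for i in range(n) for j in range(i + 1, n))
print(grad_sq, n + 2 * cos_sum)  # the two values coincide; sqrt(grad_sq) <= n
```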

Lemma 11.2.6 For the Laplacian along the curve we have the estimations

$$n \sum_{i=1}^n \frac{1}{c + c_i} \leq \Delta F_w \leq n \sum_{i=1}^n \frac{1}{|c - c_i|}.$$

(11.19)

Proof A straightforward calculation shows that

$$D_1D_1F_w = \sum_{i=1}^n \frac{1}{d^3(w, p_i)}\,(w^2 - p_i^2)^2$$

and the similar formula

$$D_2D_2F_w = \sum_{i=1}^n \frac{1}{d^3(w, p_i)}\,(w^1 - p_i^1)^2$$

holds in case of the second order derivatives with respect to the y variable. Therefore

$$\Delta F_w = \sum_{i=1}^n \frac{1}{d(w, p_i)}.$$

(11.20)

For any $i = 1, \ldots, n$

$$c - c_i = d(w, p_i) + \sum_{j \neq i} \big(d(w, p_j) - d(p_j, p_i)\big).$$

Using the estimations

$$-d(w, p_i) \leq d(w, p_j) - d(p_j, p_i) \leq d(w, p_i)$$

it follows that

$$|c - c_i| \leq d(w, p_i) + \sum_{j \neq i} |d(w, p_j) - d(p_j, p_i)| \leq d(w, p_i) + (n-1)\,d(w, p_i) = n\,d(w, p_i).$$

On the other hand

$$c + c_i = d(w, p_i) + \sum_{j \neq i} \big(d(w, p_j) + d(p_i, p_j)\big) \geq d(w, p_i) + (n-1)\,d(w, p_i) = n\,d(w, p_i).$$

Therefore

$$\frac{n}{c + c_i} \leq \frac{1}{d(w, p_i)} \leq \frac{n}{|c - c_i|}$$

which implies both the lower and the upper bound for the Laplacian. ▮
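Formula (11.20) and the bounds of Lemma 11.2.6 can be checked numerically at a sample point; the Laplacian is also approximated by central differences as an independent cross-check. The focal points and the point w are illustrative choices:

```python
import math

focuses = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]  # illustrative focal points
n = len(focuses)

def F(x, y):
    return sum(math.hypot(x - a, y - b) for a, b in focuses)

w = (5.0, 2.0)
c = F(*w)                          # the level passing through w
c_i = [F(a, b) for a, b in focuses]

# Laplacian via formula (11.20) and via central differences
lap_formula = sum(1 / math.hypot(w[0] - a, w[1] - b) for a, b in focuses)
h = 1e-4
lap_fd = (F(w[0] + h, w[1]) + F(w[0] - h, w[1])
          + F(w[0], w[1] + h) + F(w[0], w[1] - h) - 4 * F(*w)) / h**2

lower = n * sum(1 / (c + ci) for ci in c_i)
upper = n * sum(1 / abs(c - ci) for ci in c_i)
print(lower, lap_formula, upper)  # the sandwich of inequality (11.19)
```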

Theorem 11.2.7 For the curvature along the curve we have the upper bound

$$\kappa(w) \leq \frac{c + c^*}{c - c^*} \sum_{i=1}^n \frac{1}{|c - c_i|}.$$

(11.21)

Proof Since the function F is convex, its Hessian matrix is positive semi-definite, hence $\operatorname{Hess} F_w(T, T) \leq \Delta F_w$. Therefore

$$\kappa(w) \leq \frac{\Delta F_w}{\|\operatorname{grad} F_w\|}$$

which gives, by Lemma 11.2.5 and Lemma 11.2.6, the upper bound for the curvature function. ▮
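Theorem 11.2.7 can be tested numerically. Below the focal points (−1, 0), (1, 0), (0, 0) and the level c = 6 are illustrative choices; for this configuration a direct check shows that the minimum c* = 2 is attained at the origin. The curvature is computed from formula (11.13) with the explicit second order derivatives of F, at a curve point found by bisection:

```python
import math

# illustrative collinear focal points; F attains its minimum c* = 2 at p* = (0, 0)
focuses = [(-1.0, 0.0), (1.0, 0.0), (0.0, 0.0)]
n = len(focuses)
c, c_star = 6.0, 2.0

def F(x, y):
    return sum(math.hypot(x - a, y - b) for a, b in focuses)

def grad(x, y):
    return (sum((x - a) / math.hypot(x - a, y - b) for a, b in focuses),
            sum((y - b) / math.hypot(x - a, y - b) for a, b in focuses))

def hess(x, y):
    h11 = sum((y - b)**2 / math.hypot(x - a, y - b)**3 for a, b in focuses)
    h22 = sum((x - a)**2 / math.hypot(x - a, y - b)**3 for a, b in focuses)
    h12 = -sum((x - a) * (y - b) / math.hypot(x - a, y - b)**3 for a, b in focuses)
    return h11, h12, h22

c_i = [F(a, b) for a, b in focuses]

# a point of the polyellipse along a ray from p*, found by bisection
ux, uy = math.cos(1.1), math.sin(1.1)
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if F(mid * ux, mid * uy) < c else (lo, mid)
wx, wy = ((lo + hi) / 2) * ux, ((lo + hi) / 2) * uy

gx, gy = grad(wx, wy)
gnorm = math.hypot(gx, gy)
tx, ty = -gy / gnorm, gx / gnorm   # unit tangent, orthogonal to the gradient
h11, h12, h22 = hess(wx, wy)
kappa = (h11*tx*tx + 2*h12*tx*ty + h22*ty*ty) / gnorm  # formula (11.13)

bound = (c + c_star) / (c - c_star) * sum(1 / abs(c - ci) for ci in c_i)
print(kappa, bound)  # kappa <= bound, inequality (11.21)
```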

Exercise 11.2.8 Taking the limit as c tends to infinity, prove that the curvature function tends to the identically zero function.

In order to give a lower bound for the curvature we need the determinant of the matrix formed by the second order derivatives. Since

$$D_1D_2F_w = -\sum_{i=1}^n \frac{1}{d^3(w, p_i)}\,(w^1 - p_i^1)(w^2 - p_i^2)$$

we have that

$$\det D_iD_jF_w = \sum_{i<j} \frac{1}{d^3(w, p_i)\,d^3(w, p_j)} \big((w^1 - p_i^1)(w^2 - p_j^2) - (w^1 - p_j^1)(w^2 - p_i^2)\big)^2$$

which implies the formula

$$\det D_iD_jF_w = 4\sum_{i<j} \frac{1}{d^3(w, p_i)\,d^3(w, p_j)}\,\mu^2[w, p_i, p_j],$$

(11.22)

where μ means the area of the triangle spanned by the points in the argument. By the relation between the geometric and the arithmetic means we have the estimation

$$d(w, p_i)\,d(w, p_j) \leq \left(\frac{d(w, p_i) + d(w, p_j)}{2}\right)^2 \leq \left(\frac{c}{2}\right)^2$$

and, consequently,

$$4\left(\frac{2}{c}\right)^6 \sum_{i<j} \mu^2[w, p_i, p_j] \leq \det D_iD_jF_w.$$

Moreover, the square function is convex which implies that

$$\left(\sum_{i<j} \mu[w, p_i, p_j]\right)^2 \leq \binom{n}{2} \sum_{i<j} \mu^2[w, p_i, p_j].$$

On the other hand

$$\mu[p_1, \ldots, p_n] \leq \sum_{i<j} \mu[w, p_i, p_j],$$

where μ(p(1), ..., p(n)) is the area of the convex hull of the focuses. We have just proved the following result.

Lemma 11.2.9 For any element of the polyellipse 11.16

$$8\left(\frac{2}{c}\right)^6 \frac{1}{n(n-1)}\,\mu^2[p_1, \ldots, p_n] \leq \det D_iD_jF_w.$$

(11.23)
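Inequality (11.23) can also be verified numerically; the non-collinear focal points, the level c = 12 and the ray direction below are illustrative choices, and the determinant of the Hessian is evaluated from the explicit second order derivatives:

```python
import math

focuses = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]  # illustrative, not collinear
n = len(focuses)
mu = 6.0  # area of the focal triangle: |det[(4,0),(1,3)]| / 2

def F(x, y):
    return sum(math.hypot(x - a, y - b) for a, b in focuses)

def hess_det(x, y):
    h11 = sum((y - b)**2 / math.hypot(x - a, y - b)**3 for a, b in focuses)
    h22 = sum((x - a)**2 / math.hypot(x - a, y - b)**3 for a, b in focuses)
    h12 = -sum((x - a) * (y - b) / math.hypot(x - a, y - b)**3 for a, b in focuses)
    return h11 * h22 - h12 * h12

c = 12.0
# a point of the polyellipse along a ray from the centroid of the focuses
ox, oy = 5.0 / 3.0, 1.0
ux, uy = math.cos(0.4), math.sin(0.4)
lo, hi = 0.0, 50.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if F(ox + mid * ux, oy + mid * uy) < c else (lo, mid)
wx, wy = ox + ((lo + hi) / 2) * ux, oy + ((lo + hi) / 2) * uy

lower = 8 * (2.0 / c)**6 * mu**2 / (n * (n - 1))
print(lower, hess_det(wx, wy))  # lower <= det, inequality (11.23)
```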

Remark As the previous result shows, if the focuses are not collinear then the second order partial derivatives form the coefficients of a positive definite bilinear form (cf. the expression for the second order partial derivative with respect to the first variable).

Theorem 11.2.10 Suppose that the focuses are not collinear; then the reciprocal of the curvature function can be estimated by the formula

$$\frac{1}{\kappa(w)} \leq \left(\frac{c}{2}\right)^6 \left(\frac{n}{2}\right)^3 \frac{n-1}{\mu^2[p_1, \ldots, p_n]} \sum_{i=1}^n \frac{1}{|c - c_i|}.$$

(11.24)

Proof Let $\lambda_1$ and $\lambda_2$ be the eigenvalues of the matrix consisting of the second order partial derivatives at w and suppose that they are labelled in a non-increasing order. Since $\lambda_1$ and $\lambda_2$ are just the solutions of the characteristic equation

$$\lambda^2 - \lambda\,\Delta F_w + \det D_iD_jF_w = 0$$

we have that

$$\det D_iD_jF_w \leq \lambda_2^2 + \det D_iD_jF_w = \lambda_2\,\Delta F_w.$$

(11.25)

On the other hand, the first eigenvalue is just the maximum of the quadratic form represented by the Hessian matrix at w on the unit circle, which means that

$$0 \leq \lambda_1 - \frac{1}{\|\operatorname{grad} F_w\|^2}\operatorname{Hess} F_w(\operatorname{grad} F_w, \operatorname{grad} F_w).$$

Therefore

$$\lambda_2 \leq \lambda_2 + \lambda_1 - \frac{1}{\|\operatorname{grad} F_w\|^2}\operatorname{Hess} F_w(\operatorname{grad} F_w, \operatorname{grad} F_w) = \kappa(w)\,\|\operatorname{grad} F_w\|$$

because the Laplacian is just the sum of the eigenvalues. We have by 11.25 that

$$\frac{1}{\kappa(w)} \leq \frac{\|\operatorname{grad} F_w\|}{\lambda_2} \leq \frac{\|\operatorname{grad} F_w\|\,\Delta F_w}{\det D_iD_jF_w},$$

where all the terms can be estimated by inequality 11.23, Lemma 11.2.6 and Lemma 11.2.5. ▮
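Finally, Theorem 11.2.10 can be tested on the same kind of illustrative configuration (non-collinear focal points, level c = 12, ray direction of my own choosing); the curvature is computed from formula (11.13) with the explicit second order derivatives:

```python
import math

focuses = [(0.0, 0.0), (4.0, 0.0), (1.0, 3.0)]  # illustrative, not collinear
n = len(focuses)
mu = 6.0  # area of the focal triangle

def F(x, y):
    return sum(math.hypot(x - a, y - b) for a, b in focuses)

def grad(x, y):
    return (sum((x - a) / math.hypot(x - a, y - b) for a, b in focuses),
            sum((y - b) / math.hypot(x - a, y - b) for a, b in focuses))

def hess(x, y):
    h11 = sum((y - b)**2 / math.hypot(x - a, y - b)**3 for a, b in focuses)
    h22 = sum((x - a)**2 / math.hypot(x - a, y - b)**3 for a, b in focuses)
    h12 = -sum((x - a) * (y - b) / math.hypot(x - a, y - b)**3 for a, b in focuses)
    return h11, h12, h22

c = 12.0
c_i = [F(a, b) for a, b in focuses]

# a point of the polyellipse along a ray from the centroid of the focuses
ox, oy = 5.0 / 3.0, 1.0
ux, uy = math.cos(2.3), math.sin(2.3)
lo, hi = 0.0, 50.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if F(ox + mid * ux, oy + mid * uy) < c else (lo, mid)
wx, wy = ox + ((lo + hi) / 2) * ux, oy + ((lo + hi) / 2) * uy

gx, gy = grad(wx, wy)
gnorm = math.hypot(gx, gy)
tx, ty = -gy / gnorm, gx / gnorm   # unit tangent
h11, h12, h22 = hess(wx, wy)
kappa = (h11*tx*tx + 2*h12*tx*ty + h22*ty*ty) / gnorm  # formula (11.13)

rhs = (c / 2)**6 * (n / 2)**3 * (n - 1) / mu**2 * sum(1 / abs(c - ci) for ci in c_i)
print(1 / kappa, rhs)  # 1/kappa <= rhs, inequality (11.24)
```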