Unit Vector

VECTOR ALGEBRA

George B. Arfken , ... Joseph Priest , in International Edition University Physics, 1984

Radial and Normal Unit Vectors

In addition to i, j, and k, two other unit vectors are used in this text. A vector of unit magnitude in the radially outward direction is designated by r̂. The unit vector r̂ is formed by taking the position vector r and dividing it by its magnitude:

(2.18) r̂ = r/r

This procedure gives r̂ unit magnitude but preserves the radial direction of r. Unlike the three Cartesian unit vectors, r̂ may vary in direction because the position vector r varies in direction.
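As a numerical aside (not from the text; the components below are arbitrary and NumPy is assumed), Eq. 2.18 amounts to dividing a vector by its magnitude:

import numpy as np
r = np.array([3.0, 4.0, 12.0])     # an arbitrary position vector (illustrative values)
r_hat = r / np.linalg.norm(r)      # Eq. 2.18: divide r by its magnitude
print(r_hat)                       # same direction as r, now with unit magnitude
print(np.linalg.norm(r_hat))       # 1.0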

The unit vector n̂ is normal, or perpendicular, to a surface at a given point (Figure 2.25). For the special case of a spherical surface (origin at the center), the normal vector n̂ is radial and n̂ = r̂.

Figure 2.25. The unit vector r̂ points radially outward. The unit vector n̂ is normal (perpendicular) to the surface.

Questions

6. How can the direction cosines be negative? Describe the orientation of a vector having all three direction cosines negative.

7. The components of a vector are given by the corresponding coordinates: Ax = x, Ay = y, and Az = z. What is the vector?

8. Explain why Eq. 2.11 (the replacement of cos β by sin α) is limited to two-dimensional space, that is, to a vector lying in the plane of the two axes.

9. What units are associated with the unit vectors i, j, and k?

10. Is the vector sum i + j + k of the three unit vectors a unit vector in the sense of having unit magnitude? Explain.

11. Explain why a vector equation may contain more information than a scalar equation.

12. At every point on a particular finite, closed surface the unit vector r̂ is equal to the unit vector n̂. What kind of a surface do you have?

13. Sketch a football. Draw the unit vectors r̂ and n̂ at several points along the surface of the football. Where is r̂ = n̂? Where is r̂ ≠ n̂?

URL: https://www.sciencedirect.com/science/article/pii/B9780120598588500078

Matrices, Determinants, and Vectors

Sarhan M. Musa , in Fundamentals of Technical Mathematics, 2016

8.3.2 Standard unit vectors

Unit vectors are useful in defining the direction of any vector; we define two special unit coordinate vectors.

i = ⟨1, 0⟩ and j = ⟨0, 1⟩ are called standard unit vectors.

i = ⟨1, 0⟩ is a unit vector in the direction of the x-axis.

j = ⟨0, 1⟩ is a unit vector in the direction of the y-axis.

The standard unit vectors can represent any vector u = ⟨u1, u2⟩ as follows:

u = ⟨u1, u2⟩ = u1⟨1, 0⟩ + u2⟨0, 1⟩ = u1 i + u2 j,

where

u1 i + u2 j is called the linear combination of the vectors i and j

u1 is called the horizontal component of u

u2 is called the vertical component of u.

Any vector in the plane can be written as a linear combination of the standard unit vectors i and j .

Example 1

Let v be the vector with initial point (3, −7) and terminal point (−2, 5). Express v as a linear combination of the standard unit vectors i and j .

Solution

v = ⟨v1, v2⟩ = ⟨−2 − 3, 5 + 7⟩ = ⟨−5, 12⟩

v = −5i + 12j
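The component arithmetic can be checked with a few lines of Python (an illustrative aside, not part of the original solution):

# Vector from initial point (3, -7) to terminal point (-2, 5)
initial = (3, -7)
terminal = (-2, 5)
v1 = terminal[0] - initial[0]    # horizontal component: -2 - 3 = -5
v2 = terminal[1] - initial[1]    # vertical component: 5 - (-7) = 12
print(f"v = {v1}i + {v2}j")      # v = -5i + 12j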

Example 2

Let u = 4i + 11j and v = −3i − 5j. Find 5u − 2v.

Solution

5u − 2v = 5(4i + 11j) − 2(−3i − 5j) = 20i + 55j + 6i + 10j

5u − 2v = 26i + 65j

In many applications it is convenient to describe a vector in terms of its magnitude and direction rather than in terms of its components.

If the vector v makes an angle θ (measured counterclockwise) from the positive x-axis, then we can express v in terms of its magnitude ‖v‖ and θ as follows:

v = ‖v‖(cos θ i + sin θ j) = ‖v‖⟨cos θ, sin θ⟩

We call θ the direction angle of the vector v .

Since v = v1 i + v2 j = ‖v‖(cos θ i + sin θ j), we can determine the direction angle θ for the vector v by

tan θ = sin θ / cos θ = (‖v‖ sin θ) / (‖v‖ cos θ) = v2 / v1.

Example 3

A vector v has length 4 and makes an angle of 60° with the positive x-axis. Find the vector v.

Solution

v = ‖v‖(cos θ i + sin θ j) = ‖v‖⟨cos θ, sin θ⟩

v = 4(cos 60° i + sin 60° j) = 4((1/2) i + (√3/2) j)

v = 2i + 2√3 j
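The same construction is easy to check numerically; the sketch below (illustrative only) rebuilds v from its magnitude and direction angle and then recovers θ with atan2, which resolves the quadrant ambiguity that tan θ = v2/v1 alone leaves open:

import math
magnitude = 4.0
theta = math.radians(60.0)                  # direction angle
v1 = magnitude * math.cos(theta)            # horizontal component: 4*(1/2) = 2
v2 = magnitude * math.sin(theta)            # vertical component: 4*(sqrt(3)/2) = 2*sqrt(3)
print(v1, v2)                               # 2.0, 3.4641...
print(math.degrees(math.atan2(v2, v1)))     # 60.0, the direction angle recovered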

URL: https://www.sciencedirect.com/science/article/pii/B9780128019870000083

Vectors and Vector Algebra

Robert G. Mortimer , in Mathematics for Physical Chemistry (Fourth Edition), 2013

4.2.5 The Scalar Product of Two Vectors

Just as in two dimensions, the scalar product of two vectors is given by the product of the magnitudes of the vectors and the cosine of the angle between them:

(4.40) A · B = |A||B| cos α

where α is the angle between the vectors. If two vectors are perpendicular to each other, their scalar product vanishes. Vectors that are perpendicular to each other are sometimes said to be orthogonal to each other.

The unit vectors have unit magnitude and are mutually orthogonal:

(4.41) i · i = j · j = k · k = 1

(4.42) i · j = i · k = j · k = 0

Using these relations, we have

(4.43) A · B = AxBx + AyBy + AzBz

Example 4.6

Let A = 2 i + 3 j + 7 k and B = 7 i + 2 j + 3 k .

(a)

Find A · B and the angle between A and B

A · B = 14 + 6 + 21 = 41 .

The magnitude of A in this example happens to equal the magnitude of B:

|A| = A = |B| = B = (2² + 3² + 7²)^(1/2) = √62.

Let α be the angle between the vectors A and B

α = arccos(A · B / (|A||B|)) = arccos(41/(√62 √62)) = arccos 0.6613 = 0.848 rad = 48.6°.

(b)

Find ( 3 A ) · B

(3A) · B = 6 × 7 + 9 × 2 + 21 × 3 = 123 = 3(A · B).
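A short numerical check of this example (an illustrative aside using NumPy, not part of the text):

import numpy as np
A = np.array([2.0, 3.0, 7.0])
B = np.array([7.0, 2.0, 3.0])
dot = A @ B                                               # 14 + 6 + 21 = 41
alpha = np.arccos(dot / (np.linalg.norm(A) * np.linalg.norm(B)))
print(dot)                                                # 41.0
print(alpha, np.degrees(alpha))                           # ~0.848 rad, ~48.6 degrees
print((3 * A) @ B)                                        # 123 = 3 * (A . B)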

Exercise 4.7

(a)

Find the Cartesian components of the position vector whose spherical polar coordinates are r = 2.00 , θ = 90 ° , ϕ = 0 ° . Call this vector A.

(b)

Find the scalar product of the vector A from part a and the vector B whose Cartesian components are ( 1.00 , 2.00 , 3.00 ) .

(c)

Find the angle between these two vectors.

URL: https://www.sciencedirect.com/science/article/pii/B9780124158092000045

Handbook of the Geometry of Banach Spaces

Dale Alspach , Edward Odell , in Handbook of the Geometry of Banach Spaces, 2001

1 Preliminaries

We first recall a few key properties of Lp and ℓ p which are discussed throughout the basic concepts chapter.

The unit vector basis for ℓ_p is a 1-symmetric basis [49, Section 3]. The Haar basis (h_i)_{i=0}^∞ is an unconditional basis of Lp for 1 < p < ∞ [49, Section 3], [24]. It is also a monotone basis for Lp for 1 ≤ p < ∞. The Rademacher functions (r_n)_{n=1}^∞ [49, Section 4] are equivalent to the unit vector basis of ℓ_2 for p < ∞ (and to the unit vector basis of ℓ_1 for p = ∞).

Thus for 0 < p < ∞ there exist constants Ap , Bp with

(1.1) A_p (Σ_n |a_n|^2)^{1/2} ≤ (∫_0^1 |Σ_n a_n r_n(t)|^p dt)^{1/p} ≤ B_p (Σ_n |a_n|^2)^{1/2}

for all scalars (an ). Ap = 1 if 2 ≤ p < ∞ and Bp = 1 if p ≤ 2.
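For a concrete feel for (1.1), the Rademacher average can be computed exactly for a short coefficient sequence by enumerating all sign patterns. The following Python sketch (illustrative only; the coefficients and the exponent p are arbitrary choices, not from the survey) compares the p-th moment with the ℓ_2 norm of the coefficients:

import itertools, math
a = [1.0, 0.5, -2.0, 0.3]          # arbitrary scalars (a_n)
p = 4.0                            # an exponent with 2 <= p, so A_p = 1 in (1.1)
# E|sum a_n r_n|^p computed exactly: the Rademacher signs are uniform over {-1, 1}^n
moment = sum(abs(sum(s * x for s, x in zip(signs, a))) ** p
             for signs in itertools.product([-1, 1], repeat=len(a))) / 2 ** len(a)
print(moment ** (1 / p))                 # (E |sum a_n r_n|^p)^(1/p)
print(math.sqrt(sum(x * x for x in a)))  # (sum |a_n|^2)^(1/2); for p >= 2 the first value dominates this one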

If (x_i)_{i=1}^∞ is a normalized sequence of disjointly supported functions on [0, 1] in Lp, 1 ≤ p < ∞, then (x_i) is 1-equivalent to the unit vector basis of ℓ_p and [(x_i)] is 1-complemented via the projection

P(x) = Σ_{i=1}^∞ (∫_0^1 sign(x_i(t)) |x_i(t)|^{p−1} x(t) dt) x_i.

For 1 < p < ∞, Lp is uniformly convex and uniformly smooth, with modulus of convexity (respectively, of smoothness) of power type p (respectively, of power type q with 1/p + 1/q = 1) [49, Section 6]. Lp is of type min(2, p) and cotype max(2, p) for 1 ≤ p < ∞ [49, Section 8].

For (x_i)_{i=1}^n ⊂ Lp,

(1.2) A_p (Σ_{i=1}^n ‖x_i‖_p^2)^{1/2} ≤ (∫_0^1 ‖Σ_{i=1}^n r_i(t) x_i‖_p^p dt)^{1/p} ≤ (Σ_{i=1}^n ‖x_i‖_p^p)^{1/p}

if 1 ≤ p < 2, and

(1.3) (Σ_{i=1}^n ‖x_i‖_p^p)^{1/p} ≤ (∫_0^1 ‖Σ_{i=1}^n r_i(t) x_i‖_p^p dt)^{1/p} ≤ B_p (Σ_{i=1}^n ‖x_i‖_p^2)^{1/2}

if 2 < p < ∞.

For example, to see (1.2) we use Fubini's theorem, ‖·‖_{L_p} ≤ ‖·‖_{L_2}, (1.1) with p = 2, and ‖·‖_{ℓ_2} ≤ ‖·‖_{ℓ_p}, to obtain

(1.4) ∫_0^1 ‖Σ_{i=1}^n r_i(t) x_i‖_p^p dt = ∫_0^1 ∫_0^1 |Σ_{i=1}^n r_i(t) x_i(s)|^p dt ds ≤ ∫_0^1 [∫_0^1 |Σ_{i=1}^n r_i(t) x_i(s)|^2 dt]^{p/2} ds = ∫_0^1 (Σ_{i=1}^n |x_i(s)|^2)^{p/2} ds ≤ ∫_0^1 Σ_{i=1}^n |x_i(s)|^p ds,

which yields the right-hand inequality of (1.2). Also, by (1.1),

∫_0^1 ∫_0^1 |Σ_{i=1}^n r_i(t) x_i(s)|^p dt ds ≥ A_p^p ∫_0^1 (Σ_{i=1}^n |x_i(s)|^2)^{p/2} ds.

Now

(Σ_{i=1}^n ‖x_i‖_p^2)^{p/2} = ‖(‖x_i‖_p^p)_{i=1}^n‖_{2/p} = Σ_{i=1}^n ‖x_i‖_p^p a_i for some (a_i)_{i=1}^n ∈ ℓ_{2/(2−p)} of norm 1
≤ ∫_0^1 (Σ_{i=1}^n |x_i(s)|^2)^{p/2} (Σ_{i=1}^n |a_i|^{2/(2−p)})^{(2−p)/2} ds by Hölder's inequality
= ∫_0^1 (Σ_{i=1}^n |x_i(s)|^2)^{p/2} ds,

which completes the proof of (1.2).

(1.2) and (1.3) can be viewed as generalizations of Clarkson's inequalities [29]. Since ‖·‖_{L_p} ≤ ‖·‖_{L_2} for p ≤ 2, we also have, using (1.2) for p = 2, that

(1.5) (∫_0^1 ‖Σ_{i=1}^n r_i(t) x_i‖_p^p dt)^{1/p} ≤ (Σ_{i=1}^n ‖x_i‖_2^2)^{1/2} for 1 ≤ p < 2,

and similarly

(1.6) (∫_0^1 ‖Σ_{i=1}^n r_i(t) x_i‖_p^p dt)^{1/p} ≥ (Σ_{i=1}^n ‖x_i‖_2^2)^{1/2} for 2 < p < ∞.

The technique of integrating against the Rademacher functions yields some useful inequalities for unconditional basic sequences in Lp. If (x_n) is a λ-unconditional basic sequence in Lp, then

(1.7) λ^{−1} [∫_0^1 (Σ |a_n|^2 |x_n(s)|^2)^{p/2} ds]^{1/p} ≤ ‖Σ a_n x_n‖_p ≤ λ B_p [∫_0^1 (Σ |a_n|^2 |x_n(s)|^2)^{p/2} ds]^{1/p}, if 2 ≤ p < ∞,

(1.8) (λ A_p)^{−1} [∫_0^1 (Σ |a_n|^2 |x_n(s)|^2)^{p/2} ds]^{1/p} ≤ ‖Σ a_n x_n‖_p ≤ λ [∫_0^1 (Σ |a_n|^2 |x_n(s)|^2)^{p/2} ds]^{1/p}, if 1 ≤ p ≤ 2,

which implies that (x_n) and (|x_n|) are equivalent.

If (x_n) is also normalized,

(1.9) λ^{−1} (Σ |a_n|^p)^{1/p} ≤ ‖Σ a_n x_n‖_p ≤ λ B_p (Σ |a_n|^2)^{1/2}, if 2 ≤ p < ∞,

(1.10) (λ A_p)^{−1} (Σ |a_n|^2)^{1/2} ≤ ‖Σ a_n x_n‖_p ≤ λ (Σ |a_n|^p)^{1/p}, if 1 ≤ p ≤ 2.

These last two inequalities are immediate consequences of (1.2) and (1.3).

Any martingale difference sequence in Lp is unconditional [25], which generalizes the fact that the Haar basis is unconditional. In particular, any sequence of mean-zero independent random variables in Lp is unconditional. Rosenthal's inequality [91] gives us some information on such sequences. Let 2 < p < ∞. There exists Kp < ∞ so that if (x_i)_{i=1}^n are independent mean-zero random variables in Lp, then

(1.11) (1/2) max{(Σ_{i=1}^n ‖x_i‖_p^p)^{1/p}, (Σ_{i=1}^n ‖x_i‖_2^2)^{1/2}} ≤ ‖Σ_{i=1}^n x_i‖_p ≤ K_p max{(Σ_{i=1}^n ‖x_i‖_p^p)^{1/p}, (Σ_{i=1}^n ‖x_i‖_2^2)^{1/2}}.

It is shown in [55] that Kp ~ p/ln p.

A Banach space X is an ℒ_{p,λ}-space if for every finite-dimensional subspace F ⊆ X there exists a finite-dimensional E with F ⊆ E ⊆ X so that d(E, ℓ_p^{dim E}) ≤ λ. It ultimately turns out (see Section 5) that a separable X is ℒ_{p,λ} for some λ and 1 < p < ∞ iff X is isomorphic to a complemented subspace of Lp which is not isomorphic to Hilbert space [66,68].

The situation for ℒ_1 is more complicated. It is conjectured that every infinite-dimensional complemented subspace X of L_1 is isomorphic to L_1 or ℓ_1. It is known that if X contains an isomorph of L_1 then X is isomorphic to L_1 [36], and if X embeds into ℓ_1 then X is isomorphic to ℓ_1 [65]. Various characterizations of ℒ_1- (and ℒ_∞-) spaces are given in [68]. Much work was done to study and attempt to classify the ℒ_p-spaces up to isomorphism, and this is discussed in Section 5 below.

We begin with some results on the global structure of Lp and in particular those involving the Haar basis. All Banach spaces are presumed to be separable unless otherwise stated.

URL: https://www.sciencedirect.com/science/article/pii/S187458490180005X

U

Fred E. Szabo PhD , in The Linear Algebra Survival Guide, 2015

Unit Vector

A unit vector in a normed vector space is a vector v for which ‖v‖ = 1 in the norm of the space.

Illustration

A unit vector in the Euclidean space ℝ 2

v   =   {3, 7};

u = (1/Norm[v]) v

{3/√58, 7/√58}

Norm[u] == 1

True

A unit vector in ℝ3 relative to a nonstandard inner product

Clear[x, y, z]

MatrixForm[A   =   DiagonalMatrix [{1, 2, 3}]];

w   =   {x, y, z} ;

〈u_, v_〉 := u.A.v

||w_|| := Sqrt[〈w, w〉]

||w||

√(x² + 2 y² + 3 z²)

w123 = w /. {x → 1, y → 2, z → 3}

{1, 2, 3}

||w123||

6

u = (1/||w123||) w123

{1/6, 1/3, 1/2}

||u||

1

URL: https://www.sciencedirect.com/science/article/pii/B978012409520550028X

ELECTRIC FIELD AND GAUSS' LAW

George B. Arfken , ... Joseph Priest , in International Edition University Physics, 1984

Electric Field of a Point Charge

From Coulomb's law, the force exerted on a small test charge q 0 by a point charge q is

(27.3) F = (1/(4πε₀)) (q q₀ / r²) r̂

By convention the unit vector r̂ is radially outward from q (Figure 27.1). If q and q₀ are taken to be positive, the force on q₀ is along the line joining q₀ and q and directed away from q. Applying the definition of electric field,

Figure 27.1.

E = F/q₀

we get the field of a point charge:

(27.4) E = (1/(4πε₀)) (q/r²) r̂

The field E of a positive charge is directed outward, away from the positive charge. The field created by a negative charge is directed inward, toward the negative charge.
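A minimal numerical sketch of Eq. 27.4 (the charge and the field point below are arbitrary illustrative values, not from the text):

import numpy as np
EPS0 = 8.854e-12                     # vacuum permittivity, F/m
q = 1.0e-9                           # source charge, C (arbitrary)
r_vec = np.array([0.3, 0.4, 0.0])    # field point relative to the charge, m (arbitrary)
r = np.linalg.norm(r_vec)
r_hat = r_vec / r                    # radially outward unit vector
E = (1.0 / (4.0 * np.pi * EPS0)) * (q / r**2) * r_hat   # Eq. 27.4
print(E)                             # field vector in V/m; points away from a positive q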

There are two ideas that you should keep in mind: First, Eq. 27.4 is not the definition of an electric field. It merely describes the electric field for the specific case of a point charge. Second, because Coulomb's law and Newton's law of gravitation have the same mathematical form, Faraday's field concept can also be applied to gravitation. The earth, for example, can be said to move in the gravitational field of the sun, which can be written

E_grav = F_grav/m = −G (M_sun/r²) r̂

where m is a "test mass" analogous to the test charge q 0 .

The gravitational force of the earth on a mass m at the earth's surface is given by F = mg. The gravitational force per unit mass F/m is simply g. We can refer to the gravitational acceleration g as the earth's gravitational field. The direction of the field is downward.

Faraday's concept of electric field allows us to shift our attention from the electric charges to the medium between the charges, and to look at the medium (which may be a vacuum) as a mechanism for transmitting the electrical force. Although physicists today are cautious about giving mechanical properties to electromagnetism, the concept of electric fields has survived because, as we will soon see, it is very useful.

URL: https://www.sciencedirect.com/science/article/pii/B9780120598588500327

Multidimensional Problems

Frank E. Harris , in Mathematics for Physical Science and Engineering, 2014

Vectors in Curvilinear Coordinates

In Cartesian coordinates, a unit vector ê_x is of unit length and in the x direction. That is simple and straightforward because the "x direction" is everywhere the same direction. However, in spherical polar coordinates the "r direction" is surely not the same everywhere, and we need to define it unambiguously. Because the direction associated with a change in a coordinate may depend upon the value of that coordinate (and of the other coordinates), it is most useful to define the "r direction" and other directions as those generated by infinitesimal changes in the coordinate values.

Our definition of the "r direction" is that of a vector from (r, θ, φ) to (r + dr, θ, φ). The unit vector r̂ or ê_r is then a vector in the "r direction" and of unit length.

Continuing with spherical polar coordinates, we now wish to consider a unit vector in the θ direction. This direction is that of an infinitesimal vector from (r, θ, φ) to (r, θ + dθ, φ), and it (and the corresponding unit vector θ̂ or ê_θ) will be perpendicular to the unit vector r̂. The third unit vector, φ̂ or ê_φ, will be perpendicular to r̂ and θ̂, so our spherical polar coordinate system is orthogonal.

Observations similar to those of the preceding paragraph indicate that circular cylindrical coordinates also form an orthogonal system. The orthogonality is also apparent from drawings showing the intersections of contours of constant coordinate values.

The unit vectors can be used to decompose vectors into their components in curvilinear systems. However, it is important to notice that vector components cannot be combined (either for addition or for forming dot products) unless the vectors are associated with the same point in space. Violation of this rule would cause the same unit-vector symbol to have different meanings at different occurrences in a single expression, thereby surely causing errors.

If two vectors A and B are indeed associated with the same spatial point, then (using spherical polar coordinates as an example), they have respective component decompositions

A = A_r r̂ + A_θ θ̂ + A_φ φ̂,   B = B_r r̂ + B_θ θ̂ + B_φ φ̂,

and (adding components)

A + B = (A_r + B_r) r̂ + (A_θ + B_θ) θ̂ + (A_φ + B_φ) φ̂.

Rewriting the above in a general notation in which A_i refers to the component of A in the direction of the unit vector ê_i, we have

(6.18) A + B = (A_1 + B_1) ê_1 + (A_2 + B_2) ê_2 + (A_3 + B_3) ê_3.

Since we have restricted the discussion to orthogonal coordinate systems, we have

(6.19) ê_i · ê_j = δ_ij,

and it is straightforward to compute the dot product A · B :

(6.20) A · B = (A_1 ê_1 + A_2 ê_2 + A_3 ê_3) · (B_1 ê_1 + B_2 ê_2 + B_3 ê_3) = A_1B_1 + A_2B_2 + A_3B_3.

This is the same as the formula that applies in Cartesian coordinates.
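A small numerical sketch (illustrative; it assumes the standard Cartesian expressions for the spherical polar unit vectors, which this excerpt does not derive) checks the orthonormality relation (6.19) and the component dot-product formula (6.20) at one spatial point:

import numpy as np
theta, phi = 0.7, 1.2                      # angular coordinates of a sample point (arbitrary)
r_hat     = np.array([np.sin(theta)*np.cos(phi), np.sin(theta)*np.sin(phi), np.cos(theta)])
theta_hat = np.array([np.cos(theta)*np.cos(phi), np.cos(theta)*np.sin(phi), -np.sin(theta)])
phi_hat   = np.array([-np.sin(phi), np.cos(phi), 0.0])
frame = np.vstack([r_hat, theta_hat, phi_hat])
print(np.round(frame @ frame.T, 12))       # identity matrix: e_i . e_j = delta_ij, Eq. 6.19
A = np.array([1.0, 2.0, -0.5])             # (A_r, A_theta, A_phi) at this point (arbitrary)
B = np.array([0.3, -1.0, 4.0])             # (B_r, B_theta, B_phi) at the same point (arbitrary)
print(A @ B)                               # A_r B_r + A_theta B_theta + A_phi B_phi, Eq. 6.20
print((frame.T @ A) @ (frame.T @ B))       # same value, computed from the Cartesian components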

URL: https://www.sciencedirect.com/science/article/pii/B9780128010006000067

POSITION AND MOMENTUM DISTRIBUTIONS DO NOT DETERMINE THE QUANTUM MECHANICAL STATE

Andrew Vogt , in Mathematical Foundations of Quantum Theory, 1978

Proof:

Let f be a unit vector in range E. Then ‖Ef₁‖² = |f₁ · f|² = |Ff₁ · Ff|² = |g · Ff|² and ‖Ef₂‖² = |f₂ · f|² = |Ff₂ · Ff|² = |RCg · Ff|². Thus, to prove Proposition 3 it suffices to find a function g satisfying the conditions preceding (1.1) and (1.2) and also satisfying the equation |g · Ff| = |RCg · Ff|, which can be reexpressed as:

(3.1) |∫_ℝⁿ g(p) (Ff)(p)¯ dp| = |∫_ℝⁿ g(−p)¯ (Ff)(p)¯ dp|.

Choose Borel subsets A and B of ℝn having positive Lebesgue measure such that the sets A, B, -A, and -B form an a.e. disjoint partition of ℝn. (Here -A = {-p : p is in A}.) Examples of such sets are A = {p : 0 ≤ p1 < 1} and B = {p : 1 ≤ p1}.

Let g be an element of L²(ℝⁿ) such that g|_{A∪−A} ≠ 0 ≠ g|_{B∪−B}, such that g(p) = g(−p)¯ for p in A ∪ −A, and such that g(p) = g(−p)¯ ω for p in B ∪ −B, where |ω| = 1 ≠ ω. Then

∫_ℝⁿ g(p) (Ff)(p)¯ dp = ∫_{A∪−A} + ∫_B + ∫_{−B} = a + b₁ + b₂

and

∫_ℝⁿ g(−p)¯ (Ff)(p)¯ dp = ∫_{A∪−A} + ∫_B + ∫_{−B} = a + ω¯b₁ + ω¯b₂.

If a = 0, (3.1) is satisfied since |b₁ + b₂| = |ω¯b₁ + ω¯b₂| and, since g satisfies the conditions preceding (1.1) and (1.2), the argument is complete. So assume henceforth that a ≠ 0.

Define a function g₁ by g₁(p) = λg(p) for p in A ∪ −A, g₁(p) = g(p) for p in B, and g₁(p) = μg(p) for p in −B. Here λ and μ are complex numbers such that |λ| = |μ| = 1; they will be specified later in terms only of a, b₁, b₂, and ω. Note that g₁ is in L²(ℝⁿ), that g₁(p) = g₁(−p)¯ (λ/λ¯) for p in A ∪ −A, and that g₁(p) = g₁(−p)¯ ωμ for p in B ∪ −B. Thus g₁ satisfies the conditions preceding (1.1) and (1.2) provided ωμ ≠ λ/λ¯ = λ².

With this restriction we are free to replace g by g1 in equation (3.1), obtaining:

∫_ℝⁿ g₁(p) (Ff)(p)¯ dp = λa + b₁ + μb₂, and ∫_ℝⁿ g₁(−p)¯ (Ff)(p)¯ dp = λ¯a + μ¯ω¯b₁ + ω¯b₂.

The revised version of (3.1) will thus be satisfied iff |λa + b₁ + μb₂| = |λ¯a + μ¯ω¯b₁ + ω¯b₂| = |λa¯ + μωb₁¯ + ωb₂¯|. A sufficient condition for these equations to hold is that (b₁ + μb₂)/(λa) = ω(μb₁¯ + b₂¯)/(λa¯). So μ is to be chosen to satisfy:

μ(b₂/a − ωb₁¯/a¯) = ω(b₂¯/a¯ − ω¯b₁/a).

The coefficients of μ and ω on opposite sides of this equation are conjugates, and |ω| = 1. So μ exists as needed with |μ| = 1. If λ is chosen so that |λ| = 1 but λ² ≠ ωμ, then (3.1) is valid for g₁ instead of g and g₁ satisfies the conditions preceding (1.1) and (1.2).

We note that Propositions 1, 2, and 3 remain true when the infinite-dimensional Hilbert space L²(ℝⁿ) is replaced by a finite-dimensional Hilbert space L²(ℤ_m) ≈ ℂ^m, provided m ≥ 2 in Propositions 1 and 2 and m ≥ 3 in Proposition 3. In the finite-dimensional cases there is a single "position" operator X (defined by (Xf)(j) = jf(j) for j = 1, …, m), there is a single "momentum" operator P = F⁻¹XF, and the discrete Fourier transform F is defined in terms of a primitive mth root of unity.

We conclude this note by discussing a few implications of the examples reported here. The conjugation operator used to generate these examples has played an important role (although it is very likely that other methods can be used to generate examples). We discovered that a wave function f₁ and its conjugate f₂ = f₁¯ can be inequivalent but have the same distributions w.r.t. position, momentum, and even energy. Thus, at a fixed moment in time the corresponding states cannot be distinguished by measurements of one of these observables. Are these wave functions physically equivalent? Perhaps measurements of angular momentum (at least for n > 1) can be used to distinguish them. It is interesting to note that under Schrödinger evolution these wave functions will in general evolve in such a way that they do not remain conjugates: the Schrödinger evolution takes f to e^(−itH/ħ) f = f_t, and the presence of the imaginary unit i will disrupt the conjugacy relation, with the likely consequence (unless f₁ and f₂ are eigenstates of energy) that the evolved states f₁,t and f₂,t will lose one or more of their common distributions as t changes. Thus time may distinguish the states even if they are not originally distinguishable.

From the mathematical point of view we can try to determine a class C of self-adjoint operators with the property that if two wave functions have the same distributions w.r.t. each operator in C then the wave functions are equivalent. Obviously the class of all self-adjoint operators has this property, as does its subclass, the class of all orthogonal projections with one-dimensional range. Is there a minimal class C with this property? On the basis of a degrees-of-freedom argument Ron Wright has conjectured that, at least in the finite-dimensional case, there exist three self-adjoint operators such that if the distributions of a pure state w.r.t. each of these operators are known, then the state is completely determined. We have shown above that C cannot consist of position and momentum operators alone, and that supplementing these with operators commuting with the conjugation operator and/or with a one-dimensional projection E does not change matters.

If a minimal class C can be demonstrated, will all of its members consist of physically significant operators - ones corresponding to possible measurement procedures? Or must we admit that the mathematical distinctions between some wave functions have no physical significance, that some inequivalent wave functions represent the same physical situation and ought to be treated by means of a different equivalence relation (as in [1, p. 170–172])? More disturbingly, is it possible that certain wave functions, including our examples above, actually represent no physical situation?

URL: https://www.sciencedirect.com/science/article/pii/B9780124732506500248

Phenomenology of non-leptonic decays of hyperons

L.B. OKUN , in Leptons and Quarks, 1984

8.3 Spin correlations in hyperon decays

Consider the decay

Λ → p + π⁻.

Let η and ζ be unit vectors characterizing the polarization of the Λ-hyperon and the proton, respectively, in the rest frames of each of these particles:

ψ₁ψ₁⁺ = ½(1 + η·σ),   ψ₂ψ₂⁺ = ½(1 + ζ·σ).

Let n be a unit vector in the direction of the proton momentum in the Λ-hyperon rest frame. Let us find the decay probability as a function of η, ζ, and n:

W(η, ζ, n) ∝ |M|² ∝ Tr (1 + ζ·σ)(S + P σ·n)(1 + η·σ)(S* + P* σ·n)
∝ {|S|²(1 + η·ζ) + |P|²(1 + 2(η·n)(ζ·n) − η·ζ) + (SP* + S*P)(η·n + ζ·n) + i(SP* − S*P) ζ·[η × n]}
∝ {1 + α(η·n + ζ·n) + β ζ·[η × n] + γ η·ζ + (1 − γ)(η·n)(ζ·n)}.

Here

α = (SP* + S*P)/(|S|² + |P|²),   β = i(SP* − S*P)/(|S|² + |P|²),   γ = (|S|² − |P|²)/(|S|² + |P|²).

(It is easily found that α² + β² + γ² = 1.) The following relations were used to calculate the trace (see appendix):

σ_i σ_k = δ_ik + iε_ikl σ_l,   Tr σ_i σ_k = 2δ_ik,   Tr σ_i σ_k σ_l = 2iε_ikl,   Tr σ_i σ_k σ_l σ_m = 2(δ_ik δ_lm + δ_im δ_kl − δ_il δ_km).

Let us analyze the expression for W(η, ζ, n). The decay probability in the S-wave is zero if η and ζ are antiparallel. This result is quite natural: with zero orbital momentum, the proton spin must be in the same direction as that of the Λ-hyperon. This is not true for the P-wave: the probability is maximum when the proton spin is directed along the vector 2n(η·n) − η.

If the proton polarization is not measured, we put ζ = 0. In this case the angular distribution of protons takes the form 1 + αη·n . The P-odd angular asymmetry is a result of interference between the S- and P-waves. If the decaying hyperon is not polarized, then η = 0 and the decay probability is proportional to 1 + αζ·n . This means that the proton is polarized longitudinally (its spin is directed along its momentum), and that the degree of this polarization is α. As can be readily found from the expression for W(η, ζ, n ), the proton polarization is given by

P = {n(α + η·n) + β[η × n] + γ[n × [η × n]]} / (1 + α η·n).

(The decay probability W( P , ζ) is proportional to 1 + ζ·P and reaches a maximum for ζ || P.)

The nucleon polarization along the normal to the plane containing the vectors η and n is proportional to Im SP* and is non-vanishing only for a non-zero relative phase Δ between the S- and P-wave amplitudes:

α = 2|S||P| cos Δ / (|S|² + |P|²),   β = 2|S||P| sin Δ / (|S|² + |P|²).
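A quick numerical check (with arbitrary illustrative amplitudes, not from the text; the relative phase is taken here as Δ = δ_P − δ_S, the sign convention under which the two expressions for β agree) confirms these formulas and the relation α² + β² + γ² = 1:

import cmath, math
S = 0.8 * cmath.exp(1j * 0.3)          # illustrative S-wave amplitude, phase delta_S = 0.3
P = 0.5 * cmath.exp(1j * 1.1)          # illustrative P-wave amplitude, phase delta_P = 1.1
norm = abs(S)**2 + abs(P)**2
alpha = (S * P.conjugate() + S.conjugate() * P).real / norm    # (SP* + S*P)/(|S|^2 + |P|^2)
beta  = (1j * (S * P.conjugate() - S.conjugate() * P)).real / norm
gamma = (abs(S)**2 - abs(P)**2) / norm
delta = 1.1 - 0.3                      # relative phase, taken as delta_P - delta_S (assumed convention)
print(alpha, 2 * abs(S) * abs(P) * math.cos(delta) / norm)     # both expressions for alpha agree
print(beta,  2 * abs(S) * abs(P) * math.sin(delta) / norm)     # both expressions for beta agree
print(alpha**2 + beta**2 + gamma**2)                           # 1.0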

We show below that Δ can be expressed as a function of pion-proton scattering phases, provided time-reversal invariance holds.

URL: https://www.sciencedirect.com/science/article/pii/B9780444869241500118

Clifford Algebras and Their Representations

A. Trautman , in Encyclopedia of Mathematical Physics, 2006

Pin Groups

It is convenient to define a unit vector v ∈ V ⊂ Cℓ(V, g) to be one such that v² = 1 for V complex and v² = 1 or −1 for V real. The group Pin(V, g) is defined as the subgroup of Cpin(V, g) consisting of products of all finite sequences of unit vectors. Defining now the twisted adjoint representation Ãd by Ãd(a)v = α(a)va⁻¹, one obtains the exact sequence

[27] 1 → Z₂ → Pin(V, g) →^{Ãd} O(V, g) → 1

If dim V is even, then the adjoint representation Ad(a)v = ava⁻¹ also yields an exact sequence like [27]; if it is odd, then the image of Ad is SO(V, g) and the kernel is the four-element group {1, −1, η, −η}.

Given an orthonormal frame (e_μ) in (V, g) and a ∈ Pin(V, g), one defines the orthogonal matrix R(a) = (R^ν_μ(a)) by

[28] Ãd(a) e_μ = e_ν R^ν_μ(a)

If (V, g) is complex, then the algebras Cℓ(V, g) and Cℓ(V, −g) are isomorphic; this induces an isomorphism of the groups Pin(V, g) and Pin(V, −g). If V = ℂ^m, then this group is denoted by Pin_m(ℂ). If V = ℝ^{k+l} and g is of signature (k, l), then one writes Pin(V, g) = Pin_{k,l}. A similar notation is used for the groups Spin; see below.

URL: https://www.sciencedirect.com/science/article/pii/B012512666200016X