The Matrix Cookbook
[ http://matrixcookbook.com ]
Kaare Brandt Petersen
Michael Syskind Pedersen
Version: November 14, 2008

What is this? These pages are a collection of facts (identities, approximations, inequalities, relations, ...) about matrices and matters relating to them. It is collected in this form for the convenience of anyone who wants a quick desktop reference.

Disclaimer: The identities, approximations and relations presented here were obviously not invented but collected, borrowed and copied from a large number of sources. These sources include similar but shorter notes found on the internet and appendices in books - see the references for a full list.

Errors: Very likely there are errors, typos, and mistakes for which we apologize and would be grateful to receive corrections at cookbook@2302.dk.

It's ongoing: The project of keeping a large repository of relations involving matrices is naturally ongoing and the version will be apparent from the date in the header.

Suggestions: Your suggestions for additional content or elaboration of some topics are most welcome at cookbook@2302.dk.

Keywords: Matrix algebra, matrix relations, matrix identities, derivative of determinant, derivative of inverse matrix, differentiate a matrix.

Acknowledgements: We would like to thank the following for contributions and suggestions: Bill Baxter, Brian Templeton, Christian Rishøj, Christian Schröppel, Douglas L. Theobald, Esben Hoegh-Rasmussen, Glynne Casteel, Jan Larsen, Jun Bin Gao, Jürgen Struckmeier, Kamil Dedecius, Korbinian Strimmer, Lars Christiansen, Lars Kai Hansen, Leland Wilkinson, Liguo He, Loic Thibaut, Miguel Barão, Ole Winther, Pavel Sakov, Stephan Hattinger, Vasile Sima, Vincent Rabaud, Zhaoshui He. We would also like to thank The Oticon Foundation for funding our PhD studies.

Contents

1 Basics  5
1.1 Trace and Determinants  5
1.2 The Special Case 2x2  5

2 Derivatives  7
2.1 Derivatives of a Determinant  7
2.2 Derivatives of an Inverse  8
2.3 Derivatives of Eigenvalues  9
2.4 Derivatives of Matrices, Vectors and Scalar Forms  9
2.5 Derivatives of Traces  11
2.6 Derivatives of vector norms  13
2.7 Derivatives of matrix norms  13
2.8 Derivatives of Structured Matrices  14

3 Inverses  16
3.1 Basic  16
3.2 Exact Relations  17
3.3 Implication on Inverses  19
3.4 Approximations  19
3.5 Generalized Inverse  20
3.6 Pseudo Inverse  20

4 Complex Matrices  23
4.1 Complex Derivatives  23
4.2 Higher order and non-linear derivatives  26
4.3 Inverse of complex sum  26

5 Solutions and Decompositions  27
5.1 Solutions to linear equations  27
5.2 Eigenvalues and Eigenvectors  29
5.3 Singular Value Decomposition  30
5.4 Triangular Decomposition  32
5.5 LU decomposition  32
5.6 LDM decomposition  32
5.7 LDL decompositions  32

6 Statistics and Probability  33
6.1 Definition of Moments  33
6.2 Expectation of Linear Combinations  34
6.3 Weighted Scalar Variable  35

7 Multivariate Distributions  36
7.1 Cauchy  36
7.2 Dirichlet  36
7.3 Normal  36
7.4 Normal-Inverse Gamma  36
7.5 Gaussian  36
7.6 Multinomial  36
7.7 Student's t  36
7.8 Wishart  37
7.9 Wishart, Inverse  38

8 Gaussians  39
8.1 Basics  39
8.2 Moments  41
8.3 Miscellaneous  43
8.4 Mixture of Gaussians  44

9 Special Matrices  45
9.1 Block matrices  45
9.2 Discrete Fourier Transform Matrix, The  46
9.3 Hermitian Matrices and skew-Hermitian  47
9.4 Idempotent Matrices  48
9.5 Orthogonal matrices  48
9.6 Positive Definite and Semi-definite Matrices  50
9.7 Singleentry Matrix, The  51
9.8 Symmetric, Skew-symmetric/Antisymmetric  53
9.9 Toeplitz Matrices  54
9.10 Transition matrices  55
9.11 Units, Permutation and Shift  56
9.12 Vandermonde Matrices  57

10 Functions and Operators  58
10.1 Functions and Series  58
10.2 Kronecker and Vec Operator  59
10.3 Vector Norms  61
10.4 Matrix Norms  61
10.5 Rank  62
10.6 Integral Involving Dirac Delta Functions  62
10.7 Miscellaneous  63

A One-dimensional Results  64
A.1 Gaussian  64
A.2 One Dimensional Mixture of Gaussians  65
B Proofs and Details  67
B.1 Misc Proofs  67

Notation and Nomenclature
A, Aij, Ai, Aij, A^n, A^{-1}, A^+, A^{1/2}, (A)ij, Aij, [A]ij, a, ai, ai, a

5.1.6 Over-determined Rectangular
Assume A to be n × m and n > m ("tall") and rank(A) = m, then

    Ax = b  ⇒  x = (A^T A)^{-1} A^T b = A^+ b        (244)

that is if there exists a solution x at all! If there is no solution the following can be useful:

    Ax = b  ⇒  x_min = A^+ b        (245)

Now x_min is the vector x which minimizes ||Ax − b||^2, i.e. the vector which is "least wrong". The matrix A^+ is the pseudo-inverse of A. See [3].

5.1.7 Under-determined Rectangular
Assume A is n × m and n < m ("broad") and rank(A) = n.

    Ax = b  ⇒  x_min = A^T (AA^T)^{-1} b        (246)

The equation has many solutions x. But x_min is the solution which minimizes ||Ax − b||^2 and also the solution with the smallest norm ||x||^2. The same holds for a matrix version: Assume A is n × m, X is m × n and B is n × n, then

    AX = B  ⇒  X_min = A^+ B        (247)

The equation has many solutions X. But X_min is the solution which minimizes ||AX − B||^2 and also the solution with the smallest norm ||X||^2. See [3].

Similar but different: Assume A is square n × n and the matrices B_0, B_1 are n × N, where N > n, then if B_0 has maximal rank

    AB_0 = B_1  ⇒  A_min = B_1 B_0^T (B_0 B_0^T)^{-1}        (248)

where A_min denotes the matrix which is optimal in a least square sense. An interpretation is that A is the linear approximation which maps the column vectors of B_0 into the column vectors of B_1.

5.1.8 Linear form and zeros

    Ax = 0, ∀x  ⇒  A = 0        (249)

5.1.9 Square form and zeros
If A is symmetric, then

    x^T Ax = 0, ∀x  ⇒  A = 0        (250)

5.1.10 The Lyapunov Equation

    AX + XB = C        (251)
    vec(X) = (I ⊗ A + B^T ⊗ I)^{-1} vec(C)        (252)

See Sec 10.2.1 and 10.2.2 for details on the Kronecker product and the vec operator.

5.1.11 Encapsulating Sum

    Σ_n A_n X B_n = C        (253)
    vec(X) = (Σ_n B_n^T ⊗ A_n)^{-1} vec(C)        (254)

See Sec 10.2.1 and 10.2.2 for details on the Kronecker product and the vec operator.

5.2 Eigenvalues and Eigenvectors

5.2.1 Definition
The eigenvectors v_i and eigenvalues λ_i are the ones satisfying

    Av_i = λ_i v_i        (255)
    AV = VD,   (D)_ij = δ_ij λ_i        (256)

where the columns of V are the vectors v_i.

5.2.2 General Properties
Assume that A ∈ R^{n×m} and B ∈ R^{m×n},

    eig(AB) = eig(BA)        (257)
    A is n × m  ⇒  at most min(n, m) distinct λ_i        (258)
    rank(A) = r  ⇒  at most r non-zero λ_i        (259)

5.2.3 Symmetric
Assume A is symmetric, then

    VV^T = I            (i.e. V is orthogonal)        (260)
    λ_i ∈ R             (i.e. λ_i is real)            (261)
    Tr(A^p) = Σ_i λ_i^p        (262)
    eig(I + cA) = 1 + cλ_i        (263)
    eig(A − cI) = λ_i − c         (264)
    eig(A^{-1}) = λ_i^{-1}        (265)

For a symmetric, positive matrix A,

    eig(A^T A) = eig(AA^T) = eig(A) ◦ eig(A)        (266)

5.2.4 Characteristic polynomial
The characteristic polynomial for the matrix A is

    0 = det(A − λI)        (267)
      = λ^n − g_1 λ^{n−1} + g_2 λ^{n−2} − ... + (−1)^n g_n        (268)

Note that the coefficients g_j for j = 1, ..., n are the n invariants under rotation of A. Thus, g_j is the sum of the determinants of all the sub-matrices of A taken j rows and columns at a time. That is, g_1 is the trace of A, and g_2 is the sum of the determinants of the n(n − 1)/2 sub-matrices that can be formed from A by deleting all but two rows and columns, and so on – see [17].
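The solution formulas above lend themselves to quick numerical checks. The following is a minimal NumPy sketch (illustration only; np.linalg.pinv and np.kron are the standard NumPy routines assumed here) verifying (244), (246) and the vec-trick (252) on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Over-determined ("tall") system, eq. (244)/(245): x_min = A^+ b
A = rng.standard_normal((6, 3))          # n > m, full column rank (assumed)
b = rng.standard_normal(6)
x_min = np.linalg.pinv(A) @ b
assert np.allclose(x_min, np.linalg.inv(A.T @ A) @ A.T @ b)

# Under-determined ("broad") system, eq. (246): minimum-norm solution
A = rng.standard_normal((3, 6))          # n < m, full row rank (assumed)
b = rng.standard_normal(3)
x_min = A.T @ np.linalg.inv(A @ A.T) @ b
assert np.allclose(A @ x_min, b)                      # it solves Ax = b exactly ...
assert np.allclose(x_min, np.linalg.pinv(A) @ b)      # ... and equals A^+ b

# Lyapunov-type equation AX + XB = C, eq. (251)-(252), via the vec trick
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
I = np.eye(n)
vecX = np.linalg.solve(np.kron(I, A) + np.kron(B.T, I), C.flatten(order="F"))
X = vecX.reshape((n, n), order="F")      # vec stacks columns, hence order="F"
assert np.allclose(A @ X + X @ B, C)
```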
5.3 Singular Value Decomposition
Any n × m matrix A can be written as

    A = UDV^T        (269)

where

    U = eigenvectors of AA^T         (n × n)
    D = √(diag(eig(AA^T)))           (n × m)
    V = eigenvectors of A^T A        (m × m)        (270)

5.3.1 Symmetric Square decomposed into squares
Assume A to be n × n and symmetric. Then

    A = V D V^T        (271)

where D is diagonal with the eigenvalues of A, and V is orthogonal and the eigenvectors of A.

5.3.2 Square decomposed into squares
Assume A ∈ R^{n×n}. Then

    A = V D U^T        (272)

where D is diagonal with the square root of the eigenvalues of AA^T, V is the eigenvectors of AA^T and U^T is the eigenvectors of A^T A.

5.3.3 Square decomposed into rectangular
Assume V_∗ D_∗ U_∗^T = 0, then we can expand the SVD of A into

    A = [ V  V_∗ ] [ D   0   ] [ U^T   ]
                   [ 0   D_∗ ] [ U_∗^T ]        (273)

where the SVD of A is A = VDU^T.

5.3.4 Rectangular decomposition I
Assume A is n × m, V is n × n, D is n × n, U^T is n × m

    A = V D U^T        (274)

where D is diagonal with the square root of the eigenvalues of AA^T, V is the eigenvectors of AA^T and U^T is the eigenvectors of A^T A.

5.3.5 Rectangular decomposition II
Assume A is n × m, V is n × m, D is m × m, U^T is m × m

    A = V D U^T        (275)

5.3.6 Rectangular decomposition III
Assume A is n × m, V is n × n, D is n × m, U^T is m × m

    A = V D U^T        (276)

where D is diagonal with the square root of the eigenvalues of AA^T, V is the eigenvectors of AA^T and U^T is the eigenvectors of A^T A.

5.4 Triangular Decomposition

5.5 LU decomposition
Assume A is a square matrix with non-zero leading principal minors, then

    A = LU        (277)

where L is a unique unit lower triangular matrix and U is a unique upper triangular matrix.

5.5.1 Cholesky-decomposition
Assume A is a symmetric positive definite square matrix, then

    A = U^T U = LL^T        (278)

where U is a unique upper triangular matrix and L is a unique lower triangular matrix.

5.6 LDM decomposition
Assume A is a square matrix with non-zero leading principal minors¹, then

    A = LDM^T        (279)

where L, M are unique unit lower triangular matrices and D is a unique diagonal matrix.

5.7 LDL decompositions
The LDL decompositions are special cases of the LDM decomposition. Assume A is a non-singular symmetric definite square matrix, then

    A = LDL^T = L^T DL        (280)

where L is a unit lower triangular matrix and D is a diagonal matrix. If A is also positive definite, then D has strictly positive diagonal entries.

¹ If the matrix that corresponds to a principal minor is a quadratic upper-left part of the larger matrix (i.e., it consists of matrix elements in rows and columns from 1 to k), then the principal minor is called a leading principal minor. For an n times n square matrix, there are n leading principal minors.
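As with the solution formulas, the decompositions of this section are easy to confirm numerically. The following is a minimal NumPy sketch (illustration only; np.linalg.svd and np.linalg.cholesky are the assumed routines) checking (269)-(270), (278) and the LDL form (280):

```python
import numpy as np

rng = np.random.default_rng(1)

# SVD of a rectangular matrix, eq. (269): A = U D V^T
A = rng.standard_normal((5, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=True)      # s holds the diagonal of D
D = np.zeros((5, 3))
D[:3, :3] = np.diag(s)
assert np.allclose(A, U @ D @ Vt)
# Singular values are square roots of the eigenvalues of A^T A, eq. (270)
assert np.allclose(np.sort(s**2), np.linalg.eigvalsh(A.T @ A))

# Cholesky decomposition of a symmetric positive definite matrix, eq. (278)
B = rng.standard_normal((4, 4))
S = B @ B.T + 4 * np.eye(4)                          # construct an SPD matrix
L = np.linalg.cholesky(S)                            # lower triangular, S = L L^T
assert np.allclose(S, L @ L.T)

# LDL^T factorization, eq. (280), recovered from the Cholesky factor:
# with d = diag(L)^2 and Lu = L scaled to a unit diagonal, S = Lu D Lu^T
d = np.diag(L) ** 2
Lu = L / np.diag(L)                                  # divide column j by L[j, j]
assert np.allclose(S, Lu @ np.diag(d) @ Lu.T)
```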
[31] Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 32 6 6 STATISTICS AND PROBABILITY Statistics and Probability 6.1 Definition of Moments Assume x ∈ Rn×1 is a random variable 6.1.1 Mean The vector of means, m, is defined by (m)i = hxi i 6.1.2 (281) Covariance The matrix of covariance M is defined by (M)ij = h(xi − hxi i)(xj − hxj i)i (282) M = h(x − m)(x − m)T i (283) or alternatively as 6.1.3 Third moments The matrix of third centralized moments – in some contexts referred to as coskewness – is defined using the notation (3) mijk = h(xi − hxi i)(xj − hxj i)(xk − hxk i)i (284) h i (3) (3) M3 = m::1 m::2 ...m(3) ::n (285) as where ’:’ denotes all elements within the given index. M3 can alternatively be expressed as M3 = h(x − m)(x − m)T ⊗ (x − m)T i (286) 6.1.4 Fourth moments The matrix of fourth centralized moments – in some contexts referred to as cokurtosis – is defined using the notation (4) mijkl = h(xi − hxi i)(xj − hxj i)(xk − hxk i)(xl − hxl i)i (287) as i h (4) (4) (4) (4) (4) (4) (4) (4) M4 = m::11 m::21 ...m::n1 |m::12 m::22 ...m::n2 |...|m::1n m::2n ...m(4) ::nn (288) or alternatively as M4 = h(x − m)(x − m)T ⊗ (x − m)T ⊗ (x − m)T i (289) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 33 6.2 Expectation of Linear Combinations 6 STATISTICS AND PROBABILITY 6.2 Expectation of Linear Combinations 6.2.1 Linear Forms Assume X and x to be a matrix and a vector of random variables. Then (see See [26]) E[AXB + C] = AE[X]B + C Var[Ax] = AVar[x]AT Cov[Ax, By] = ACov[x, y]BT (290) (291) (292) Assume x to be a stochastic vector with mean m, then (see [7]) E[Ax + b] = Am + b E[Ax] = Am E[x + b] = m + b 6.2.2 (293) (294) (295) Quadratic Forms Assume A is symmetric, c = E[x] and Σ = Var[x]. Assume also that all coordinates xi are independent, have the same central moments µ1 , µ2 , µ3 , µ4 and denote a = diag(A). Then (See [26]) E[xT Ax] = Tr(AΣ) + cT Ac (296) T 2 2 T 2 T 2 T Var[x Ax] = 2µ2 Tr(A ) + 4µ2 c A c + 4µ3 c Aa + (µ4 − 3µ2 )a a (297) Also, assume x to be a stochastic vector with mean m, and covariance M. Then (see [7]) E[(Ax + a)(Bx + b)T ] E[xxT ] E[xaT x] E[xT axT ] E[(Ax)(Ax)T ] E[(x + a)(x + a)T ] E[(Ax + a)T (Bx + b)] E[xT x] E[xT Ax] E[(Ax)T (Ax)] E[(x + a)T (x + a)] = = = = = = = = = = = AMBT + (Am + a)(Bm + b)T M + mmT (M + mmT )a aT (M + mmT ) A(M + mmT )AT M + (m + a)(m + a)T (298) (299) (300) (301) (302) (303) Tr(AMBT ) + (Am + a)T (Bm + b) Tr(M) + mT m Tr(AM) + mT Am Tr(AMAT ) + (Am)T (Am) Tr(M) + (m + a)T (m + a) (304) (305) (306) (307) (308) See [7]. Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 34 6.3 Weighted Scalar Variable 6.2.3 6 STATISTICS AND PROBABILITY Cubic Forms Assume x to be a stochastic vector with independent coordinates, mean m, covariance M and central moments v3 = E[(x − m)3 ]. Then (see [7]) E[(Ax + a)(Bx + b)T (Cx + c)] = Adiag(BT C)v3 +Tr(BMCT )(Am + a) +AMCT (Bm + b) +(AMBT + (Am + a)(Bm + b)T )(Cm + c) E[xxT x] = v3 + 2Mm + (Tr(M) + mT m)m E[(Ax + a)(Ax + a)T (Ax + a)] = Adiag(AT A)v3 +[2AMAT + (Ax + a)(Ax + a)T ](Am + a) +Tr(AMAT )(Am + a) E[(Ax + a)bT (Cx + c)(Dx + d)T ] = (Ax + a)bT (CMDT + (Cm + c)(Dm + d)T ) +(AMCT + (Am + a)(Cm + c)T )b(Dm + d)T +bT (Cm + c)(AMDT − (Am + a)(Dm + d)T ) 6.3 Weighted Scalar Variable Assume x ∈ Rn×1 is a random variable, w ∈ Rn×1 is a vector of constants and y is the linear combination y = wT x. Assume further that m, M2 , M3 , M4 denotes the mean, covariance, and central third and fourth moment matrix of the variable x. 
Then it holds that hyi h(y − hyi)2 i h(y − hyi)3 i h(y − hyi)4 i = = = = wT m wT M2 w wT M3 w ⊗ w wT M4 w ⊗ w ⊗ w (309) (310) (311) (312) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 35 7 7 MULTIVARIATE DISTRIBUTIONS Multivariate Distributions 7.1 Cauchy The density function for a Cauchy distributed vector t ∈ RP ×1 , is given by p(t|µ, Σ) = π −P/2 Γ( 1+P det(Σ)−1/2 2 ) Γ(1/2) 1 + (t − µ)T Σ−1 (t − µ)(1+P )/2 (313) where µ is the location, Σ is positive definite, and Γ denotes the gamma function. The Cauchy distribution is a special case of the Student-t distribution. 7.2 Dirichlet The Dirichlet distribution is a kind of “inverse” distribution compared to the multinomial distribution on the bounded continuous variate x = [x1 , . . . , xP ] [16, p. 44] P  P P Γ α p Y p p −1 xα p(x|α) = QP p Γ(α ) p p p 7.3 Normal The normal distribution is also known as a Gaussian distribution. See sec. 8. 7.4 Normal-Inverse Gamma 7.5 Gaussian See sec. 8. 7.6 Multinomial If the vector n contains counts, i.e. (n)i ∈ 0, 1, 2, ..., then the discrete multinomial disitrbution for n is given by d P (n|a, n) = where ai are probabilities, i.e. 0 ≤ ai ≤ 1 and 7.7 d X Y n! ani , n1 ! . . . n d ! i i ni = n (314) i P i ai = 1. Student’s t The density of a Student-t distributed vector t ∈ RP ×1 , is given by p(t|µ, Σ, ν) = (πν)−P/2 Γ( ν+P det(Σ)−1/2 2 )  Γ(ν/2) 1 + ν −1 (t − µ)T Σ−1 (t − µ)(ν+P )/2 (315) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 36 7.8 Wishart 7 MULTIVARIATE DISTRIBUTIONS where µ is the location, the scale matrix Σ is symmetric, positive definite, ν is the degrees of freedom, and Γ denotes the gamma function. For ν = 1, the Student-t distribution becomes the Cauchy distribution (see sec 7.1). 7.7.1 Mean E(t) = µ, 7.7.2 (316) Variance cov(t) = 7.7.3 ν>1 ν Σ, ν−2 ν>2 (317) Mode The notion mode meaning the position of the most probable value mode(t) = µ 7.7.4 (318) Full Matrix Version If instead of a vector t ∈ RP ×1 one has a matrix T ∈ RP ×N , then the Student-t distribution for T is p(T|M, Ω, Σ, ν) = π −N P/2 P Y Γ [(ν + P − p + 1)/2] × Γ [(ν − p + 1)/2] p=1 ν det(Ω)−ν/2 det(Σ)−N/2 ×  −(ν+P )/2 det Ω−1 + (T − M)Σ−1 (T − M)T (319) where M is the location, Ω is the rescaling matrix, Σ is positive definite, ν is the degrees of freedom, and Γ denotes the gamma function. 
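The Student-t moments (316) and (317) can be verified by simulation, using the standard construction of a Student-t vector as a Gaussian draw divided by the square root of an independent scaled chi-square variable. The sketch below is a rough Monte Carlo illustration only, assuming NumPy's random generator:

```python
import numpy as np

rng = np.random.default_rng(2)

P, nu, N = 3, 7.0, 200_000
mu = np.array([1.0, -2.0, 0.5])
A = rng.standard_normal((P, P))
Sigma = A @ A.T + P * np.eye(P)              # a positive definite scale matrix

# t = mu + z / sqrt(g), with z ~ N(0, Sigma) and g ~ chi^2_nu / nu,
# gives t distributed as Student-t(mu, Sigma, nu)
z = rng.multivariate_normal(np.zeros(P), Sigma, size=N)
g = rng.chisquare(nu, size=N) / nu
t = mu + z / np.sqrt(g)[:, None]

# Mean, eq. (316):       E(t) = mu                       (nu > 1)
print(np.round(t.mean(axis=0), 2), mu)
# Covariance, eq. (317): cov(t) = nu/(nu-2) * Sigma      (nu > 2)
print(np.round(np.cov(t, rowvar=False), 1))
print(np.round(nu / (nu - 2) * Sigma, 1))
```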
7.8 Wishart The central Wishart distribution for M ∈ RP ×P , M is positive definite, where m can be regarded as a degree of freedom parameter [16, equation 3.8.1] [8, section 2.5],[11] p(M|Σ, m) = 2mP/2 π P (P −1)/4 1 QP p Γ[ 12 (m + 1 − p)] det(Σ)−m/2 det(M)(m−P −1)/2 ×   1 −1 exp − Tr(Σ M) 2 7.8.1 × (320) Mean E(M) = mΣ (321) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 37 7.9 Wishart, Inverse 7.9 7 MULTIVARIATE DISTRIBUTIONS Wishart, Inverse The (normal) Inverse Wishart distribution for M ∈ RP ×P , M is positive definite, where m can be regarded as a degree of freedom parameter [11] p(M|Σ, m) = 2mP/2 π P (P −1)/4 1 QP p Γ[ 12 (m + 1 − p)] det(Σ)m/2 det(M)−(m−P −1)/2 ×   1 exp − Tr(ΣM−1 ) 2 7.9.1 × (322) Mean E(M) = Σ 1 m−P −1 (323) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 38 8 8 8.1 8.1.1 GAUSSIANS Gaussians Basics Density and normalization The density of x ∼ N (m, Σ) is p(x) = p   1 exp − (x − m)T Σ−1 (x − m) 2 det(2πΣ) 1 Note that if x is d-dimensional, then Integration and normalization   Z 1 T −1 exp − (x − m) Σ (x − m) dx 2   Z 1 T −1 T −1 exp − x Σ x + m Σ x dx 2   Z 1 T T exp − x Ax + c x dx 2 (324) det(2πΣ) = (2π)d det(Σ). = p det(2πΣ)   1 T −1 = det(2πΣ) exp m Σ m 2   p 1 T −T −1 = det(2πA ) exp c A c 2 p If X = [x1 x2 ...xn ] and C = [c1 c2 ...cn ], then     Z p n 1 1 T T T −1 −1 exp − Tr(X AX) + Tr(C X) dX = det(2πA ) exp Tr(C A C) 2 2 The derivatives of the density are ∂p(x) ∂x ∂2p ∂x∂xT 8.1.2 = −p(x)Σ−1 (x − m)   = p(x) Σ−1 (x − m)(x − m)T Σ−1 − Σ−1 (325) (326) Marginal Distribution Assume x ∼ Nx (µ, Σ) where     xa µa x= µ= xb µb  Σ= Σa ΣTc Σc Σb  (327) then p(xa ) p(xb ) = Nxa (µa , Σa ) = Nxb (µb , Σb ) (328) (329) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 39 8.1 Basics 8.1.3 8 GAUSSIANS Conditional Distribution Assume x ∼ Nx (µ, Σ) where     xa µa x= µ= xb µb  Σ= Σa ΣTc Σc Σb  (330) then p(xa |xb ) = Nxa (µ̂a , Σ̂a ) p(xb |xa ) = Nxb (µ̂b , Σ̂b ) n µ̂ = a Σ̂a = n µ̂ = b Σ̂b = µa + Σc Σ−1 b (xb − µb ) (331) T Σa − Σc Σ−1 b Σc µb + ΣTc Σ−1 a (xa − µa ) (332) Σb − ΣTc Σ−1 a Σc Note, that the covariance matrices are the Schur complement of the block matrix, see 9.1.5 for details. 
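For the partitioning in (330), the conditional moments (331) can be computed directly and cross-checked against the Schur-complement property just mentioned. A minimal NumPy sketch (illustrative only, with a randomly generated positive definite covariance):

```python
import numpy as np

rng = np.random.default_rng(3)

# Joint Gaussian over x = [x_a; x_b] with the partitioning of eq. (330)
na, nb = 2, 3
n = na + nb
B = rng.standard_normal((n, n))
Sigma = B @ B.T + n * np.eye(n)                 # random SPD covariance
mu = rng.standard_normal(n)
Sa, Sc, Sb = Sigma[:na, :na], Sigma[:na, na:], Sigma[na:, na:]
mua, mub = mu[:na], mu[na:]

xb = rng.standard_normal(nb)                    # an observed value of x_b

# Conditional moments, eq. (331)
mu_a_given_b = mua + Sc @ np.linalg.solve(Sb, xb - mub)
Sigma_a_given_b = Sa - Sc @ np.linalg.solve(Sb, Sc.T)

# Cross-check: the conditional covariance is the inverse of the (a,a) block
# of the precision matrix, i.e. the Schur complement mentioned in the text
P = np.linalg.inv(Sigma)
assert np.allclose(Sigma_a_given_b, np.linalg.inv(P[:na, :na]))
print(mu_a_given_b, Sigma_a_given_b)
```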
8.1.4 Linear combination Assume x ∼ N (mx , Σx ) and y ∼ N (my , Σy ) then Ax + By + c ∼ N (Amx + Bmy + c, AΣx AT + BΣy BT ) Rearranging Means p det(2π(AT Σ−1 A)−1 ) p Nx [A−1 m, (AT Σ−1 A)−1 ] NAx [m, Σ] = det(2πΣ) (333) 8.1.5 8.1.6 (334) Rearranging into squared form If A is symmetric, then 1 1 1 − xT Ax + bT x = − (x − A−1 b)T A(x − A−1 b) + bT A−1 b 2 2 2 1 1 1 T T −1 T −1 − Tr(X AX) + Tr(B X) = − Tr[(X − A B) A(X − A B)] + Tr(BT A−1 B) 2 2 2 8.1.7 Sum of two squared forms In vector formulation (assuming Σ1 , Σ2 are symmetric) 1 − (x − m1 )T Σ−1 1 (x − m1 ) 2 1 − (x − m2 )T Σ−1 2 (x − m2 ) 2 1 = − (x − mc )T Σ−1 c (x − mc ) + C 2 (335) (336) (337) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 40 8.2 Moments Σ−1 c mc C 8 GAUSSIANS −1 = Σ−1 (338) 1 + Σ2 −1 −1 −1 −1 −1 = (Σ1 + Σ2 ) (Σ1 m1 + Σ2 m2 ) (339) 1 T −1 −1 −1 −1 −1 (m Σ + mT2 Σ−1 (Σ−1 = 2 )(Σ1 + Σ2 ) 1 m1 + Σ2 m2 )(340) 2 1 1  1 T −1 m + m Σ m (341) − mT1 Σ−1 1 2 2 1 2 2 In a trace formulation (assuming Σ1 , Σ2 are symmetric) 1 − Tr((X − M1 )T Σ−1 1 (X − M1 )) 2 1 − Tr((X − M2 )T Σ−1 2 (X − M2 )) 2 1 = − Tr[(X − Mc )T Σ−1 c (X − Mc )] + C 2 Σ−1 c Mc C 8.1.8 (342) (343) (344) −1 = Σ−1 (345) 1 + Σ2 −1 −1 −1 −1 −1 = (Σ1 + Σ2 ) (Σ1 M1 + Σ2 M2 ) (346) i 1 h −1 −1 −1 −1 −1 T = Tr (Σ1 M1 + Σ−1 (Σ−1 2 M2 ) (Σ1 + Σ2 ) 1 M1 + Σ2 M2 ) 2 1 T −1 − Tr(MT1 Σ−1 (347) 1 M1 + M2 Σ2 M2 ) 2 Product of gaussian densities Let Nx (m, Σ) denote a density of x, then Nx (m1 , Σ1 ) · Nx (m2 , Σ2 ) = cc Nx (mc , Σc ) cc mc Σc (348) = Nm1 (m2 , (Σ1 + Σ2 ))   1 1 = p exp − (m1 − m2 )T (Σ1 + Σ2 )−1 (m1 − m2 ) 2 det(2π(Σ1 + Σ2 )) −1 −1 −1 = (Σ−1 (Σ−1 1 + Σ2 ) 1 m1 + Σ2 m2 ) −1 −1 = (Σ−1 1 + Σ2 ) but note that the product is not normalized as a density of x. 8.2 8.2.1 Moments Mean and covariance of linear forms First and second moments. Assume x ∼ N (m, Σ) E(x) = m (349) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 41 8.2 Moments 8 GAUSSIANS Cov(x, x) = Var(x) = Σ = E(xxT ) − E(x)E(xT ) = E(xxT ) − mmT (350) As for any other distribution is holds for gaussians that E[Ax] = AE[x] Var[Ax] = AVar[x]AT Cov[Ax, By] = ACov[x, y]BT 8.2.2 (351) (352) (353) Mean and variance of square forms Mean and variance of square forms: Assume x ∼ N (m, Σ) E(xxT ) = Σ + mmT (354) E[xT Ax] = Tr(AΣ) + mT Am (355) Var(xT Ax) = Tr[AΣ(A + AT )Σ] + ... 
+mT (A + AT )Σ(A + AT )m (356) 0 T 0 0 T 0 E[(x − m ) A(x − m )] = (m − m ) A(m − m ) + Tr(AΣ) (357) If Σ = σ 2 I and A is symmetric, then Var(xT Ax) = 2σ 4 Tr(A2 ) + 4σ 2 mT A2 m (358) Assume x ∼ N (0, σ 2 I) and A and B to be symmetric, then Cov(xT Ax, xT Bx) = 2σ 4 Tr(AB) 8.2.3 Cubic forms E[xbT xxT ] 8.2.4 (359) = mbT (M + mmT ) + (M + mmT )bmT +bT m(M − mmT ) (360) Mean of Quartic Forms E[xxT xxT ] = 2(Σ + mmT )2 + mT m(Σ − mmT ) +Tr(Σ)(Σ + mmT ) E[xxT AxxT ] = (Σ + mmT )(A + AT )(Σ + mmT ) +mT Am(Σ − mmT ) + Tr[AΣ](Σ + mmT ) T T E[x xx x] = 2Tr(Σ2 ) + 4mT Σm + (Tr(Σ) + mT m)2 E[xT AxxT Bx] = Tr[AΣ(B + BT )Σ] + mT (A + AT )Σ(B + BT )m +(Tr(AΣ) + mT Am)(Tr(BΣ) + mT Bm) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 42 8.3 Miscellaneous 8 GAUSSIANS E[aT xbT xcT xdT x] = (aT (Σ + mmT )b)(cT (Σ + mmT )d) +(aT (Σ + mmT )c)(bT (Σ + mmT )d) +(aT (Σ + mmT )d)(bT (Σ + mmT )c) − 2aT mbT mcT mdT m E[(Ax + a)(Bx + b)T (Cx + c)(Dx + d)T ] = [AΣBT + (Am + a)(Bm + b)T ][CΣDT + (Cm + c)(Dm + d)T ] +[AΣCT + (Am + a)(Cm + c)T ][BΣDT + (Bm + b)(Dm + d)T ] +(Bm + b)T (Cm + c)[AΣDT − (Am + a)(Dm + d)T ] +Tr(BΣCT )[AΣDT + (Am + a)(Dm + d)T ] E[(Ax + a)T (Bx + b)(Cx + c)T (Dx + d)] = Tr[AΣ(CT D + DT C)ΣBT ] +[(Am + a)T B + (Bm + b)T A]Σ[CT (Dm + d) + DT (Cm + c)] +[Tr(AΣBT ) + (Am + a)T (Bm + b)][Tr(CΣDT ) + (Cm + c)T (Dm + d)] See [7]. 8.2.5 Moments E[x] X = ρk m k (361) k Cov(x) XX = k 8.3 8.3.1 ρk ρk0 (Σk + mk mTk − mk mTk0 ) (362) k0 Miscellaneous Whitening Assume x ∼ N (m, Σ) then z = Σ−1/2 (x − m) ∼ N (0, I) (363) Conversely having z ∼ N (0, I) one can generate data x ∼ N (m, Σ) by setting x = Σ1/2 z + m ∼ N (m, Σ) 1/2 1/2 Note that Σ means the matrix which fulfils Σ and is unique since Σ is positive definite. 8.3.2 1/2 Σ (364) = Σ, and that it exists The Chi-Square connection Assume x ∼ N (m, Σ) and x to be n dimensional, then z = (x − m)T Σ−1 (x − m) ∼ χ2n (365) where χ2n denotes the Chi square distribution with n degrees of freedom. Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 43 8.4 Mixture of Gaussians 8.3.3 8 GAUSSIANS Entropy Entropy of a D-dimensional gaussian Z p D H(x) = − N (m, Σ) ln N (m, Σ)dx = ln det(2πΣ) + 2 8.4 8.4.1 (366) Mixture of Gaussians Density The variable x is distributed as a mixture of gaussians if it has the density p(x) = K X k=1   1 exp − (x − mk )T Σ−1 (x − m ) k k 2 det(2πΣk ) ρk p 1 (367) where ρk sum to 1 and the Σk all are positive definite. 8.4.2 Derivatives P Defining p(s) = k ρk Ns (µk , Σk ) one get ∂ ln p(s) ∂ρj = = ∂ ln p(s) ∂µj = = ∂ ln p(s) ∂Σj = = ρj Ns (µj , Σj ) ∂ P ln[ρj Ns (µj , Σj )] (368) ∂ρ ρ N (µ , Σ ) j k s k k k ρj Ns (µj , Σj ) 1 P (369) k ρk Ns (µk , Σk ) ρj ρj Ns (µj , Σj ) ∂ P ln[ρj Ns (µj , Σj )] (370) k ρk Ns (µk , Σk ) ∂µj  ρj Ns (µj , Σj )  −1 P Σj (s − µj ) (371) k ρk Ns (µk , Σk ) ρj Ns (µj , Σj ) ∂ P ln[ρj Ns (µj , Σj )] (372) ρ N (µ , Σ ) ∂Σ k j k k k s  ρj Ns (µj , Σj ) 1  −1 T −1 P −Σ−1 j + Σj (s − µj )(s − µj ) Σj (373) 2 ρ N (µ , Σ ) k k k k s But ρk and Σk needs to be constrained. Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 44 9 9 9.1 SPECIAL MATRICES Special Matrices Block matrices Let Aij denote the ijth block of A. 
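The block rules collected in the subsections below (multiplication, determinant, inverse and the Schur complement) can be spot-checked numerically. A minimal NumPy sketch, assuming np.block for assembling the partitioned matrix:

```python
import numpy as np

rng = np.random.default_rng(4)

n1, n2 = 2, 3
A11, A12 = rng.standard_normal((n1, n1)), rng.standard_normal((n1, n2))
A21, A22 = rng.standard_normal((n2, n1)), rng.standard_normal((n2, n2))
A = np.block([[A11, A12], [A21, A22]])

# Block multiplication rule (9.1.1): top-left block of A @ A
AA = A @ A
assert np.allclose(AA[:n1, :n1], A11 @ A11 + A12 @ A21)

# Determinant via the Schur complement C1 = A11 - A12 A22^{-1} A21 (9.1.2)
C1 = A11 - A12 @ np.linalg.solve(A22, A21)
assert np.isclose(np.linalg.det(A), np.linalg.det(A22) * np.linalg.det(C1))

# Top-left block of the inverse equals C1^{-1} (9.1.3)
assert np.allclose(np.linalg.inv(A)[:n1, :n1], np.linalg.inv(C1))
```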
9.1.1 Multiplication Assuming the dimensions of the blocks matches we have      A11 A12 B11 B12 A11 B11 + A12 B21 A11 B12 + A12 B22 = A21 A22 B21 B22 A21 B11 + A22 B21 A21 B12 + A22 B22 9.1.2 The Determinant The determinant can be expressed as by the use of = A11 − A12 A−1 22 A21 = A22 − A21 A−1 11 A12 C1 C2 as  det 9.1.3  A12 A22 A11 A21 (374) (375) = det(A22 ) · det(C1 ) = det(A11 ) · det(C2 ) The Inverse The inverse can be expressed as by the use of = A11 − A12 A−1 22 A21 = A22 − A21 A−1 11 A12 C1 C2 as   = 9.1.4 A12 A22 A11 A21 −1  = C−1 1 −1 −C2 A21 A−1 11 −1 −1 −1 A−1 11 + A11 A12 C2 A21 A11 −1 −1 −A22 A21 C1 (376) (377) −1 −A−1 11 A12 C2 −1 C2  −1 −C−1 1 A12 A22 −1 −1 −1 A22 + A22 A21 C1 A12 A−1 22  Block diagonal For block diagonal matrices we have   det A11 0 A11 0 0 A22 0 A22 −1  = (A11 )−1 0 0 (A22 )−1  (378)  = det(A11 ) · det(A22 ) (379) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 45 9.2 Discrete Fourier Transform Matrix, The 9.1.5 9 SPECIAL MATRICES Schur complement The Schur complement of the matrix  A11 A21 A12 A22  is the matrix A11 − A12 A−1 22 A21 that is, what is denoted C2 above. Using the Schur complement, one can rewrite the inverse of a block matrix −1  A11 A12 A21 A22     −1 I 0 0 (A11 − A12 A−1 I −A12 A−1 22 A21 ) 22 = −A−1 I 0 I 0 A−1 22 A21 22 The Schur complement is useful when solving linear systems of the form      A11 A12 x1 b1 = A21 A22 x2 b2 which has the following equation for x1 −1 (A11 − A12 A−1 22 A21 )x1 = b1 − A12 A22 b2 When the appropriate inverses exists, this can be solved for x1 which can then be inserted in the equation for x2 to solve for x2 . 9.2 Discrete Fourier Transform Matrix, The The DFT matrix is an N × N symmetric matrix WN , where the k, nth element is given by −j2πkn WNkn = e N (380) Thus the discrete Fourier transform (DFT) can be expressed as X(k) = N −1 X x(n)WNkn . (381) n=0 Likewise the inverse discrete Fourier transform (IDFT) can be expressed as x(n) = N −1 1 X X(k)WN−kn . N (382) k=0 The DFT of the vector x = [x(0), x(1), · · · , x(N − 1)]T can be written in matrix form as X = WN x, (383) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 46 9.3 Hermitian Matrices and skew-Hermitian 9 SPECIAL MATRICES where X = [X(0), X(1), · · · , x(N − 1)]T . The IDFT is similarly given as x = W−1 N X. (384) Some properties of WN exist: W−1 N WN W∗N W∗N If WN = e −j2π N 1 W∗ N N = NI = WH N = (385) (386) (387) , then [23] m+N/2 WN = −WNm (388) Notice, the DFT matrix is a Vandermonde Matrix. The following important relation between the circulant matrix and the discrete Fourier transform (DFT) exists TC = W−1 N (I ◦ (WN t))WN , (389) T where t = [t0 , t1 , · · · , tn−1 ] is the first row of TC . 9.3 Hermitian Matrices and skew-Hermitian A matrix A ∈ Cm×n is called Hermitian if AH = A For real valued matrices, Hermitian and symmetric matrices are equivalent. A is Hermitian ⇔ xH Ax ∈ R, A is Hermitian ⇔ eig(A) ∈ R ∀x ∈ Cn×1 (390) (391) Note that A = B + iC where B, C are hermitian, then B= 9.3.1 A + AH , 2 C= A − AH 2i Skew-Hermitian A matrix A is called skew-hermitian if A = −AH For real valued matrices, skew-Hermitian and skew-symmetric matrices are equivalent. 
A Hermitian ⇔ iA is skew-hermitian A skew-Hermitian ⇔ xH Ay = −xH AH y, ∀x, y A skew-Hermitian ⇒ eig(A) = iλ, λ ∈ R (392) (393) (394) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 47 9.4 Idempotent Matrices 9.4 9 SPECIAL MATRICES Idempotent Matrices A matrix A is idempotent if AA = A Idempotent matrices A and B, have the following properties An = A, forn = 1, 2, 3, ... I−A is idempotent AH is idempotent H I−A is idempotent If AB = BA ⇒ AB is idempotent rank(A) = Tr(A) A(I − A) = 0 (I − A)A = 0 A+ = A f (sI + tA) = (I − A)f (s) + Af (s + t) (395) (396) (397) (398) (399) (400) (401) (402) (403) (404) Note that A − I is not necessarily idempotent. 9.4.1 Nilpotent A matrix A is nilpotent if A2 = 0 A nilpotent matrix has the following property: f (sI + tA) 9.4.2 = If (s) + tAf 0 (s) (405) Unipotent A matrix A is unipotent if AA = I A unipotent matrix has the following property: f (sI + tA) 9.5 = [(I + A)f (s + t) + (I − A)f (s − t)]/2 (406) Orthogonal matrices If a square matrix Q is orthogonal, if and only if, QT Q = QQT = I and then Q has the following properties • Its eigenvalues are placed on the unit circle. Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 48 9.5 Orthogonal matrices 9 SPECIAL MATRICES • Its eigenvectors are unitary, i.e. have length one. • The inverse of an orthogonal matrix is orthogonal too. Basic properties for the orthogonal matrix Q Q−1 Q−T QQT QT Q det(Q) 9.5.1 = = = = = QT Q I I ±1 Ortho-Sym A matrix Q+ which simultaneously is orthogonal and symmetric is called an ortho-sym matrix [20]. Hereby QT+ Q+ Q+ = I = QT+ (407) (408) The powers of an ortho-sym matrix are given by the following rule Qk+ = = 9.5.2 1 + (−1)k 1 + (−1)k+1 I+ Q+ 2 2 1 + cos(kπ) 1 − cos(kπ) I+ Q+ 2 2 (409) (410) Ortho-Skew A matrix which simultaneously is orthogonal and antisymmetric is called an ortho-skew matrix [20]. Hereby QH − Q− Q− = I = −QH − (411) (412) The powers of an ortho-skew matrix are given by the following rule Qk− = = 9.5.3 ik + (−i)k ik − (−i)k I−i Q− 2 2 π π cos(k )I + sin(k )Q− 2 2 (413) (414) Decomposition A square matrix A can always be written as a sum of a symmetric A+ and an antisymmetric matrix A− A = A + + A− (415) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 49 9.6 Positive Definite and Semi-definite Matrices 9.6 9.6.1 9 SPECIAL MATRICES Positive Definite and Semi-definite Matrices Definitions A matrix A is positive definite if and only if xT Ax > 0, ∀x 6= 0 (416) A matrix A is positive semi-definite if and only if xT Ax ≥ 0, ∀x (417) Note that if A is positive definite, then A is also positive semi-definite. 9.6.2 Eigenvalues The following holds with respect to the eigenvalues: A pos. def. A pos. semi-def. 9.6.3 H ⇔ eig( A+A )>0 2 A+AH ⇔ eig( 2 ) ≥ 0 (418) Trace The following holds with respect to the trace: A pos. def. A pos. semi-def. 9.6.4 ⇒ Tr(A) > 0 ⇒ Tr(A) ≥ 0 (419) Inverse If A is positive definite, then A is invertible and A−1 is also positive definite. 9.6.5 Diagonal If A is positive definite, then Aii > 0, ∀i 9.6.6 Decomposition I The matrix A is positive semi-definite of rank r ⇔ there exists a matrix B of rank r such that A = BBT The matrix A is positive definite ⇔ there exists an invertible matrix B such that A = BBT 9.6.7 Decomposition II Assume A is an n × n positive semi-definite, then there exists an n × r matrix B of rank r such that BT AB = I. 
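Several of the definiteness properties above (the eigenvalue criterion (418), decomposition I, the inverse and diagonal properties, and the rank-of-product rule that follows) are easy to check numerically. A minimal NumPy sketch, illustrative only:

```python
import numpy as np

rng = np.random.default_rng(5)

def is_pos_def(A, tol=1e-12):
    """Check positive definiteness via the eigenvalues of the Hermitian
    part (A + A^H)/2, as in eq. (418)."""
    sym = (A + A.conj().T) / 2
    return bool(np.all(np.linalg.eigvalsh(sym) > tol))

# Decomposition I (9.6.6): B invertible  =>  A = B B^T is positive definite
B = rng.standard_normal((4, 4))
A = B @ B.T
print(is_pos_def(A))                     # True (B is almost surely invertible)

# If A is positive definite, so is A^{-1} (9.6.4), and diag(A) > 0 (9.6.5)
print(is_pos_def(np.linalg.inv(A)), bool(np.all(np.diag(A) > 0)))

# rank(B A B^T) = rank(B) for positive definite A (9.6.9)
Br = rng.standard_normal((2, 4))
print(np.linalg.matrix_rank(Br @ A @ Br.T) == np.linalg.matrix_rank(Br))
```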
Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 50 9.7 Singleentry Matrix, The 9.6.8 9 Equation with zeros Assume A is positive semi-definite, then XT AX = 0 9.6.9 SPECIAL MATRICES ⇒ AX = 0 Rank of product Assume A is positive definite, then rank(BABT ) = rank(B) 9.6.10 Positive definite property If A is n × n positive definite and B is r × n of rank r, then BABT is positive definite. 9.6.11 Outer Product If X is n × r, where n ≤ r and rank(X) = n, then XXT is positive definite. 9.6.12 Small pertubations If A is positive definite and B is symmetric, then A − tB is positive definite for sufficiently small t. 9.6.13 Hadamard inequality If A is a positive definite or semi-definite matrix, then Y det(A) ≤ Aii i See [15, pp.477] 9.6.14 Hadamard product relation Assume that P = AAT and Q = BBT are semi positive definite matrices, it then holds that P ◦ Q = RRT where the columns of R are constructed as follows: ri+(j−1)NA = ai ◦ bj , for i = 1, 2, ..., NA and j = 1, 2, ..., NB . The result is unpublished, but reported by Pavel Sakov and Craig Bishop. 9.7 9.7.1 Singleentry Matrix, The Definition The single-entry matrix Jij ∈ Rn×n is defined as the matrix which is zero everywhere except in the entry (i, j) in which it is 1. In a 4 × 4 example one Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 51 9.7 Singleentry Matrix, The 9 SPECIAL MATRICES might have  J23 0  0 =  0 0 0 0 0 0 0 1 0 0  0 0   0  0 (420) The single-entry matrix is very useful when working with derivatives of expressions involving matrices. 9.7.2 Swap and Zeros Assume A to be n × m and Jij to be m × p  AJij = 0 0 . . . Ai ... 0  (421) i.e. an n × p matrix of zeros with the i.th column of A in place of the j.th column. Assume A to be n × m and Jij to be p × n   0  ..   .     0     (422) Jij A =   Aj   0     .   ..  0 i.e. an p × m matrix of zeros with the j.th row of A in the placed of the i.th row. 9.7.3 Rewriting product of elements Aki Bjl = (Aei eTj B)kl = (AJij B)kl (423) Aik Blj = (AT ei eTj BT )kl = (AT Jij BT )kl (424) Aik Bjl = Aki Blj = 9.7.4 T (A ei eTj B)kl (Aei eTj BT )kl = = T ij (A J B)kl ij T (AJ B )kl (425) (426) Properties of the Singleentry Matrix If i = j Jij Jij = Jij Jij (Jij )T = Jij (Jij )T (Jij )T = Jij (Jij )T Jij = Jij If i 6= j Jij Jij = 0 Jij (Jij )T = Jii (Jij )T (Jij )T = 0 (Jij )T Jij = Jjj Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 52 9.8 Symmetric, Skew-symmetric/Antisymmetric 9.7.5 9 SPECIAL MATRICES The Singleentry Matrix in Scalar Expressions Assume A is n × m and J is m × n, then Tr(AJij ) = Tr(Jij A) = (AT )ij (427) Assume A is n × n, J is n × m and B is m × n, then Tr(AJij B) = (AT BT )ij Tr(AJji B) = (BA)ij Tr(AJij Jij B) = diag(AT BT )ij (428) (429) (430) Assume A is n × n, Jij is n × m B is m × n, then xT AJij Bx = x AJij Jij Bx = T 9.7.6 (AT xxT BT )ij diag(AT xxT BT )ij (431) (432) Structure Matrices The structure matrix is defined by ∂A = Sij ∂Aij (433) Sij = Jij (434) Sij = Jij + Jji − Jij Jij (435) If A has no special structure then If A is symmetric then 9.8 9.8.1 Symmetric, Skew-symmetric/Antisymmetric Symmetric The matrix A is said to be symmetric if A = AT (436) Symmetric matrices have many important properties, e.g. that their eigenvalues are real and eigenvectors orthogonal. 9.8.2 Skew-symmetric/Antisymmetric The antisymmetric matrix is also known as the skew symmetric matrix. 
It has the following property from which it is defined A = −AT (437) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 53 9.9 Toeplitz Matrices 9 SPECIAL MATRICES Hereby, it can be seen that the antisymmetric matrices always have a zero diagonal. The n × n antisymmetric matrices also have the following properties. det(AT ) − det(A) = = det(−A) = (−1)n det(A) det(−A) = 0, if n is odd (438) (439) The eigenvalues of an antisymmetric matrix are placed on the imaginary axis and the eigenvectors are unitary. 9.8.3 Decomposition A square matrix A can always be written as a sum of a symmetric A+ and an antisymmetric matrix A− A = A+ + A− (440) Such a decomposition could e.g. be A= 9.9 A − AT A + AT + = A+ + A− 2 2 (441) Toeplitz Matrices A Toeplitz matrix T is a matrix where the elements of each diagonal is the same. In the n × n square case, it has the following structure:     t0 t1 · · · tn−1 t11 t12 · · · t1n  ..  ..   .. ..  t−1  t21 . . . . . . . . .  .      (442) T= .  = . . . . . .. .. .. .. t   ..  .. t1  12 t−(n−1) · · · t−1 t0 tn1 · · · t21 t11 A Toeplitz matrix is persymmetric. If a matrix is persymmetric (or orthosymmetric), it means that the matrix is symmetric about its northeast-southwest diagonal (anti-diagonal) [12]. Persymmetric matrices is a larger class of matrices, since a persymmetric matrix not necessarily has a Toeplitz structure. There are some special cases of Toeplitz matrices. The symmetric Toeplitz matrix is given by:   t0 t1 · · · tn−1  ..  .. ..  . . t1 .    T= (443)  .. . . . .  . . . t1  t−(n−1) · · · t1 t0 The circular Toeplitz matrix:  t0   TC =    tn .. . t1 t1 .. . .. ··· .. . .. . . · · · tn−1  tn−1 ..  .    t1  t0 (444) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 54 9.10 Transition matrices 9 The upper triangular Toeplitz matrix:  t0 t1 · · · tn−1  ..  0 ... ... .  TU =  . . . .. ..  .. t1 0 ··· 0 t0 and the lower triangular Toeplitz matrix:  t0 0 ···  . .. ...  t−1  TL =  .. .. ..  . . . t−(n−1) · · · t−1 9.9.1 SPECIAL MATRICES    ,   (445)  0 ..  .    0  t0 (446) Properties of Toeplitz Matrices The Toeplitz matrix has some computational advantages. The addition of two Toeplitz matrices can be done with O(n) flops, multiplication of two Toeplitz matrices can be done in O(n ln n) flops. Toeplitz equation systems can be solved in O(n2 ) flops. The inverse of a positive definite Toeplitz matrix can be found in O(n2 ) flops too. The inverse of a Toeplitz matrix is persymmetric. The product of two lower triangular Toeplitz matrices is a Toeplitz matrix. More information on Toeplitz matrices and circulant matrices can be found in [13, 7]. 9.10 Transition matrices A square matrix P is a transition matrix, also known as stochastic matrix or probability matrix, if X 0 ≤ (P)ij ≤ 1, (P)ij = 1 j The transition matrix usually describes the probability of moving from state i to j in one step and is closely related to markov processes. Transition matrices have the following properties Prob[i → j in 1 step] = (P)ij Prob[i → j in 2 steps] = (P2 )ij Prob[i → j in k steps] = (Pk )ij If all rows are identical ⇒ Pn = P αP = α, α is called invariant where α is a so-called stationary probability vector, i.e., 0 ≤ αi ≤ 1 and 1. 
(447) (448) (449) (450) (451) P i αi = Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 55 9.11 Units, Permutation and Shift 9.11 9 SPECIAL MATRICES Units, Permutation and Shift 9.11.1 Unit vector Let ei ∈ Rn×1 be the ith unit vector, i.e. the vector which is zero in all entries except the ith at which it is 1. 9.11.2 Rows and Columns = eTi A = Aej i.th row of A j.th column of A 9.11.3 (452) (453) Permutations Let P be some permutation  0 1 P= 1 0 0 0 matrix, e.g.  0  0  = e2 1  eT2 =  eT1  eT3  e1 e3  (454) For permutation matrices it holds that PPT = I and that (455)  eT2 A PA =  eT1 A  eT3 A  AP =  Ae2 Ae1 Ae3  (456) That is, the first is a matrix which has columns of A but in permuted sequence and the second is a matrix which has the rows of A but in the permuted sequence. 9.11.4 Translation, Shift or Lag Operators Let L denote the lag (or ’translation’ example by  0  1 L=  0 0 or ’shift’) operator defined on a 4 × 4 0 0 1 0 0 0 0 1  0 0   0  0 (457) i.e. a matrix of zeros with one on the sub-diagonal, (L)ij = δi,j+1 . With some signal xt for t = 1, ..., N , the n.th power of the lag operator shifts the indices, i.e. n 0 for t = 1, .., n (Ln x)t = (458) xt−n for t = n + 1, ..., N Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 56 9.12 Vandermonde Matrices 9 SPECIAL MATRICES A related but slightly different matrix is the ’recurrent shifted’ operator defined on a 4x4 example by   0 0 0 1  1 0 0 0   L̂ =  (459)  0 1 0 0  0 0 1 0 i.e. a matrix defined by (L̂)ij = δi,j+1 + δi,1 δj,dim(L) . On a signal x it has the effect (L̂n x)t = xt0 , t0 = [(t − n) mod N ] + 1 (460) That is, L̂ is like the shift operator L except that it ’wraps’ the signal as if it was periodic and shifted (substituting the zeros with the rear end of the signal). Note that L̂ is invertible and orthogonal, i.e. L̂−1 = L̂T 9.12 (461) Vandermonde Matrices A Vandermonde matrix has the form [15]  1 v1 v12 · · · v1n−1  1 v2 v22 · · · v n−1 2  V= . . .. .. . .  . . . . 1 vn vn2 · · · vnn−1    .  (462) The transpose of V is also said to a Vandermonde matrix. The determinant is given by [29] Y det V = (vi − vj ) (463) i>j Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 57 10 10 10.1 10.1.1 FUNCTIONS AND OPERATORS Functions and Operators Functions and Series Finite Series (Xn − I)(X − I)−1 = I + X + X2 + ... + Xn−1 10.1.2 (464) Taylor Expansion of Scalar Function Consider some scalar function f (x) which takes the vector x as an argument. This we can Taylor expand around x0 1 f (x) ∼ = f (x0 ) + g(x0 )T (x − x0 ) + (x − x0 )T H(x0 )(x − x0 ) 2 where g(x0 ) = 10.1.3 ∂f (x) ∂x H(x0 ) = x0 ∂ 2 f (x) ∂x∂xT (465) x0 Matrix Functions by Infinite Series As for analytical functions in one dimension, one can define a matrix function for square matrices X by an infinite series ∞ X f (X) = cn Xn (466) n=0 P assuming the limit exists and is finite. If the coefficients cn fulfils n cn xn < ∞, then one can prove that the above series exists and is finite, see [1]. Thus for any analytical function f (x) there exists a corresponding matrix function f (x) constructed by the Taylor expansion. 
Using this one can prove the following results: 1) A matrix A is a zero of its own characteristic polynomium [1]: X p(λ) = det(Iλ − A) = cn λn ⇒ p(A) = 0 (467) n 2) If A is square it holds that [1] A = UBU−1 ⇒ f (A) = Uf (B)U−1 (468) 3) A useful fact when using power series is that An → 0forn → ∞ if |A| < 1 (469) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 58 10.2 Kronecker and Vec Operator 10.1.4 10 FUNCTIONS AND OPERATORS Exponential Matrix Function In analogy to the ordinary scalar exponential function, one can define exponential and logarithmic matrix functions: eA ≡ ∞ X 1 1 n A = I + A + A2 + ... n! 2 n=0 (470) e−A ≡ ∞ X 1 1 (−1)n An = I − A + A2 − ... n! 2 n=0 (471) etA ≡ ∞ X 1 1 (tA)n = I + tA + t2 A2 + ... n! 2 n=0 (472) ∞ X (−1)n−1 n 1 1 A = A − A2 + A3 − ... n 2 3 n=1 (473) ln(I + A) ≡ Some of the properties of the exponential function are [1] eA eB (eA )−1 d tA e dt d Tr(etA ) dt det(eA ) 10.1.5 = eA+B = e−A if AB = BA = AetA = etA A, = t∈R Tr(AetA ) = eTr(A) (478) Trigonometric Functions cos(A) ≡ 10.2.1 (476) (477) ∞ X (−1)n A2n+1 1 1 = A − A3 + A5 − ... sin(A) ≡ (2n + 1)! 3! 5! n=0 10.2 (474) (475) ∞ X (−1)n A2n 1 1 = I − A2 + A4 − ... (2n)! 2! 4! n=0 (479) (480) Kronecker and Vec Operator The Kronecker Product The Kronecker product of an m × n matrix A and an r × q matrix B, is an mr × nq matrix, A ⊗ B defined as   A11 B A12 B ... A1n B  A21 B A22 B ... A2n B    A⊗B= (481)  .. ..   . . Am1 B Am2 B ... Amn B Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 59 10.2 Kronecker and Vec Operator 10 FUNCTIONS AND OPERATORS The Kronecker product has the following properties (see [19]) A ⊗ (B + C) A⊗B A ⊗ (B ⊗ C) (αA A ⊗ αB B) (A ⊗ B)T (A ⊗ B)(C ⊗ D) (A ⊗ B)−1 (A ⊗ B)+ rank(A ⊗ B) Tr(A ⊗ B) det(A ⊗ B) {eig(A ⊗ B)} {eig(A ⊗ B)} eig(A ⊗ B) A⊗B+A⊗C B⊗A in general (A ⊗ B) ⊗ C αA αB (A ⊗ B) A T ⊗ BT AC ⊗ BD A−1 ⊗ B−1 A + ⊗ B+ rank(A)rank(B) Tr(A)Tr(B) = Tr(ΛA ⊗ ΛB ) = det(A)rank(B) det(B)rank(A) = {eig(B ⊗ A)} if A, B are square T = {eig(A)eig(B) } if A, B are symmetric and square = eig(A) ⊗ eig(B) = 6= = = = = = = = = (482) (483) (484) (485) (486) (487) (488) (489) (490) (491) (492) (493) (494) (495) Where {λi } denotes the set of values λi , that is, the values in no particular order or structure, and ΛA denotes the diagonal matrix with the eigenvalues of A. 10.2.2 The Vec Operator The vec-operator applied on a matrix A stacks the columns into a vector, i.e. for a 2 × 2 matrix   A11    A21  A11 A12  A= vec(A) =   A12  A21 A22 A22 Properties of the vec-operator include (see [19]) vec(AXB) = (BT ⊗ A)vec(X) Tr(AT B) = vec(A)T vec(B) vec(A + B) = vec(A) + vec(B) vec(αA) = α · vec(A) (496) (497) (498) (499) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 60 10.3 Vector Norms 10.3 10.3.1 10 FUNCTIONS AND OPERATORS Vector Norms Examples ||x||1 ||x||22 ||x||p X |xi | (500) = x x " #1/p X p = |xi | (501) = i H (502) i ||x||∞ = max |xi | (503) i Further reading in e.g. [12, p. 52] 10.4 10.4.1 Matrix Norms Definitions A matrix norm is a mapping which fulfils ||A|| ≥ 0 ||A|| = 0 ⇔ A = 0 ||cA|| = |c|||A||, c∈R ||A + B|| ≤ ||A|| + ||B|| 10.4.2 (504) (505) (506) (507) Induced Norm or Operator Norm An induced norm is a matrix norm induced by a vector norm by the following ||A|| = sup{||Ax|| | ||x|| = 1} (508) where || · || ont the left side is the induced matrix norm, while || · || on the right side denotes the vector norm. 
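A small numerical illustration of this definition (a NumPy sketch; the supremum over unit vectors is only approximated by random sampling, so it is illustrative rather than exact):

```python
import numpy as np

rng = np.random.default_rng(6)

A = rng.standard_normal((4, 3))

# The induced 2-norm: sup of ||Ax|| over unit vectors x, approximated by sampling
x = rng.standard_normal((3, 20_000))
x /= np.linalg.norm(x, axis=0)
approx = np.max(np.linalg.norm(A @ x, axis=0))

# Exact value: ||A||_2 = sqrt(max eig(A^H A)), cf. the examples in 10.4.3
exact = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))
print(approx, exact, np.linalg.norm(A, 2))     # approx <= exact, all close

# ||Ax|| <= ||A|| ||x|| for any x
y = rng.standard_normal(3)
print(np.linalg.norm(A @ y) <= exact * np.linalg.norm(y))
```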
For induced norms it holds that ||I|| = 1 ||Ax|| ≤ ||A|| · ||x||, ||AB|| ≤ ||A|| · ||B||, 10.4.3 for all A, x for all A, B (509) (510) (511) Examples ||A||1 ||A||2 ||A||p ||A||∞ X max |Aij | j i q = max eig(AH A) = ( max ||Ax||p )1/p ||x||p =1 X = max |Aij | = i (512) (513) (514) (515) j Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 61 10.5 Rank 10 ||A||F = sX |Aij |2 = FUNCTIONS AND OPERATORS q Tr(AAH ) (Frobenius) (516) ij ||A||max = max |Aij | ||A||KF = ||sing(A)||1 (517) ij (Ky Fan) (518) where sing(A) is the vector of singular values of the matrix A. 10.4.4 Inequalities E. H. Rasmussen has in yet unpublished material derived and collected the following inequalities. They are collected in a table as below, assuming A is an m × n, and d = rank(A) ||A||max ||A||max ||A||1 ||A||∞ ||A||2 ||A||F ||A||KF m √n mn √ √ mn mnd ||A||1 1 ||A||∞ 1 m √n n √ √n nd √ m √ √m md ||A||2 √1 √m n √ d d ||A||F √1 √m n 1 √ ||A||KF √1 √m n 1 1 d which are to be read as, e.g. ||A||2 ≤ √ m · ||A||∞ (519) 10.4.5 Condition Number p The 2-norm of A equals (max(eig(AT A))) [12, p.57]. For a symmetric, positive definite matrix, this reduces to max(eig(A)) The condition number based on the 2-norm thus reduces to kAk2 kA−1 k2 = max(eig(A)) max(eig(A−1 )) = 10.5 10.5.1 max(eig(A)) . min(eig(A)) (520) Rank Sylvester’s Inequality If A is m × n and B is n × r, then rank(A) + rank(B) − n ≤ rank(AB) ≤ min{rank(A), rank(B)} 10.6 (521) Integral Involving Dirac Delta Functions Assuming A to be square, then Z p(s)δ(x − As)ds = 1 p(A−1 x) det(A) (522) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 62 10.7 Miscellaneous 10 FUNCTIONS AND OPERATORS Assuming A to be ”underdetermined”, i.e. ”tall”, then ) ( Z √ 1 T p(A+ x) if x = AA+ x det(A A) p(s)δ(x − As)ds = 0 elsewhere (523) See [9]. 
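Both Sylvester's inequality (521) and the rank identities of the next subsection can be spot-checked numerically. A minimal NumPy sketch using low-rank random factors:

```python
import numpy as np

rng = np.random.default_rng(7)

# Low-rank factors to make the ranks interesting
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 6))   # rank 3, m x n
B = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))   # rank 2, n x r
rA, rB = np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)
rAB = np.linalg.matrix_rank(A @ B)

# Sylvester's inequality, eq. (521):
# rank(A) + rank(B) - n <= rank(AB) <= min(rank(A), rank(B))
n = A.shape[1]
print(rA + rB - n <= rAB <= min(rA, rB))       # True

# Rank identities: rank(A) = rank(A^T) = rank(AA^T) = rank(A^T A)
print(rA == np.linalg.matrix_rank(A.T)
         == np.linalg.matrix_rank(A @ A.T)
         == np.linalg.matrix_rank(A.T @ A))
```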
10.7 Miscellaneous For any A it holds that rank(A) = rank(AT ) = rank(AAT ) = rank(AT A) (524) It holds that A is positive definite ⇔ ∃B invertible, such that A = BBT (525) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 63 A A A.1 A.1.1 ONE-DIMENSIONAL RESULTS One-dimensional Results Gaussian Density (x − µ)2 p(x) = √ exp − 2σ 2 2πσ 2  1 A.1.2 A.1.3 Normalization Z √ (s−µ)2 e− 2σ2 ds = 2πσ 2 r   2 Z π b − 4ac −(ax2 +bx+c) e dx = exp a 4a  2  r Z π c1 − 4c2 c0 c2 x2 +c1 x+c0 e dx = exp −c2 −4c2 (526) (527) (528) (529) Derivatives ∂p(x) ∂µ ∂ ln p(x) ∂µ ∂p(x) ∂σ ∂ ln p(x) ∂σ A.1.4  (x − µ) σ2 (x − µ) = σ2   1 (x − µ)2 = p(x) − 1 σ σ2   1 (x − µ)2 = − 1 σ σ2 = p(x) (530) (531) (532) (533) Completing the Squares c2 x2 + c1 x + c0 = −a(x − b)2 + w −a = c2 or b= 1 c1 2 c2 w= 1 c21 + c0 4 c2 1 (x − µ)2 + d 2σ 2 −1 c2 σ2 = d = c0 − 1 2c2 4c2 c2 x2 + c1 x + c0 = − µ= A.1.5 −c1 2c2 Moments If the density is expressed by   1 (s − µ)2 p(x) = √ exp − 2σ 2 2πσ 2 or p(x) = C exp(c2 x2 + c1 x) (534) then the first few basic moments are Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 64 A.2 One Dimensional Mixture of Gaussians A ONE-DIMENSIONAL RESULTS hxi = µ = hx2 i = σ 2 + µ2 = hx3 i = 3σ 2 µ + µ3 = hx4 i = µ4 + 6µ2 σ 2 + 3σ 4 = −c1 2c2  2 −c1 −1 2c2 + h 2c2 i c21 c1 3 − 2 (2c ) 2c2  2 4  2 c1 c1 + 6 2c2 2c2  −1 2c2  +3  1 2c2 2 and the central moments are h(x − µ)i 2 h(x − µ) i h(x − µ)3 i 4 h(x − µ) i = = = = 0 σ 0 = 2 0h = = 3σ 4 −1 2c2 i 0 = 3 h 1 2c2 i2 A kind of pseudo-moments (un-normalized integrals) can easily be derived as  2  r Z π c1 2 n n exp(c2 x + c1 x)x dx = Zhx i = exp hxn i (535) −c2 −4c2 ¿From the un-centralized moments one can derive other entities like A.2 A.2.1 hx2 i − hxi2 hx3 i − hx2 ihxi = = σ2 2σ 2 µ = = hx4 i − hx2 i2 = 2σ 4 + 4µ2 σ 2 = h c2 1 − 4 2c12 i One Dimensional Mixture of Gaussians Density and Normalization p(s) = K X k A.2.2 −1 2c2 2c1 (2c2 )2 2 (2c2 )2   ρ 1 (s − µk )2 p k exp − 2 σk2 2πσk2 (536) Moments An useful fact of MoG, is that hxn i = X ρk hxn ik (537) k where h·ik denotes average with respect to the k.th component. We can calculate the first four moments from the densities   X 1 1 (x − µk )2 p(x) = ρk p exp − (538) 2 σk2 2πσk2 k X   p(x) = ρk Ck exp ck2 x2 + ck1 x (539) k as Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 65 A.2 One Dimensional Mixture of Gaussians A ONE-DIMENSIONAL RESULTS = P = P hx3 i = P hx4 i = hxi 2 hx i k ρk µk 2 k ρk (σk + µ2k ) 2 3 k ρk (3σk µk + µk ) P 4 2 2 4 k ρk (µk + 6µk σk + 3σk ) = P = P = P = k k ρk h −ck1  2ck2 −1 2ck2 ρk i +  −ck1 2ck2 2  h ii c2k1 ck1 3 − ρ 2 k k (2ck2 ) 2ck2   2  2 P c2k1 ck1 1 − 6 2ck2 + 3 k ρk 2ck2 2ck2 h If all the gaussians are centered, i.e. 
µk = 0 for all k, then hxi = 2 hx i hx3 i = = 4 hx i = 0 P 2 k ρk σ k 0 P 4 k ρk 3σk = = = = 0 P k ρk 0 P h k ρk 3 −1 2ck2 h i −1 2ck2 i2 ¿From the un-centralized moments one can derive other entities like  2  P 2 hx2 i − hxi2 = 0 ρk ρk 0 µk + σk − µk µk 0 k,k  2  P 3 2 2 hx3 i − hx2 ihxi = k,k0 ρk ρk0 3σk µk + µk − (σk + µk )µk0  P 4 2 2 4 2 2 2 2 hx4 i − hx2 i2 = k,k0 ρk ρk0 µk + 6µk σk + 3σk − (σk + µk )(σk0 + µk0 ) A.2.3 Derivatives P Defining p(s) = k ρk Ns (µk , σk2 ) we get for a parameter θj of the j.th component ρj Ns (µj , σj2 ) ∂ ln(ρj Ns (µj , σj2 )) ∂ ln p(s) =P (540) 2 ∂θj ∂θj k ρk Ns (µk , σk ) that is, ∂ ln p(s) ∂ρj = ∂ ln p(s) ∂µj = ∂ ln p(s) ∂σj = ρj Ns (µj , σj2 ) 1 P 2 k ρk Ns (µk , σk ) ρj (541) ρj Ns (µj , σj2 ) (s − µj ) P 2 σj2 k ρk Ns (µk , σk ) " # ρj Ns (µj , σj2 ) 1 (s − µj )2 P −1 2 σj2 k ρk Ns (µk , σk ) σj (542) (543) Note thatP ρk must be constrained to be proper ratios. Defining the ratios by ρj = erj / k erk , we obtain ∂ ln p(s) X ∂ ln p(s) ∂ρl = ∂rj ∂ρl ∂rj l where ∂ρl = ρl (δlj − ρj ) ∂rj (544) Petersen & Pedersen, The Matrix Cookbook, Version: November 14, 2008, Page 66 B B B.1 B.1.1 PROOFS AND DETAILS Proofs and Details Misc Proofs Proof of Equation 83 Essentially we need to calculate ∂(Xn )kl ∂Xij = ∂ ∂Xij X Xk,u1 Xu1 ,u2 ...Xun−1 ,l u1 ,...,un−1 = δk,i δu1 ,j Xu1 ,u2 ...Xun−1 ,l +Xk,u1 δu1 ,i δu2 ,j ...Xun−1 ,l .. . +Xk,u1 Xu1 ,u2 ...δun−1 ,i δl,j = n−1 X (Xr )ki (Xn−1−r )jl r=0 = n−1 X (Xr Jij Xn−1−r )kl r=0 Using the properties of the single entry matrix found in Sec. 9.7.4, the result follows easily. B.1.2 Details on Eq. 546 ∂ det(XH AX) = det(XH AX)Tr[(XH AX)−1 ∂(XH AX)] = det(XH AX)Tr[(XH AX)−1 (∂(XH )AX + XH ∂(AX))] = det(XH AX) Tr[(XH AX)−1 ∂(XH )AX]  +Tr[(XH AX)−1 XH ∂(AX)] = det(XH AX) Tr[AX(XH AX)−1 ∂(XH )]  +Tr[(XH AX)−1 XH A∂(X)] First, the derivative is found with respect to the real part of X ∂ det(XH AX) ∂