The evaluation of Gaussian functional integrals is essential in applications to statistical physics and in the general calculation of path integrals of stochastic processes. In this work, we present an elementary extension of a usual result of the literature, as well as an alternative new derivation.

In the present work, we apply theorems of Linear Algebra to derive and extend a usual result of the literature [ ] on the evaluation of multidimensional Gaussian integrals of the form

$$\int_{-\infty}^{\infty} e^{-x^{T}Ax}\, dx_1 \cdots dx_n$$

where $x^T$ is the transpose of the non-zero column vector $x \in \mathbb{R}^n$ and $x^T A x$ is a real positive definite quadratic form in $n$ variables. In order to guarantee the convergence of the integrals, we should have

$$x^T A x > 0 \qquad (1)$$

We can also write $A$ as the sum of its symmetric and skew-symmetric parts, $A = \left(\frac{A+A^T}{2}\right) + \left(\frac{A-A^T}{2}\right)$, and we have

$$x^T A x = x^T\!\left(\frac{A+A^T}{2}\right)x > 0 \qquad (2)$$

since $x^T\!\left(\frac{A-A^T}{2}\right)x \equiv 0$.
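This vanishing of the skew-symmetric contribution can be checked numerically. The following is a minimal NumPy sketch (not part of the original paper), assuming a randomly generated matrix $A$ and vector $x$:

```python
import numpy as np

# Check that the skew-symmetric part of A contributes nothing to the
# quadratic form: x^T A x = x^T ((A + A^T)/2) x for every x.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
x = rng.standard_normal(n)

sym = (A + A.T) / 2    # symmetric part
skew = (A - A.T) / 2   # skew-symmetric part

q_full = x @ A @ x
q_sym = x @ sym @ x
q_skew = x @ skew @ x  # identically zero up to rounding

print(q_full, q_sym, q_skew)
```

The identity $x^T S x = 0$ for skew-symmetric $S$ follows from $x^T S x = (x^T S x)^T = -x^T S x$.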

From the Spectral Theorem of Linear Algebra [ ], the symmetric matrix $\left(\frac{A+A^T}{2}\right)$ can be diagonalized by an orthogonal transformation.

We then apply an orthogonal transformation to the quadratic form $x^T\!\left(\frac{A+A^T}{2}\right)x$:

$$x = \theta y; \quad x^T = y^T \theta^T; \quad \theta^T \theta = \mathbb{1} \qquad (3)$$

where the columns of the matrix $\theta$ are the orthonormal eigenvectors of the matrix $\left(\frac{A+A^T}{2}\right)$.

We then have

$$\theta^T \left(\frac{A+A^T}{2}\right) \theta = \left(\frac{A+A^T}{2}\right)_d \qquad (4)$$

where $\left(\frac{A+A^T}{2}\right)_d$ is the corresponding diagonal form.
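Equations (3) and (4) can be illustrated numerically. In the NumPy sketch below (an illustration, not part of the original paper), `np.linalg.eigh` returns the orthonormal eigenvectors of the symmetric part as the columns of $\theta$:

```python
import numpy as np

# Orthogonal diagonalization of the symmetric part of a random matrix A:
# theta^T theta = identity and theta^T S theta is diagonal.
rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
S = (A + A.T) / 2

eigvals, theta = np.linalg.eigh(S)   # columns of theta: orthonormal eigenvectors
D = theta.T @ S @ theta              # the diagonal form ((A+A^T)/2)_d

off_diag = D - np.diag(np.diag(D))   # should vanish up to rounding
print(np.diag(D), eigvals)
```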

From Equation (3) and Equation (4) we have:

$$\det\left(\frac{A+A^T}{2}\right) = \det\left(\frac{A+A^T}{2}\right)_d = \lambda_1^{n_1}\lambda_2^{n_2}\cdots\lambda_l^{n_l} \qquad (5)$$

where $\lambda_1, \cdots, \lambda_l$ are the distinct eigenvalues and $n_1, \cdots, n_l$ their algebraic multiplicities [ ], with

$$n_1 + n_2 + \cdots + n_l = n \qquad (6)$$
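Equations (5) and (6) can be checked on a symmetric matrix built with prescribed repeated eigenvalues. The sketch below (a hypothetical example, not from the paper) uses $\lambda_1 = 2$ with $n_1 = 3$ and $\lambda_2 = 5$ with $n_2 = 1$:

```python
import numpy as np

# Build a symmetric 4x4 matrix with eigenvalues 2, 2, 2, 5 and verify
# det S = lambda_1^{n_1} * lambda_2^{n_2} = 2^3 * 5^1 = 40.
rng = np.random.default_rng(2)
theta, _ = np.linalg.qr(rng.standard_normal((4, 4)))  # random orthogonal matrix
eigvals = np.array([2.0, 2.0, 2.0, 5.0])
S = theta @ np.diag(eigvals) @ theta.T                # symmetric by construction

det_S = np.linalg.det(S)
print(det_S)   # close to 40
```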

The transformation of the volume element is

$$dx_1 \cdots dx_n = \det\theta \; dy_1 \cdots dy_n \qquad (7)$$

and we can choose

$$\det\theta = 1 \qquad (8)$$

which follows from Equation (3), since $\theta^T\theta = \mathbb{1}$ implies $(\det\theta)^2 = 1$, together with an adequate ordering of the orthonormal eigenvectors as the columns of the matrix $\theta$ (swapping two columns changes the sign of the determinant).
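The choice in Equation (8) can be demonstrated numerically. The sketch below (an illustration, not from the paper) takes a random orthogonal matrix and, if its determinant is $-1$, swaps two columns to obtain $\det\theta = 1$:

```python
import numpy as np

# An orthogonal matrix has determinant +1 or -1; reordering two
# (still orthonormal) columns flips the sign, so det theta = 1
# can always be arranged.
rng = np.random.default_rng(3)
theta, _ = np.linalg.qr(rng.standard_normal((4, 4)))

if np.linalg.det(theta) < 0:
    theta[:, [0, 1]] = theta[:, [1, 0]]   # swap two columns

print(np.linalg.det(theta))   # close to 1
```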

The quadratic form can then be written as

$$x^T A x = y^T \left(\frac{A+A^T}{2}\right)_d y = \lambda_1 \sum_{j=1}^{n_1} y_j^2 + \cdots + \lambda_l \sum_{j=n-n_l+1}^{n} y_j^2 \qquad (9)$$

From Equation (8) and Equation (9), the multidimensional integral results in

$$\int_{-\infty}^{\infty} e^{-x^T A x}\, dx_1 \cdots dx_n = \int_{-\infty}^{\infty} e^{-y^T \left(\frac{A+A^T}{2}\right)_d y}\, dy_1 \cdots dy_n = \prod_{j=1}^{n_1} \int_{-\infty}^{\infty} e^{-\lambda_1 y_j^2}\, dy_j \cdots \prod_{j=n-n_l+1}^{n} \int_{-\infty}^{\infty} e^{-\lambda_l y_j^2}\, dy_j \qquad (10)$$

where each unidimensional integral is given by

$$\int_{-\infty}^{\infty} e^{-\lambda_k y_j^2}\, dy_j = \left(\frac{\pi}{\lambda_k}\right)^{1/2}, \quad k = 1, \cdots, l \qquad (11)$$

We finally write, from Equations (5), (10) and (11),

$$\int_{-\infty}^{\infty} e^{-x^T A x}\, dx_1 \cdots dx_n = \left(\frac{\pi^n}{\lambda_1^{n_1}\lambda_2^{n_2}\cdots\lambda_l^{n_l}}\right)^{1/2} = \left(\frac{\pi^n}{\det\left(\frac{A+A^T}{2}\right)}\right)^{1/2} \qquad (12)$$

and we see from Equation (12) that the original matrix $A$ itself does not need to be diagonalizable [ ].
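Equation (12) can be verified numerically for a small example. The sketch below (a hypothetical $2\times 2$ matrix chosen for illustration, with positive definite symmetric part) approximates the integral on a dense truncated grid:

```python
import numpy as np

# Numerical check of Equation (12) for n = 2:
# the Gaussian integral equals (pi^n / det((A+A^T)/2))^(1/2).
A = np.array([[2.0, 0.6],
              [0.2, 1.5]])          # example matrix, not symmetric
S = (A + A.T) / 2                   # symmetric part, positive definite here

t = np.linspace(-8.0, 8.0, 801)     # truncation is harmless: tails are tiny
dx = t[1] - t[0]
X, Y = np.meshgrid(t, t, indexing="ij")
quad_form = A[0, 0]*X*X + (A[0, 1] + A[1, 0])*X*Y + A[1, 1]*Y*Y

numeric = np.exp(-quad_form).sum() * dx * dx
closed = np.sqrt(np.pi**2 / np.linalg.det(S))
print(numeric, closed)
```

The grid sum converges very quickly here because the integrand is smooth and decays exponentially.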

We now present an alternative derivation of the result obtained above. We will show that there is no need to apply an orthogonal transformation to diagonalize a quadratic form in order to derive formula (12).

Let us write the $\mathbb{R}^n$ vectors:

$$x = \sum_{j=1}^{n} x_j \hat{e}_j, \quad b_j = \sum_{k=1}^{n} b_{jk} \hat{e}_k \qquad (13)$$

where $\hat{e}_j,\ j = 1, \cdots, n$, is an orthonormal basis,

$$\hat{e}_j \cdot \hat{e}_k = \delta_{jk} \qquad (14)$$

We now define the matrices

$$B_{j\times j} = \begin{pmatrix} b_1 \cdot \hat{e}_1 & \cdots & b_1 \cdot \hat{e}_{j-1} & b_1 \cdot \hat{e}_j \\ \vdots & \ddots & \vdots & \vdots \\ b_j \cdot \hat{e}_1 & \cdots & b_j \cdot \hat{e}_{j-1} & b_j \cdot \hat{e}_j \end{pmatrix} = \begin{pmatrix} b_{11} & \cdots & b_{1,j-1} & b_{1j} \\ \vdots & \ddots & \vdots & \vdots \\ b_{j1} & \cdots & b_{j,j-1} & b_{jj} \end{pmatrix} \qquad (15)$$

$$B^{x}_{j\times j} = \begin{pmatrix} b_1 \cdot \hat{e}_1 & \cdots & b_1 \cdot \hat{e}_{j-1} & b_1 \cdot x \\ \vdots & \ddots & \vdots & \vdots \\ b_j \cdot \hat{e}_1 & \cdots & b_j \cdot \hat{e}_{j-1} & b_j \cdot x \end{pmatrix} = \begin{pmatrix} b_{11} & \cdots & b_{1,j-1} & b_1 \cdot x \\ \vdots & \ddots & \vdots & \vdots \\ b_{j1} & \cdots & b_{j,j-1} & b_j \cdot x \end{pmatrix} \qquad (16)$$

In the last column of $B^{x}_{j\times j}$, we expand $b_i \cdot x = \sum_{k=1}^{n} x_k b_{ik}$. The first $(j-1)$ terms of this expansion of $x$ produce null determinants of the $B^{x}_{j\times j}$ matrix, since they repeat one of the first $(j-1)$ columns. The $j$th term corresponds to the determinant $\det B_{j\times j}$ times $x_j$. The $(j+1)$th term leads to the determinant of a matrix $B^{j+1}_{j\times j}$, obtained by replacing the $j$th column of the matrix $B_{j\times j}$ by a column whose elements are $b_{1,j+1}, \cdots, b_{j,j+1}$, times $x_{j+1}$. The $n$th term corresponds to the determinant of a matrix $B^{n}_{j\times j}$, obtained by replacing the $j$th column of the matrix $B_{j\times j}$ by a column whose elements are $b_{1n}, \cdots, b_{jn}$, times $x_n$. We can then write,

$$\det B^{x}_{j\times j} = x_j \det B_{j\times j} + \sum_{k=j+1}^{n} x_k \det B^{k}_{j\times j} \qquad (17)$$
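The column expansion leading to Equation (17) can be verified numerically. The sketch below (an illustration, not from the paper) checks the identity for random vectors $b_j$ and a fixed $j$:

```python
import numpy as np

# Check of Equation (17) for j = 2 and n = 4:
# det B^x_{jxj} = x_j det B_{jxj} + sum_{k>j} x_k det B^k_{jxj},
# where B^k replaces the j-th column of B_{jxj} by (b_{1k}, ..., b_{jk}).
rng = np.random.default_rng(4)
n, j = 4, 2
b = rng.standard_normal((n, n))      # row i holds the components b_{ik}
x = rng.standard_normal(n)

B_j = b[:j, :j].copy()               # leading j x j block B_{jxj}
B_x = B_j.copy()
B_x[:, j-1] = b[:j, :] @ x           # last column: b_i . x

lhs = np.linalg.det(B_x)
rhs = x[j-1] * np.linalg.det(B_j)
for k in range(j, n):                # 0-based k = j, ..., n-1 (i.e. k = j+1, ..., n)
    B_k = B_j.copy()
    B_k[:, j-1] = b[:j, k]           # replace j-th column by (b_{1k}, ..., b_{jk})
    rhs += x[k] * np.linalg.det(B_k)

print(lhs, rhs)
```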

It should be noted that if $B_{n\times n} = B$ is a symmetric matrix, like $B = \frac{A+A^T}{2}$ for any $A$, the quadratic form $x^T B x$ can be written as

$$x^T A x \equiv x^T B x = \sum_{j=1}^{n} \frac{\left(\det B^{x}_{j\times j}\right)^2}{\det B_{(j-1)\times(j-1)} \det B_{j\times j}} \qquad (18)$$

where

$$\det B_{0\times 0} = 1, \quad \det B_{1\times 1} = b_{11}, \quad \det B^{x}_{1\times 1} = b_1 \cdot x$$

From Equation (17), we can write Equation (18) as

$$x^T A x = \sum_{j=1}^{n} \frac{\det B_{j\times j}}{\det B_{(j-1)\times(j-1)}} \left(x_j + \frac{1}{\det B_{j\times j}} \sum_{k=j+1}^{n} x_k \det B^{k}_{j\times j}\right)^2 \qquad (19)$$
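The decomposition in Equations (18) and (19) can be checked numerically for a symmetric positive definite $B$. In the sketch below (an illustration, not from the paper), the rows $b_i$ of $B$ satisfy $b_i \cdot x = (Bx)_i$:

```python
import numpy as np

# Check of Equation (18): for symmetric positive definite B,
# x^T B x = sum_j (det B^x_{jxj})^2 / (det B_{(j-1)x(j-1)} det B_{jxj}).
rng = np.random.default_rng(5)
n = 4
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)          # symmetric positive definite
x = rng.standard_normal(n)

def minor_det(j):
    """Leading principal minor det B_{jxj}, with det B_{0x0} = 1."""
    return 1.0 if j == 0 else np.linalg.det(B[:j, :j])

total = 0.0
for j in range(1, n + 1):
    Bx = B[:j, :j].copy()
    Bx[:, j-1] = B[:j, :] @ x        # last column of B^x_{jxj}: b_i . x
    total += np.linalg.det(Bx)**2 / (minor_det(j-1) * minor_det(j))

print(total, x @ B @ x)
```

This is essentially the decomposition underlying the $LDL^T$ (Cholesky) factorization of $B$, expressed through leading principal minors.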

From Sylvester’s Criterion [ ], all the leading principal minors $\det B_{j\times j}$ of the positive definite matrix $B$ are positive, and the unidimensional integrals result in

$$\int_{-\infty}^{\infty} e^{-\frac{\det B_{j\times j}}{\det B_{(j-1)\times(j-1)}}\left(x_j + \frac{1}{\det B_{j\times j}} \sum_{k=j+1}^{n} x_k \det B^{k}_{j\times j}\right)^2} dx_j = \left(\pi \, \frac{\det B_{(j-1)\times(j-1)}}{\det B_{j\times j}}\right)^{1/2} \qquad (20)$$

since the other variables $x_{j+1}, \cdots, x_n$, which are contained in the term $\frac{1}{\det B_{j\times j}} \sum_{k=j+1}^{n} x_k \det B^{k}_{j\times j}$, do not contribute to unidimensional integrals of the form

$$\int_{-\infty}^{\infty} e^{-\alpha\left(x_j + f(x_{j+1}, \cdots, x_n)\right)^2} dx_j = \left(\frac{\pi}{\alpha}\right)^{1/2}$$

where $\alpha$ is a positive real constant and $f$ a generic function of its arguments.
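The shift-invariance of this one-dimensional Gaussian integral is easy to check numerically. The sketch below (with hypothetical values of $\alpha$ and the shift $c$, standing in for $f(x_{j+1}, \cdots, x_n)$) approximates the integral on a truncated grid:

```python
import numpy as np

# For alpha > 0, the integral of exp(-alpha (x + c)^2) over the real
# line is (pi/alpha)^(1/2), independent of the shift c.
alpha, c = 1.7, 3.2                       # hypothetical values
x = np.linspace(-30.0, 30.0, 60001)       # tails beyond +-30 are negligible
dx = x[1] - x[0]

numeric = np.exp(-alpha * (x + c)**2).sum() * dx
closed = np.sqrt(np.pi / alpha)
print(numeric, closed)
```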

We then have from Equation (19) and Equation (20):

$$\int_{-\infty}^{\infty} e^{-x^T A x}\, dx_1 \cdots dx_n = \prod_{j=1}^{n} \left(\pi \, \frac{\det B_{(j-1)\times(j-1)}}{\det B_{j\times j}}\right)^{1/2} = \left(\frac{\pi^n}{\det B}\right)^{1/2} = \left(\frac{\pi^n}{\det\left(\frac{A+A^T}{2}\right)}\right)^{1/2} \quad \text{q.e.d.} \qquad (21)$$
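The last step of Equation (21) is a telescoping product of leading principal minors, which can be checked directly. A minimal NumPy sketch (not part of the original paper), assuming a random symmetric positive definite $B$:

```python
import numpy as np

# The product of det B_{jxj} / det B_{(j-1)x(j-1)} over j = 1, ..., n
# telescopes to det B.
rng = np.random.default_rng(6)
n = 4
M = rng.standard_normal((n, n))
B = M @ M.T + n * np.eye(n)               # symmetric positive definite

minors = [1.0] + [np.linalg.det(B[:j, :j]) for j in range(1, n + 1)]
prod_ratios = np.prod([minors[j] / minors[j-1] for j in range(1, n + 1)])

print(prod_ratios, np.linalg.det(B))
```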

Mondaini, R.P. and de Albuquerque Neto, S.C. (2017) Revisiting the Evaluation of a Multidimensional Gaussian Integral. Journal of Applied Mathematics and Physics, 5, 449-452. https://doi.org/10.4236/jamp.2017.52039