Chi-squared Distribution ($k=1$)
Theory
Given a standard Gaussian random variable $X \sim \mathcal{N}(0,1)$, we would like to find the distribution of the random variable $Y = X^2$. The analytic solution is known: $Y$ follows a chi-squared distribution with $k=1$ degree of freedom.
Using polynomial chaos expansion (PCE), the problem can be solved using Galerkin projection. Let $\{\phi_k \}_{k=0}^{n}$ be the monic orthogonal basis relative to the probability density of $X$, namely

$$f_X(x) = \frac{1}{\sqrt{2 \pi}} \exp \left( - \frac{x^2}{2} \right).$$

Then, the PCE of $X$ is given by

$$X = \sum_{k=0}^{n} x_k \phi_k,$$

with

$$x_0 = 0, \quad x_1 = 1, \quad x_k = 0 \quad \text{for } k = 2, \dots, n.$$

To find the PCE coefficients $y_k$ for $Y = \sum_{k=0}^{n} y_k \phi_k$, we apply Galerkin projection, which leads to

$$y_m \langle \phi_m, \phi_m \rangle = \sum_{i=0}^{n} \sum_{j=0}^{n} x_i x_j \langle \phi_i \phi_j, \phi_m \rangle \quad \text{for all } m = 0, \dots, n.$$
Hence, knowing the scalars $\langle \phi_m, \phi_m \rangle$, and $\langle \phi_i \phi_j, \phi_m \rangle$, the PCE coefficients $y_k$ can be obtained immediately. From the PCE coefficients we can get the moments and compare them to the closed-form expressions.
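For reference, the closed-form moments of the chi-squared distribution with $k$ degrees of freedom, against which we compare below, are

$$\mathbb{E}[Y] = k, \qquad \mathrm{Std}[Y] = \sqrt{2k}, \qquad \mathrm{Skew}[Y] = \sqrt{8/k}.$$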
Notice: A maximum degree of 2 suffices to get the exact solution with PCE. In other words, increasing the maximum degree to values greater than 2 introduces nothing but computational overhead (and numerical errors, possibly).
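To see why, recall that the monic orthogonal basis relative to the standard Gaussian density is the family of (probabilists') Hermite polynomials $\phi_0(x) = 1$, $\phi_1(x) = x$, $\phi_2(x) = x^2 - 1, \dots$, so that

$$Y = X^2 = 1 \cdot \phi_0(X) + 0 \cdot \phi_1(X) + 1 \cdot \phi_2(X).$$

Hence the exact PCE coefficients are $y_0 = 1$, $y_1 = 0$, $y_2 = 1$, and $y_k = 0$ for $k > 2$, which is precisely what the Galerkin projection recovers numerically below.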
Practice
First, we create an orthogonal basis relative to $f_X(x)$ of degree at most $d=2$ (deg below).
Notice that we consider a total of Nrec recursion coefficients, and that we also add a quadrature rule by setting addQuadrature = true.
using PolyChaos
k = 1
deg, Nrec = 2, 20
opq = GaussOrthoPoly(deg; Nrec=Nrec, addQuadrature=true);

The returned GaussOrthoPoly stores the recurrence coefficients of the orthogonal polynomials together with a 19-node Gauss quadrature rule computed by the Golub-Welsch algorithm. What are the basis polynomials?
showbasis(opq; sym="ξ")

1
ξ
ξ^2 - 1.0

Note that the command showbasis is based on the more general showpoly:
showpoly(0:2:deg,opq)

1
x^2 - 1.0

Next, we define the PCE for $X$.
L = dim(opq)
mu, sig = 0., 1.
x = [ convert2affinePCE(mu, sig, opq); zeros(Float64,L-2) ]

3-element Array{Float64,1}:
 0.0
 1.0
 0.0

With the orthogonal basis and the quadrature at hand, we can compute the tensors t2 and t3 that store the entries $\langle \phi_m, \phi_m \rangle$, and $\langle \phi_i \phi_j, \phi_m \rangle$, respectively.
t2 = Tensor(2, opq);
t3 = Tensor(3, opq)

The nonzero entries of t3 are

[1 ] = 1.0
[4 ] = 1.0
[7 ] = 2.0
[9 ] = 2.0
[15] = 8.0

With the tensors at hand, we can compute the Galerkin projection.
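Before doing so, a quick sanity check (optional, not needed for the projection): the entries can be queried through the zero-based get interface used below, and they should match the hand-computed Hermite values from the theory part.

t2.get([2,2])     # ⟨ϕ₂,ϕ₂⟩   = 2
t3.get([1,1,2])   # ⟨ϕ₁ϕ₁,ϕ₂⟩ = 2
t3.get([2,2,2])   # ⟨ϕ₂ϕ₂,ϕ₂⟩ = 8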
y = [ sum( x[i]*x[j]*t3.get([i-1,j-1,m-1])/t2.get([m-1,m-1]) for i=1:L, j=1:L ) for m=1:L ]

3-element Array{Float64,1}:
 1.0
 0.0
 1.0000000000000016

Let's compare the moments via PCE to the closed-form expressions.
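The mean and standard deviation follow directly from the PCE coefficients (via mean and std); the skewness requires the third raw moment, which the tensor t3 provides. By orthogonality of the basis,

$$\mu = y_0, \qquad \sigma^2 = \sum_{k=1}^{n} y_k^2 \langle \phi_k, \phi_k \rangle, \qquad \mathbb{E}[Y^3] = \sum_{i,j,k=0}^{n} y_i y_j y_k \langle \phi_i \phi_j, \phi_k \rangle,$$

and the skewness is $\left( \mathbb{E}[Y^3] - 3 \mu \sigma^2 - \mu^3 \right) / \sigma^3$, which is what myskew below computes.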
moms_analytic(k) = [k, sqrt(2k), sqrt(8/k)]   # analytic mean, std, skewness of chi-squared with k dof
function myskew(y)
    # third raw moment E[Y^3] from the PCE coefficients and the tensor t3
    e3 = sum( y[i]*y[j]*y[k]*t3.get([i-1,j-1,k-1]) for i=1:L, j=1:L, k=1:L )
    μ = y[1]
    σ = std(y,opq)
    (e3 - 3*μ*σ^2 - μ^3) / σ^3
end
print("Expected value:\t\t$(moms_analytic(k)[1]) = $(mean(y,opq))\n")
print("\t\t\terror = $(abs(mean(y,opq)-moms_analytic(k)[1]))\n")
print("Standard deviation:\t$(moms_analytic(k)[2]) = $(std(y,opq))\n")
print("\t\t\terror = $(moms_analytic(k)[2]-std(y,opq))\n")
print("Skewness:\t\t$(moms_analytic(k)[3]) = $(myskew(y))\n")
print("\t\t\terror = $(moms_analytic(k)[3]-myskew(y))\n")Expected value: 1.0 = 1.0
error = 0.0
Standard deviation: 1.4142135623730951 = 1.4142135623730971
error = -1.9984014443252818e-15
Skewness: 2.8284271247461903 = 2.828427124746197
error = -6.661338147750939e-15Let's plot the probability density function to compare results. We first draw samples from the measure with the help of sampleMeasure(), and then evaluate the basis at these samples and multiply times the PCE coefficients. The latter stop is done using evaluatePCE(). Finally, we compare the result agains the analytical PDF $\rho(t) = \frac{\mathrm{e}^{-0.5t}}{\sqrt{2 t} \, \Gamma(0.5)}$ of the chi-squared distribution with one degree of freedom.
using Plots
Nsmpl = 10000
# long way: ξ = sampleMeasure(Nsmpl,opq), ysmpl = evaluatePCE(y,ξ,opq)
ysmpl = samplePCE(Nsmpl, y, opq)
histogram(ysmpl; normalize=true, xlabel="t", ylabel="rho(t)")
import SpecialFunctions: gamma
ρ(t) = 1/(sqrt(2)*gamma(0.5))*1/sqrt(t)*exp(-0.5*t)
t = range(0.1; stop=maximum(ysmpl), length=100)
plot!(t, ρ.(t), w=4)
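As an optional cross-check beyond the comparison above, one could also sample $X$ directly and square the samples; their histogram should agree with the PCE-based samples and the analytic density up to sampling noise. A minimal sketch, assuming the plot created above is still the current figure:

# Monte Carlo cross-check: square standard Gaussian samples and overlay their histogram.
xsmpl = randn(Nsmpl)
histogram!(xsmpl.^2; normalize=true, alpha=0.5)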