# Stochastic Calculus Assignment Writing and Exam Help | GAUSSIAN PROCESSES

my-assignmentexpert™ offers stochastic calculus assignment writing: submit your assignment requirements for free, pay only after you are satisfied, and receive a full refund if the grade falls below 80%, so you can order with complete peace of mind. Our professional team of master's and PhD writers completes every order reliably and on time, with a 100% originality guarantee. my-assignmentexpert™ delivers the highest-quality stochastic calculus assignment writing, serving students in North America, Europe, Australia, and other countries. As for pricing, we take students' budgets into account and, while guaranteeing quality, offer our clients the most reasonable rates. Because stochastic calculus assignments come in many varieties, vary considerably in difficulty, and mostly have no fixed word count, there is no fixed price for stochastic calculus assignment writing; a quote is usually given after an economics expert has reviewed the assignment requirements. The difficulty and the deadline also have a large influence on the price.

my-assignmentexpert™ safeguards your studies abroad. We have built a strong reputation for economics assignment writing and guarantee reliable, high-quality, original calculus writing services. Our experts have extensive experience with stochastic calculus writing; needless to say, they can handle any kind of stochastic calculus assignment, including:

• Stochastic partial differential equations
• Stochastic control
• Itô integrals
• Black–Scholes–Merton option pricing formula
• Fokker–Planck equation
• Brownian motion

## Calculus Assignment Writing and Exam Help | Gaussian random variables in $R^k$

1. The normal distribution $N=N\left(\mu, \sigma^{2}\right)$ on $R$ with mean $\mu$ and variance $\sigma^{2}$ is defined by
$$N(d x)=\frac{1}{\sigma \sqrt{2 \pi}} \exp \left(-\frac{(x-\mu)^{2}}{2 \sigma^{2}}\right) d x$$
The characteristic function (Fourier transform) of this distribution is given by
$$\hat{N}(t)=\int_{R} e^{i t x} N(d x)=\exp \left(i \mu t-\frac{1}{2} \sigma^{2} t^{2}\right), \quad t \in R$$
In the case of a mean zero normal distribution $N=N\left(0, \sigma^{2}\right)$ this becomes
$$N(d x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{-x^{2} / 2 \sigma^{2}} d x, \quad \text { and } \quad \hat{N}(t)=e^{-\sigma^{2} t^{2} / 2}, \quad t \in R$$
and the standard normal distribution $N(0,1)$ satisfies
$$N(0,1)(d x)=\frac{1}{\sqrt{2 \pi}} e^{-x^{2} / 2} d x, \quad \text { and } \quad \widehat{N(0,1)}(t)=e^{-t^{2} / 2}, \quad t \in R .$$
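As a quick numerical illustration (added here, not part of the original text), the following minimal NumPy sketch compares the empirical characteristic function $\frac{1}{n} \sum_{j} e^{i t x_{j}}$ of samples from $N\left(\mu, \sigma^{2}\right)$ with the closed form $e^{i \mu t-\sigma^{2} t^{2} / 2}$; the sample size and the values of $\mu$, $\sigma$, and $t$ are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0                     # arbitrary mean and standard deviation
x = rng.normal(mu, sigma, size=200_000)  # samples from N(mu, sigma^2)

for t in (0.1, 0.5, 1.0):
    # Monte Carlo estimate of the characteristic function E[e^{itX}]
    empirical = np.exp(1j * t * x).mean()
    closed_form = np.exp(1j * mu * t - 0.5 * sigma**2 * t**2)
    print(f"t={t}: empirical={empirical:.4f}, closed form={closed_form:.4f}")
```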
For $\sigma^{2}=0$ the distribution $N\left(0, \sigma^{2}\right)=N(0,0)$ is not defined by the above density but is interpreted to be the point measure $N(0,0)=\epsilon_{0}$ concentrated at 0. With this interpretation the formula for the characteristic function $\widehat{N(0,0)}(t)=\hat{\epsilon}_{0}(t)=1=e^{-\sigma^{2} t^{2} / 2}$ holds in this case also.

The characteristic function of a random vector $X: \Omega \rightarrow R^{k}$ is defined to be the characteristic function of the distribution $P_{X}$ of $X$, that is, the function
$$F_{X}(t)=\hat{P}_{X}(t)=\int_{R^{k}} e^{i(t, x)} P_{X}(d x)=E\left(e^{i(t, X)}\right), \quad t \in R^{k} .$$
Recall that the components $X_{1}, \ldots, X_{k}$ of the random vector $X=\left(X_{1}, \ldots, X_{k}\right)^{\prime}$ are independent if and only if the joint distribution $P_{X}$ is the product measure $P_{X_{1}} \otimes P_{X_{2}} \otimes \ldots \otimes P_{X_{k}}$. This is easily seen to be equivalent to the factorization
$$F_{X}(t)=F_{X_{1}}\left(t_{1}\right) F_{X_{2}}\left(t_{2}\right) \ldots F_{X_{k}}\left(t_{k}\right), \quad \forall t=\left(t_{1}, t_{2}, \ldots, t_{k}\right)^{\prime} \in R^{k} .$$
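To make the factorization criterion concrete, here is a small NumPy sketch (an added illustration, not from the original text) estimating both sides of the factorization for an independent pair and for a correlated pair; the coefficients 0.8 and 0.6 and the test point $t=\left(t_{1}, t_{2}\right)$ are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)                  # independent of x1
y2 = 0.8 * x1 + 0.6 * x2                 # correlated with x1, still N(0, 1)

def cf(samples, t):
    """Empirical characteristic function: Monte Carlo estimate of E[e^{itX}]."""
    return np.exp(1j * t * samples).mean()

t1, t2 = 0.7, -0.4
# Independent components: the joint characteristic function factorizes.
print(np.exp(1j * (t1 * x1 + t2 * x2)).mean(), cf(x1, t1) * cf(x2, t2))
# Dependent components: the factorization fails.
print(np.exp(1j * (t1 * x1 + t2 * y2)).mean(), cf(x1, t1) * cf(y2, t2))
```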
Covariance matrix. The $k \times k$-matrix $C$ defined by $C_{i j}=E\left[\left(X_{i}-m_{i}\right)\left(X_{j}-m_{j}\right)\right]$, where $m_{i}=E X_{i}$, is called the covariance matrix of $X$. Here it is assumed that all relevant expectations exist. Set $m=\left(m_{1}, m_{2}, \ldots, m_{k}\right)^{\prime}$ and note that the matrix $\left(\left(X_{i}-m_{i}\right)\left(X_{j}-m_{j}\right)\right)_{i j}$ can be written as the product $(X-m)(X-m)^{\prime}$ of the column vector $X-m$ with the row vector $(X-m)^{\prime}$. Taking expectations entry by entry, we see that the covariance matrix $C$ of $X$ can also be written as $C=E\left[(X-m)(X-m)^{\prime}\right]$, in complete formal analogy to the one-dimensional case. Clearly $C$ is symmetric. Moreover, for each vector $t=\left(t_{1}, \ldots, t_{k}\right)^{\prime} \in R^{k}$ we have
$$0 \leq \operatorname{Var}\left(t_{1} X_{1}+\ldots+t_{k} X_{k}\right)=\sum_{i j} t_{i} t_{j} \operatorname{Cov}\left(X_{i}, X_{j}\right)=\sum_{i j} C_{i j} t_{i} t_{j}=(C t, t)$$
and it follows that the covariance matrix $C$ is positive semidefinite. Let us note the effect of affine transformations on characteristic functions (1.a.0): if $Y=A X+b$, where $A$ is a real $j \times k$-matrix and $b \in R^{j}$, then $F_{Y}(t)=E\left(e^{i(t, A X+b)}\right)=e^{i(t, b)} E\left(e^{i\left(A^{\prime} t, X\right)}\right)=e^{i(t, b)} F_{X}\left(A^{\prime} t\right)$, for all $t \in R^{j}$.
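The positive semidefiniteness argument can also be checked numerically. Below is a minimal NumPy sketch (an added illustration; the matrix $A$ and the shift $b$ are arbitrary choices) that forms the sample analogue of $C=E\left[(X-m)(X-m)^{\prime}\right]$ and verifies symmetry and $(C t, t) \geq 0$.

```python
import numpy as np

rng = np.random.default_rng(2)
# X = A Z + b for iid standard normal Z: an R^3-valued random vector
# with dependent components (A and b are arbitrary choices).
A = np.array([[1.0, 0.0, 0.0],
              [0.5, 1.0, 0.0],
              [0.2, 0.3, 1.0]])
b = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(100_000, 3)) @ A.T + b

m = X.mean(axis=0)
C = (X - m).T @ (X - m) / len(X)         # sample version of E[(X - m)(X - m)']

print(np.allclose(C, C.T))               # C is symmetric
print(np.all(np.linalg.eigvalsh(C) >= -1e-10))  # positive semidefinite
t = rng.normal(size=3)
print(t @ C @ t >= 0)                    # (Ct, t) = Var(t'X) >= 0
```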

## Calculus Assignment Writing and Exam Help | Theorem

1.b.0 Theorem. Let $T$ be an index set, let $m: T \rightarrow R$ and $C: T \times T \rightarrow R$ be functions, and assume that the matrix $C_{F}:=(C(s, t))_{s, t \in F}$ is self-adjoint and positive semidefinite, for each finite set $F \subseteq T$.

Then there exists a probability $P$ on the product space $(\Omega, \mathcal{F})=\left(R^{T}, \mathcal{B}^{T}\right)$ such that the coordinate maps $X_{t}: \omega \in \Omega \mapsto X_{t}(\omega)=\omega(t)$, $t \in T$, form a Gaussian process $X=\left(X_{t}\right)_{t \in T}:(\Omega, \mathcal{F}, P) \rightarrow\left(R^{T}, \mathcal{B}^{T}\right)$ with mean function $E\left(X_{t}\right)=m(t)$ and covariance function $\operatorname{Cov}\left(X_{s}, X_{t}\right)=C(s, t)$, $s, t \in T$.

Remark. Our choice of $\Omega$ and $X_{t}$ implies that the process $X:(\Omega, \mathcal{F}) \rightarrow\left(R^{T}, \mathcal{B}^{T}\right)$ is the identity map, that is, the path $t \in T \mapsto X_{t}(\omega)$ is the element $\omega \in R^{T}=\Omega$ itself, for each $\omega \in \Omega$.
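As an illustration of the theorem (added here; the choices $m \equiv 0$ and $C(s, t)=\min (s, t)$ on $T=(0,1]$ give standard Brownian motion, and the grid size and sample count are arbitrary), the following NumPy sketch samples the finite dimensional distribution $N\left(m_{F}, C_{F}\right)$ on a grid $F$ and checks the mean and covariance functions empirically.

```python
import numpy as np

rng = np.random.default_rng(3)
grid = np.linspace(0.01, 1.0, 100)      # a finite subset F of T = (0, 1]
C_F = np.minimum.outer(grid, grid)      # C_F = (min(s, t))_{s,t in F}
m_F = np.zeros_like(grid)               # mean function m = 0

# Sample X_F ~ N(m_F, C_F): finite dimensional distributions of Brownian motion.
paths = rng.multivariate_normal(m_F, C_F, size=2_000)

# Empirical checks against the theorem: E(X_t) = m(t), Cov(X_s, X_t) = C(s, t).
print(np.abs(paths.mean(axis=0)).max())                  # near 0
print(np.abs(np.cov(paths, rowvar=False) - C_F).max())   # small
```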

Proof. Fix any linear order on $T$ and use it to order vector components and matrix entries consistently. For finite subsets $F \subseteq G \subseteq T$ let
$$\begin{aligned} \pi_{F}: x &=\left(x_{t}\right)_{t \in T} \in \Omega=R^{T} \rightarrow\left(x_{t}\right)_{t \in F} \in R^{F} \quad \text { and } \\ \pi_{G F}: x &=\left(x_{t}\right)_{t \in G} \in R^{G} \rightarrow\left(x_{t}\right)_{t \in F} \in R^{F} \end{aligned}$$
denote the natural projections and set
$$m_{F}=(m(t))_{t \in F} \in R^{F}, \quad C_{F}=(C(s, t))_{s, t \in F} \quad \text { and } \quad X_{F}=\left(X_{t}\right)_{t \in F} .$$
Let $P$ be any probability on $(\Omega, \mathcal{F})=\left(R^{T}, \mathcal{B}^{T}\right)$. Since $X:(\Omega, \mathcal{F}, P) \rightarrow\left(R^{T}, \mathcal{B}^{T}\right)$ is the identity map, the distribution of $X$ on $\left(R^{T}, \mathcal{B}^{T}\right)$ is the measure $P$ itself and $\pi_{F}(P)$ is the joint distribution of $X_{F}=\left(X_{t}\right)_{t \in F}$ on $R^{F}$. Thus $X$ is a Gaussian process with mean function $m$ and covariance function $C$ on the probability space $(\Omega, \mathcal{F}, P)$ if and only if the finite dimensional distribution $\pi_{F}(P)$ is the Gaussian law $N\left(m_{F}, C_{F}\right)$, for each finite subset $F \subseteq T$. By Kolmogoroff's existence theorem (appendix D.5) such a probability measure on $(\Omega, \mathcal{F})=\left(R^{T}, \mathcal{B}^{T}\right)$ exists if and only if the system of Gaussian laws $\left\{N\left(m_{F}, C_{F}\right): F \subseteq T \text { finite }\right\}$ satisfies the consistency condition
$$\pi_{G F}\left(N\left(m_{G}, C_{G}\right)\right)=N\left(m_{F}, C_{F}\right),$$

for all finite subsets $F \subseteq G \subseteq T$. To see that this is true, consider such sets $F$, $G$ and let $W$ be any random vector in $R^{G}$ such that $P_{W}=N\left(m_{G}, C_{G}\right)$. Then $\pi_{G F}\left(N\left(m_{G}, C_{G}\right)\right)=\pi_{G F}\left(P_{W}\right)=P_{\pi_{G F}(W)}$ and it will thus suffice to show that $Y=\pi_{G F}(W)$ is a Gaussian random vector with law $N\left(m_{F}, C_{F}\right)$ in $R^{F}$, that is, with characteristic function
$$F_{Y}(y)=\exp \left(i\left(y, m_{F}\right)-\frac{1}{2}\left(C_{F} y, y\right)\right), \quad y=\left(y_{t}\right)_{t \in F} \in R^{F} .$$
Since $W$ is a Gaussian random vector with law $N\left(m_{G}, C_{G}\right)$ on $R^{G}$, we have
$$F_{W}(x)=\exp \left(i\left(x, m_{G}\right)-\frac{1}{2}\left(C_{G} x, x\right)\right), \quad x=\left(x_{t}\right)_{t \in G} \in R^{G},$$
and consequently, by (1.a.0), for $y \in R^{F}$,
$$F_{Y}(y)=F_{\pi_{G F}(W)}(y)=F_{W}\left(\pi_{G F}^{\prime} y\right)=\exp \left(i\left(\pi_{G F}^{\prime} y, m_{G}\right)-\frac{1}{2}\left(C_{G} \pi_{G F}^{\prime} y, \pi_{G F}^{\prime} y\right)\right) .$$
Here $\pi_{G F}^{\prime}: R^{F} \rightarrow R^{G}$ is the adjoint map and so $\left(\pi_{G F}^{\prime} y, m_{G}\right)=\left(y, \pi_{G F} m_{G}\right)=\left(y, m_{F}\right)$. Thus it remains to be shown only that $\left(C_{G} \pi_{G F}^{\prime} y, \pi_{G F}^{\prime} y\right)=\left(C_{F} y, y\right)$. Let $y=\left(y_{t}\right)_{t \in F} \in R^{F}$. First we claim that $\pi_{G F}^{\prime} y=z$, where the vector $z=\left(z_{t}\right)_{t \in G} \in R^{G}$ is defined by
$$z_{t}= \begin{cases}y_{t} & \text { if } t \in F \\ 0 & \text { if } t \in G \backslash F .\end{cases}$$
Indeed, if $x=\left(x_{t}\right)_{t \in G} \in R^{G}$ we have $\left(y, \pi_{G F} x\right)=\sum_{t \in F} y_{t} x_{t}=\sum_{t \in G} z_{t} x_{t}=(z, x)$ and so $z=\pi_{G F}^{\prime} y$. Thus $\left(C_{G} \pi_{G F}^{\prime} y, \pi_{G F}^{\prime} y\right)=\left(C_{G} z, z\right)=\sum_{s, t \in G} C(s, t) z_{s} z_{t}=\sum_{s, t \in F} C(s, t) y_{s} y_{t}=\left(C_{F} y, y\right)$.
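The consistency condition proved above can also be checked by simulation: projecting a Gaussian law on $R^{G}$ onto the coordinates in $F$ should reproduce $N\left(m_{F}, C_{F}\right)$, i.e. the sub-vector of the mean and the sub-matrix of the covariance. A minimal NumPy sketch follows (an added illustration; the index sets, $m_{G}$, and the factor $A$ are arbitrary choices).

```python
import numpy as np

rng = np.random.default_rng(4)
# G has four indices; F keeps coordinates 1 and 3, i.e. Y = pi_GF(W).
m_G = np.array([0.0, 1.0, -1.0, 2.0])
A = rng.normal(size=(4, 4))
C_G = A @ A.T                            # symmetric positive semidefinite

W = rng.multivariate_normal(m_G, C_G, size=300_000)
F = [1, 3]
Y = W[:, F]                              # project onto the coordinates in F

# The law of Y should be N(m_F, C_F) with m_F = m_G[F] and C_F = C_G[F][:, F].
print(Y.mean(axis=0), m_G[F])            # empirical mean vs m_F
print(np.cov(Y, rowvar=False))           # empirical covariance ...
print(C_G[np.ix_(F, F)])                 # ... vs C_F
```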
