Stochastic Calculus Assignment and Exam Help | GAUSSIAN PROCESSES

Stochastic calculus is the branch of mathematics that extends the operations of calculus, namely differentiation and integration, to stochastic processes. Ordinary calculus covers limits, differential calculus (a theory of rates of change that lets functions, velocities, accelerations and slopes of curves be discussed with one common set of symbols) and integral calculus (a general method for defining and computing areas, volumes and the like). Stochastic calculus carries these operations over to processes driven by randomness, most notably Brownian motion, through constructions such as the Itô integral.

my-assignmentexpert™ provides stochastic calculus assignment help: submit your assignment requirements for free, pay only after you are satisfied, and receive a full refund if the grade is below 80%, so you can order with complete peace of mind. Our team of professional master's and PhD writers delivers every order reliably and on time, with 100% originality guaranteed. my-assignmentexpert™ offers the highest-quality stochastic calculus assignment help, with service covering North America, Europe, Australia and other regions. As for pricing, we take students' financial situations into account and offer the most reasonable prices while still guaranteeing the quality of the work. Because stochastic calculus assignments come in many varieties, fluctuate considerably in difficulty, and mostly have no specific word-count requirements, the price of stochastic calculus assignment help is not fixed; a quote is usually given after a subject expert has reviewed the assignment requirements. The difficulty of the assignment and its deadline also have a large influence on the price.

Want to know the exact price for your assignment? Place a free order, and after an expert in the relevant subject has reviewed the specific requirements, a quote will be provided within 1 to 3 hours. The expert's quote can be several times lower than the listed prices.

my-assignmentexpert™ safeguards your studies abroad. Having already built a solid reputation for economics assignment help, we guarantee reliable, high-quality and original calculus writing services. Our experts have extensive experience with stochastic calculus, so stochastic calculus assignments of every kind pose no difficulty for them.

The stochastic calculus assignment help we provide covers a wide range of topics, including but not limited to:

  • Stochastic partial differential equations
  • Stochastic control
  • Itô integral
  • Black–Scholes–Merton option pricing formula
  • Fokker–Planck equation
  • Brownian motion

Calculus Assignment and Exam Help | Gaussian random variables in R^k

1. The normal distribution $N=N\left(\mu, \sigma^{2}\right)$ on $R$ with mean $\mu$ and variance $\sigma^{2}$ is defined by
$$
N(d x)=\frac{1}{\sigma \sqrt{2 \pi}} \exp \left(-\frac{(x-\mu)^{2}}{2 \sigma^{2}}\right) d x
$$
The characteristic function (Fourier transform) of this distribution is given by
$$
\hat{N}(t)=\int_{R} e^{i t x} N(d x)=\exp \left(i \mu t-\frac{1}{2} \sigma^{2} t^{2}\right), \quad t \in R
$$
In the case of a mean zero normal distribution $N=N\left(0, \sigma^{2}\right)$ this becomes
$$
N(d x)=\frac{1}{\sigma \sqrt{2 \pi}} e^{-x^{2} / 2 \sigma^{2}} d x, \quad \text { and } \quad \hat{N}(t)=e^{-\sigma^{2} t^{2} / 2}, \quad t \in R
$$
and the standard normal distribution $N(0,1)$ satisfies
$$
N(0,1)(d x)=\frac{1}{\sqrt{2 \pi}} e^{-x^{2} / 2} d x, \quad \text { and } \quad \widehat{N(0,1)}(t)=e^{-t^{2} / 2}, \quad t \in R .
$$
For $\sigma^{2}=0$ the distribution $N\left(0, \sigma^{2}\right)=N(0,0)$ is not defined by the above density but is interpreted to be the point measure $N(0,0)=\epsilon_{0}$ concentrated at 0. With this interpretation the formula for the characteristic function $\widehat{N(0,0)}(t)=\hat{\epsilon}_{0}(t)=1=e^{-\sigma^{2} t^{2} / 2}$ holds in this case also.
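
As a quick sanity check on the formula for $\hat{N}(t)$, one can compare a Monte Carlo estimate of $E\left(e^{i t X}\right)$ for $X \sim N\left(\mu, \sigma^{2}\right)$ against the closed form $\exp \left(i \mu t-\frac{1}{2} \sigma^{2} t^{2}\right)$. The following is a minimal Python/NumPy sketch; the sample size and the particular values of $\mu$, $\sigma$ and $t$ are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 2.0                      # mean and standard deviation, so the variance is sigma**2
x = rng.normal(mu, sigma, size=200_000)   # samples of X ~ N(mu, sigma^2)

def cf_empirical(t):
    """Monte Carlo estimate of the characteristic function E[exp(i*t*X)]."""
    return np.mean(np.exp(1j * t * x))

def cf_closed_form(t):
    """Closed form exp(i*mu*t - sigma^2*t^2/2) of the N(mu, sigma^2) characteristic function."""
    return np.exp(1j * mu * t - 0.5 * sigma**2 * t**2)

for t in (0.0, 0.5, 1.0):
    # The two values agree up to Monte Carlo error.
    print(t, cf_empirical(t), cf_closed_form(t))
```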

The characteristic function of a random vector $X: \Omega \rightarrow R^{k}$ is defined to be the characteristic function of the distribution $P_{X}$ of $X$, that is, the function
$$
F_{X}(t)=\hat{P}_{X}(t)=\int_{R^{k}} e^{i(t, x)} P_{X}(d x)=E\left(e^{i(t, X)}\right), \quad t \in R^{k} .
$$
Recall that the components $X_{1}, \ldots, X_{k}$ of the random vector $X=\left(X_{1}, \ldots, X_{k}\right)^{\prime}$ are independent if and only if the joint distribution $P_{X}$ is the product measure $P_{X_{1}} \otimes P_{X_{2}} \otimes \ldots \otimes P_{X_{k}}$. This is easily seen to be equivalent to the factorization
$$
F_{X}(t)=F_{X_{1}}\left(t_{1}\right) F_{X_{2}}\left(t_{2}\right) \ldots F_{X_{k}}\left(t_{k}\right), \quad \forall t=\left(t_{1}, t_{2}, \ldots, t_{k}\right)^{\prime} \in R^{k} .
$$
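
The equivalence between independence of the components and factorization of the characteristic function can also be illustrated numerically: for a vector with independent standard normal components the empirical joint characteristic function approximately equals the product of the marginal ones, while for correlated components it does not. A small Python/NumPy sketch, with an arbitrarily chosen correlation of $0.8$ for the dependent case:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
t = np.array([0.7, -1.3])                          # an arbitrary point t = (t_1, t_2)

# Independent components: the joint characteristic function factors.
X = rng.normal(size=(n, 2))                        # X_1, X_2 independent N(0, 1)
joint = np.mean(np.exp(1j * (X @ t)))              # E[exp(i(t, X))]
product = np.mean(np.exp(1j * t[0] * X[:, 0])) * np.mean(np.exp(1j * t[1] * X[:, 1]))
print("independent:", joint, "vs", product)        # approximately equal

# Correlated components: the factorization fails.
C = np.array([[1.0, 0.8], [0.8, 1.0]])
Y = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=n)
joint_c = np.mean(np.exp(1j * (Y @ t)))
product_c = np.mean(np.exp(1j * t[0] * Y[:, 0])) * np.mean(np.exp(1j * t[1] * Y[:, 1]))
print("correlated: ", joint_c, "vs", product_c)    # visibly different
```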
Covariance matrix. The $k \times k$-matrix $C$ defined by $C_{i j}=E\left[\left(X_{i}-m_{i}\right)\left(X_{j}-m_{j}\right)\right]$, where $m_{i}=E X_{i}$, is called the covariance matrix $C$ of $X$. Here it is assumed that all relevant expectations exist. Set $m=\left(m_{1}, m_{2}, \ldots, m_{k}\right)^{\prime}$ and note that the matrix $\left(\left(X_{i}-m_{i}\right)\left(X_{j}-m_{j}\right)\right)_{i j}$ can be written as the product $(X-m)(X-m)^{\prime}$ of the column vector $(X-m)$ with the row vector $(X-m)^{\prime}$. Taking expectations entry by entry, we see that the covariance matrix $C$ of $X$ can also be written as $C=E\left[(X-m)(X-m)^{\prime}\right]$, in complete formal analogy to the covariance in the one dimensional case. Clearly $C$ is symmetric. Moreover, for each vector $t=\left(t_{1}, \ldots, t_{k}\right)^{\prime} \in R^{k}$ we have
$$
0 \leq \operatorname{Var}\left(t_{1} X_{1}+\ldots+t_{k} X_{k}\right)=\sum_{i j} t_{i} t_{j} \operatorname{Cov}\left(X_{i}, X_{j}\right)=\sum_{i j} C_{i j} t_{i} t_{j}=(C t, t)
$$
and it follows that the covariance matrix $C$ is positive semidefinite. Let us note the effect of affine transformations on characteristic functions: if $Y=A X+b$, where $A$ is a linear map from $R^{k}$ to $R^{n}$ and $b \in R^{n}$, then $F_{Y}(t)=E\left(e^{i(t, A X+b)}\right)=e^{i(t, b)} F_{X}\left(A^{\prime} t\right)$ for all $t \in R^{n}$ (this is the fact (1.a.0) used in the proof below).
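
To illustrate the identity $C=E\left[(X-m)(X-m)^{\prime}\right]$ and the positive semidefiniteness of $C$, the following Python/NumPy sketch estimates the covariance matrix of a simulated random vector entry by entry, checks symmetry and the sign of the eigenvalues, and verifies that $(C t, t)=\operatorname{Var}\left(t_{1} X_{1}+\ldots+t_{k} X_{k}\right)$ for a randomly chosen $t$; the mixing matrix $A$ used to create dependence between the components is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 100_000, 3

# Simulate a random vector X in R^k whose components are dependent (mixed by a matrix A).
A = rng.normal(size=(k, k))                         # arbitrary mixing matrix, for illustration only
X = rng.normal(size=(n, k)) @ A.T

m = X.mean(axis=0)                                  # componentwise means m_i = E[X_i]
D = X - m
C = (D[:, :, None] * D[:, None, :]).mean(axis=0)    # C = E[(X - m)(X - m)'], estimated entry by entry

print(np.allclose(C, C.T))                          # C is symmetric
print(np.linalg.eigvalsh(C).min() >= -1e-10)        # all eigenvalues >= 0: positive semidefinite

t = rng.normal(size=k)
print((C @ t) @ t, np.var(X @ t))                   # (Ct, t) equals Var(t_1 X_1 + ... + t_k X_k)
```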

Calculus Assignment and Exam Help | Theorem

1.b.0 Theorem. Let $T$ be an index set, $m: T \rightarrow R, C: T \times T \rightarrow R$ functions and assume that the matrix $C_{F}:=(C(s, t))_{s, t \in F}$ is selfadjoint and positive semidefinite, for each finite set $F \subseteq T$.

Then there exists a probability $P$ on the product space $(\Omega, \mathcal{F})=\left(R^{T}, \mathcal{B}^{T}\right)$ such that the coordinate maps $X_{t}: \omega \in \Omega \mapsto X_{t}(\omega)=\omega(t), t \in T$, form a Gaussian process $X=\left(X_{t}\right)_{t \in T}:(\Omega, \mathcal{F}, P) \rightarrow\left(R^{T}, \mathcal{B}^{T}\right)$ with mean function $E\left(X_{t}\right)=m(t)$ and covariance function $\operatorname{Cov}\left(X_{s}, X_{t}\right)=C(s, t), s, t \in T$.

Remark. Our choice of $\Omega$ and $X_{t}$ implies that the process $X:(\Omega, \mathcal{F}) \rightarrow\left(R^{T}, \mathcal{B}^{T}\right)$ is the identity map, that is, the path $t \in T \mapsto X_{t}(\omega)$ is the element $\omega \in R^{T}=\Omega$ itself, for each $\omega \in \Omega$.
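
In practice the theorem is exactly the recipe one follows to simulate a Gaussian process on a finite index set $F$: form the mean vector $m_{F}$ and the covariance matrix $C_{F}$ and draw from $N\left(m_{F}, C_{F}\right)$. Below is a minimal Python/NumPy sketch using Brownian motion, with $m(t)=0$ and $C(s, t)=\min (s, t)$, as the example; the function name sample_gaussian_process and the choice of time grid are illustrative only.

```python
import numpy as np

def sample_gaussian_process(times, m, C, n_paths=3, seed=0):
    """Draw n_paths samples of (X_t)_{t in F} ~ N(m_F, C_F) for the finite index set F = times."""
    rng = np.random.default_rng(seed)
    mean = np.array([m(t) for t in times])                      # the vector m_F
    cov = np.array([[C(s, t) for t in times] for s in times])   # the matrix C_F
    return rng.multivariate_normal(mean, cov, size=n_paths)

# Brownian motion on a finite grid: mean function m(t) = 0 and covariance C(s, t) = min(s, t).
times = np.linspace(0.01, 1.0, 50)
paths = sample_gaussian_process(times, m=lambda t: 0.0, C=min)
print(paths.shape)   # (3, 50): three sampled paths, each evaluated on the 50-point grid
```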

Proof. Fix any linear order on $T$ and use it to order vector components and matrix entries consistently. For finite subsets $F \subseteq G \subseteq T$ let
$$
\begin{aligned}
\pi_{F}: x &=\left(x_{t}\right)_{t \in T} \in \Omega=R^{T} \rightarrow\left(x_{t}\right)_{t \in F} \in R^{F} \quad \text { and } \\
\pi_{G F}: x &=\left(x_{t}\right)_{t \in G} \in R^{G} \rightarrow\left(x_{t}\right)_{t \in F} \in R^{F}
\end{aligned}
$$
denote the natural projections and set
$$
m_{F}=(m(t))_{t \in F} \in R^{F}, \quad C_{F}=(C(s, t))_{s, t \in F} \quad \text { and } \quad X_{F}=\left(X_{t}\right)_{t \in F} .
$$
Let $P$ be any probability on $(\Omega, \mathcal{F})=\left(R^{T}, \mathcal{B}^{T}\right)$. Since $X:(\Omega, \mathcal{F}, P) \rightarrow\left(R^{T}, \mathcal{B}^{T}\right)$ is the identity map, the distribution of $X$ on $\left(R^{T}, \mathcal{B}^{T}\right)$ is the measure $P$ itself and $\pi_{F}(P)$ is the joint distribution of $X_{F}=\left(X_{t}\right)_{t \in F}$ on $R^{F}$. Thus $X$ is a Gaussian process with mean function $m$ and covariance function $C$ on the probability space $(\Omega, \mathcal{F}, P)$ if and only if the finite dimensional distribution $\pi_{F}(P)$ is the Gaussian law $N\left(m_{F}, C_{F}\right)$, for each finite subset $F \subseteq T$. By Kolmogoroff's existence theorem (appendix D.5) such a probability measure on $(\Omega, \mathcal{F})=\left(R^{T}, \mathcal{B}^{T}\right)$ exists if and only if the system of Gaussian laws $\left\{N\left(m_{F}, C_{F}\right): F \subseteq T \text { finite }\right\}$ satisfies the consistency condition
$$
\pi_{G F}\left(N\left(m_{G}, C_{G}\right)\right)=N\left(m_{F}, C_{F}\right),
$$

for all finite subsets $F \subseteq G \subseteq T$. To see that this is true, consider such sets $F$, $G$ and let $W$ be any random vector in $R^{G}$ such that $P_{W}=N\left(m_{G}, C_{G}\right)$. Then $\pi_{G F}\left(N\left(m_{G}, C_{G}\right)\right)=\pi_{G F}\left(P_{W}\right)=P_{\pi_{G F}(W)}$ and it will thus suffice to show that $Y=\pi_{G F}(W)$ is a Gaussian random vector with law $N\left(m_{F}, C_{F}\right)$ in $R^{F}$, that is, with characteristic function
$$
F_{Y}(y)=\exp \left(i\left(y, m_{F}\right)-\frac{1}{2}\left(C_{F} y, y\right)\right), \quad y=\left(y_{t}\right)_{t \in F} \in R^{F} .
$$
Since $W$ is a Gaussian random vector with law $N\left(m_{G}, C_{G}\right)$ on $R^{G}$, we have
$$
F_{W}(x)=\exp \left(i\left(x, m_{G}\right)-\frac{1}{2}\left(C_{G} x, x\right)\right), \quad x=\left(x_{t}\right)_{t \in G} \in R^{G},
$$
and consequently, by (1.a.0), for $y \in R^{F}$,
$$
F_{Y}(y)=F_{\pi_{G F}(W)}(y)=F_{W}\left(\pi_{G F}^{\prime} y\right)=\exp \left(i\left(\pi_{G F}^{\prime} y, m_{G}\right)-\frac{1}{2}\left(C_{G} \pi_{G F}^{\prime} y, \pi_{G F}^{\prime} y\right)\right) .
$$
Here $\pi_{G F}^{\prime}: R^{F} \rightarrow R^{G}$ is the adjoint map and so $\left(\pi_{G F}^{\prime} y, m_{G}\right)=\left(y, \pi_{G F} m_{G}\right)=\left(y, m_{F}\right)$. Thus it remains to be shown only that $\left(C_{G} \pi_{G F}^{\prime} y, \pi_{G F}^{\prime} y\right)=\left(C_{F} y, y\right)$. Let $y=\left(y_{t}\right)_{t \in F} \in R^{F}$. First we claim that $\pi_{G F}^{\prime} y=z$, where the vector $z=\left(z_{t}\right)_{t \in G} \in R^{G}$ is defined by
$$
z_{t}= \begin{cases} y_{t} & \text { if } t \in F \\ 0 & \text { if } t \in G \setminus F \end{cases} \qquad \forall y=\left(y_{t}\right)_{t \in F} \in R^{F} .
$$
Indeed, if $x=\left(x_{t}\right)_{t \in G} \in R^{G}$ we have $\left(y, \pi_{G F} x\right)=\sum_{t \in F} y_{t} x_{t}=\sum_{t \in G} z_{t} x_{t}=(z, x)$ and so $z=\pi_{G F}^{\prime} y$. Thus $\left(C_{G} \pi_{G F}^{\prime} y, \pi_{G F}^{\prime} y\right)=\left(C_{G} z, z\right)=\sum_{s, t \in G} C(s, t) z_{s} z_{t}=\sum_{s, t \in F} C(s, t) y_{s} y_{t}=\left(C_{F} y, y\right)$.
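
The consistency condition can also be checked numerically: if $W \sim N\left(m_{G}, C_{G}\right)$ on $R^{G}$, then the projection $\pi_{G F}(W)$ should have mean $m_{F}$ and covariance $C_{F}$, that is, the corresponding sub-vector and sub-matrix. A Python/NumPy sketch under arbitrary illustrative choices of $G$, $F$, $m_{G}$ and $C_{G}$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Finite index sets F ⊆ G, encoded here by coordinate positions: G = {0, 1, 2, 3}, F = {1, 3}.
F_idx = [1, 3]

m_G = np.array([0.0, 1.0, -0.5, 2.0])     # an arbitrary mean vector m_G
A = rng.normal(size=(4, 4))
C_G = A @ A.T                             # an arbitrary positive semidefinite covariance C_G

W = rng.multivariate_normal(m_G, C_G, size=500_000)   # W ~ N(m_G, C_G) on R^G
Y = W[:, F_idx]                                        # Y = pi_GF(W): keep only the F coordinates

m_F = m_G[F_idx]                          # sub-vector of the means
C_F = C_G[np.ix_(F_idx, F_idx)]           # sub-matrix of the covariances
print(Y.mean(axis=0), m_F)                             # empirical mean of Y is close to m_F
print(np.cov(Y, rowvar=False))                         # empirical covariance of Y is close to C_F
print(C_F)
```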

