Stochastic calculus assignment help | Ito's Formula

Stochastic calculus is a branch of mathematics that studies the differentiation and integration of functions, together with related concepts and applications. It is a foundational subject whose main topics include limits, differential calculus, integral calculus, and their applications. Differential calculus, which includes the operation of taking derivatives, is a theory of rates of change; it allows functions, velocity, acceleration, the slope of a curve, and so on to be discussed in one common notation. Integral calculus, which includes the operation of integration, provides a general method for defining and computing areas, volumes, and the like.

my-assignmentexpert™ offers stochastic calculus assignment help: submit your requirements for free, pay only when satisfied, and receive a full refund for grades below 80%, so there is no risk. Our professional team of writers with master's and doctoral degrees delivers every order reliably and on time, with a 100% originality guarantee. my-assignmentexpert™ provides the highest-quality stochastic calculus assignment help, serving North America, Europe, Australia, and other regions. As for pricing, we take students' budgets into account and offer the most reasonable rates while maintaining quality. Because stochastic calculus assignments vary widely in type and difficulty, and most have no fixed word count, the price of stochastic calculus assignment help is not fixed; a quote is usually given after a subject expert has reviewed the assignment requirements. The difficulty and the deadline also strongly affect the price.

Want to know the exact price for your assignment? Place a free order, and within 1-3 hours of reviewing your specific requirements an expert in the relevant subject will quote a price. The expert's quote can be several times lower than the listed prices.

my-assignmentexpert™ safeguards your studies abroad. We have established a solid reputation in assignment writing and guarantee reliable, high-quality, original calculus writing services. Our experts are highly experienced in stochastic calculus, so any related assignment poses no difficulty.

Our stochastic calculus writing services cover a wide range of topics, including but not limited to:

  • Stochastic partial differential equations
  • Stochastic control
  • Ito integrals
  • Black–Scholes–Merton option pricing formula
  • Fokker–Planck equation
  • Brownian motion

Stochastic calculus assignment help | Ito's formula

3.a Ito’s formula. Let $X=\left(X^{1}, \ldots, X^{d}\right)$ be an $R^{d}$-valued process with continuously differentiable paths and consider the process $Y_{t}=f\left(X_{t}\right)$, where $f \in C^{2}\left(R^{d}\right)$. Let us write
$$
D_{j} f=\frac{\partial f}{\partial x_{j}} \quad \text { and } \quad D_{i j} f=\frac{\partial^{2} f}{\partial x_{i} \partial x_{j}}
$$
The process $Y$ has continuously differentiable paths with
$$
\frac{d}{d t} f\left(X_{t}(\omega)\right)=\sum_{j=1}^{d} D_{j} f\left(X_{t}(\omega)\right) \frac{d}{d t} X_{t}^{j}(\omega) .
$$
Fixing $\omega \in \Omega$ and integrating yields
$$
f\left(X_{t}(\omega)\right)-f\left(X_{0}(\omega)\right)=\sum_{j=1}^{d} \int_{0}^{t} D_{j} f\left(X_{s}(\omega)\right) \frac{d}{d s} X_{s}^{j}(\omega) d s
$$
where this integral is to be interpreted pathwise. Written as
$$
f\left(X_{t}\right)-f\left(X_{0}\right)=\sum_{j=1}^{d} \int_{0}^{t} D_{j} f\left(X_{s}\right) d X_{s}^{j}
$$
this equation remains true if $X$ is a continuous process of bounded variation. The situation becomes more complicated if the process $X$ is a continuous semimartingale, since its paths are then in general no longer of bounded variation on finite intervals. In that case a new term appears on the right-hand side of (0) (Ito's formula). We will give a very explicit derivation which shows clearly where the new term comes from.
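The appearance of the extra term can be previewed numerically. Below is a minimal sketch (our own illustration, not part of the text) for a Brownian path $W$ and $f(x)=x^{2}$: the first-order sum alone undershoots $f(W_{t})-f(W_{0})$ by roughly the quadratic-variation correction $\frac{1}{2}\int_{0}^{t} f''(W_{s})\,ds$.

```python
import numpy as np

# Minimal sketch (our own illustration): for Brownian motion W and f(x) = x^2,
# the first-order sum  Σ f'(W_{t_{k-1}}) ΔW_k  alone does not reproduce
# f(W_t) - f(W_0); adding (1/2) Σ f''(W_{t_{k-1}}) Δt closes the gap.
rng = np.random.default_rng(0)
n, t = 200_000, 1.0
dt = t / n
dW = rng.normal(0.0, np.sqrt(dt), n)             # Brownian increments
W = np.concatenate(([0.0], np.cumsum(dW)))       # the path on the grid

f = lambda x: x ** 2                             # f'(x) = 2x, f''(x) = 2
first_order = np.sum(2.0 * W[:-1] * dW)          # Σ f'(W_{k-1}) ΔW_k
correction = 0.5 * np.sum(2.0 * np.full(n, dt))  # (1/2) Σ f''(W_{k-1}) Δt = t

lhs = f(W[-1]) - f(W[0])
print(lhs - first_order)                 # ≈ 1.0, the missing Ito term
print(lhs - (first_order + correction))  # ≈ 0.0
```

The gap `lhs - first_order` equals $\sum_k (\Delta W_k)^2$ exactly here (by the identity $b^2-a^2-2a(b-a)=(b-a)^2$), which is precisely the quantity that survives in the limit for a semimartingale but vanishes for bounded-variation paths.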

(a) Assume first that $F, K$ are compact sets such that $F \subseteq K^{o} \subseteq K \subseteq G$ and the range of $X$ is contained in $F$. Fix $t \geq 0$ and let $\left(\Delta_{n}\right)$ be a sequence of partitions of the interval $[0, t]$ such that $\left|\Delta_{n}\right| \rightarrow 0$, as $n \uparrow \infty$. For $n \geq 1$ write $\Delta_{n}=\left\{0=t_{0}^{n}<t_{1}^{n}<\cdots<t_{k_{n}}^{n}=t\right\}$. Let $\epsilon>0$ be such that $B_{\epsilon}(x) \subseteq K$, for all $x \in F$, and set
$$
\Omega_{m}=\left\{\omega \in \Omega \mid\left|X_{t_{k}^{n}}(\omega)-X_{t_{k-1}^{n}}(\omega)\right|<\epsilon,\ \forall n \geq m,\ 1 \leq k \leq k_{n}\right\} .
$$
If $\omega \in \Omega$, then the path $s \in[0, t] \mapsto X_{s}(\omega)$ is uniformly continuous and so $\omega \in \Omega_{m}$, for some $m \geq 1$. Thus $\Omega_{m} \uparrow \Omega$, as $m \uparrow \infty$. It will thus suffice to show that (1) holds $P$-a.s. on the set $\Omega_{m}$, for each $m \geq 1$.

Fix $m \geq 1$. If $\omega \in \Omega_{m}$, then $X_{t_{k}^{n}}(\omega) \in B_{\epsilon}\left(X_{t_{k-1}^{n}}(\omega)\right)$ and hence the line segment from $X_{t_{k-1}^{n}}(\omega)$ to $X_{t_{k}^{n}}(\omega)$ is contained in the ball $B_{\epsilon}\left(X_{t_{k-1}^{n}}(\omega)\right) \subseteq K$, for all $n \geq m$ and all $1 \leq k \leq k_{n}$. Let $n \geq m$ and write
$$
f\left(X_{t}\right)-f\left(X_{0}\right)=\sum_{k=1}^{k_{n}}\left[f\left(X_{t_{k}^{n}}\right)-f\left(X_{t_{k-1}^{n}}\right)\right] .
$$
Consider $k \in\left\{1, \ldots, k_{n}\right\}$ and $\omega \in \Omega_{m}$. A second-order Taylor expansion of $f(x)$ centered at $x=X_{t_{k-1}^{n}}(\omega)$ yields
$$
\begin{aligned}
f\left(X_{t_{k}^{n}}\right)-f\left(X_{t_{k-1}^{n}}\right)=& \sum_{j=1}^{d} D_{j} f\left(X_{t_{k-1}^{n}}\right)\left(X_{t_{k}^{n}}^{j}-X_{t_{k-1}^{n}}^{j}\right) \\
&+\frac{1}{2} \sum_{i, j=1}^{d} D_{i j} f\left(\xi_{n k}\right)\left(X_{t_{k}^{n}}^{i}-X_{t_{k-1}^{n}}^{i}\right)\left(X_{t_{k}^{n}}^{j}-X_{t_{k-1}^{n}}^{j}\right)
\end{aligned}
$$
where the point $\xi_{n k}=\xi_{n k}(\omega)$ is on the line segment from $X_{t_{k}^{n}}(\omega)$ to $X_{t_{k-1}^{n}}(\omega)$. Note that this line segment is contained in $K$ and that $D_{i j} f$ is uniformly continuous on $K$. Substituting the above expansion into (3) and interchanging the order of summation, we can write
$f\left(X_{t}\right)-f\left(X_{0}\right)=\sum_{j=1}^{d} A_{j}^{n}+\frac{1}{2} \sum_{i, j=1}^{d} B_{i j}^{n}$,
where $\quad A_{j}^{n}=\sum_{k=1}^{k_{n}} D_{j} f\left(X_{t_{k-1}^{n}}\right)\left(X_{t_{k}^{n}}^{j}-X_{t_{k-1}^{n}}^{j}\right)$
and $\quad B_{i j}^{n}=\sum_{k=1}^{k_{n}} D_{i j} f\left(\xi_{n k}\right)\left(X_{t_{k}^{n}}^{i}-X_{t_{k-1}^{n}}^{i}\right)\left(X_{t_{k}^{n}}^{j}-X_{t_{k-1}^{n}}^{j}\right)$,
at all points $\omega \in \Omega_{m}$. According to 2.e.1 we have $A_{j}^{n} \rightarrow \int_{0}^{t} D_{j} f\left(X_{s}\right) d X_{s}^{j}$ in probability, as $n \uparrow \infty$. Since limits in probability are uniquely determined $P$-a.s., it will now suffice to show that $B_{i j}^{n} \rightarrow \int_{0}^{t} D_{i j} f\left(X_{s}\right) d\left\langle X^{i}, X^{j}\right\rangle_{s}$ in probability on the set $\Omega_{m}$, as $n \uparrow \infty$. To see this we will compare $B_{i j}^{n}$ to the similar term
$$
\tilde{B}_{i j}^{n}=\sum_{k=1}^{k_{n}} D_{i j} f\left(X_{t_{k-1}^{n}}\right)\left(X_{t_{k}^{n}}^{i}-X_{t_{k-1}^{n}}^{i}\right)\left(X_{t_{k}^{n}}^{j}-X_{t_{k-1}^{n}}^{j}\right),
$$
which is known to converge to $\int_{0}^{t} D_{i j} f\left(X_{s}\right) d\left\langle X^{i}, X^{j}\right\rangle_{s}$ in probability (2.e.5).
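On a simulated Brownian path this comparison can be seen directly. The following sketch (our own illustration; the midpoint of each increment stands in for $\xi_{nk}$) shows $B^{n}$, $\tilde{B}^{n}$, and the Riemann sum for $\int_{0}^{t} D_{ij} f\left(X_{s}\right) d\left\langle X^{i}, X^{j}\right\rangle_{s}$ nearly coinciding in one dimension.

```python
import numpy as np

# Sketch (our own illustration, d = 1): with X = W a Brownian path and
# D_{11} f = cos (i.e. f(x) = -cos x), the sum B^n with an intermediate
# point ξ_{nk} (here the midpoint) and the left-endpoint sum B̃^n are
# close to each other and to the Riemann sum for ∫_0^t cos(W_s) d⟨W⟩_s.
rng = np.random.default_rng(1)
n, t = 100_000, 1.0
dt = t / n
dW = rng.normal(0.0, np.sqrt(dt), n)
W = np.concatenate(([0.0], np.cumsum(dW)))

fpp = np.cos                               # D_{11} f
xi = 0.5 * (W[:-1] + W[1:])                # a point on each segment
B_n = np.sum(fpp(xi) * dW ** 2)            # B^n with ξ_{nk} = midpoint
B_tilde = np.sum(fpp(W[:-1]) * dW ** 2)    # B̃^n with left endpoints
integral = np.sum(fpp(W[:-1]) * dt)        # ∫_0^t cos(W_s) ds, d⟨W⟩_s = ds
print(B_n, B_tilde, integral)              # all three nearly equal
```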

Stochastic calculus assignment help | Differential notation

3.b Differential notation. Let us introduce some purely symbolic but nonetheless useful notation. If $X \in \mathcal{S}$ we write $d Z_{t}=H_{t} d X_{t}$ or more briefly $d Z=H d X$ if and only if $H \in L(X)$ and $Z_{t}=Z_{0}+\int_{0}^{t} H_{s} d X_{s}$, for all $t \geq 0$, equivalently iff $H \in L(X)$ and $Z=Z_{0}+H \cdot X$.

The equality $d Z=0$ is to be interpreted as $d Z=0 d X$, for some $X \in \mathcal{S}$. Clearly then $d Z=0$ if and only if $Z_{t}=Z_{0}, t \geq 0$, that is, if $Z$ is a stochastic constant. By the associative law 2.d.2
$$
d Z=H d X \text { and } d X=K d Y \quad \Rightarrow \quad d Z=H K d Y .
$$
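A one-path numerical check of this associative law (our own sketch, using left-endpoint sums as in the Ito integral; the integrands $H$, $K$ are arbitrary bounded functions of time chosen for illustration):

```python
import numpy as np

# Sketch (our own illustration): the associative law
# dZ = H dX and dX = K dY  ⇒  dZ = HK dY,
# checked with left-endpoint (Ito-style) sums on one simulated path.
rng = np.random.default_rng(2)
n = 50_000
dY = rng.normal(0.0, np.sqrt(1.0 / n), n)      # increments of an integrator Y
s = np.linspace(0.0, 1.0, n + 1)[:-1]          # left endpoints of each interval

H, K = np.sin(s), np.exp(-s)                   # two bounded integrands
dX = K * dY                                    # dX = K dY
Z_direct = np.cumsum(H * dX)                   # Z from dZ = H dX
Z_combined = np.cumsum(H * K * dY)             # Z from dZ = (HK) dY
print(np.max(np.abs(Z_direct - Z_combined)))   # tiny (float rounding only)
```

At the level of discrete sums the two constructions are the same multiplication regrouped, which is exactly why the symbolic rule is consistent; the substance of 2.d.2 is that this survives the passage to the limit.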
According to 2.d.1.(f), $H \in L(X), K \in L(Y), Z=H \bullet X$ and $W=K \bullet Y$ imply that $H K \in L_{l o c}^{1}(\langle X, Y\rangle)$ and $\langle H \bullet X, K \bullet Y\rangle_{t}=\int_{0}^{t} H_{s} K_{s} d\langle X, Y\rangle_{s}, t \geq 0$. In differential notation this can be written as
$$
d Z=H d X, \text { and } d W=K d Y \quad \Rightarrow \quad d\langle Z, W\rangle=H K d\langle X, Y\rangle .
$$
If we define the product $d Z d W$ of the stochastic differentials $d Z$ and $d W$ as
$$
d Z d W=d\langle Z, W\rangle
$$
then (1) assumes the form $d Z=H d X, d W=K d Y \Rightarrow d Z d W=H K d X d Y$. In particular $d Z=H d X \Rightarrow d\langle Z\rangle=(d Z)^{2}=H^{2}(d X)^{2}=H^{2} d\langle X\rangle$. There is no analogue for the differential products $d X d Y$ in classical integration theory on the line: If $X$ and $Y$ are locally of bounded variation then $\langle X, Y\rangle=0$.
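These symbolic rules can be illustrated on a simulated path (our own sketch): sums of $(\Delta W)^{2}$ converge to $t$, while products of increments involving a smooth (hence bounded-variation) path sum to nearly zero.

```python
import numpy as np

# Sketch (our own illustration): the rules (dW)^2 = dt, dA dW = 0 and
# (dA)^2 = 0 for a smooth (bounded-variation) path A, via increment sums.
rng = np.random.default_rng(3)
n, t = 100_000, 1.0
dt = t / n
dW = rng.normal(0.0, np.sqrt(dt), n)     # Brownian increments
s = np.linspace(0.0, t, n + 1)
dA = np.diff(np.sin(s))                  # increments of the smooth path A = sin

qv_W = np.sum(dW ** 2)                   # Σ (ΔW)^2 ≈ t : ⟨W⟩_t = t
cov_AW = np.sum(dA * dW)                 # Σ ΔA ΔW ≈ 0  : ⟨A, W⟩_t = 0
qv_A = np.sum(dA ** 2)                   # Σ (ΔA)^2 ≈ 0 : ⟨A⟩_t = 0
print(qv_W, cov_AW, qv_A)
```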

The above can be generalized to vector valued integrators $X$. If $X \in \mathcal{S}^{d}$, then we write $d Z=H \cdot d X$, iff $H \in L(X)$ and $Z=Z_{0}+H \cdot X$, that is, $Z_{t}=Z_{0}+\sum_{j=1}^{d} \int_{0}^{t} H_{s}^{j} d X_{s}^{j}$, for all $t \geq 0$. Note that then $Z$ is a scalar semimartingale. The associative law (0) now assumes the form
$$
d Y=K d Z \text { and } d Z=H \cdot d X \Rightarrow d Y=(K H) \cdot d X,
$$
whenever $X \in \mathcal{S}^{d}$, $H \in L(X)$, $K \in L(Z)=L(H \bullet X)$ (2.d.2). Here $X$ and $H$ are $R^{d}$-valued processes while $Z$ and $K$ are scalar processes. Thus $K H$ is an $R^{d}$-valued process also. Likewise 2.d.1 can be written in differential notation in the same way.
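A sketch of the vector form (our own illustration; the $R^{d}$-valued integrand $H$ and scalar integrand $K$ are hypothetical choices):

```python
import numpy as np

# Sketch (our own illustration): vector form of the associative law,
# dZ = H · dX (sum over d components) and dY = K dZ  ⇒  dY = (K H) · dX,
# checked with left-endpoint sums on one simulated path.
rng = np.random.default_rng(4)
n, d = 20_000, 3
dX = rng.normal(0.0, np.sqrt(1.0 / n), (n, d))   # increments of X in R^d
s = np.linspace(0.0, 1.0, n + 1)[:-1]            # left endpoints

H = np.stack([np.sin(s), np.cos(s), s], axis=1)  # R^d-valued integrand
K = np.exp(-s)                                   # scalar integrand

dZ = np.sum(H * dX, axis=1)                      # dZ = H · dX
Y_direct = np.sum(K * dZ)                        # Y_t from dY = K dZ
Y_combined = np.sum((K[:, None] * H) * dX)       # Y_t from dY = (KH) · dX
print(abs(Y_direct - Y_combined))                # tiny (float rounding only)
```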

