# Limit Theory (MATH7710): Why Stable Convergence?

• Single-variable calculus
• Multivariable calculus
• Fourier series
• Riemann integration
• ODEs
• Differential calculus

## Why Stable Convergence?

This chapter is of an introductory nature. We make the motivation for the study of stable convergence more precise and present an exposition of some of its features. With the exception of Example 1.2, no proofs are given, only references to later chapters where proofs may be found.

Our starting point is the classical central limit theorem. For this, let $\left(Z_{k}\right)_{k \geq 1}$ be a sequence of independent and identically distributed real random variables, defined on some probability space $(\Omega, \mathcal{F}, P)$. Assume $Z_{1} \in \mathcal{L}^{2}(P)$ and set $\mu=E Z_{1}$ and $\sigma^{2}=\operatorname{Var} Z_{1}$. To exclude the trivial case of almost surely constant variables, assume also $\sigma^{2}>0$. Then the classical central limit theorem says that
$$\lim_{n \rightarrow \infty} P\left(\frac{1}{n^{1 / 2}} \sum_{k=1}^{n} \frac{Z_{k}-\mu}{\sigma} \leq x\right)=\Phi(x)=\int_{-\infty}^{x} \varphi(u) \, d u \quad \text { for all } x \in \mathbb{R},$$
where $\varphi(u)=\frac{1}{\sqrt{2 \pi}} \exp \left(-\frac{1}{2} u^{2}\right), u \in \mathbb{R}$, denotes the density of the standard normal distribution. It is customary to write this convergence of probabilities in a somewhat more abstract way as convergence in distribution of random variables, i.e. as
$$\frac{1}{n^{1 / 2}} \sum_{k=1}^{n} \frac{Z_{k}-\mu}{\sigma} \stackrel{d}{\rightarrow} N(0,1) \quad \text { as } n \rightarrow \infty$$
where $N(0,1)$ denotes the standard normal distribution, or as
$$\frac{1}{n^{1 / 2}} \sum_{k=1}^{n} \frac{Z_{k}-\mu}{\sigma} \stackrel{d}{\rightarrow} N \quad \text { as } n \rightarrow \infty$$
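The convergence of distribution functions asserted by the classical central limit theorem can be checked numerically. Below is a minimal Monte Carlo sketch (the helper names `standardized_sum`, `empirical_cdf_at`, and `std_normal_cdf` are illustrative, not from the text): it forms the standardized sums for i.i.d. Uniform(0,1) variables, for which $\mu = 1/2$ and $\sigma^2 = 1/12$, and compares their empirical distribution function with $\Phi$.

```python
import math
import random

def standardized_sum(n, sampler, mu, sigma, rng):
    """One realization of (1/sqrt(n)) * sum_{k=1}^n (Z_k - mu)/sigma."""
    s = sum((sampler(rng) - mu) / sigma for _ in range(n))
    return s / math.sqrt(n)

def empirical_cdf_at(x, samples):
    """Fraction of samples <= x."""
    return sum(1 for s in samples if s <= x) / len(samples)

def std_normal_cdf(x):
    """Phi(x), computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rng = random.Random(0)
# Z_k ~ Uniform(0,1): mu = 1/2, sigma^2 = 1/12
mu, sigma = 0.5, math.sqrt(1.0 / 12.0)
samples = [standardized_sum(200, lambda r: r.random(), mu, sigma, rng)
           for _ in range(5000)]

# Empirical CDF of the standardized sums should be close to Phi
for x in (-1.0, 0.0, 1.0):
    print(x, empirical_cdf_at(x, samples), std_normal_cdf(x))
```

With 5000 replications of sums of length 200, the empirical and normal distribution functions typically agree to within a few hundredths at each point.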

## Weak Convergence of Markov Kernels

As indicated in the previous chapter, stable convergence of random variables can be seen as suitable convergence of Markov kernels given by conditional distributions. Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $\mathcal{X}$ be a separable metrizable topological space equipped with its Borel $\sigma$-field $\mathcal{B}(\mathcal{X})$. In this chapter we briefly describe the weak topology on the set of Markov kernels (transition kernels) from $(\Omega, \mathcal{F})$ to $(\mathcal{X}, \mathcal{B}(\mathcal{X}))$.
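As a concrete illustration of a Markov kernel from $(\Omega, \mathcal{F})$ to $(\mathcal{X}, \mathcal{B}(\mathcal{X}))$, the sketch below takes $\Omega = \mathcal{X} = \mathbb{R}$ and $K(\omega, \cdot) = N(\omega, 1)$, representing each measure $K(\omega, \cdot)$ by its distribution function. This is an illustrative choice of kernel, not one from the text: for each fixed $\omega$, $K(\omega, \cdot)$ is a probability measure, and for each fixed Borel set, $\omega \mapsto K(\omega, B)$ is measurable.

```python
import math

def kernel(omega):
    """A Markov kernel K from Omega = R to X = R: for each omega,
    K(omega, .) is the normal distribution N(omega, 1), represented
    here by its CDF x -> K(omega, (-inf, x])."""
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - omega) / math.sqrt(2.0)))
    return cdf

# For each fixed omega, K(omega, .) is a probability measure on B(R):
K0 = kernel(0.0)
print(K0(float("inf")))  # total mass of K(0, .) is 1
print(K0(0.0))           # N(0,1) assigns mass 1/2 to (-inf, 0]
```

In this representation, "convergence of kernels" amounts to asking in what sense the measures $K_n(\omega, \cdot)$ converge, which is what the weak topology below makes precise.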

Let us first recall the weak topology on the set $\mathcal{M}^{1}(\mathcal{X})$ of all probability measures on $\mathcal{B}(\mathcal{X})$. It is the topology generated by the functions
$$\nu \mapsto \int h d \nu, \quad h \in C_{b}(\mathcal{X}),$$
where $C_{b}(\mathcal{X})$ denotes the space of all continuous, bounded functions $h: \mathcal{X} \rightarrow \mathbb{R}$, equipped with the sup-norm $\|h\|_{\sup} := \sup_{x \in \mathcal{X}}|h(x)|$. The weak topology on $\mathcal{M}^{1}(\mathcal{X})$ is thus the weakest topology for which each function $\nu \mapsto \int h \, d \nu$ is continuous. Consequently, weak convergence of a net $\left(\nu_{\alpha}\right)_{\alpha}$ in $\mathcal{M}^{1}(\mathcal{X})$ to $\nu \in \mathcal{M}^{1}(\mathcal{X})$ means $$\lim_{\alpha} \int h \, d \nu_{\alpha}=\int h \, d \nu$$
for every $h \in C_{b}(\mathcal{X})$ (here and elsewhere we omit the directed set on which a net is defined from the notation). Because $\int h \, d \nu_{1}=\int h \, d \nu_{2}$ for $\nu_{1}, \nu_{2} \in \mathcal{M}^{1}(\mathcal{X})$ and every $h \in C_{b}(\mathcal{X})$ implies $\nu_{1}=\nu_{2}$, this topology is Hausdorff and limits are unique. Moreover, the weak topology is separable metrizable, e.g. by the Prohorov metric (see e.g. [69], Theorem II.6.2), and Polish if $\mathcal{X}$ is Polish; see e.g. [69], Theorem II.6.5, and [26], Corollary 11.5.5. Provided $\mathcal{X}$ is Polish, the relatively compact subsets of $\mathcal{M}^{1}(\mathcal{X})$ are exactly the tight ones, where $\Gamma \subset \mathcal{M}^{1}(\mathcal{X})$ is called tight if for every $\varepsilon>0$ there exists a compact set $A \subset \mathcal{X}$ such that $\sup _{\nu \in \Gamma} \nu(\mathcal{X} \backslash A) \leq \varepsilon$; see e.g. [69], Theorem II.6.7, and [26], Theorem 11.5.4.
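The defining convergence $\int h \, d\nu_\alpha \to \int h \, d\nu$ can be illustrated with a simple sequence of discrete measures. In the sketch below (helper names `integrate_against` and `nu_n_atoms` are illustrative), $\nu_n$ is the uniform measure on the atoms $\{1/n, 2/n, \ldots, n/n\}$, which converges weakly to the uniform distribution on $[0,1]$; for the bounded continuous test function $h = \cos$ the integrals approach $\int_0^1 \cos(x)\,dx = \sin(1)$.

```python
import math

def integrate_against(h, atoms):
    """Integral of h against a discrete probability measure
    placing equal mass 1/len(atoms) on each atom."""
    return sum(h(a) for a in atoms) / len(atoms)

def nu_n_atoms(n):
    # nu_n = uniform measure on {1/n, ..., n/n}; converges weakly
    # to the uniform distribution on [0, 1] as n -> infinity
    return [k / n for k in range(1, n + 1)]

h = math.cos            # a bounded continuous test function
limit = math.sin(1.0)   # integral of cos over [0, 1]

for n in (10, 100, 1000):
    print(n, integrate_against(h, nu_n_atoms(n)), limit)
```

Note that $\nu_n(\{x\}) \to 0$ for every single point $x$, so weak convergence here really is convergence of integrals of test functions, not setwise convergence of the measures.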
