Friday, April 22, 2022

Recent Questions - Mathematics Stack Exchange

Finding out the frequency of a sine wave

Posted: 22 Apr 2022 06:52 AM PDT

I'm required to have a signal x[n] with 500 samples, and I want to generate it as two sinusoids added together. The first sinusoid has an amplitude of 1 and makes 6 complete cycles in the 500 samples. The second sinusoid has an amplitude of 0.5 and makes 44 complete cycles in the 500 samples. Make a plot of this signal.

What I'm not really familiar with is the "6 complete cycles in 500 samples" part. I assume I'm going to be using $\sin(2\pi f)$? So I should have something like $$x[n]= \sin(2\pi f) + 0.5 \sin(2\pi f)$$ I know that the period is the time taken to make one complete cycle, so are the 500 samples considered the time unit here?
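One common reading of "$k$ complete cycles in $N$ samples" is the discrete-time sinusoid $\sin(2\pi k n/N)$ for $n=0,\dots,N-1$. A sketch under that assumption:

```python
import numpy as np

N = 500
n = np.arange(N)
# 6 cycles over N samples -> normalized frequency 6/N cycles per sample;
# likewise 44/N for the second sinusoid
x = np.sin(2 * np.pi * 6 * n / N) + 0.5 * np.sin(2 * np.pi * 44 * n / N)
# x starts at 0 and both components complete whole cycles over the N samples
```

Here the "time unit" is the sample index $n$, and the full record of 500 samples spans exactly 6 (resp. 44) periods of the two components.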

Area near the singularities

Posted: 22 Apr 2022 06:47 AM PDT

Given a complex meromorphic function $f$ with only simple poles and $|f'|\neq0$ everywhere, we furthermore have the condition that $$\int_{\mathbb{R}^2}\frac{|f'|^2}{(1+|f|^2)^2}\text{d}x\text{d}y<\infty.$$ Then $f$ is meromorphic on the extended complex plane, which restricts $f$ to the form $az+b$.

My attempts: we define the metric $\text{d}\mu=\frac{1}{(1+|z|^2)^2}\text{d}x\text{d}y$. Then the integrability condition implies that the image of $f$ has finite area. I intend to proceed with Picard's theorem, and there are two critical steps that I cannot convince myself of:

  1. Picard's theorem requires that $\infty$ is an isolated singularity. If $f$ takes the form $\sum\limits_{n}2^{-n}(z-n)^{-1}$, then the isolated condition is violated.
  2. Regardless of 1., we have that the image of $f$ covers the extended complex plane infinitely many times (possibly except for one point). This intuitively leads to infinite area, but I would like a rigorous proof. Someone suggested that the Banach indicatrix might work, but all the materials I have read treat the case where $f$ is defined on the real line.

How to prove Gauss map is differentiable?

Posted: 22 Apr 2022 06:46 AM PDT

I am studying differential geometry with do Carmo's book. On page 136, after the definition of the Gauss map, he says it is straightforward to verify that the Gauss map is differentiable. Actually, I have no idea how to prove this. Can someone tell me?

Cyclic quotient group as a subgroup?

Posted: 22 Apr 2022 06:48 AM PDT

There are many questions here along the lines of the following: Let $G$ be a finite group, and $H$ a normal subgroup. Does $G$ admit a subgroup which is isomorphic to $G/H$?

This is necessarily true if $G$ is abelian, and there are counterexamples if $G$ is not, the most common counterexample being the quaternion group of order 8 (see here), which has a quotient isomorphic to $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ (but no such subgroup).

My question is: if we additionally assume that the quotient $G/H$ is cyclic, then is it the case that $G$ must admit a subgroup isomorphic to $G/H$? Or are there counterexamples in this situation also?

My gut tells me it's the latter, that it's not necessarily true, but I so far haven't been able to find an example.

Is $e^x$ uniformly continuous on the open interval $(0,\infty)$?

Posted: 22 Apr 2022 06:50 AM PDT

I tried coming up with two sequences via the sequential criterion for non-uniform continuity, i.e. $|x_n-y_n|\to 0$ while $|F(x_n)-F(y_n)|\not\to 0$, but I had no luck with that.

Any hints I can follow?
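Not a full answer, but a numerical sketch of one candidate pair of sequences (my own choice, to be checked against the criterion): $x_n = n$, $y_n = n + 1/n$, so $|x_n - y_n| \to 0$ while the gap $e^{y_n} - e^{x_n} = e^n(e^{1/n}-1)$ grows.

```python
import math

# candidate pair: x_n = n, y_n = n + 1/n; the distance between them -> 0,
# but the gap between e^{y_n} and e^{x_n} blows up
gaps = []
for n in range(1, 20):
    x_n, y_n = float(n), n + 1.0 / n
    gaps.append(math.exp(y_n) - math.exp(x_n))
# gaps is increasing without bound even though |x_n - y_n| = 1/n -> 0
```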

Time series that are correlated in levels but not first differences

Posted: 22 Apr 2022 06:40 AM PDT

Is it correct to say that the two time series below are correlated in levels, but not in first differences?

$$x_{t}=\phi t+u_{t}\\ y_{t}=\rho t+\epsilon_{t}$$

where $\epsilon_{t}$ and $u_{t}$ are mutually orthogonal white noise processes, and $\phi$, $\rho>0$. The first differences of these time series are: $$\Delta x_{t}=\phi+\Delta u_{t}\\ \Delta y_{t}=\rho+\Delta \epsilon_{t} $$

The level series will clearly be correlated, as both have a deterministic time trend. The first differences, however, do not. Is this reasoning correct? Thank you.
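A quick simulation sketch of the reasoning above (the slope values are illustrative choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000
t = np.arange(T)
phi, rho = 0.5, 0.3                      # positive trend slopes (illustrative)
x = phi * t + rng.standard_normal(T)     # u_t: white noise
y = rho * t + rng.standard_normal(T)     # eps_t: independent white noise

corr_levels = np.corrcoef(x, y)[0, 1]
corr_diffs = np.corrcoef(np.diff(x), np.diff(y))[0, 1]
# levels are near-perfectly correlated through the common deterministic trend;
# first differences (constant + MA(1) noise each) are essentially uncorrelated
```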

Exact VS Inexact Line search

Posted: 22 Apr 2022 06:38 AM PDT

I can't grasp the difference between exact and inexact line search algorithms. Could you help me?

Thanks!

How to find the second derivative with the implicit function theorem?

Posted: 22 Apr 2022 06:45 AM PDT

$z(x,y)$ is a function defined implicitly by the equation $$F(x,y,z)=5x+2y+5z+5\cos(5z)+2=0$$

I'm trying to find $\frac{\partial^2z}{\partial x \partial y}$ at the point $(\frac{\pi}{5},\frac{3}{2},\frac{\pi}{5})$.

As far as I can tell I need to use the implicit function theorem, and I'm able to find both $\frac{\partial z}{\partial x}$ and $\frac{\partial z}{\partial y}$, but the second derivative comes out to 0, which is incorrect.

How can I find $\frac{\partial^2z}{\partial x \partial y}$ ?

Evaluate a surface integral with the application of Divergence Theorem (Answer urgently needed)

Posted: 22 Apr 2022 06:36 AM PDT

Apply the Divergence Theorem to evaluate the surface integral $\iint_S \vec F \cdot d\vec S$, where $\vec F(x,y,z)=\langle 1,2,z^2 \rangle$ and $S$ is the surface of the solid bounded by the upper half of the sphere $x^2+y^2+z^2=4$ and the plane $z=0$.
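If the solid is the upper half-ball of radius 2 (my reading of the problem), then $\nabla\cdot\vec F = 2z$, and the Divergence Theorem reduces the flux to a volume integral. A symbolic check in spherical coordinates (using SymPy) gives $8\pi$:

```python
import sympy as sp

r, th, ph = sp.symbols('r theta phi', nonnegative=True)
# div F = d/dz (z^2) = 2z, with z = r*cos(phi) in spherical coordinates
divF = 2 * r * sp.cos(ph)
J = r**2 * sp.sin(ph)                    # spherical volume element
# upper half-ball: r in [0,2], phi in [0,pi/2], theta in [0,2pi]
flux = sp.integrate(divF * J, (r, 0, 2), (ph, 0, sp.pi/2), (th, 0, 2*sp.pi))
```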

Is $\phi: S_1\rightarrow S_2$ being surjective, together with $(I)$, enough to conclude that $\phi$ is a local isometry?

Posted: 22 Apr 2022 06:34 AM PDT

Let $S_1$ and $S_2$ be two regular surfaces. An isometry is a diffeomorphism $\phi: S_1\rightarrow S_2$ such that

$$\langle d\phi_p(v_1), d\phi_p(v_2)\rangle=\langle v_1, v_2\rangle$$ for every $p\in S_1$ and $v_1, v_2\in T_p S_1$. A local isometry from $S_1$ to $S_2$ is a differentiable map $\phi: S_1\rightarrow S_2$ such that for every $p\in S_1$ there are open sets $V_1\subset S_1$, with $p\in V_1$, $V_2\subset S_2$ such that $\phi: V_1\rightarrow V_2$ is an isometry.

The notes which I'm following states that if $\phi: S_1\rightarrow S_2$ is a surjective differentiable map such that

$$\langle d\phi_p(v_1), d\phi_p(v_2)\rangle=\langle v_1, v_2\rangle\tag{I}$$

for every $p\in S_1$ and $v_1, v_2\in T_p S_1$, then $\phi$ is a local isometry.

I believe there is something missing, maybe the correct hypothesis should be $\phi$ is a surjective submersion, shouldn't it?

I think so, because $(I)$ implies $d\phi_p$ is injective for every $p\in S_1$. Under the hypothesis that $\phi$ is a submersion, $d\phi_p$ is surjective, hence $d\phi_p$ is an isomorphism for every $p\in S_1$, and it is possible to apply the inverse function theorem to conclude that $\phi$ is a local isometry.

In short, is $\phi: S_1\rightarrow S_2$ being surjective, along with $(I)$, enough to conclude that $\phi$ is a local isometry?

Prove 2 manifolds are homeomorphic iff they are diffeomorphic

Posted: 22 Apr 2022 06:34 AM PDT

Suppose $M_1$ and $M_2$ are each $C^{\infty}$ manifolds admitting an atlas with one chart. Prove $M_1$ and $M_2$ are homeomorphic iff they are $C^\infty$ diffeomorphic.

One direction is obvious. But given that the manifolds are homeomorphic, I'm not sure how to prove that they are $C^\infty$ diffeomorphic. Since they are homeomorphic, there exists a homeomorphism between the charts of the respective atlases. I'm not sure how to conclude that this homeomorphism is actually $\infty$ times differentiable with an $\infty$ times differentiable inverse.

Any help would be appreciated. Thanks in advance.

Inner product space and orthonormality

Posted: 22 Apr 2022 06:49 AM PDT

We have an orthonormal sequence $\{e_n\}$ on the inner product space $V=C([0,\,1],\,\Bbb R)$. Let $h\in V$ and assume $\langle h, e_n\rangle=0$ for integers $n\ge0$. Show $\langle h, p\rangle=0$ for all polynomials $p$.

Why are the row space and column space lattices of a boolean matrix inverses of each other?

Posted: 22 Apr 2022 06:31 AM PDT

We define the row/column space of a matrix $A$ to be all possible sums of rows/columns of $A$.

The row space ($V(A)$) and column space ($W(A)$) form a lattice.

See example matrix and respective row space lattice here

See example matrix and respective column space lattice here

All row and column space lattices are inverses of each other. Any ideas on a proof of this?

Suppose $x,y,z$ are mutually coprime positive integers that solve $x^2+y^2=z^2$

Posted: 22 Apr 2022 06:52 AM PDT

Suppose $x,y,z$ are mutually coprime positive integers that solve $x^2+y^2=z^2$. Assume that $x$ is odd and exactly one of $y$ or $z$ is even. Show that $z-y$ and $z+y$ are coprime.

So I started off by saying that since $x$ is odd, $x^2$ is odd, and $x^2=z^2-y^2=(z-y)(z+y)$, so both $z-y$ and $z+y$ are odd. I then tried a few different things but got stuck. Any ideas?
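Not a proof, but a quick numerical sanity check of the claim, using the standard $(m,n)$ parametrization of primitive triples (my addition; the question doesn't mention it):

```python
from math import gcd

checked = 0
for m in range(2, 30):
    for n in range(1, m):
        # opposite parity and gcd(m, n) = 1 yields a primitive triple
        if (m - n) % 2 == 1 and gcd(m, n) == 1:
            x, y, z = m*m - n*n, 2*m*n, m*m + n*n   # x odd, y even
            assert x*x + y*y == z*z
            assert gcd(z - y, z + y) == 1            # the claimed coprimality
            checked += 1
```

In this parametrization $z-y=(m-n)^2$ and $z+y=(m+n)^2$, which hints at why the gcd is 1.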

Does such a GN-inequality for the fractional p-Laplacian equation exist? (p not equal to 2)

Posted: 22 Apr 2022 06:28 AM PDT

Does such a GN (Gagliardo–Nirenberg) inequality for the fractional $p$-Laplacian equation exist, with $p$ not equal to $2$? I have searched many papers and books, but I have not found what I need. Thanks for your answer.

Find a second linearly independent solution $y_2(t)$ (simplify and obtain an integral representation)

Posted: 22 Apr 2022 06:48 AM PDT

I have this exercise but I'm not really sure if what I'm doing is OK, as we haven't seen it in class yet. Any hint would be appreciated, thanks!!

In quantum mechanics, the study of the Schrödinger equation for the hydrogen atom leads to a consideration of Laguerre's equation: $$ty''+(1-t)y'+\lambda y=0$$ where $\lambda$ is a parameter; consider $\lambda=2$. Find a second linearly independent solution $y_2(t)$.

This is what I have (it's not much, but I don't know if I'm on the correct path).

$$y=vy_1$$ $$\Rightarrow y=v(t^2-4t+2)$$ $$\Rightarrow y=vt^2-4vt+2v$$

Sorry, I forgot to write the first solution. It's $$y_1(t)=t^2-4t+2$$
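For reference, reduction of order gives an integral representation directly (assuming the equation is put in standard form $y''+p(t)y'+q(t)y=0$ with $p(t)=\frac{1-t}{t}$):

```latex
\[
y_2(t) = y_1(t)\int \frac{e^{-\int p(t)\,dt}}{y_1(t)^2}\,dt,
\qquad e^{-\int \frac{1-t}{t}\,dt} = \frac{e^{t}}{t},
\]
so with $y_1(t)=t^2-4t+2$,
\[
y_2(t) = (t^2-4t+2)\int \frac{e^{t}}{t\,(t^2-4t+2)^2}\,dt .
\]
```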

Distributivity of intersection over addition in ideals

Posted: 22 Apr 2022 06:49 AM PDT

I have been trying to read ring theory on my own, from Dummit and Foote, and this question has been on my mind for a while:

Let $I,J,K$ be ideals of a ring $R$ (may be non commutative, but has a $1$). Can we say the following: $$I \cap (J + K) = I \cap J + I \cap K$$

I get that the RHS is a subset of the LHS, but I am not able to say anything about the other inclusion. Maybe someone can provide a counterexample?

Also, what about the case when $R$ does not have a $1$?

Step in the proof of Stepanov's theorem

Posted: 22 Apr 2022 06:32 AM PDT

I am reading Maly's proof of Stepanov's theorem which asserts the following:

Theorem Let $\Omega \subset \mathbb{R}^n$ be open, and let $f: \Omega \rightarrow \mathbb{R}^m$ be a function. Then $f$ is differentiable almost everywhere on $$L(f) : = \{x\in \Omega : \text{Lip}f(x) < \infty\}.$$

(for example, see Lectures on Lipschitz Analysis).

Proof: We may assume that $m=1$. Let $\{B_1, B_2, \ldots\}$ be a countable collection of balls contained in $\Omega$ such that each $B_i$ has rational center and radius, and $f|_{B_i}$ is bounded. In particular this collection covers $L(f)$. Define $$u_i(x): = \inf \{u(x) : u \, \text{is $i$-Lipschitz with $u\geq f$ on $B_i$}\} $$ and $$v_i(x): = \sup \{v(x) : v \, \text{is $i$-Lipschitz with $v\leq f$ on $B_i$}\}. $$ [...]

In the proof, it is claimed that "It is clear that $f$ is differentiable at every point $a$ where, for some $i$, both $u_i$ and $v_i$ are differentiable with $v_i(a) = u_i(a)$" (see page 24).

I would like to verify this claim.

If the derivatives of $u_i$ and $v_i$ at the point $a$ are equal, it seems clear, but what about the case where they are not equal?

Any suggestion? Thanks in advance!

What's the relationship between $|AD|$ and $|EC|$?

Posted: 22 Apr 2022 06:43 AM PDT

As shown in the picture, $\triangle ABC$ is a right triangle, and $\def\degree{{}^{\circ}} \angle ACB = 90\degree, \angle CAB = 30\degree$, $|AB|=2$ .
Point $D$ (distinct from $A$ and $B$) lies on the line segment $AB$. Then draw the perpendicular to $CD$ through point $D$, which intersects ray $CA$ at point $E$.

Let $|AD|=x, |CE|=y$, then what is the relationship between $x$ and $y$ ?

By drawing some possible configurations, I find that $y$ decreases and then increases as $x$ increases, and finally goes to infinity. But it is hard for me to find the exact relationship between them.

Distribution of 10 different marbles into 6 boxes of different colors (combinatorics, combinations)

Posted: 22 Apr 2022 06:47 AM PDT

The question is:

There are $6$ boxes, with each box a different color, and $10$ different marbles. The marbles are scattered randomly in the boxes.

  1. What is the probability that all marbles are in the same box?
  2. What is the probability that all marbles are in $2$ boxes exactly?
  3. What is the probability that the yellow, red, green and blue boxes each contain an equal number of marbles?

To solve Q1, I took the sample space to be $\binom{10}{6}$, meaning that there are $10$ marbles scattered in $6$ boxes.

I have trouble describing the event space. Should it be $\binom{10}{1}$, picking all $10$ marbles in the same one box? Or $\binom{6}{1}$, picking $1$ box for all the marbles?

About Q2 I have the same situation, about describing the event space. $\binom{10}{2}$, picking marbles, or $\binom{6}{2}$, picking boxes?
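One way to settle which counting is intended is to compare exact formulas against a Monte Carlo estimate. Here I assume (my assumption, not the asker's) that each marble independently chooses one of the $6$ boxes, so the sample space has $6^{10}$ equally likely outcomes:

```python
import random
from math import comb

BOXES, MARBLES = 6, 10
total = BOXES ** MARBLES                 # each marble picks a box

p_same = BOXES / total                   # 6 ways to choose the single box
# exactly 2 boxes occupied: choose the 2 boxes, then any assignment into
# them except the 2 that leave one box empty
p_two = comb(BOXES, 2) * (2**MARBLES - 2) / total

# Monte Carlo sanity check of the "exactly 2 boxes" probability
rng = random.Random(0)
trials = 200_000
hits_two = sum(
    len({rng.randrange(BOXES) for _ in range(MARBLES)}) == 2
    for _ in range(trials)
)
```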

Pricing a call option in the Black Scholes Market - calculation steps

Posted: 22 Apr 2022 06:40 AM PDT

I am working on computing the price of a standard European call option under a Black-Scholes market.

Using knowledge of the payoff, I can split the calculation into:

$$ e^{-rT}\left(E[S_T \mathbb{1}_{S_T > K}] - K\,E[\mathbb{1}_{S_T > K}]\right) $$

for which the expected value of the indicator function is simply:

$P[S_T >K]$

Using the formula for $S_T$ as a geometric Brownian motion I can derive $$Z = \frac{\ln S_T - \left(\ln S_0 + \left(r - \frac{\sigma^2}{2}\right)T\right)}{\sigma \sqrt{T}} \sim N(0, 1)$$

and thus the calculation of the probability (second term) leads to $N(d_-)$.

However, I am struggling to compute the first term. Doing

$$E[S_T \mathbb{1}_{S_T >K}] = E\left[S_0 \exp\left(\left(r-\frac{\sigma^2}{2}\right)T + \sigma W_T\right) \mathbb{1}_{S_T > K}\right],$$ which in my notes leads to:

$$E\left[S_0 \exp\left(\left(r-\frac{\sigma^2}{2}\right)T + \sigma \sqrt{T}Z\right) \mathbb{1}_{Z> -d_+ + \sigma \sqrt{T}}\right],$$ and I cannot follow where this comes from.
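For concreteness, here is a sketch (with illustrative parameters of my own choosing) of the closed-form price $S_0 N(d_+) - Ke^{-rT}N(d_-)$ that this computation is heading toward, checked against a Monte Carlo estimate of $e^{-rT}E[(S_T-K)^+]$:

```python
import math
import random
from statistics import NormalDist

def bs_call(S0, K, r, sigma, T):
    """Black-Scholes call price: S0*N(d+) - K*exp(-rT)*N(d-)."""
    d_plus = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d_minus = d_plus - sigma * math.sqrt(T)
    N = NormalDist().cdf
    return S0 * N(d_plus) - K * math.exp(-r * T) * N(d_minus)

# Monte Carlo check of e^{-rT} E[(S_T - K)^+] under the risk-neutral measure
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
rng = random.Random(0)
n = 200_000
total = 0.0
for _ in range(n):
    Z = rng.gauss(0.0, 1.0)
    ST = S0 * math.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * Z)
    total += max(ST - K, 0.0)
mc_price = math.exp(-r * T) * total / n
```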

Solving the integral equation $y(x)=e^{-x^2/2}+\lambda\int^{+\infty}_{-\infty}e^{-i(x-z)}y(z)dz$

Posted: 22 Apr 2022 06:55 AM PDT

Could you please help me to solve the following integral equation?

$$y(x)=e^{-x^2/2}+\lambda\int^{+\infty}_{-\infty}e^{-i(x-z)}y(z)\,dz$$ I tried to turn the exponential term into its trigonometric form and to solve the resulting equation in terms of the new constants $c_1=\int^{+\infty}_{-\infty}\cos(z)y(z)\,dz$ and $c_2=\int^{+\infty}_{-\infty}\sin(z)y(z)\,dz$. But I obtained for $c_1$ $$c_1=\int^{+\infty}_{-\infty}\cos(z)e^{-z^2/2}\,dz+\lambda (c_1+ic_2)\int^{+\infty}_{-\infty}\cos^2(z)\,dz+\lambda(c_2-ic_1)\int^{+\infty}_{-\infty}\cos(z)\sin(z)\,dz,$$ and I can't go further.

I also applied the Fourier transform to both sides and obtained that this equation has a unique solution iff $$1-\lambda\hat{K}(\alpha)\neq 0,$$ where $\hat{K}(\alpha)$ is the Fourier transform of $e^{-ix}$. But I think (I am not sure) the Fourier transform of such a function is the Dirac delta $\delta_{-\alpha}$, and I don't know how to proceed.

I really need help. Thanks in advance.

Exercise 14, Section 30 of Munkres’ Topology

Posted: 22 Apr 2022 06:40 AM PDT

Show that if $X$ is Lindelöf and $Y$ is compact, then $X \times Y$ is Lindelöf.

My attempt: let $U=\{ U_\alpha \in \mathcal{T}_p \mid \alpha \in J\}$ be an open cover of $X \times Y$. Fix $x_0 \in X$. Since $\{x_0\} \times Y \cong Y$ and $Y$ is compact, $\{x_0\} \times Y$ is compact. $U$ is also an open cover of $\{x_0\} \times Y$, so there exists a finite subset $\{U_{\alpha_1}, \dots, U_{\alpha_k}\}$ of $U$ such that $\{ x_0\} \times Y \subseteq \bigcup_{i=1}^{k} U_{\alpha_i}=N$. So $N\in \mathcal{T}_p$ and $\{x_0\} \times Y \subseteq N$. By Lemma 26.8, there exists $W\in \mathcal{N}_{x_0}$ such that $W\times Y \subseteq N$.

Thus for every $x\in X$ there exists $W_x \in \mathcal{N}_x$ such that $W_x \times Y\subseteq N_x =\bigcup_{i=1}^{k_x}U_{x_i}$. Then $W'=\{ W_x \mid x\in X\}$ is an open cover of $X$. Since $X$ is Lindelöf, there exists a countable subcover $\{ W_{x_n} \mid n\in \Bbb{N}\}$ of $W'$, and $\{ W_{x_n}\times Y \in \mathcal{T}_p \mid n\in \Bbb{N}\}$ is an open cover of $X\times Y$.

We need to show $X\times Y =\bigcup_{n\in \Bbb{N}}(W_{x_n} \times Y)$. The inclusion $\supseteq$ holds trivially. Conversely, let $x\times y \in X\times Y$. There exists $m\in \Bbb{N}$ such that $x\in W_{x_m}$, so $x\times y \in W_{x_m} \times Y \subseteq \bigcup_{n\in \Bbb{N}}(W_{x_n}\times Y)$. Thus $X\times Y =\bigcup_{n\in \Bbb{N}}(W_{x_n} \times Y)$.

We have $W_{x_n}\times Y\subseteq N_{x_n}$ for all $n\in \Bbb{N}$, so $X\times Y= \bigcup_{n\in \Bbb{N}}(W_{x_n}\times Y)\subseteq \bigcup_{n\in \Bbb{N}} N_{x_n}$, hence $X\times Y=\bigcup_{n\in \Bbb{N}}N_{x_n}$. Note $\bigcup_{n\in \Bbb{N}} N_{x_n} =\bigcup_{n\in \Bbb{N}} \bigl(\bigcup \{U_{x_{n_{i}}} \mid i\in J_{k_{x_{n}}}\}\bigr)= \bigcup_{n\in \Bbb{N},\, i\in J_{k_{x_n}}} U_{x_{n_{i}}}$. It is easy to check that $\{ U_{x_{n_{i}}} \in \mathcal{T}_p \mid i\in J_{k_{x_{n}}},\, n\in \Bbb{N}\} \subseteq U$ is countable. So $X \times Y= \bigcup_{n\in \Bbb{N}} N_{x_n} =\bigcup_{n\in \Bbb{N},\, i\in J_{k_{x_n}}} U_{x_{n_i}}$, and thus $\{ U_{x_{n_{i}}} \in \mathcal{T}_p \mid i\in J_{k_{x_{n}}},\, n\in \Bbb{N}\}$ is a countable subcover of $U$.

Is this proof correct, especially the notation?

This post is a little different from the post Prob. 14, Sec. 30, in Munkres' TOPOLOGY, 2nd ed: If $X$ is Lindelof and $Y$ is compact, then $X\times Y$ is Lindelof. The proof of this exercise is very similar to Theorem 26.7.

Edit: I have edited my proof after realizing some mistakes. $N_{x_n} =\bigcup_{i=1}^{k_{x_n}} U_{x_{n_{i}}}$ may look scary; here $x_n \in X$ for all $n\in \Bbb{N}$.

Simple challenge on logarithms and order equivalency

Posted: 22 Apr 2022 06:33 AM PDT

Is it correct to say that these three are equivalent, by logarithm properties:

$O(\log (nm)) = O(\log (n+m)) = O(\log n + \log m)$?

Update $1$:

For learning purpose, I consider two cases:

Case $"1"$ : $n = m$

Case $"2"$ : $n \neq m$
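A small numeric sanity check of the underlying inequalities (my framing: $\log(nm)=\log n+\log m$ holds exactly, and for $n,m\ge 2$ we have $n+m\le nm\le (n+m)^2$, so the three expressions differ by at most a constant factor):

```python
import math

for n in [2, 10, 10**6]:
    for m in [2, 537, 10**9]:
        # log(nm) = log n + log m exactly (up to floating point)
        assert math.isclose(math.log(n * m), math.log(n) + math.log(m))
        # for n, m >= 2: n + m <= n*m, so log(n+m) <= log(nm)
        assert math.log(n + m) <= math.log(n * m) + 1e-9
        # and n*m <= (n+m)^2, so log(nm) <= 2*log(n+m)
        assert math.log(n * m) <= 2 * math.log(n + m) + 1e-9
```

Note the lower bound $n+m\le nm$ needs $n,m\ge 2$; the equivalence can fail when one argument is constant (e.g. $m=1$).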

Non-negative real part of an expression involving meromorphic functions

Posted: 22 Apr 2022 06:32 AM PDT

Let us consider meromorphic functions given by the power series expansions:

$$h(z) = z + \sum_{n=1}^\infty a_n z^{-n},\quad g(z) = \sum_{n=1}^\infty b_n z^{-n}.$$

Fix $\gamma\in (0,1)$ and $|z|\geqslant 1$. In the paper I am reading (Theorem 2) it is claimed that

$$\Re \frac{(1-\gamma)z - \sum_{n=1}^\infty (n+\gamma) a_n z^{-n} - \sum_{n=1}^\infty (n-\gamma)b_n z^{-n}}{z + \sum_{n=1}^\infty a_n z^{-n} - \sum_{n=1}^\infty b_n z^{-n}} \geqslant 0 $$

assuming that $\sum_{n=1}^\infty \bigl((n+\gamma)\,|a_n| + (n-\gamma)\,|b_n|\bigr)\leqslant 1-\gamma$. I can't wrap my head around it.

Can one please explain why is this quantity non-negative?

Prove that the set is closed

Posted: 22 Apr 2022 06:54 AM PDT

Given metric spaces $(X,d)$ and $(Y,d')$ and continuous mappings $S$ and $T$ from $X$ into $Y$, prove that the set $\{x \in X: Sx = Tx\}$ is closed in $(X,d)$.

I've run out of ideas on where I should start.

Is nearest point to $\mu$ also nearest to all other points?

Posted: 22 Apr 2022 06:53 AM PDT

Suppose a set of points $\mathcal{X} = \{ x_1, …, x_n\} \subset \mathbb{R}^p$. Define

$$\mu = \frac1n \sum_i x_i$$

We know that $\mu$ is the point which minimizes (over $\mathbb{R}^p$) the sum of squared distances between it and all other points in $\mathcal{X}$.

Is it true that the point in $\mathcal{X}$ that is closest to $\mu$ is also the point in $\mathcal{X}$ that is closest to all other points in $\mathcal{X}$? In other words what I am asking is whether the following equality holds:

$$\arg\min_{x_i \in \mathcal{X}} \| \mu - x_i \|_2^2 = \arg\min_{x_j \in \mathcal{X}} \sum_i \| x_j - x_i \|_2^2$$

How can this be shown? I'm finding it difficult to write a proof because $\mathcal{X}$ is a finite set of points (non-convex, disconnected, countable, etc.).

If yes, does this still hold when we use other $p$-norms in place of $\ell_2$ to compute distance?
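A numeric sketch of the identity one would likely check here, $\sum_i \|x_j - x_i\|_2^2 = n\,\|x_j-\mu\|_2^2 + \text{const}$ (the constant not depending on $j$), which would make the two argmins agree for squared $\ell_2$; this is an illustration on random data, not a proof:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))          # n = 50 points in R^3
n = len(X)
mu = X.mean(axis=0)

d_to_mu = ((X - mu) ** 2).sum(axis=1)     # ||x_j - mu||^2 for each j
# sum over i of ||x_j - x_i||^2 for each j, via broadcasting
d_to_all = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2).sum(axis=1)

# if the identity holds, this difference is the same for every j
const = d_to_all - n * d_to_mu
```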

Voronoi relevant vectors VS shortest vector

Posted: 22 Apr 2022 06:49 AM PDT

Let $L$ be an $n$-dimensional lattice, whose Voronoi cell $\mathcal{V}$ is defined as the set $\{x\in \mathbb R^n:|x|\le |x-v| \mbox{ for all }v\in L\}$. A vector $v$ is called a Voronoi relevant vector if the hyperplane $\{x\in \mathbb R^n:\langle x,v \rangle=|v|^2/2\}$ intersects an $(n-1)$-dimensional face of $\mathcal{V}$.

If we take an arbitrary point $\lambda \in L$ that is not Voronoi relevant, can we always find a Voronoi relevant vector $v$ such that $||v|| < ||\lambda||$?


Why can the exit distribution of a Brownian motion be found pretending it moves in a random straight line?

Posted: 22 Apr 2022 06:37 AM PDT

Let $U$ be a domain in the plane and denote by $\mu_{x_0,U}$ the distribution of a Brownian motion starting at $x_0\in U$ by the time it hits the boundary of $U$.

Why is it that for $U=\mathbb{H}$ the upper half plane and any $x_0\in\mathbb{H}$, the exit distribution $\mu_{x_0,\mathbb{H}}$ is a Cauchy distribution on the horizontal axis (centered at the first component of $x_0$, and with scale parameter equal to the second component)? Note that the Cauchy distribution on the horizontal axis is the distribution of the intersections between the horizontal axis and random straight lines through the starting point with uniformly random angle.

Obviously this uniform-angle property also holds for the exit distribution from $U=\mathbb{D}$, the unit disk, when $x_0=0$ (in this case the exit distribution is just the Hausdorff measure on the circle).

The question title was written in an attention-gathering fashion, though: I know this uniform-angle property can't be true for all domains; rather, I'd like to ask what makes it true for the two special cases above and false for other domains.

EDIT: In a lame sense, the answer is "it's true for the half plane because the Cayley transform preserves BM up to time reparametrization, takes the horizontal axis to the circle, and has derivative proportional to $1/(z+i)^2$, whose absolute value is the Cauchy density for real $z$". This isn't very revealing of what's going on, though, at least to me. I still don't have a clear picture of whether the uniform-angle property might be true for all, e.g., convex domains if only formulated well enough (e.g. first pick a random line, then simulate a one-dimensional Brownian motion hitting the boundary intersected with that line).

What is a chance move?

Posted: 22 Apr 2022 06:41 AM PDT

I am studying the book Game Theory by Guillermo Owen, and in the next paragraph I do not understand what a chance move means. Could anyone explain it to me, please?

The elements of a game are seen here: condition $\alpha$ states that there is a starting point; $\beta$ gives a payoff function; $\gamma$ divides the moves into chance moves ($S_0$) and personal moves which correspond to the $n$ players ($S_1,\dots,S_n$); $\delta$ defines a...

The condition gamma reads:

By an n-person game in extensive form is meant:

($\gamma$) a partition of the nonterminal vertices of $\Gamma$ into $n+1$ sets $S_0,S_1,\dots,S_n$ called the player sets
