Friday, December 31, 2021

Recent Questions - Mathematics Stack Exchange


Extension of Faà di Bruno's formula

Posted: 31 Dec 2021 12:06 AM PST

Faà di Bruno's formula gives an expression for the $n$th derivative of a composite function,

$\frac{d^n}{dt^n}f(g(t))$,

thereby generalising the chain rule. I was wondering whether there is also a formula available for

$\frac{d^n}{dt^n}f(t, g(t))$,

where $t$ also is an argument of $f$.
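One way to explore this is to write $F(t) = f(u(t), g(t))$ with $u(t) = t$ and apply the multivariate Faà di Bruno formula. As a sketch (my own addition, not part of the post), sympy can expand the first few total derivatives of $f(t, g(t))$ to reveal the pattern:

```python
import sympy as sp

t = sp.symbols('t')
f = sp.Function('f')  # undefined function of two arguments
g = sp.Function('g')

# total derivatives of f(t, g(t)); sympy applies the bivariate chain rule
expr = f(t, g(t))
for n in (1, 2):
    print(f"order {n}:", sp.diff(expr, t, n))
```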

Help me solve this calculus expression

Posted: 31 Dec 2021 12:03 AM PST

I am new to math.StackExchange, and I don't know how to typeset questions about limits. How do I prove that $$\lim_{x\to 0}\frac{x-2}{|x-2|} = -1\,?$$

Conditional probability of Brownian bridge

Posted: 30 Dec 2021 11:57 PM PST

Suppose $B_{0,a}^{T,b}(t)$ is a Brownian bridge such that $B_{0,a}^{T,b}(0)=a$ and $B_{0,a}^{T,b}(T) = b$. The probability density function of $B_{0,a}^{T,b}(t)$ is the conditional probability density function of the Brownian motion $B(t)$ given that $B(0)=a$ and $B(T)=b$. That is, $$ f_{B_{0,a}^{T,b}(t)}(x) = f_{B(t)}(x|B(0)=a,B(T)=b).$$ Equivalently, $$ \mathbb{P}(B_{0,a}^{T,b}(t) \leq x) = \mathbb{P}(B(t) \leq x |B(0)=a,B(T)=b).$$ I want to ask how to express the following probability in terms of $B(t)$: $$ \mathbb{P}\left(B_{0,a}^{T,b}(t) \leq x \middle| \min_{s \in [0,T]} B_{0,a}^{T,b}(s) \geq y \right) ?$$ Is it equal to $$ \mathbb{P}\left(B_{0,a}^{T,b}(t) \leq x \middle| \min_{s \in [0,T]} B_{0,a}^{T,b}(s) \geq y \right) =\mathbb{P}\left(B(t) \leq x \middle| B(0)=a,B(T)=b, \min_{s \in [0,T]} B(s) \geq y \right) ?$$ May I have a detailed explanation?

Solving a limit using Maclaurin series $ \lim_{x\to0}\frac{xe^{2x}+xe^{-2x}-4x^{3}-2x}{\sin^{2}x-x^{2}} $

Posted: 30 Dec 2021 11:48 PM PST

I need to find the limit $$ \lim_{x\to0}\frac{xe^{2x}+xe^{-2x}-4x^{3}-2x}{\sin^{2}x-x^{2}} $$ using Maclaurin series. I don't know to what order I am supposed to expand each expression.

I think $\sin x$ in the denominator must be expanded to at least order $3$, but what about the numerator?

I would appreciate any help!
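As a check (my addition), sympy's series expansion suggests the numerator's leading term is $\frac{4}{3}x^5$ and the denominator's is $-\frac{x^4}{3}$, so expanding $e^{\pm 2x}$ to order $4$ and $\sin x$ to order $3$ suffices, and the limit comes out to $0$:

```python
import sympy as sp

x = sp.symbols('x')
num = x*sp.exp(2*x) + x*sp.exp(-2*x) - 4*x**3 - 2*x
den = sp.sin(x)**2 - x**2

print(sp.series(num, x, 0, 6))    # 4*x**5/3 + O(x**6)
print(sp.series(den, x, 0, 6))    # -x**4/3 + O(x**6)
print(sp.limit(num / den, x, 0))  # 0
```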

Choosing people based on height

Posted: 30 Dec 2021 11:45 PM PST

How many ways are there to pick a group of $n$ people from $100$ people (each of a different height) and then pick a second group of $m$ other people such that all people in the first group are taller than the people in the second group?

My attempt:

Let us select $m+n$ people among the $100$. Arranging them in ascending order of height, we need to separate them into groups of $m$ and $n$ such that the group of $n$ is taller. There is only one way to do this, so the answer is $$^{100}C_{m+n}$$

But the answer is given as $$\sum_{r=0}^{100-m} {}^{m+r}C_m \cdot {}^{100-m-r}C_n$$

I don't understand what I did wrong and how the given answer is derived.
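A brute-force check on a scaled-down version of the problem (my own sketch, replacing $100$ with a small $N$) can arbitrate between the two answers:

```python
from itertools import combinations
from math import comb

def count_direct(N, n, m):
    """Count pairs (tall group of size n, short group of size m), disjoint,
    with everyone in the first group taller than everyone in the second."""
    people = range(1, N + 1)  # distinct heights 1..N
    total = 0
    for tall in combinations(people, n):
        rest = [p for p in people if p not in tall]
        for short in combinations(rest, m):
            if min(tall) > max(short):
                total += 1
    return total

N, n, m = 10, 3, 2
print(count_direct(N, n, m), comb(N, n + m))  # agreement supports the attempt
```

If the two numbers agree for small cases, the book's sum is presumably counting under a different reading of the problem, which would be worth re-checking against the exact wording.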

Reference request: norm of completely positive map between C*-algebras is attained along approximate identity

Posted: 30 Dec 2021 11:49 PM PST

Let $A, B$ be C*-algebras, with $e_\lambda$ an approximate identity in $A$. I'm fairly certain that if $\varphi : A\to B$ is a (completely) positive map, then $\|\varphi\| = \lim_\lambda\|\varphi(e_\lambda)\|$. I could've sworn I had references for this result, but I can't seem to find it anywhere. I checked Blackadar's book "Operator Algebras", but it appears only to include the result that a unital linear map $\varphi : A\to B$ is positive if and only if it is contractive, which is not what I'm looking for.

I've found a reference for the unital case: if $\phi : A\to B$ is a positive map between unital C*-algebras, then $\|\phi\| = \|\phi(1)\|$. This can be found in Paulsen's "Completely Bounded Maps and Operator Algebras" (corollary 2.9), as a corollary of the Russo-Dye theorem. Have I perhaps misquoted the result in the first paragraph?

I'm also less certain that I've seen this similar result stated before: if $\psi : A\to B^*$ is (completely) positive, then $\|\psi\| = \lim_\lambda\|\psi(e_\lambda)\|$. Positivity in $B^*$ is defined as you might expect, giving us an order structure, and the order structure on $M_n(B^*)$ is obtained by the canonical identification of $M_n(B^*)$ with $M_n(B)^*$. I was hoping to find a proof of the result in the paragraph above and see what modifications needed to be made to prove this, but now I can't find either.

Irrational roots of $\frac{4x}{x^2 + x + 3} + \frac{5x}{x^2 - 5x + 3} = -\frac{3}{2}$

Posted: 31 Dec 2021 12:04 AM PST

Find the number of irrational roots of the equation $$\dfrac{4x}{x^2 + x + 3} + \dfrac{5x}{x^2 - 5x + 3} = -\dfrac{3}{2}$$

I got a solution: divide the numerator and denominator of each fraction by $x$ and let $x+\dfrac{3}{x} = y$; the equation then becomes $$\dfrac{4}{y + 1} + \dfrac{5}{y - 5} = -\dfrac{3}{2}$$ Simplifying this we get $y = -5, 3$. Finally, $x+\dfrac{3}{x} = -5$ has 2 irrational roots and $x+\dfrac{3}{x} = 3$ has 2 imaginary roots.

But my question is: since this is from a competitive exam, is there a quicker approach to solve it?
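As a sanity check of the solution above (my own sketch), sympy can solve the equation directly and classify the roots:

```python
import sympy as sp

x = sp.symbols('x')
eq = sp.Eq(4*x/(x**2 + x + 3) + 5*x/(x**2 - 5*x + 3), sp.Rational(-3, 2))

roots = sp.solve(eq, x)
real_irrational = [r for r in roots if r.is_real and not r.is_rational]
print(roots)                 # (-5 ± sqrt(13))/2 and a complex-conjugate pair
print(len(real_irrational))  # 2
```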

How to solve this limit question [closed]

Posted: 31 Dec 2021 12:07 AM PST

$$\lim_{ x \rightarrow 0 } \left( \dfrac{ x-2 }{ | x-2 | } \right) $$

Confusion about limits in a bivariate distribution, and a contradiction in a related question

Posted: 30 Dec 2021 11:36 PM PST

This question is from "Mathematical Statistics with Applications" by Wackerly and Mendenhall, $8$th edition, page $233$, question $5.5$:

$f(y_1,y_2)=3y_1 , 0 \leq y_2 \leq y_1 \leq1$ and $0$ , elsewhere.

Find $F(1/2 ,1/3) =P(Y_1 \leq 1/2,Y_2 \leq 1/3)$.

I am a beginner in statistics, so there is something I could not understand. I tried to solve this question, but my answer and the answer key are different. First of all, I want you to check my solution to see what I am missing.

$\mathbf{\text{MY ATTEMPT:}}$

$$\int_{0}^{1/2}\int_{0}^{1/3}3y_1dy_2dy_1 =\int_{0}^{1/2}\bigg(\int_{0}^{1/3}3y_1dy_2\bigg)dy_1$$

$$\bigg(\int_{0}^{1/3}3y_1dy_2\bigg) =3y_1\times(1/3) - 3y_1\times (0)=y_1$$

$$\int_{0}^{1/2}y_1dy_1 =\frac{y_1^2}{2} \rightarrow \frac{1}{8} - 0=0.125$$

However, the answer key says the answer is $0.1065$.

What am I missing?

I also want to ask: when we work with continuous bivariate probabilities, is $0 \leq y_2 \leq y_1 \leq 1$ equivalent to $0 < y_2 < y_1 < 1$, or $0 < y_2 \leq y_1 < 1$, or $0 \leq y_2 < y_1 < 1$, or $0 < y_2 < y_1 \leq 1$? I always get confused when I see $\leq$ and $<$; some books use $<$ and some use $\leq$. Can you also explain how to treat these signs in bivariate probability? Thanks in advance.

By the way, please remember that I am a beginner, so explain it as simply as you can.
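For reference, a sympy sketch (my addition) that enforces the support constraint $y_2 \leq y_1$ inside the integral reproduces the answer key's value; this suggests the inner upper limit should be $\min(1/3,\, y_1)$ rather than $1/3$ throughout:

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')
third, half = sp.Rational(1, 3), sp.Rational(1, 2)

# split at y1 = 1/3: below it the support forces y2 <= y1 < 1/3
part1 = sp.integrate(sp.integrate(3*y1, (y2, 0, y1)), (y1, 0, third))
part2 = sp.integrate(sp.integrate(3*y1, (y2, 0, third)), (y1, third, half))
print(part1 + part2, float(part1 + part2))  # 23/216 ≈ 0.1065
```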

Evaluating a limit when the variable is in the function

Posted: 30 Dec 2021 11:37 PM PST

If I have a continuous function $f(a)$, it makes sense that $$\lim_{x→0}f(x)=f(0);$$ however, if we have the limit of a variable that is inside the function's argument, such as $$\lim_{x→0}f(x+h),$$ would it be correct to say $$\lim_{x→0}f(x+h)=f(0+h)=f(h)$$ no matter what the function $f(a)$ is?

And if so, what is the explanation for this?

Matching a cubic Bézier curve to a cubic polynomial

Posted: 30 Dec 2021 11:55 PM PST

I have a series of cubic polynomials that are being used to create a trajectory, where constraints can be applied to each polynomial so that these 4 parameters are satisfied:

  1. Initial position
  2. Final position
  3. Initial velocity
  4. Final velocity

The polynomials are pieced together such that the ends of one polynomial are identical to the beginnings of the next to preserve continuity.

I instead want to represent these polynomials as cubic Bézier curves.

How would I find the $x,y$ position of each control point for the cubic Bézier curves, such that it matches the curvature of the cubic polynomial?

Here is what I have so far, made in desmos.

https://www.desmos.com/calculator/agsywptfno

Currently the Bézier curve is defined parametrically, with one polynomial for $X$ and one for $Y$, e.g. Bezier $= (X(t), Y(t))$.
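A cubic Bézier curve is the same cubic polynomial written in the Bernstein basis, so the match can be exact rather than approximate. Below is a sketch (my own, not from the post) of the standard power-basis-to-Bernstein conversion; applying it separately to $X(t)$ and $Y(t)$ gives the control points $(x_i, y_i)$:

```python
import numpy as np

def cubic_poly_to_bezier(a0, a1, a2, a3):
    """Bezier control values p0..p3 on t in [0, 1] reproducing
    y(t) = a0 + a1*t + a2*t**2 + a3*t**3 exactly."""
    p0 = a0
    p1 = a0 + a1/3              # p1 - p0 = (initial velocity)/3
    p2 = a0 + 2*a1/3 + a2/3
    p3 = a0 + a1 + a2 + a3      # p3 - p2 = (final velocity)/3
    return p0, p1, p2, p3

# spot check against the Bernstein form
t = np.linspace(0, 1, 7)
p0, p1, p2, p3 = cubic_poly_to_bezier(1.0, -2.0, 0.5, 3.0)
bez = p0*(1-t)**3 + 3*p1*t*(1-t)**2 + 3*p2*t**2*(1-t) + p3*t**3
poly = 1.0 - 2.0*t + 0.5*t**2 + 3.0*t**3
print(np.allclose(bez, poly))  # True
```

Conveniently, this also encodes the four constraints directly: the end control points pin the positions, and the interior ones are the endpoint velocities divided by 3, which is why piecing segments together preserves continuity of position and velocity.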

Probability problem using Bayes's theorem: Find the percentage of female regular smokers

Posted: 31 Dec 2021 12:01 AM PST

Can anyone help me with this exercise? Here is the exercise:

In one city, there are three types of people. They are:

  1. Regular Smoker
  2. Nonsmoker
  3. Irregular Smoker

From the survey conducted in this city, 25% of people are regular smokers, 65% are nonsmokers, and 10% are irregular smokers. The survey also states that 10% of regular smokers are female, 80% of irregular smokers are male, and 95% of nonsmokers are female. Find the percentage of female regular smokers.
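A short sketch (my own, assuming "percentage of female regular smokers" means $P(\text{regular}\mid\text{female})$, and that "80% male" leaves 20% of irregular smokers female):

```python
# survey figures
prior = {'regular': 0.25, 'non': 0.65, 'irregular': 0.10}
p_female = {'regular': 0.10, 'non': 0.95, 'irregular': 0.20}  # 0.20 = 1 - 0.80

joint = {k: prior[k] * p_female[k] for k in prior}  # P(female and type k)
total_female = sum(joint.values())                  # P(female) = 0.6625

print(joint['regular'])                 # P(female and regular) = 0.025
print(joint['regular'] / total_female)  # P(regular | female) ≈ 0.0377, by Bayes
```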

Find the solution of the system $x''=2x+y$ and $y''=x+2y$

Posted: 30 Dec 2021 11:57 PM PST

I have to find the solution of the system $x''=2x+y$ and $y''=x+2y$ subject to $x(0)=0$, $x'(0)=2$, $y(0)=0$ and $y'(0)=0$.

First I wrote these two equations in matrix form $$\begin{bmatrix} x'' \\ y'' \end{bmatrix}=\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} x\\ y \end{bmatrix}$$

Then I calculated the eigenvalues of the matrix $\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$, getting $\lambda_{1}=1$ and $\lambda_{2}=3$

For these eigenvalues we get the eigenvectors $v_{1}=\begin{bmatrix} 1\\ -1 \end{bmatrix}$ and $v_{2}=\begin{bmatrix} 1 \\ 1 \end{bmatrix}$

For that we get the solution $$\begin{bmatrix} x'\\ y' \end{bmatrix}=\begin{bmatrix} e^{t} & e^{3t} \\ -e^{t} & e^{3t} \end{bmatrix} \begin{bmatrix} C_{1} \\ C_{2} \end{bmatrix}$$

We use $x'(0)=2$ and $y'(0)=0$ and we get $C_{1}=C_{2}=1$

Now I have to find solution for $$\begin{bmatrix} x'\\ y' \end{bmatrix}=\begin{bmatrix} e^{t} & e^{3t} \\ -e^{t} & e^{3t} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$

I tried to find eigenvalues for that matrix but I cannot find them.

Any help?
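One observation (my addition): the system is second order, so an eigenvalue $\lambda$ contributes exponents $\pm\sqrt{\lambda}\,t$, not $e^{\lambda t}$; the first-order ansatz above appears to be where things go astray. A sympy sketch that decouples along the eigenvectors, using $u = x+y$ and $v = x-y$:

```python
import sympy as sp

t = sp.symbols('t')
u, v = sp.Function('u'), sp.Function('v')

# u = x + y satisfies u'' = 3u; v = x - y satisfies v'' = v
sol_u = sp.dsolve(sp.Eq(u(t).diff(t, 2), 3*u(t)), u(t),
                  ics={u(0): 0, u(t).diff(t).subs(t, 0): 2})
sol_v = sp.dsolve(sp.Eq(v(t).diff(t, 2), v(t)), v(t),
                  ics={v(0): 0, v(t).diff(t).subs(t, 0): 2})

x_sol = sp.simplify((sol_u.rhs + sol_v.rhs) / 2)
y_sol = sp.simplify((sol_u.rhs - sol_v.rhs) / 2)
print(x_sol, y_sol)  # sinh(sqrt(3)t)/sqrt(3) ± sinh(t), respectively
```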

How can I evaluate the integral $\int \frac{\sin ^{7} x+\cos ^{7} x}{\sin ^{3} x+\cos ^{3} x} d x$? [duplicate]

Posted: 30 Dec 2021 11:34 PM PST

By division, we decrease the power in the numerator of the integrand. $$ I:=\int \frac{\sin ^{7} x+\cos ^{7} x}{\sin ^{3} x+\cos ^{3} x} d x=\int\left(\sin ^{4} x+\cos ^{4} x-\frac{\sin ^{3} x \cos ^{3} x(\sin x+\cos x)}{\sin ^{3} x+\cos ^{3} x}\right)dx $$ Simplifying the denominator yields $$ I= \underbrace{\int\left(\sin ^{4} x+\cos ^{4} x\right) d x}_{J} -\underbrace{\int\frac{\sin ^{3} x \cos ^{3} x}{1-\sin x \cos x} d x}_{K} $$ Let's start from the easier one $J$. $$ \begin{aligned} J &=\int\left[\left(\sin ^{2} x+\cos ^{2} x\right)^{2}-2 \sin ^{2} x \cos ^{2} x\right] d x \\ &=x-\frac{1}{2} \int \sin ^{2} 2 x d x \\ &=x-\frac{1}{2} \int\left(\frac{1-\cos 4 x}{2}\right) d x \\ &=\frac{3 x}{4}+\frac{1}{16} \sin 4 x+C \end{aligned} $$ From my answer, $$K=\frac{2}{\sqrt{3}} \tan ^{-1}\left(\frac{2 \tan x-1}{\sqrt{3}}\right)-\frac{9}{8} x+\frac{\cos 2 x}{4}+\frac{\sin 4 x}{32} +C$$

Now we can conclude that $$ I= \frac{15x}{8}+\frac{ \sin 4 x}{32}-\frac{\cos 2 x}{4}+\frac{2}{\sqrt{3}} \tan ^{-1}\left(\frac{1-2 \tan x}{\sqrt{3}}\right)+C $$ *checked by wolframalpha.

Is there any other, simpler method? Share it with us if you have one.
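Short of a slicker derivation, a numeric spot check (my own sketch) that the derivative of the claimed antiderivative matches the integrand may add confidence:

```python
import numpy as np

def integrand(x):
    return (np.sin(x)**7 + np.cos(x)**7) / (np.sin(x)**3 + np.cos(x)**3)

def antiderivative(x):
    return (15*x/8 + np.sin(4*x)/32 - np.cos(2*x)/4
            + (2/np.sqrt(3)) * np.arctan((1 - 2*np.tan(x)) / np.sqrt(3)))

h = 1e-6
for x in (0.1, 0.4, 0.8, 1.2):  # avoid x = pi/2 and zeros of sin^3 + cos^3
    dI = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    print(x, dI, integrand(x))  # the last two columns should agree closely
```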

If $U$ is a real measurable function s.t. $\int_{B_1(x)}\frac{1}{U(y)} dy\to 0$, does $\int_{B_1(x)}\frac{1}{U(y)+a} dy\to 0$, too?

Posted: 31 Dec 2021 12:17 AM PST

Let $U:\mathbb{R}^N\to\mathbb{R}$ be a measurable function such that $${\rm infess}_{\mathbb{R}^N} U(x)>-\infty.$$ Let $a\in\mathbb{R}$ be sufficiently large that $${\rm infess}_{\mathbb{R}^N} (U(x)+a)>0.$$ Finally, assume that $$(I) \quad \int_{B_1(x)}\frac{1}{U(y)} dy\to 0 \quad\mbox{ as } |x|\to +\infty.$$ My question is: does assumption $(I)$ imply that also $$\int_{B_1(x)}\frac{1}{U(y)+a} dy\to 0 \quad\mbox{ as } |x|\to +\infty\,?$$

If yes, how can one prove it? I've been stuck for a while; I hope someone can help.

Thank you in advance!

How to prove that $\arg\max_{t}\langle \sum w_i a_{t_i}, a_t\rangle$ is close to $t_1$?

Posted: 31 Dec 2021 12:17 AM PST

Let $a_t$ be a Gaussian type function with center $t\in \mathbb R^d$, i.e. $$a_t(x)=e^{-||x-t||^2}, \text{ for } x \in \mathbb R^d.$$

Let $b$ be a combination of these Gaussian functions $$b = \sum_{i=1}^s w_i a_{t_i}$$ where $t_i \in \mathbb R^d$ for all $i=1,..., s$ and $w_1 > .... > w_s>0$ is a finite sequence of decreasing positive numbers.

Define $$\hat t = \arg\max_{t\in \mathbb R^d} \langle b, a_t\rangle.$$

Here the inner product $\langle b, a_t\rangle$ is defined as $\int_{x\in \mathbb R^d} b(x) a_t(x)dx$.

Intuitively, we can easily see that if the Gaussian functions are far away from each other, then $\hat t$ will be close to $t_1$, i.e. we have

Claim. If $\Delta= \min_{i\neq j; i, j\leq s}||t_i-t_j||\rightarrow +\infty$ then $\hat t \rightarrow t_1$.

I am wondering how to prove the claim rigorously. I have no idea since the function $\langle b, a_t\rangle$ is not even convex. Is there any idea to tackle this problem? Thanks!
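Completing the square gives a closed form for the correlation, $\langle a_s, a_t\rangle = (\pi/2)^{d/2}\, e^{-\|s-t\|^2/2}$, so the objective is itself a weighted sum of Gaussians in $t$; that may be a useful starting point for a rigorous argument. A 1D numeric sketch (my own, with made-up weights and centers) at least illustrates the claim:

```python
import numpy as np

# 1D closed form, dropping the constant sqrt(pi/2) (doesn't move the argmax)
def objective(t, centers, weights):
    return sum(w * np.exp(-(c - t)**2 / 2) for c, w in zip(centers, weights))

weights = [3.0, 2.0, 1.0]            # w_1 > w_2 > w_3 > 0
for delta in (1.0, 3.0, 8.0):        # increasing separation Delta
    centers = [0.0, delta, 2*delta]  # t_1 = 0
    ts = np.linspace(-5, 2*delta + 5, 40001)
    t_hat = ts[np.argmax(objective(ts, centers, weights))]
    print(delta, t_hat)              # t_hat -> t_1 = 0 as Delta grows
```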

Understanding $a^a=n$

Posted: 31 Dec 2021 12:01 AM PST

I have been looking at a particular computational problem and have finally reduced it to a bound of the form $a^a=n$; I will first explain what I mean by this.

For example, suppose I am trying to find if an element exists in a sorted array. I can do binary search, which will require $a$ steps such that $2^a=n$. Hence we know that binary search requires $\Theta(\log n)$ steps, by solving the previous equation. Now similarly, I have a problem which upon solving I get the equation $a^a=n$ instead of $2^a=n$ which we had for binary search.

I have been informed that there is no explicit function satisfying this. That is, whatever explicit function $f(n)$ I take, $f(n)^{f(n)}$ is not equal to $n$. Not only this, I cannot write an explicit function satisfying this in big-$\Theta$ notation either. By this I mean that for all explicit functions $f(n)$, and for all functions $g(n)\in \Theta(n)$, we must have $f(n)^{f(n)} \neq g(n)$. The second statement is stronger, but I have a feeling that it might be easy to derive the second statement from the first somehow. Also, I have no idea how to go about proving the first statement itself. Any insights regarding these would be appreciated.
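One remark (my addition): $a^a = n$ does have a closed form in terms of the non-elementary Lambert $W$ function. Taking logs, $a \ln a = \ln n$, so $a = e^{W(\ln n)} = \ln n / W(\ln n)$, which grows like $\log n / \log\log n$; "no explicit function" may be referring to the fact that $W$ is not elementary. A quick numeric check:

```python
import numpy as np
from scipy.special import lambertw

for n in (10.0, 1e6, 1e30):
    a = np.exp(lambertw(np.log(n)).real)  # solves a**a = n
    print(n, a, a**a)                     # a**a reproduces n up to rounding
```

So a natural candidate bound for the algorithm is $\Theta(\log n / \log\log n)$ steps.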

Game of draughts, expected value of first move advantage, part 2

Posted: 31 Dec 2021 12:12 AM PST

This is a follow up to my question here: Game of draughts, expected value of first move advantage

Here's a question from my probability textbook:

$A$ and $B$ play at draughts and bet $\$1$ even on every game, the odds are $\mu : 1$ in favor of the player who has the first move. If it be agreed that the winner of each game have the first move in the next, show that the advantage of having the first move in the first game is $\$\mu-1$.

Here's the answer in the back of my book:

It is assumed that they go on playing indefinitely.

Let $E$ be the expectation of the first player; then since what one gains the other loses, the second player's expectation is $-E$.

At the first game the first player either wins $\$1$ and remains first player or he loses $\$1$ and becomes second player. And the chances of these two events are as $\mu:1$, so$$E = {{\mu(1 + E) + (-1 - E)}\over{\mu + 1}} = {{\mu - 1}\over{\mu + 1}}(1 + E);$$whence$$E = {{\mu - 1}\over2}, \quad -E = -{{\mu - 1}\over2};$$and the first player's advantage is $\mu - 1$ dollars.

I understand this first part which I typed out above. Here's the second part of the answer given in the back of my book, assuming a different interpretation of the question which assumes playing only a limited number of games:

But if the number of games be limited, let $E_n$ be the first player's expectation when $n$ more games are to be played. Then$$E_0 = 0 \text{ and } E_n = {{\mu - 1}\over{\mu + 1}}(1 + E_{n-1}) = k(1 + E_{n-1}).$$Whence we find$$E_1 = k, \quad E_2 = k + k^2, \quad E_3 = k + k^2 + k^3,$$and so on;$$E_n = {k\over{1 - k}}(1 - k^n) = {{\mu-1}\over2}\left(1 - \left({{\mu-1}\over{\mu+1}}\right)^n\right).$$And the first player's advantage is the double of this$$ = (\mu - 1)\left(1 - \left(1 - {2\over{\mu + 1}}\right)^n\right) = (\mu - 1)\left({{2n}\over{\mu + 1}} - \text{etc.}\right).$$

I don't understand how the book came up with $E_0 = 0$, $E_n = {{\mu - 1}\over{\mu + 1}}(1 + E_{n-1})$ as the correct expression(s) to set up. Can anybody explain in depth why this is the correct setup? Once you assume it, solving the problem is straightforward; I understand everything that follows.
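Not an explanation of the setup, but a numeric check (my own sketch) that the recursion and the closed form agree and converge to $(\mu-1)/2$ may make the formula easier to trust while unpacking it:

```python
from fractions import Fraction

mu = Fraction(3)            # hypothetical odds mu : 1
k = (mu - 1) / (mu + 1)

E = Fraction(0)             # E_0 = 0: no games left, no expectation
for n in range(1, 41):
    E = k * (1 + E)         # E_n = k (1 + E_{n-1})

closed = k / (1 - k) * (1 - k**40)
print(E == closed)                    # True
print(float(E), float((mu - 1) / 2))  # both 1.0 for mu = 3
```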

For a continuous function, how to show that the set of points of finite differentiability is contained in a certain Borel set?

Posted: 30 Dec 2021 11:58 PM PST

I am working on a more detailed version of a proof that has appeared on Math Stack before. Requests for a proof of the following theorem have been posted several times, but I am asking about the specific strategy suggested in (1) below.

Theorem.
Let $f$ be any real-valued, continuous function with domain $\mathbb{R}$. Then $\Delta(f) := \{x\in \mathbb{R}: f'(x) \in \mathbb{R}\}$ is Borel measurable.
In other words, the set of points of finite differentiability belongs to the sigma-algebra generated by the closed sets.

I am trying to flesh out the proof given in (1) Show that $\Delta(f)$ is a $F_{\sigma\delta}$-set, where it is suggested that $\Delta(f)$ is exactly $$S\,=\,\bigcap_{k=1}^{\infty} \;\bigcup_{n=1}^{\infty} \;\; \bigcap_{0 < |\eta| < \frac{1}{n}} \;\; \bigcap_{0 < |\delta| < \frac{1}{n}} \; \left\{x:\;\; \left| \frac{f(x + \eta) - f(x)}{\eta} \;\; - \;\; \frac{f(x + \delta) - f(x)}{\delta} \right| \; \leq \frac{1}{k}\right\}.$$

I have shown that $\Delta(f)$ is a subset of the above set; I outline the reasoning below. For any fixed and non-zero $\eta$ and $\delta$, the function $$x \mapsto \left| \frac{f(x + \eta) - f(x)}{\eta} \;\; - \;\; \frac{f(x + \delta) - f(x)}{\delta} \right|$$ is continuous, so the set $\{x: \cdots\}$ appearing above is closed, being the pre-image of a closed set under a continuous map. So if $S=\Delta(f)$, the theorem is proved.

What I am stuck on is showing $S\subset \Delta(f)$. Set $S$ doesn't "know" about the value of the derivative, which is why I think it is harder. It seems (from looking at other approaches posted) that we need to pass to the rationals (maybe something like this: https://math.stackexchange.com/a/1395360), or build a Cauchy sequence, or a sequence of nested compacts. It makes me wonder, more generally, about sufficient conditions for differentiability at $x_0$ that do not involve pre-knowledge of $f'(x_0)$. What is a good way to rigorously show the inclusion $S\subset \Delta(f)\,$?

Sketch of proof that $\Delta(f)\subset S$:
Let $x \in \Delta(f)$, and let $k\geqslant 1$ be fixed. Let $\epsilon = 1/k>0.$ Let $t=f'(x)$.
There exists $n\geqslant 1$ such that $$\left| \frac{f(x + \eta) - f(x)}{\eta} -t \right| \; \leqslant \; \frac{\epsilon}{2} \;\text{ whenever }\; 0 \lt |\eta|\lt \frac1n.$$ From here it is just a matter of applying the triangle inequality: $$\left| \frac{f(x + \eta) - f(x)}{\eta} - \frac{f(x + \delta) - f(x)}{\delta} \right| $$ $$\leqslant \left| \frac{f(x + \eta) - f(x)}{\eta} - t\right| + \left| \frac{f(x + \delta) - f(x)}{\delta} -t \right| \leqslant \epsilon=\frac{1}{k}.$$

How to define a continuous 2D path?

Posted: 30 Dec 2021 11:48 PM PST

As per Wikipedia, $p: [0,1] \rightarrow X$ is a path in a topological space $X$ if $p$ is continuous. However, $p$ is a 1-dimensional path, i.e., it has no width. My question: how can a 2-dimensional topological path (i.e., with width) be defined? E.g., how can one define a path in $[0,1]^2$ that starts at $y=0$ and ends at $y=1$ but also has a width of $1$ (and thus covers the whole $[0,1]^2$ space)?

Confusion about determining integral limits in a bivariate distribution

Posted: 30 Dec 2021 11:56 PM PST

$f(x_1,x_2)= 8x_1x_2$ for $0<x_1<x_2<1$, and $0$ elsewhere.

I want to find $E(X_1X_2^2).$

According to the book: $$E(X_1X_2^2) = \int_{0}^{1}\int_{0}^{x_2} x_1x_2^2 \cdot 8x_1x_2 \, dx_1dx_2$$

My question is about the bounds for $x_2$. According to the book, the integration limits for $x_2$ are $(0,1)$, but I think that is wrong, because the inequalities are strict. For example, if $x_1 = 0.13$, then how can we assume that $x_2$ can be any value less than $0.13$? So I think the limits should have been $(x_1,1)$. Hence the correct one must be: $$E(X_1X_2^2) = \int_{x_1}^{1}\int_{0}^{x_2} x_1x_2^2 \cdot 8x_1x_2 \, dx_1dx_2$$

Am I correct? If not, can you please explain clearly; I am a beginner in this topic. Thanks in advance.
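A sympy check (my addition) may be reassuring here: the book's order of integration and the reversed order, with the limits adjusted consistently, give the same value, because both describe the same triangular region $0 < x_1 < x_2 < 1$:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
integrand = x1 * x2**2 * 8*x1*x2

# book's order: inner x1 from 0 to x2, outer x2 from 0 to 1
E_book = sp.integrate(integrand, (x1, 0, x2), (x2, 0, 1))
# reversed order: inner x2 from x1 to 1, outer x1 from 0 to 1
E_rev = sp.integrate(integrand, (x2, x1, 1), (x1, 0, 1))

print(E_book, E_rev)  # both 8/21
```

The key point is that the outer variable's limits must be constants; the dependence on the other variable lives only in the inner limits.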

Solve $\int _{x=0}^{\infty }\int _{t=-\infty }^{\infty }\exp \left(\frac{-a t^2+i b t}{3 t^2+1}+i t x\right)\frac{x}{3 t^2+1}\mathrm{d}t\mathrm{d}x$

Posted: 30 Dec 2021 11:45 PM PST

How to solve this double integral?

$$f(a,b)=\int _{x=0}^{\infty }\int _{t=-\infty }^{\infty }\exp \left(\frac{-a t^2+i b t}{3 t^2+1}+i t x\right)\frac{x}{3 t^2+1}\mathrm{d}t\mathrm{d}x$$

$$\text{with }a>0,b\in \mathbb{R},i=\sqrt{-1}$$

Known special solution for $\mathbf{b=0}$

$$f(a,0)=\frac{\pi}{\sqrt{3}\, {\rm exp}\left(\frac{a}{6}\right)}\left[(a+3) I_0\left(\frac{a}{6}\right)+a I_1\left(\frac{a}{6}\right)\right]$$

where $I_0,I_1$ are Bessel functions of order $0$ and $1$ (proof).

What I tried

I followed the first steps given here. Substitution of $t\rightarrow t \sqrt{3},x\rightarrow x/\sqrt{3}$ removes the factors in the denominator

$$f(a,b)=\sqrt{3}\int _{x=0}^{\infty }\int _{t=-\infty }^{\infty } \exp \left(\frac{1}{t^2+1}\left(\frac{i b t}{\sqrt{3}}-\frac{a t^2}{3}\right)+i t x\right)\frac{x}{t^2+1}\mathrm{d}t\mathrm{d}x$$

$$=\sqrt{3}\int _{x=0}^{\infty }{\rm d}x \frac{x}{\text{exp}(x)}\int _{t=-\infty }^{\infty} \exp \left(\frac{1}{t^2+1}\left(\frac{i b t}{\sqrt{3}}-\frac{a t^2}{3}\right)+i x(t-i) \right)\frac{1}{t^2+1}\mathrm{d}t$$ $$=\frac{\sqrt{3}}{{\rm exp}(a/3)}\int _{x=0}^{\infty }{\rm d}x \frac{x}{\text{exp}(x)}\underbrace{\int _{t=-\infty }^{\infty} \exp \left(\frac{1}{t^2+1}\left(\frac{i b t}{\sqrt{3}}+\frac{a}{3}\right)+i x(t-i) \right)\frac{1}{t^2+1}\mathrm{d}t}_{I(x)}.$$

Now $I(x)$ can be closed in the upper half plane since the contribution along the arc vanishes. Then this $t$-integral encloses the single essential singularity in the upper half plane at $t=i$. Hence we have

$$I(x)=2\pi i \, {\rm Res} \left(\exp \left(\frac{1}{t^2+1}\left(\frac{i b t}{\sqrt{3}}+\frac{a}{3}\right)+i x(t-i) \right)\frac{1}{t^2+1}\right)\Bigg|_{t=i} \, $$

where

$$\exp \left(\frac{1}{t^2+1}\left(\frac{i b t}{\sqrt{3}}+\frac{a}{3}\right)+i x(t-i) \right)\frac{1}{t^2+1}$$ can be written as the series $$\sum_{n,m=0}^{\infty}\frac{(ix)^n}{n!}(t-i)^n\frac{1}{m!} \left(\frac{i b t}{\sqrt{3}}+\frac{a}{3}\right)^m\frac{1}{[(t+i)(t-i)]^{m+1}}$$

If $X \perp Y|Z$, does this mean that $X \perp Y|Z, W$? How about the other way around?

Posted: 30 Dec 2021 11:35 PM PST

Suppose $X, Y, Z, W$ are random variables. Let $\perp$ denote independence. $f$ denotes the probability density function. For example, $f(X|Z)$ is the conditional pdf of $X$ given $Z$.

Does $X \perp Y|Z \Rightarrow X \perp Y|Z, W$?

If $X \perp Y|Z$, then $f(X, Y|Z) = f(X|Z)f(Y|Z)$. Now I want to see if $X \perp Y|Z, W$. However, I don't think it's true that $f(X, Y|Z, W) = f(X|Z,W)f(Y|Z, W)$

Therefore, $X \perp Y|Z \nRightarrow X \perp Y|Z, W$.

Does $X \perp Y|Z, W \Rightarrow X \perp Y|Z$?

If $X \perp Y|Z, W$, then $f(X, Y|Z, W) = f(X|Z, W)f(Y|Z, W)$.

\begin{align*} f(X, Y|Z) &= \int_W f(X, Y|Z, W)f(W|Z) dW\\ &= \int_W f(X|Z, W) f(Y|Z, W)f(W|Z) dW\\ &\neq f(X|Z)f(Y|Z) \end{align*}

Therefore, $X \perp Y|Z, W \nRightarrow X \perp Y|Z$.

Is the above correct?
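The conclusions look right in general, and small discrete counterexamples make them concrete. Below is an enumeration sketch (my own, with hypothetical helper names) checking both directions with binary variables, taking $Z$ constant so it can be dropped:

```python
from fractions import Fraction
from itertools import product

def indep_given(joint, cond_idx):
    """Is variable 0 independent of variable 1 given the variables at cond_idx,
    for a joint pmf stored as {outcome tuple: probability}?"""
    def marg(fix):
        return sum(p for v, p in joint.items()
                   if all(v[i] == x for i, x in fix.items()))
    for cond in {tuple(v[i] for i in cond_idx) for v in joint}:
        fix_c = dict(zip(cond_idx, cond))
        pc = marg(fix_c)
        for x, y in product((0, 1), repeat=2):
            if marg({0: x, 1: y, **fix_c}) * pc != \
               marg({0: x, **fix_c}) * marg({1: y, **fix_c}):
                return False
    return True

# (X, Y, W) with X, Y iid fair bits and W = X xor Y
ex1 = {(x, y, x ^ y): Fraction(1, 4) for x, y in product((0, 1), repeat=2)}
print(indep_given(ex1, ()), indep_given(ex1, (2,)))  # True, False

# (X, Y, W) with X = Y = W a single fair bit
ex2 = {(b, b, b): Fraction(1, 2) for b in (0, 1)}
print(indep_given(ex2, (2,)), indep_given(ex2, ()))  # True, False
```

The first example shows $X \perp Y \mid Z$ without $X \perp Y \mid Z, W$; the second shows the converse failure.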

Proof Check: Non-existence of the inverse function in a given class of functions

Posted: 31 Dec 2021 12:09 AM PST

Are my conjecture and proof below mathematically and linguistically correct?
Are they well formulated?
How can they be improved?
How can they be shortened?
Is the conjecture obvious?

As far as I know, the theorem and its topic are new. A confirmation of usefulness and applicability of the theorem is at the end of this question.

Clearly, part a) of the conjecture is trivial, but it's good for the application of the theorem if part a) is written down together with the non-trivial cases.

Definition:
Let $S$ be a set. A function $f$ is called a ''function in the set $S$'' if $f\colon\mathrm{dom}(f)\subseteq S\to S$.

Conjecture:
Let
$S_\alpha,S$ be sets with $S_\alpha\subseteq S$,
$\alpha\in S_\alpha$,
$f$ a non-empty function in the set $S$,
$F$ a set of functions in the set $S$,
$F_{bij}(\alpha)$ the set of the function values of all bijective functions from $F$ defined at $\alpha$,
$z$ a solution variable.
Let the equation $f(z)=\alpha$ be given.
a) If the equation has more than one solution, then $f$ has no inverse function.
b) If the equation has at least one solution $z_0\notin F_{bij}(\alpha)$ for $z$, then $f$ has no inverse function in $F$.
c) If $\alpha$ is a function value of $f$ and the equation has no solution $z_0\in F_{bij}(\alpha)$ for $z$, then $f$ has no inverse function in $F$.

Proof:
We consider the equation from the conjecture and its solutions $z_0$:
$$f(z_0)=\alpha.\tag{1}$$ If $z_0\notin\mathrm{dom}(f)$, then $f(z_0)$ doesn't exist, then $z_0$ is not a solution to the equation.
If $\alpha\notin f(\mathrm{dom}(f))$, then $z_0\notin\mathrm{dom}(f)$, then $z_0$ is, according to the above, not a solution to the equation.
So, the equation only has solutions if $z_0\in\mathrm{dom}(f)$ and $\alpha\in f(\mathrm{dom}(f))$.
If the equation has solutions, then the solutions are arguments of $f$ and $\alpha $ is the image of the solutions under the function $f$.
Proof of part a) of the conjecture
According to the precondition of part a) of the conjecture, equation (1) has more than one solution. Then, according to what has just been said, $\alpha$ is the image of more than one argument of $f$. So $f$ is not injective and therefore not bijective. It follows that $f$ has no inverse function. This proves part a) of the conjecture.
Proof of part b) of the conjecture
According to the precondition of part b) of the conjecture, equation (1) has at least one solution.
If the equation has more than one solution, then $f$ has no inverse function, according to part a) of the conjecture. Then $f$ has no inverse function in $F$.
Equation (1) now has exactly one solution $z_0$.
We carry out a proof by contradiction.
Assume $f$ has an inverse function $f^{-1}$ in $F$. Then the function value $z_0=f^{-1}(\alpha)$ of the function $f^{-1}$ would exist, $f^{-1}$ would be bijective, and therefore $z_0\in F_{bij}(\alpha)$. This contradicts the precondition $z_0\notin F_{bij}(\alpha)$ of the conjecture. So the assumption was wrong. It follows that $f$ has no inverse function in $F$.
This proves part b) of the conjecture.
Proof of part c) of the conjecture
According to the precondition of part c) of the conjecture, $\alpha$ is a function value of $f$.
We carry out a proof by contradiction.
Assume $f$ has an inverse function $f^{-1}$ in $F$. Then the function value $z_0=f^{-1}(\alpha)$ of the function $f^{-1}$ would exist. Because then $f(z_0)=\alpha$, $z_0$ would be a solution to equation (1). In addition, $f^{-1}$ would be bijective and therefore $z_0\in F_{bij}(\alpha)$. This contradicts the assumption of the conjecture that the equation has no solution $z_0\in F_{bij}(\alpha)$. So the assumption was wrong. It follows that $f$ has no inverse function in $F$. This proves part c) of the conjecture.
This proves the conjecture.

Justification of usefulness of the theorem:

The theorem can be applied for deciding the non-existence of inverses in a given class of functions, e.g. in the Elementary functions with the field of constants $\mathbb{Q}$ (see e.g. the definition of the Elementary functions in [Ritt 1925] or [Ritt 1948]) or in terms of some Special functions, e.g. in closed form.

I want to show that the non-existence of inverses in some classes of functions follows from the non-existence of particular solutions of particular equations in some cases.
(As far as I know, this knowledge is new.)

Start by fixing a set $F$ of functions.
Take as a first example e.g. the Elementary functions as defined in [Ritt 1925] for $F$, the Elementary numbers as defined in [Lin 1983] for $S_\alpha$, and $S=\mathbb{C}$.
Take the equation of the main theorem of [Lin 1983] and apply the above conjecture. Consider that the Elementary numbers are generated from the integers by applying finitely many elementary functions, and that the Elementary numbers are closed under application of finitely many elementary functions (elementary functions in the sense of the definition mentioned above). So, an elementary function maps elementary arguments to elementary values. And the same holds for an elementary inverse.

[Lin 1983] Ferng-Ching Lin: Schanuel's Conjecture Implies Ritt's Conjectures. Chin. J. Math. 11 (1983) (1) 41-50

[Ritt 1925] Ritt, J. F.: Elementary functions and their inverses. Trans. Amer. Math. Soc. 27 (1925) (1) 68-90

[Ritt 1948] Ritt, J. F.: Integration in finite terms. Liouville's theory of elementary methods. Columbia University Press, New York, 1948

About Hall's Theorem

Posted: 31 Dec 2021 12:00 AM PST

Let $G$ be a bipartite graph. A necessary and sufficient condition for a matching in $G$ to leave no unpaired element of the set $A$ is that $|A-N(T)|\leq|B-T|$ for every $T\subset B$. How can I prove this proposition?

Definitions and propositions that I know:

Vertices of the graph $G$ that are not endpoints of any edge in a matching $M$ are called unpaired vertices. The set formed by the neighbors of all vertices in $T$, $T\subset B$, is denoted by $N(T)$.

Hall's Condition: Let $G=(A,B,E)$ be a bipartite graph. If $|S|\leq|N(S)|$ for each set $S\subset A$, then $G$ is said to satisfy Hall's Condition.

I have tried hard but could not do it; please write if you can prove it.

Monty Hall: What is the "unconditional probability of success" and "conditional probability of success"?

Posted: 31 Dec 2021 12:12 AM PST

Can someone explain the highlighted paragraph from Blitzstein, Introduction to Probability (2019, 2nd edn), p. 69? I'm not understanding the subtlety the author is trying to get at. What is the "unconditional probability of success"? How is it unconditional, given that we condition on which door the car is behind when applying the law of total probability? What is the "conditional probability of success"? Are the "unconditional probability of success" and "conditional probability of success" the same thing?

[screenshot of the quoted paragraph from Blitzstein, p. 69]

For the normal Monty Hall problem, it seems that the "unconditional probability of success" and "conditional probability of success" are the same; they give the same number (2/3). But they differ in op. cit. Exercise 40, p. 91.

Consider the Monty Hall problem, except that Monty enjoys opening Door 2 more than he enjoys opening Door 3, and if he has a choice between opening these two doors, he opens Door 2 with probability p, where $\frac{1}{2} \leq p \leq 1$.

(a) Find the unconditional probability that the strategy of always switching succeeds (unconditional in the sense that we do not condition on which of Doors 2,3 Monty opens).

(b) Find the probability that the strategy of always switching succeeds, given that Monty opens Door 2.

(c) Find the probability that the strategy of always switching succeeds, given that Monty opens Door 3.

The answer is 2/3 for (a) because $0 \cdot 1/3 + 1 \cdot 1/3 + 1 \cdot 1/3 = 2/3$ by LOTP. Why is this answer the same as in the normal Monty Hall problem, where Monty likes all the doors equally?

The answer for (b) is $\frac{1}{p+1}$ and for (c) it's $\frac{1}{2-p}$.
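A Monte Carlo sketch (my own) reproduces all three answers, which may help with (a): conditioning on nothing averages over Monty's choice, and switching wins exactly when the car is not behind door 1, regardless of which door Monty opens:

```python
import random

def simulate(p, trials=400_000):
    wins = {'all': [0, 0], 2: [0, 0], 3: [0, 0]}
    for _ in range(trials):
        car = random.randint(1, 3)                    # contestant picks door 1
        if car == 1:
            opened = 2 if random.random() < p else 3  # Monty prefers door 2
        else:
            opened = 5 - car                          # forced: the other goat door
        win = car != 1                                # switching wins iff car isn't behind 1
        for key in ('all', opened):
            wins[key][0] += win
            wins[key][1] += 1
    return {k: n / d for k, (n, d) in wins.items()}

print(simulate(0.75))  # ≈ {'all': 0.667, 2: 0.571 = 1/(p+1), 3: 0.8 = 1/(2-p)}
```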

Residual analysis in Python

Posted: 31 Dec 2021 12:00 AM PST

When doing residual analysis, do we first fit our model on the entire training set and calculate residuals between fitted values and actual values? Or do we first fit our model on the training+testing set?

I'm having trouble wrapping my head around the concept of residual variance. Does the variance mean that if we fit our linear regression model on multiple (varying) datasets, our residuals would vary according to the normal distribution with mean 0 and this variance?

When would we use prediction vs estimation? Predictions have more variance because of new data point, but it seems that we are always estimating/predicting new data points?

How do you deal with leverage points?

Does anybody know any good Python packages to do residual analysis?
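On the last point: statsmodels covers most of this workflow. A minimal sketch (my own, on synthetic data) that fits on the training set only and pulls out the usual residual diagnostics, including leverage:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = 1.0 + X @ np.array([2.0, -1.0]) + rng.normal(scale=0.5, size=200)

model = sm.OLS(y, sm.add_constant(X)).fit()  # fit on training data only
resid = model.resid                          # actual minus fitted values
influence = model.get_influence()
leverage = influence.hat_matrix_diag         # high values flag leverage points
studentized = influence.resid_studentized_external

print(resid.std(), leverage.max())
```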

What is the simplest way to get Bernoulli numbers?

Posted: 30 Dec 2021 11:39 PM PST

On paper, what is the simplest way to generate the Bernoulli fractions like $\frac{-1}{30}$ and $\frac{7}{6}$?

Basically I'm trying to find and understand $B_n =$ (the stuff on this side) and I've seen something using $i$ and a contour integral with

$$\frac{z}{e^z-1}\frac{ dz}{z^{n+1}}$$

and I don't pretend to understand that at all.

$$B_n=\frac{n!}{2\pi \color{red}{i}}\color{red}{\oint}\frac{z}{e^z-1}\,\color{red}{\frac{dz}{z^{n+1}}}$$

$$B_n=\sum^n_{\color{red}{k}=0}\frac{1}{\color{red}{k}+1}\sum^\color{red}{k}_{\color{red}{v}=0}(-1)^\color{red}{v}\color{red}{\binom{k}{v}}\color{red}{v}^n$$

and then there's this variant I see used a lot called a Generating function where

$$B_n = \sum_{k=0}^{n} \frac{1}{k+1} \sum_{v=0}^{k} (-1)^v \binom{k}{v} v^n$$

but I don't understand double sums either. I need to be able to reliably get Bernoulli numbers for Taylor series work like tangent and the hyperbolic variants. I've reached the limit of my understanding and marked in red the parts I'm confused about. Like, I get that the $i$ is imaginary and somehow related to rotation by $\pi$, probably positive and negative, but I don't know what the $d$ or $z$ mean in that equation.

In the second formula, I don't understand why it switched from $n$ to $k$, then from $k$ to $v$, though I suspect I'm supposed to increment $1:1$ increases in $n$ for $k$, which also increases my increments for $v$ by $1:1$ (creating a $\frac{+1}{-1}$ sequence) but I don't understand the vertical parentheses at all. I'm not even going to pretend I took a class above calculus 1.
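The double sum is more mechanical than it looks: $k$ runs from $0$ to $n$, and for each $k$ the inner index $v$ runs from $0$ to $k$; the vertical parentheses $\binom{k}{v}$ are binomial coefficients ("$k$ choose $v$"). A direct transcription in Python (my own sketch, using exact fractions, with the conventions $0^0 = 1$ and $B_1 = -\frac12$):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    # outer sum over k = 0..n; inner alternating sum over v = 0..k
    return sum(Fraction(1, k + 1) *
               sum((-1)**v * comb(k, v) * v**n for v in range(k + 1))
               for k in range(n + 1))

print([bernoulli(n) for n in range(9)])
# [1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30]
```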

Why isn't $\frac1{3p} =$ the probability that the strategy of always switching succeeds, given that Monty opens door 2?

Posted: 31 Dec 2021 12:13 AM PST

  1. Consider the Monty Hall problem, except that Monty enjoys opening door 2 more than he enjoys opening door 3, and if he has a choice between opening these two doors, he opens door 2 with probability p, where $1/2 \le p \le 1$.

To recap: there are three doors, behind one of which there is a car (which you want), and behind the other two of which there are goats (which you don't want). Initially, all possibilities are equally likely for where the car is. You choose a door, which for concreteness we assume is door 1. Monty Hall then opens a door to reveal a goat, and offers you the option of switching. Assume that Monty Hall knows which door has the car, will always open a goat door and offer the option of switching, and as above assume that if Monty Hall has a choice between opening door 2 and door 3, he chooses door 2 with probability p (with $1/2 \le p \le 1$).

(b) Find the probability that the strategy of always switching succeeds, given that Monty opens door 2.

Blitzstein, Introduction to Probability (2019 2 edn), Chapter 2, Exercise 40, p 91.

My wrong approach

Let $C_i$ be the event that the car is behind door $i$, $O_i$ be the event that Monty opened door $i$, and $X_i$ be the event that initially I chose door $i$. Here $i=1,2,3$.
$P(O_2|C_1, X_1) = p$
$P(O_2|C_2, X_1) = 0$
$P(O_2|C_3, X_1) = 1$

Now, we are to find $P(C_3|O_2, X_1)$.
$P(C_3|O_2, X_1) = \dfrac{P(O_2|C_3, X_1) . P(C_3|X_1)}{P(O_2|X_1)}$ $= \dfrac{1 . \frac{1}{3}}{p}$ $= \dfrac{1}{3p}$

This is not the correct answer. Can anybody comment on what I am missing in my approach here?
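One possible gap (my observation): the denominator $P(O_2 \mid X_1)$ is not $p$ by itself; it needs the law of total probability over the car's position. A short exact sketch:

```python
from fractions import Fraction

p = Fraction(3, 4)  # hypothetical value of Monty's door-2 preference

# P(O2 | X1) by LOTP over the car's position (uniform prior 1/3 each):
# car behind door 1 -> p, door 2 -> 0, door 3 -> 1
P_O2 = Fraction(1, 3)*p + Fraction(1, 3)*0 + Fraction(1, 3)*1

P_C3_given_O2 = (Fraction(1, 3) * 1) / P_O2
print(P_O2, P_C3_given_O2, 1 / (p + 1))  # (p+1)/3 and 1/(p+1), as in the book
```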

Intuitively, why does $\lim_{n \to \infty} \frac16 (p_{n - 1} + p_{n - 2} + \cdots + p_{n - 6}) = 2/7$?

Posted: 31 Dec 2021 12:11 AM PST

Blitzstein, Introduction to Probability (2019 2 edn), Chapter 2, Exercise 48, p 94.

  1. A fair die is rolled repeatedly, and a running total is kept (which is, at each time, the total of all the rolls up until that time). Let $p_n$ be the probability that the running total is ever exactly n (assume the die will always be rolled enough times so that the running total will eventually exceed n, but it may or may not ever equal n).

(c) Give an intuitive explanation for the fact that $p_n \rightarrow 1/3.5 = 2/7$ as $n \rightarrow \infty$.

From p 17 in the publicly downloadable PDF of curbed solutions.

(c) An intuitive explanation is as follows. The average number thrown by the die is (total of dots)/6, which is 21/6 = 7/2, so that every throw adds on an average of 7/2. We can therefore expect to land on 2 out of every 7 numbers, and the probability of landing on any particular number is 2/7.

That's the line I don't get: why can we pass from "every throw adds $7/2$ on average" to "we land on $2$ out of every $7$ numbers"?
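One way to make the renewal intuition tangible (my own sketch) is the recursion $p_n = \frac16(p_{n-1} + \cdots + p_{n-6})$ with $p_0 = 1$: the values settle at $2/7$, the reciprocal of the mean step $7/2$, which is exactly the "land on 2 out of every 7 numbers" statement:

```python
from fractions import Fraction

p = [Fraction(1)]  # p_0 = 1: the running total starts at 0
for n in range(1, 61):
    window = [p[n - j] for j in range(1, 7) if n - j >= 0]
    p.append(Fraction(1, 6) * sum(window))

print(float(p[60]), 2 / 7)  # 0.285714... in both cases
```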
