Recent Questions - Mathematics Stack Exchange |
- Extension of Faa di Bruno's formula
- Help me to solve this calculus expression
- Conditional probability of Brownian bridge
- Solving a limit using Maclaurin series $ \lim_{x\to0}\frac{xe^{2x}+xe^{-2x}-4x^{3}-2x}{\sin^{2}x-x^{2}} $
- Choosing people based on height
- Reference request: norm of completely positive map between C*-algebras is attained along approximate identity
- Irrational roots of $\frac{4x}{x^2 + x + 3} + \frac{5x}{x^2 - 5x + 3} = -\frac{3}{2}$
- How to solve this limit question [closed]
- Confusion about limits of a bivariate distribution and a contradiction in a related question
- Evaluating a limit when the variable is in the function
- Matching a Cubic Bézier to a Cubic Polynomial
- Probability problem using Bayes's theorem: Find the percentage of female regular smokers
- Find the solution of the system $x''=2x+y$ and $y''=x+2y$
- How can I evaluate the integral $\int \frac{\sin ^{7} x+\cos ^{7} x}{\sin ^{3} x+\cos ^{3} x} d x$? [duplicate]
- If $U$ is a real measurable function s.t. $\int_{B_1(x)}\frac{1}{U(y)} dy\to 0$, does $\int_{B_1(x)}\frac{1}{U(y)+a} dy\to 0$, too?
- How to prove that $\arg\max_{t}\langle \sum w_i a_{t_i}, a_t\rangle$ is close to $t_1$?
- Understanding $a^a=n$
- Game of draughts, expected value of first move advantage, part 2
- For a continuous function, how to show that the set of points of finite differentiability is contained in a certain Borel set?
- How to define a continuous 2D path?
- Confusion on determining integral limits in a bivariate distribution
- Solve $\int _{x=0}^{\infty }\int _{t=-\infty }^{\infty }\exp \left(\frac{-a t^2+i b t}{3 t^2+1}+i t x\right)\frac{x}{3 t^2+1}\mathrm{d}t\mathrm{d}x$
- If $X \perp Y|Z$, does this mean that $X \perp Y|Z, W$? How about the other way around?
- Proof Check: Non-existence of the inverse function in a given class of functions
- About Hall's Theorem
- Monty Hall: What is the "unconditional probability of success" and "conditional probability of success"?
- Residual analysis in Python
- What is the simplest way to get Bernoulli numbers?
- Why isn't $\frac1{3p} =$ the probability that the strategy of always switching succeeds, given that Monty opens door 2?
- Intuitively, why does $\lim_{n \to \infty} \frac16 (p_{n - 1} + p_{n - 2} ... + p_{n - 6}) = 2/7$?
Extension of Faa di Bruno's formula Posted: 31 Dec 2021 12:06 AM PST Faa di Bruno's formula gives an expression for the $n$th derivative of a composite function, $\frac{d^n}{dt^n}f(g(t))$, thereby generalising the chain rule. I was wondering whether there is also a formula available for $\frac{d^n}{dt^n}f(t, g(t))$, where $t$ also is an argument of $f$. |
Help me to solve this calculus expression Posted: 31 Dec 2021 12:03 AM PST I am new to math.StackExchange and I don't know how to typeset questions about limits. How do I prove that this limit is equal to $-1$? $$\lim_{x\to 0}\frac{x-2}{|x-2|}$$ |
Conditional probability of Brownian bridge Posted: 30 Dec 2021 11:57 PM PST Suppose $B_{0,a}^{T,b}(t)$ is a Brownian bridge such that $B_{0,a}^{T,b}(0)=a$ and $B_{0,a}^{T,b}(T) = b$. The probability density function of $B_{0,a}^{T,b}(t)$ is the conditional probability density function of the Brownian motion $B(t)$ given that $B(0)=a$ and $B(T)=b$. That is, $$ f_{B_{0,a}^{T,b}(t)}(x) = f_{B(t)}(x|B(0)=a,B(T)=b).$$ Equivalently, $$ \mathbb{P}(B_{0,a}^{T,b}(t) \leq x) = \mathbb{P}(B(t) \leq x |B(0)=a,B(T)=b).$$ I want to ask how to express the following probability in terms of $B(t)$: $$ \mathbb{P}\left(B_{0,a}^{T,b}(t) \leq x \middle| \min_{s \in [0,T]} B_{0,a}^{T,b}(s) \geq y \right) ?$$ Is it equal to $$ \mathbb{P}\left(B_{0,a}^{T,b}(t) \leq x \middle| \min_{s \in [0,T]} B_{0,a}^{T,b}(s) \geq y \right) =\mathbb{P}\left(B(t) \leq x \middle| B(0)=a,B(T)=b, \min_{s \in [0,T]} B(s) \geq y \right) ?$$ May I have some detailed explanation of this? |
Posted: 30 Dec 2021 11:48 PM PST I need to find the limit $$ \lim_{x\to0}\frac{xe^{2x}+xe^{-2x}-4x^{3}-2x}{\sin^{2}x-x^{2}} $$ using Maclaurin series. I don't know to what order I am supposed to expand each expression. I think $\sin x$ in the denominator must be expanded to at least order $3$, but what about the numerator? I would appreciate any help! |
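This is not a substitute for the series derivation the exercise asks for, but for orientation: expanding $e^{\pm 2x}$ through order $4$ (so the numerator is known through order $5$) and $\sin x$ through order $3$ gives, if I have expanded correctly, numerator $\tfrac{4}{3}x^5 + O(x^7)$ and denominator $-\tfrac{1}{3}x^4 + O(x^6)$, so the ratio behaves like $-4x$ and the limit should be $0$. A quick numerical sanity check:

```python
import math

def f(x):
    num = x * math.exp(2 * x) + x * math.exp(-2 * x) - 4 * x**3 - 2 * x
    den = math.sin(x) ** 2 - x**2
    return num / den

# Series suggest num ~ (4/3) x^5 and den ~ -(1/3) x^4, so f(x) ~ -4x -> 0.
for x in (1e-1, 1e-2, 1e-3):
    print(x, f(x), f(x) / (-4 * x))  # last column tends to 1
```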
Choosing people based on height Posted: 30 Dec 2021 11:45 PM PST
My attempt: Let us select $m+n$ people among the $100$. Arranging them in ascending order of height, we need to separate them into groups of $m$ and $n$ such that the group of $n$ is taller. There is only one way to do this. So the answer is $$^{100}C_{m+n}$$ But the answer is given as $$\sum_{r=0}^{100-m} {}^{m+r}C_m \cdot {}^{100-m-r}C_n$$ I don't understand what I did wrong or how the given answer is derived. |
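The problem statement itself is not reproduced above, but the two answers can at least be compared numerically. The book's sum has a closed form: substituting $j = m+r$ turns it into $\sum_{j} \binom{j}{m}\binom{100-j}{n} = \binom{101}{m+n+1}$ (the standard identity counting $(m+n+1)$-subsets of $\{0,\dots,100\}$ by the position of the $(m{+}1)$-st element), and this genuinely differs from $\binom{100}{m+n}$, so the two approaches count different things rather than being two forms of one answer. A check (the identity is my claim, worth verifying):

```python
from math import comb

N, m, n = 100, 3, 2

# The book's sum; math.comb returns 0 when the lower index exceeds the upper.
book = sum(comb(m + r, m) * comb(N - m - r, n) for r in range(N - m + 1))
# Conjectured closed form of that sum (Vandermonde-style convolution identity):
closed = comb(N + 1, m + n + 1)
# The asker's answer:
mine = comb(N, m + n)

print(book, closed, mine)  # book == closed, but both differ from C(100, m+n)
```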
Posted: 30 Dec 2021 11:49 PM PST Let $A, B$ be C*-algebras, with $e_\lambda$ an approximate identity in $A$. I'm fairly certain that if $\varphi : A\to B$ is a (completely) positive map, then $\|\varphi\| = \lim_\lambda\|\varphi(e_\lambda)\|$. I could've sworn I had references for this result, but I can't seem to find it anywhere. I checked Blackadar's book "Operator Algebras", but it appears only to include the result that a unital linear map $\varphi : A\to B$ is positive if and only if it is contractive, which is not what I'm looking for. I've found a reference for the unital case: if $\phi : A\to B$ is a positive map between unital C*-algebras, then $\|\phi\| = \|\phi(1)\|$. This can be found in Paulsen's "Completely Bounded Maps and Operator Algebras" (corollary 2.9), as a corollary of the Russo-Dye theorem. Have I perhaps misquoted the result in the first paragraph? I'm also less certain that I've seen this similar result stated before: if $\psi : A\to B^*$ is (completely) positive, then $\|\psi\| = \lim_\lambda\|\psi(e_\lambda)\|$. Positivity in $B^*$ is defined as you might expect, giving us an order structure, and the order structure on $M_n(B^*)$ is obtained by the canonical identification of $M_n(B^*)$ with $M_n(B)^*$. I was hoping to find a proof of the result in the paragraph above and see what modifications needed to be made to prove this, but now I can't find either. |
Irrational roots of $\frac{4x}{x^2 + x + 3} + \frac{5x}{x^2 - 5x + 3} = -\frac{3}{2}$ Posted: 31 Dec 2021 12:04 AM PST Find the number of irrational roots of the equation $$\dfrac{4x}{x^2 + x + 3} + \dfrac{5x}{x^2 - 5x + 3} = -\dfrac{3}{2}$$ $$................................... $$ I got a solution : Divide both denominator and numerator by $x$ Let $x+\dfrac{3}{x} = y$, then the equation becomes $$\dfrac{4}{y + 1} + \dfrac{5}{y - 5} = \dfrac{3}{2}$$ Now simplifying this we get $y = -5, 3$. Finally $x+\dfrac{3}{x} = -5$ has 2 irrational roots and $x+\dfrac{3}{x} = 3$ has 2 imaginary roots $$................................... $$ But my question is, since this question is for a competitive exam, is there any other quick approach to solve this question? |
How to solve this limit question [closed] Posted: 31 Dec 2021 12:07 AM PST $$\lim_{ x \rightarrow 0 } \left( \dfrac{ x-2 }{ | x-2 | } \right) $$ |
Confusion about limits of a bivariate distribution and a contradiction in a related question Posted: 30 Dec 2021 11:36 PM PST This question is from "Mathematical Statistics with Applications by Wackerly and Mendenhall", $8$th edition, page $233$, question $5.5$:
I am a beginner in statistics, so there are some things I could not understand. I tried to solve this question, but my answer and the answer key's differ. First of all, I want you to check my solution to see what I am missing. $\mathbf{\text{MY ATTEMPT:}}$ $$\int_{0}^{1/2}\int_{0}^{1/3}3y_1dy_2dy_1 =\int_{0}^{1/2}\bigg(\int_{0}^{1/3}3y_1dy_2\bigg)dy_1$$ $$\bigg(\int_{0}^{1/3}3y_1dy_2\bigg) =3y_1\times(1/3) - 3y_1\times (0)=y_1$$ $$\int_{0}^{1/2}y_1dy_1 =\frac{y_1^2}{2}\bigg|_0^{1/2} = \frac{1}{8} - 0=0.125$$ However, the answer key says the answer is $0.1065$. What am I missing? I also want to ask: when we work with continuous bivariate probabilities, is $ 0 \leq y_2 \leq y_1 \leq1$ the same as $0 <y_2 < y_1 <1$, or $0 <y_2 \leq y_1 <1$, or $0 \leq y_2 < y_1 <1$, or $0 <y_2 < y_1 \leq1$? I always get confused when I see $\leq$ and $<$; some books use $<$, but some books use $\leq$. Can you also explain how to approach these signs in bivariate probability? Thanks in advance. By the way, please don't forget that I am a beginner, so explain it as simply as you can. |
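The problem statement itself is not reproduced above, so the following is a guess to be checked against the book: the key's $0.1065$ is exactly what results if the joint density $f(y_1,y_2)=3y_1$ is supported on the triangle $0 \le y_2 \le y_1 \le 1$ rather than on a full rectangle, in which case the inner $y_2$ integral must stop at $y_1$ whenever $y_1 < 1/3$. (On the side question: for continuous distributions the boundary has probability zero, so $\le$ versus $<$ does not change the answer.) A check under that assumed density:

```python
from fractions import Fraction

# Assumed (not shown in the excerpt): joint density f(y1, y2) = 3*y1 on 0 <= y2 <= y1 <= 1.
# Then P(Y1 <= 1/2, Y2 <= 1/3) splits into two pieces:
#   y1 in (0, 1/3):   y2 runs over (0, y1)  -> integrate 3*y1**2, antiderivative y1**3
#   y1 in (1/3, 1/2): y2 runs over (0, 1/3) -> integrate y1,      antiderivative y1**2/2
piece1 = Fraction(1, 3) ** 3
piece2 = Fraction(1, 2) * (Fraction(1, 2) ** 2 - Fraction(1, 3) ** 2)
p = piece1 + piece2
print(p, float(p))  # 23/216 ≈ 0.10648, matching the key's 0.1065
```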
Evaluating a limit when the variable is in the function Posted: 30 Dec 2021 11:37 PM PST If I have a continuous function $f(a)$, it makes sense that $$\lim_{x→0}f(x)=f(0)$$ However, if we take the limit over a variable that appears inside the function, such as $$\lim_{h→0}f(x+h)$$ would it be correct to say $$\lim_{x→0}f(x+h)=f(0+h)=f(h)$$ no matter what the function $f(a)$ is? And if so, what is the explanation for this? |
Matching a Cubic Bézier to a Cubic Polynomial Posted: 30 Dec 2021 11:55 PM PST I have a series of cubic polynomials that are being used to create a trajectory. Constraints can be applied to each polynomial so that these four parameters are satisfied: initial position, final position, initial velocity, and final velocity. The polynomials are pieced together such that the end of one polynomial is identical to the beginning of the next, to preserve continuity. I instead want to represent these polynomials as cubic Bézier curves. How would I find the $(x, y)$ position of each control point of the cubic Bézier curves, so that each one matches the shape of its cubic polynomial? Here is what I have so far, made in Desmos: https://www.desmos.com/calculator/agsywptfno Currently the Bézier curve is defined parametrically, with one polynomial for $X$ and one for $Y$, e.g. Bezier $= (X(t), Y(t))$. |
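Assuming each polynomial is parameterised on $t \in [0,1]$, there is an exact conversion, not just a fit: a cubic $p(t) = c_0 + c_1 t + c_2 t^2 + c_3 t^3$ equals the cubic Bézier whose control values are $b_0 = p(0)$, $b_1 = p(0) + p'(0)/3$, $b_2 = p(1) - p'(1)/3$, $b_3 = p(1)$, since both are cubics agreeing in value and derivative at both endpoints (four constraints, four degrees of freedom). A sketch (function names are mine):

```python
def poly_to_bezier(c0, c1, c2, c3):
    """Control values of the cubic Bezier equal to c0 + c1*t + c2*t^2 + c3*t^3 on [0, 1]."""
    b0 = c0                            # p(0)
    b1 = c0 + c1 / 3                   # p(0) + p'(0)/3
    b3 = c0 + c1 + c2 + c3             # p(1)
    b2 = b3 - (c1 + 2*c2 + 3*c3) / 3   # p(1) - p'(1)/3
    return b0, b1, b2, b3

def bezier(ctrl, t):
    """Evaluate a cubic Bezier in the Bernstein basis."""
    b0, b1, b2, b3 = ctrl
    s = 1 - t
    return b0*s**3 + 3*b1*s**2*t + 3*b2*s*t**2 + b3*t**3

c = (1.0, 2.0, 3.0, 4.0)
b = poly_to_bezier(*c)
t = 0.5
p = c[0] + c[1]*t + c[2]*t**2 + c[3]*t**3
print(bezier(b, t), p)  # both 3.25: the two curves coincide
```

Run the conversion once for the $X(t)$ polynomial and once for $Y(t)$; the $i$-th control point is then $(b_i^x, b_i^y)$. If a segment is parameterised on $[t_0, t_1]$ instead, substitute $t = t_0 + (t_1 - t_0)u$ first.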
Probability problem using Bayes's theorem: Find the percentage of female regular smokers Posted: 31 Dec 2021 12:01 AM PST Can anyone help me with this exercise? Here is the exercise:
|
Find the solution of the system $x''=2x+y$ and $y''=x+2y$ Posted: 30 Dec 2021 11:57 PM PST I have to find the solution of the system $x''=2x+y$ and $y''=x+2y$ subject to $x(0)=0$, $x'(0)=2$, $y(0)=0$ and $y'(0)=0$. First I wrote these two equations in matrix form: $$\begin{bmatrix} x'' \\ y'' \end{bmatrix}=\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} x\\ y \end{bmatrix}$$ Then I calculated the eigenvalues of the matrix $\begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$, getting $\lambda_{1}=1$ and $\lambda_{2}=3$. For these eigenvalues we get the eigenvectors $v_{1}=\begin{bmatrix} 1\\ -1 \end{bmatrix}$ and $v_{2}=\begin{bmatrix} 1 \\ 1 \end{bmatrix}$. From that we get the solution $$\begin{bmatrix} x'\\ y' \end{bmatrix}=\begin{bmatrix} e^{t} & e^{3t} \\ -e^{t} & e^{3t} \end{bmatrix} \begin{bmatrix} C_{1} \\ C_{2} \end{bmatrix}$$ We use $x'(0)=2$ and $y'(0)=0$ and get $C_{1}=C_{2}=1$. Now I have to find the solution of $$\begin{bmatrix} x'\\ y' \end{bmatrix}=\begin{bmatrix} e^{t} & e^{3t} \\ -e^{t} & e^{3t} \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix}$$ I tried to find eigenvalues for that matrix but I cannot find them. Any help? |
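An aside that may help, offered as my own remark rather than the textbook's method: the last display is not an eigenvalue problem (its matrix depends on $t$), and for second-order systems $\mathbf{x}'' = A\mathbf{x}$ an eigenvalue $\lambda > 0$ contributes $e^{\pm\sqrt{\lambda}t}$ rather than $e^{\lambda t}$. Substituting $u = x + y$, $v = x - y$ decouples the system into $u'' = 3u$, $v'' = v$, and the stated initial data then give, if I have computed correctly, $x = \sinh(\sqrt{3}t)/\sqrt{3} + \sinh t$ and $y = \sinh(\sqrt{3}t)/\sqrt{3} - \sinh t$. A finite-difference check supports this:

```python
import math

S3 = math.sqrt(3.0)

def x(t):
    return math.sinh(S3 * t) / S3 + math.sinh(t)

def y(t):
    return math.sinh(S3 * t) / S3 - math.sinh(t)

def d2(f, t, h=1e-4):
    """Second derivative by central differences."""
    return (f(t + h) - 2.0 * f(t) + f(t - h)) / h**2

# The proposed solution satisfies both equations at several sample points.
for t in (0.3, 0.7, 1.1):
    assert abs(d2(x, t) - (2 * x(t) + y(t))) < 1e-5
    assert abs(d2(y, t) - (x(t) + 2 * y(t))) < 1e-5

print(x(0.0), y(0.0))  # 0.0 0.0, matching x(0) = y(0) = 0
```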
Posted: 30 Dec 2021 11:34 PM PST By division, we decrease the power in the numerator of the integrand. $$ I:=\int \frac{\sin ^{7} x+\cos ^{7} x}{\sin ^{3} x+\cos ^{3} x} d x=\int\left(\sin ^{4} x+\cos ^{4} x-\frac{\sin ^{3} x \cos ^{3} x(\sin x+\cos x)}{\sin ^{3} x+\cos ^{3} x}\right)dx $$ Simplifying the denominator yields $$ I= \underbrace{\int\left(\sin ^{4} x+\cos ^{4} x\right) d x}_{J} -\underbrace{\int\frac{\sin ^{3} x \cos ^{3} x}{1-\sin x \cos x} d x}_{K} $$ Let's start from the easier one $J$. $$ \begin{aligned} J &=\int\left[\left(\sin ^{2} x+\cos ^{2} x\right)^{2}-2 \sin ^{2} x \cos ^{2} x\right] d x \\ &=x-\frac{1}{2} \int \sin ^{2} 2 x d x \\ &=x-\frac{1}{2} \int\left(\frac{1-\cos 4 x}{2}\right) d x \\ &=\frac{3 x}{4}+\frac{1}{16} \sin 4 x+C \end{aligned} $$ From my answer, $$K=\frac{2}{\sqrt{3}} \tan ^{-1}\left(\frac{2 \tan x-1}{\sqrt{3}}\right)-\frac{9}{8} x+\frac{\cos 2 x}{4}+\frac{\sin 4 x}{32} +C$$ Now we can conclude that $$ I= \frac{15x}{8}+\frac{ \sin 4 x}{32}-\frac{\cos 2 x}{4}+\frac{2}{\sqrt{3}} \tan ^{-1}\left(\frac{1-2 \tan x}{\sqrt{3}}\right)+C $$ *checked by wolframalpha. Is there any other simpler method? Share with us if you have any. |
Posted: 31 Dec 2021 12:17 AM PST Let $U:\mathbb{R}^N\to\mathbb{R}$ be a measurable function such that $${\rm infess}_{\mathbb{R}^N} U(x)>-\infty.$$ Let $a\in\mathbb{R}$ be sufficiently large that $${\rm infess}_{\mathbb{R}^N} (U(x)+a)>0.$$ Finally, assume that $$(I) \int_{B_1(x)}\frac{1}{U(y)} dy\to 0 \quad\mbox{ as } |x|\to +\infty.$$ My question is: does assumption $(I)$ imply that also $$\int_{B_1(x)}\frac{1}{U(y)+a} dy\to 0 \quad\mbox{ as } |x|\to +\infty?$$ If yes, how can one prove it? I've been stuck for a while; I hope someone can help. Thank you in advance! |
How to prove that $\arg\max_{t}\langle \sum w_i a_{t_i}, a_t\rangle$ is close to $t_1$? Posted: 31 Dec 2021 12:17 AM PST Let $a_t$ be a Gaussian type function with center $t\in \mathbb R^d$, i.e. $$a_t(x)=e^{-||x-t||^2}, \text{ for } x \in \mathbb R^d.$$ Let $b$ be a combination of these Gaussian functions, $$b = \sum_{i=1}^s w_i a_{t_i},$$ where $t_i \in \mathbb R^d$ for all $i=1,..., s$ and $w_1 > .... > w_s>0$ is a finite decreasing sequence of positive numbers. Define $$\hat t = \arg\max_{t\in \mathbb R^d} \langle b, a_t\rangle.$$ Here the inner product $\langle b, a_t\rangle$ is defined as $\int_{x\in \mathbb R^d} b(x) a_t(x)dx$. Intuitively, we can easily see that if the Gaussian functions are far away from each other, then $\hat t$ will be close to $t_1$, i.e. we have
I am wondering how to prove the claim rigorously. I have no idea since the function $\langle b, a_t\rangle$ is not even convex. Is there any idea to tackle this problem? Thanks! |
Posted: 31 Dec 2021 12:01 AM PST I have been looking at a particular computational problem and finally reduced it down to a bound of $a^a=n$; I will first try to explain what I mean by this in one paragraph. For example, suppose I am trying to find whether an element exists in a sorted array. I can do binary search, which will require $a$ steps, where $2^a=n$. Hence we know that binary search requires $\Theta(\log n)$ steps, by solving the previous equation. Now similarly, I have a problem which upon solving gives me the equation $a^a=n$ instead of the $2^a=n$ we had for binary search. I have been informed that there is no explicit function satisfying this. That is, whatever explicit function $f(n)$ I take, $f(n)^{f(n)}$ is not equal to $n$. Not only this, I cannot write an explicit function satisfying this in big $\Theta$ notation either. By this I mean that for all explicit functions $f(n)$, and for all functions $g(n)\in \Theta(n)$, we must have $f(n)^{f(n)}\neq g(n)$. The second statement is stronger, but I have a feeling that it might be easy to prove the second statement from the first somehow. Also, I have no idea how to go about proving the first statement itself. Any insights regarding these would be appreciated. |
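As an aside (my own addition, not from the question): while $a(n)$ has no elementary closed form, it can be written with the Lambert $W$ function, since $a^a = n \iff a\ln a = \ln n \iff a = \ln n / W(\ln n)$, and asymptotically $a(n) \sim \ln n / \ln\ln n$. Concrete values are easy to obtain with Newton's method on $g(a) = a\ln a - \ln n$:

```python
import math

def solve_a(n, iters=60):
    """Solve a**a = n for a > 1 via Newton's method on g(a) = a*ln(a) - ln(n)."""
    target = math.log(n)
    # Asymptotic initial guess a ~ ln(n)/ln(ln(n)), floored at 2 for small n.
    a = max(2.0, target / max(math.log(max(target, 2.0)), 1e-9))
    for _ in range(iters):
        g = a * math.log(a) - target
        a -= g / (math.log(a) + 1)   # g'(a) = ln(a) + 1
    return a

print(solve_a(256))   # ≈ 4.0, since 4**4 = 256
print(solve_a(3125))  # ≈ 5.0, since 5**5 = 3125
```

Since $g$ is increasing and convex for $a > 1$, Newton's iteration converges quickly here.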
Game of draughts, expected value of first move advantage, part 2 Posted: 31 Dec 2021 12:12 AM PST This is a follow-up to my question here: Game of draughts, expected value of first move advantage Here's a question from my probability textbook:
Here's the answer in the back of my book:
I understand this first part which I typed out above. Here's the second part of the answer given in the back of my book, assuming a different interpretation of the question which assumes playing only a limited number of games:
I don't understand how the book came up with looking at $E_0 = 0$, $E_n = {{\mu - 1}\over{\mu + 1}}(1 + E_{n-1})$ as the correct expression(s) to set up. Can anybody explain this in depth, why this is the correct setup? Once you assume that of course then solving the problem is straightforward, I understand everything that follows. |
Posted: 30 Dec 2021 11:58 PM PST I am working on a more detailed version of a proof that has appeared on Math Stack before. Requests for a proof of the following theorem have been posted several times, but I am asking about the specific strategy suggested in (1) below.
I am trying to flesh out the proof given in (1) Show that $\Delta(f)$ is a $F_{\sigma\delta}$-set, where it is suggested that $\Delta(f)$ is exactly $$S\,=\,\bigcap_{k=1}^{\infty} \;\bigcup_{n=1}^{\infty} \;\; \bigcap_{0 < |\eta| < \frac{1}{n}} \;\; \bigcap_{0 < |\delta| < \frac{1}{n}} \; \left\{x:\;\; \left| \frac{f(x + \eta) - f(x)}{\eta} \;\; - \;\; \frac{f(x + \delta) - f(x)}{\delta} \right| \; \leq \frac{1}{k}\right\}.$$ I have shown that $\Delta(f)$ is a subset of the above set; I outline the reasoning below. For any fixed and non-zero $\eta$ and $\delta$, the function $$x \mapsto \left| \frac{f(x + \eta) - f(x)}{\eta} \;\; - \;\; \frac{f(x + \delta) - f(x)}{\delta} \right|$$ is continuous, so the set $\{x: \cdots\}$ appearing above is closed, being the pre-image of a closed set under a continuous map. So if $S=\Delta(f)$, the theorem is proved. What I am stuck on is showing $S\subset \Delta(f)$. The set $S$ doesn't "know" about the value of the derivative, which is why I think it is harder. It seems (from looking at other approaches posted) that we need to pass to the rationals (maybe something like this: https://math.stackexchange.com/a/1395360), or build a Cauchy sequence, or a sequence of nested compacts. It makes me wonder, more generally, about sufficient conditions for differentiability at $x_0$ that do not involve pre-knowledge of $f'(x_0)$. What is a good way to rigorously show the inclusion $S\subset \Delta(f)\,$?
|
How to define a continuous 2D path? Posted: 30 Dec 2021 11:48 PM PST As per wikipedia, $p: [0,1] \rightarrow X$ is a path in a topological space $X$ if $p$ is continuous. However, $p$ is a 1-dimensional path, i.e., it has no width. My question: How can a 2-dimensional topological path (i.e., with width) be defined? E.g., how can one define a path in $[0,1]^2$ that starts at $y=0$ and ends at $y=1$ but also has a width of $1$ (and thus covers the whole $[0,1]^2$ space)? |
confusion on determining integral limits in bivarite distribution Posted: 30 Dec 2021 11:56 PM PST
According to the book: $$E(X_1X_2^2) = \int_{0}^{1}\int_{0}^{x_2} (x_1x_2^2)(8x_1x_2)dx_1dx_2$$ My question is about the bounds for $x_2$. According to the book, the integration limits for $x_2$ are $(0,1)$, but I think that is wrong, because the inequalities are strict. For example, if $x_1 =0.13$, then how can we assume that $x_2$ can be any value less than $0.13$? So I think the limits should have been $(x_1,1)$. Hence, the correct one must be: $$E(X_1X_2^2) = \int_{x_1}^{1}\int_{0}^{x_2} (x_1x_2^2)(8x_1x_2)dx_1dx_2$$ Am I correct? If not, can you please explain clearly; I am a beginner in this topic. Thanks in advance |
Posted: 30 Dec 2021 11:45 PM PST How to solve this double integral? $$f(a,b)=\int _{x=0}^{\infty }\int _{t=-\infty }^{\infty }\exp \left(\frac{-a t^2+i b t}{3 t^2+1}+i t x\right)\frac{x}{3 t^2+1}\mathrm{d}t\mathrm{d}x$$ $$\text{with }a>0,b\in \mathbb{R},i=\sqrt{-1}$$ Known special solution for $\mathbf{b=0}$ $$f(a,0)=\frac{\pi}{\sqrt{3}\, {\rm exp}\left(\frac{a}{6}\right)}\left[(a+3) I_0\left(\frac{a}{6}\right)+a I_1\left(\frac{a}{6}\right)\right]$$ where $I_0,I_1$ are Bessel functions of order $0$ and $1$ (proof). What I tried I followed the first steps given here. Substitution of $t\rightarrow t \sqrt{3},x\rightarrow x/\sqrt{3}$ removes the factors in the denominator $$f(a,b)=\sqrt{3}\int _{x=0}^{\infty }\int _{t=-\infty }^{\infty } \exp \left(\frac{1}{t^2+1}\left(\frac{i b t}{\sqrt{3}}-\frac{a t^2}{3}\right)+i t x\right)\frac{x}{t^2+1}\mathrm{d}t\mathrm{d}x$$ $$=\sqrt{3}\int _{x=0}^{\infty }{\rm d}x \frac{x}{\text{exp}(x)}\int _{t=-\infty }^{\infty} \exp \left(\frac{1}{t^2+1}\left(\frac{i b t}{\sqrt{3}}-\frac{a t^2}{3}\right)+i x(t-i) \right)\frac{1}{t^2+1}\mathrm{d}t$$ $$=\frac{\sqrt{3}}{{\rm exp}(a/3)}\int _{x=0}^{\infty }{\rm d}x \frac{x}{\text{exp}(x)}\underbrace{\int _{t=-\infty }^{\infty} \exp \left(\frac{1}{t^2+1}\left(\frac{i b t}{\sqrt{3}}+\frac{a}{3}\right)+i x(t-i) \right)\frac{1}{t^2+1}\mathrm{d}t}_{I(x)}.$$ Now $I(x)$ can be closed in the upper half plane since the contribution along the arc vanishes. Then this $t$-integral encloses the single essential singularity in the upper half plane at $t=i$. 
Hence we have $$I(x)=2\pi i \, {\rm Res} \left(\exp \left(\frac{1}{t^2+1}\left(\frac{i b t}{\sqrt{3}}+\frac{a}{3}\right)+i x(t-i) \right)\frac{1}{t^2+1}\right)\Bigg|_{t=i} \, $$ where $$\exp \left(\frac{1}{t^2+1}\left(\frac{i b t}{\sqrt{3}}+\frac{a}{3}\right)+i x(t-i) \right)\frac{1}{t^2+1}$$ can be written as the series $$\sum_{n,m=0}^{\infty}\frac{(ix)^n}{n!}(t-i)^n\frac{1}{m!} \left(\frac{i b t}{\sqrt{3}}+\frac{a}{3}\right)^m\frac{1}{[(t+i)(t-i)]^{m+1}}$$ |
If $X \perp Y|Z$, does this mean that $X \perp Y|Z, W$? How about the other way around? Posted: 30 Dec 2021 11:35 PM PST Suppose $X, Y, Z, W$ are random variables. Let $\perp$ denote independence. $f$ denotes the probability density function. For example, $f(X|Z)$ is the conditional pdf of $X$ given $Z$. Does $X \perp Y|Z \Rightarrow X \perp Y|Z, W$? If $X \perp Y|Z$, then $f(X, Y|Z) = f(X|Z)f(Y|Z)$. Now I want to see whether $X \perp Y|Z, W$. However, I don't think it's true that $f(X, Y|Z, W) = f(X|Z,W)f(Y|Z, W)$. Therefore, $X \perp Y|Z \nRightarrow X \perp Y|Z, W$. Does $X \perp Y|Z, W \Rightarrow X \perp Y|Z$? If $X \perp Y|Z, W$, then $f(X, Y|Z, W) = f(X|Z, W)f(Y|Z, W)$. \begin{align*} f(X, Y|Z) &= \int_W f(X, Y|Z, W)f(W|Z) dW\\ &= \int_W f(X|Z, W) f(Y|Z, W)f(W|Z) dW\\ &\neq f(X|Z)f(Y|Z) \end{align*} Therefore, $X \perp Y|Z, W \nRightarrow X \perp Y|Z$. Is the above correct? |
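A concrete counterexample makes the failure of the first implication vivid; the construction below is mine, not from the question: let $X, Y$ be i.i.d. fair coins, $Z$ a constant, and $W = X \oplus Y$. Then $X \perp Y \mid Z$, but once $W$ is also given, $X$ determines $Y$. A brute-force check over the joint pmf:

```python
from fractions import Fraction
from itertools import product

# X, Y iid fair coins, Z = 0 constant, W = X xor Y (my construction).
pmf = {(x, y, 0, x ^ y): Fraction(1, 4) for x, y in product((0, 1), repeat=2)}

def prob(pred):
    """Probability of the event {(x, y, z, w) : pred(x, y, z, w)}."""
    return sum(q for (x, y, z, w), q in pmf.items() if pred(x, y, z, w))

# Given Z = 0 (always true), X and Y are independent:
assert prob(lambda x, y, z, w: x == 0 and y == 0) == Fraction(1, 4)

# Given Z = 0 and W = 0, they are not: knowing X pins down Y.
pw = prob(lambda x, y, z, w: w == 0)                              # 1/2
pxy = prob(lambda x, y, z, w: w == 0 and x == 0 and y == 0) / pw  # 1/2
px = prob(lambda x, y, z, w: w == 0 and x == 0) / pw              # 1/2
py = prob(lambda x, y, z, w: w == 0 and y == 0) / pw              # 1/2
print(pxy, px * py)  # 1/2 vs 1/4: conditional independence fails after adding W
```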
Proof Check: Non-existence of the inverse function in a given class of functions Posted: 31 Dec 2021 12:09 AM PST Are my conjecture and proof below mathematically and linguistically correct? As far as I know, the theorem and its topic are new. A confirmation of usefulness and applicability of the theorem is at the end of this question. Clearly, part a) of the conjecture is trivial, but it's good for the application of the theorem if part a) is written down together with the non-trivial cases. Definition: Conjecture: Proof: Justification of usefulness of the theorem: The theorem can be applied for deciding the non-existence of inverses in a given class of functions, e.g. in the Elementary functions with the field of constants $\mathbb{Q}$ (see e.g. the definition of the Elementary functions in [Ritt 1925] or [Ritt 1948]) or in terms of some Special functions, e.g. in closed form. I want to show that the non-existence of inverses in some classes of functions follows from the non-existence of particular solutions of particular equations in some cases. Start by fixing a set $F$ of functions. [Ritt 1948] Ritt, J. F.: Integration in finite terms. Liouville's theory of elementary methods. Columbia University Press, New York, 1948 |
Posted: 31 Dec 2021 12:00 AM PST Let $G$ be a bipartite graph. In order to find a matching in $G$ such that there are no unpaired elements in the set $A$, a necessary and sufficient condition is that $|A-N(T)|\leq|B-T|$ for every $T\subset B$. How can I prove this proposition? Definitions and propositions that I know: Vertices of a graph $G$ that are not endpoints of any edge of a matching $M$ are called unpaired vertices. The set formed by the neighbors of all vertices in $T$, $T\subset B$, is denoted by $N(T)$. Hall Condition: Let $G=(A,B,E)$ be a bipartite graph. If $|S|\leq|N(S)|$ for each set $S\subset A$, then $G$ is said to satisfy the Hall Condition. I tried hard but couldn't do it; please write if you can prove it. |
Posted: 31 Dec 2021 12:12 AM PST Can someone explain the highlighted paragraph from Blitzstein, Introduction to Probability (2019 2 edn), p 69? I'm not understanding the subtlety the author is trying to get at. What is the "unconditional probability of success"? How is it unconditional, given that we condition on which door the car is behind when applying the law of total probability? What is the "conditional probability of success"? Are the "unconditional probability of success" and the "conditional probability of success" the same thing? For the normal Monty Hall problem, it seems that the "unconditional probability of success" and the "conditional probability of success" are the same; they output the same number (2/3). But they're different in op. cit. Exercise 40, p 91.
The answer is 2/3 for (a) because $0 \cdot 1/3 + 1 \cdot 1/3 + 1 \cdot 1/3 = 2/3$ by LOTP. Why is this answer the same as in the normal Monty Hall problem, where Monty treats all three doors equally? The answer for (b) is $\frac{1}{p+1}$ and for (c) it's $\frac{1}{2-p}$. |
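The exercise statement is not quoted above, but in the standard version of this exercise the contestant picks door 1 and Monty opens door 2 with probability $p$ when the car is behind door 1 (assumptions worth double-checking against the book). Under that setup, exact enumeration reproduces all three answers and shows why (a) is still 2/3: the conditional answers $\frac{1}{p+1}$ and $\frac{1}{2-p}$ average back to the unconditional $\frac{2}{3}$ for every $p$:

```python
from fractions import Fraction

def switch_win_given_opened(p, opened):
    """Contestant picks door 1; the car is uniform on {1,2,3}; when the car is
    behind door 1, Monty opens door 2 with probability p, else door 3 (assumed
    setup). Returns P(switching wins | Monty opens `opened`)."""
    third = Fraction(1, 3)
    if opened == 2:
        num = third                     # car behind 3: Monty is forced to open 2, switch wins
        den = third * p + third
    else:
        num = third                     # car behind 2: Monty is forced to open 3, switch wins
        den = third * (1 - p) + third
    return num / den

half = Fraction(1, 2)
print(switch_win_given_opened(half, 2))  # 1/(p+1) = 2/3 when p = 1/2
print(switch_win_given_opened(half, 3))  # 1/(2-p) = 2/3 when p = 1/2

# Unconditionally, P(open 2)/(p+1) + P(open 3)/(2-p) = 2/3 no matter what p is.
for p in (Fraction(0), Fraction(1, 4), Fraction(1)):
    po2 = Fraction(1, 3) * p + Fraction(1, 3)
    total = po2 * switch_win_given_opened(p, 2) + (1 - po2) * switch_win_given_opened(p, 3)
    assert total == Fraction(2, 3)
```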
Posted: 31 Dec 2021 12:00 AM PST When doing residual analysis, do we first fit our model on the entire training set and calculate residuals between fitted and actual values? Or do we first fit our model on the training+testing set? I'm having trouble wrapping my head around the concept of residual variance. Does the variance mean that if we fit our linear regression model on multiple (varying) datasets, our residuals would vary according to the normal distribution with mean 0 and this variance? When would we use prediction vs. estimation? Predictions have more variance because of the new data point, but it seems that we are always estimating/predicting new data points? How do you deal with leverage points? Does anybody know any good Python packages for residual analysis? |
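On the first question: residuals are normally computed from the fit on the training data alone, with the test set held out for out-of-sample evaluation. As a dependency-free sketch of the basic loop (fit, compute residuals, summarize them; the names and toy data are mine):

```python
import random
import statistics

random.seed(0)
xs = [i / 10 for i in range(100)]
ys = [2.0 + 3.0 * x + random.gauss(0, 0.5) for x in xs]  # true line plus noise

# Ordinary least squares for y = a + b*x (closed form), fit on training data only.
xbar, ybar = statistics.mean(xs), statistics.mean(ys)
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
a = ybar - b * xbar

residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
print(statistics.mean(residuals))   # ~0: with an intercept, OLS residuals average to 0
print(statistics.stdev(residuals))  # estimates the noise standard deviation (0.5 here)
```

In practice, statsmodels does this for you: after `results = sm.OLS(y, X).fit()`, the residuals are in `results.resid`, and `results.get_influence()` exposes leverage diagnostics.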
What is the simplest way to get Bernoulli numbers? Posted: 30 Dec 2021 11:39 PM PST
Basically I'm trying to find and understand $B_n =$ (the stuff on the other side), and I've seen something using $i$ and a contour integral with $$\frac{z}{e^z-1}\frac{ dz}{z^{n+1}}$$ and I don't pretend to understand that at all. $$B_n=\frac{n!}{2\pi \color{red}{i}}\color{red}{\oint}\frac{z}{e^{\color{red}{z}}-1}\color{red}{\frac{dz}{\color{black}{z^{n+1}}}}$$ $$B_n=\sum^n_{\color{red}{k}=0}\frac{1}{\color{red}{k}+1}\sum_{\color{red}{v}=0}^{\color{red}{k}}(-1)^{\color{red}{v}}\color{red}{\binom{k}{v}}{\color{red}{v}}^n$$ and then there's this variant I see used a lot, written without the limits, $$B_n = \sum \frac{1}{k+1} \sum (-1)^v \binom{k}{v} v^n$$ but I don't understand double sums either. I need to be able to reliably get Bernoulli numbers for Taylor series stuff like tangent and the hyperbolic variants. I've reached the limit of my understanding and marked in red the parts I'm confused about. Like I get that the $i$ is imaginary and somehow related to rotation by $\pi$, probably positive and negative, but I don't know what the $d$ or $z$ mean in that equation. In the second formula, I don't understand why it switched from $n$ to $k$, then from $k$ to $v$, though I suspect I'm supposed to increment $k$ $1:1$ with increases in $n$, which also increments $v$ by $1:1$ (creating a $\frac{+1}{-1}$ sequence), but I don't understand the vertical parentheses at all. I'm not even going to pretend I took a class above calculus 1. |
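To unpack the second formula: the vertical parentheses $\binom{k}{v}$ are the binomial coefficient "$k$ choose $v$", and a double sum is just a nested loop: for each $k$ from $0$ to $n$, add up an inner expression over $v$ from $0$ to $k$. Translated line for line into code with exact rational arithmetic (this formula yields the convention $B_1 = -\tfrac{1}{2}$):

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """B_n via the explicit double sum: sum_{k=0}^n 1/(k+1) * sum_{v=0}^k (-1)^v C(k,v) v^n."""
    total = Fraction(0)
    for k in range(n + 1):                                        # outer sum over k
        inner = sum((-1) ** v * comb(k, v) * v ** n for v in range(k + 1))  # inner sum over v
        total += Fraction(inner, k + 1)
    return total

print([str(bernoulli(n)) for n in range(9)])
# ['1', '-1/2', '1/6', '0', '-1/30', '0', '1/42', '0', '-1/30']
```

For the Taylor series work, only the even-index values are needed, e.g. $\tan x = \sum_{n\ge1} \frac{(-1)^{n-1}2^{2n}(2^{2n}-1)B_{2n}}{(2n)!}x^{2n-1}$.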
Posted: 31 Dec 2021 12:13 AM PST
Blitzstein, Introduction to Probability (2019 2 edn), Chapter 2, Exercise 40, p 91. My wrong approach Let $C_i$ be the event that the car is behind door $i$, $O_i$ be the event that Monty opened door $i$, and $X_i$ be the event that I initially chose door $i$. Here $i=1,2,3$. Now, we are to find $P(C_3|O_2, X_1)$. This is not the correct answer. Can anybody comment on what I am missing in my approach here? |
Intuitively, why does $\lim_{n \to \infty} \frac16 (p_{n - 1} + p_{n - 2} ... + p_{n - 6}) = 2/7$? Posted: 31 Dec 2021 12:11 AM PST Blitzstein, Introduction to Probability (2019 2 edn), Chapter 2, Exercise 48, p 94.
From p 17 in the publicly downloadable PDF of curbed solutions.
That's the line I don't get: why we can transfer |
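The exercise itself is not reproduced above, but this recurrence is the classic "running total of die rolls hits $n$" problem: $p_n = \frac{1}{6}(p_{n-1}+\dots+p_{n-6})$ with $p_0 = 1$, and the limit $2/7 = 1/3.5$ is the reciprocal of the mean step size; the total advances by $3.5$ per roll on average, so in the long run it lands on a fraction $1/3.5$ of the integers. Iterating the recurrence shows the convergence directly:

```python
p = [1.0]  # p_0 = 1: the running total starts at 0; p_n = 0 for n < 0 (treated as absent below)
for n in range(1, 200):
    window = p[max(0, n - 6):n]   # p_{n-6} ... p_{n-1}, skipping negative indices
    p.append(sum(window) / 6)

print(p[1])    # 1/6: the only way to hit 1 is rolling a 1 first
print(p[199])  # ≈ 0.285714..., i.e. 2/7
```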