Saturday, June 19, 2021

Recent Questions - Mathematics Stack Exchange



Probability that intersection of three subsets has cardinality $1$.

Posted: 19 Jun 2021 10:10 PM PDT

Given randomly selected sets $S_1, S_2, S_3\subseteq S$, compute the probability that $|S_1\cap S_2\cap S_3|=1$, in terms of $|S|$.

Fix the element $\omega$ with $\{\omega\}=S_1\cap S_2\cap S_3$. There are then $2^{3|S|-3}$ ways to choose the three sets, and $|S|$ ways to choose $\omega$. So the probability must be $\frac{|S|}{8}$, which is absurd for $|S|>8$. Where am I going wrong?
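
For what it's worth, here is a quick Monte Carlo sketch (my own addition, assuming each subset is chosen uniformly, i.e. each element lands in each $S_i$ independently with probability $1/2$) that estimates the probability for a few values of $|S|$:

import random

def estimate(n, trials=200_000):
    # Estimate P(|S1 ∩ S2 ∩ S3| = 1) when S1, S2, S3 are independent,
    # uniformly random subsets of an n-element set: each element lies in
    # a given S_i independently with probability 1/2.
    hits = 0
    for _ in range(trials):
        size = sum(1 for _ in range(n)
                   if random.random() < 0.5 and random.random() < 0.5 and random.random() < 0.5)
        hits += (size == 1)
    return hits / trials

for n in (2, 4, 8, 16):
    print(n, estimate(n))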

Uniform convergence of a sequence of functions: is something missing in the hypothesis?

Posted: 19 Jun 2021 10:05 PM PDT

I was reading an Analysis book and saw the following problem:

Let $X \subseteq \mathbb{R}$ and $f_n:X \to \mathbb{R}$ be a sequence of functions such that $|f_n(x)| \neq 0$ for all $n \in \mathbb{N}$ and all $x \in X$, and such that $\left|\frac{f_{n+1}(x)}{f_n(x)}\right|\leq c <1$ for all $x \in X$ and for $n$ sufficiently large. Show that $\sum |f_n(x)|$ and $\sum f_n(x)$ are both uniformly convergent.

This seems a very easy problem if at least one $|f_n|$ (for sufficiently large $n$) is bounded, but as you can see, this hypothesis does not appear in the statement of the problem.

My question is: is this additional hypothesis required for the statement to be true?
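
For reference (my own remark, not part of the book's statement), the ratio hypothesis alone gives, for every $x \in X$ and for $n \ge N$ (where $N$ is an index past which the ratio bound holds),

$$|f_n(x)| \le c^{\,n-N}\,|f_N(x)|, \qquad \sum_{n\ge N}|f_n(x)| \le \frac{|f_N(x)|}{1-c},$$

so a uniform bound on $|f_N|$ for a single such $N$ would allow a comparison with a convergent geometric series (Weierstrass $M$-test); this is exactly where the seemingly missing hypothesis would enter.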

Purpose of the Unnormalized Directional Derivative

Posted: 19 Jun 2021 10:01 PM PDT

There are two definitions of the directional derivative:

Here is the more popular version: $$\nabla _{\vec v}f = \nabla f \cdot \vec{v} $$

The normalized version: $$\nabla _{\vec v}f = \frac{\nabla f \cdot \vec{v}}{||\vec{v}||} $$

If you restrict the vector to being a unit vector, these are equivalent. What I am wondering about is the purpose of the first definition: if the derivative is the rate of change, and the directional derivative is the rate of change in a certain direction, then shouldn't only the second definition be valid?

The second tells us how $f$ changes with respect to a tiny nudge in the direction of $\vec{v}$, and this is irrespective of the length of $\vec{v}$ (all evaluated at the input variables).

I am finding it difficult to understand why we would want to use the first one: it tells us the rate of change of $f$ with respect to a tiny nudge in the direction of $\vec{v}$, but scaled by the length of $\vec{v}$. If we march a tiny amount $\vec{\Delta v}$ (I apologize if this is an abuse of notation) in the direction of $\vec{v}$, then the change in $f$ would be approximated by the directional derivative (first definition) multiplied by the ratio of the lengths of $\vec{\Delta v}$ and $\vec{v}$.

Here is what I am trying to say:

$$ \Delta f \approx (\nabla f \cdot \vec{v}) \frac{||\vec{\Delta v}||}{||\vec{v}||} $$

Using the second definition, all we have to do is multiply by the length of the change in the inputs:

$$ \Delta f \approx \left( \frac{\nabla f \cdot \vec{v}}{||\vec{v}||}\right)||\vec{\Delta v}|| $$

So wouldn't it be much easier to use the second definition?
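
To make the comparison concrete, here is a small numerical illustration (my own sketch, with a made-up function $f(x,y)=x^2y$ and a deliberately non-unit $\vec{v}$); the last two printed numbers agree, showing that the normalized version is the per-unit-length rate of change, while the first is scaled by $||\vec{v}||$:

import numpy as np

def f(p):
    x, y = p
    return x ** 2 * y          # made-up example function

def grad_f(p):
    x, y = p
    return np.array([2 * x * y, x ** 2])

p = np.array([1.0, 2.0])
v = np.array([3.0, 4.0])       # not a unit vector: ||v|| = 5

unnormalized = grad_f(p) @ v                          # first definition
normalized = grad_f(p) @ v / np.linalg.norm(v)        # second definition

h = 1e-6                                              # a "tiny nudge" of length h along v
rate = (f(p + h * v / np.linalg.norm(v)) - f(p)) / h
print(unnormalized, normalized, rate)                 # the last two agree; the first is 5 times larger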

Thank you for reading this question, even if you didn't answer. I am sorry if I have missed something or if there is a mistake in my thought process; thinking about rates of change in multiple variables is new to me.

Question about recursive formula for carry bit in modular addition mod $2^n$

Posted: 19 Jun 2021 09:56 PM PDT

I am reading a paper in which the authors recursively define the carry bit of modular addition $\bmod 2^n$. Specifically, suppose we are computing $x+y=z \bmod 2^n$ and let $C[i]$ be the carry into the $i$-th bit of $x+y$; then $C[i+1]=x[i]\cdot y[i] \oplus(x[i]\oplus y[i])\cdot C[i]$. My question is: how do they reach this formula? Here, in Section 3, is another paper with the same definition.
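
Not from either paper, but here is a quick brute-force sanity check of the recursion (treating it as the standard full-adder carry, with $C[0]=0$):

import random

def check(n_bits=8, trials=1000):
    for _ in range(trials):
        x = random.randrange(2 ** n_bits)
        y = random.randrange(2 ** n_bits)
        c = 0                                   # C[0] = 0: no carry into bit 0
        for i in range(n_bits):
            xi, yi = (x >> i) & 1, (y >> i) & 1
            zi = xi ^ yi ^ c                    # i-th bit of x + y (mod 2^n)
            c = (xi & yi) ^ ((xi ^ yi) & c)     # the recursion C[i+1] = x[i]y[i] XOR (x[i] XOR y[i])C[i]
            assert zi == ((x + y) >> i) & 1
    print("recursion reproduces ordinary addition on all trials")

check()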

Are there "ordinal numbers objects"?

Posted: 19 Jun 2021 09:53 PM PDT

I've been reading Hagino's thesis introducing CPL, and they have a gem of a definition on p123:

left object ord with pro is
    ozero : 1 → ord
    sup : exp(nat, ord) → ord
end object

To translate from CPL into something more familiar, let us work in a Cartesian closed category with a natural numbers object $\Bbb{N}$, and have $O$ be an object equipped with arrows $z : 1 \to O$ and $s : O^\Bbb{N} \to O$, such that for any $f : 1 \to X$ and $g : X^\Bbb{N} \to X$, $\exists!k : O \to X$ making the following diagram commute:

$\require{AMScd}$ \begin{CD} 1 @> z >> O @< s << O^\Bbb{N} \\ @| @V k VV @V k \circ - VV \\ 1 @> f >> X @< g << X^\Bbb{N} \end{CD}

Hagino calls $k$ the $\mathrm{pro}(f,g)$, which might stand for "primitive recursion on ordinals", but I'm not sure exactly what this is. Could this be a categorical definition of an ordinal numbers object?
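
Not from Hagino's thesis, but as a rough programming analogy (a sketch only): the definition looks like the "Brouwer ordinal" tree type, with $z$ a zero constructor, $s$ a supremum of an $\mathbb{N}$-indexed family, and $\mathrm{pro}(f,g)$ the corresponding fold/recursor:

from dataclasses import dataclass
from typing import Callable, TypeVar, Union

X = TypeVar("X")

@dataclass
class Zero:                        # z : 1 -> O
    pass

@dataclass
class Sup:                         # s : O^N -> O
    family: Callable[[int], "Ord"]

Ord = Union[Zero, Sup]

def fold(o: Ord, f: X, g: Callable[[Callable[[int], X]], X]) -> X:
    """The mediating map k: sends Zero to f and Sup(h) to g(k composed with h)."""
    if isinstance(o, Zero):
        return f
    return g(lambda n: fold(o.family(n), f, g))

def embed(n: int) -> Ord:
    # crude embedding of a natural number as a (constant-family) ordinal tree
    return Zero() if n == 0 else Sup(lambda _: embed(n - 1))

print(fold(embed(3), 0, lambda h: 1 + h(0)))   # recovers 3 by "primitive recursion on ordinals"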

Geodesics and isometries

Posted: 19 Jun 2021 09:51 PM PDT

Let $M$ and $N$ be two semi-Riemannian manifolds and $\phi \in C^\infty(M, N)$ an isometry. Show that a curve $c : I \to M$ is a geodesic in $M$ if and only if $\phi \circ c$ is a geodesic in $N$.

I know that an isometry is in particular a diffeomorphism, but beyond that I have no clue how to proceed with the proof. Would you show me how to prove this?

Why can this determinant be factored as $(x-y)(y-z)(z-x)$?

Posted: 19 Jun 2021 09:53 PM PDT

Why can the determinant $\begin{vmatrix} 1 & 1 &1 \\ x & y & z \\ x^2 & y^2 &z^2 \\ \end{vmatrix}$ be factored into the form $(x-y)(y-z)(z-x)$?


Proof:

Subtracting column 1 from column 2, and putting that in column 2,

\begin{equation*} \begin{vmatrix} 1 & 1 &1 \\ x & y & z \\ x^2 & y^2 &z^2 \\ \end{vmatrix} = \begin{vmatrix} 1 & 0 &1 \\ x & y-x & z \\ x^2 & y^2-x^2 &z^2 \\ \end{vmatrix} \end{equation*}

$ = z^2(y-x)-z(y^2-x^2)+x(y^2-x^2)-x^2(y-x) $

rearranging the terms,

$ =z^2(y-x)-x^2(y-x)+x(y^2-x^2)-z(y^2-x^2) $

taking out the common terms $(y-x)$ and $(y^2-x^2)$,

$ =(y-x)(z^2-x^2)+(y^2-x^2)(x-z) $

expanding the terms $(z^2-x^2)$ and $(y^2-x^2)$

$ =(y-x)(z-x)(z+x)+(y-x)(y+x)(x-z) $

$ =(y-x)(z-x)(z+x)-(y-x)(z-x)(y+x) $

taking out the common term $(y-x)(z-x)$,

$ =(y-x)(z-x) [z+x-y-x] $

$ =(y-x)(z-x)(z-y) $

$ =(x-y)(y-z)(z-x) $
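
(As a side note, not part of the proof above: the factorization can also be checked symbolically, e.g. with sympy.)

import sympy as sp

x, y, z = sp.symbols('x y z')
M = sp.Matrix([[1, 1, 1],
               [x, y, z],
               [x**2, y**2, z**2]])
det = sp.factor(M.det())
print(det)                                          # a product of the three pairwise differences
print(sp.simplify(det - (x - y)*(y - z)*(z - x)))   # 0, confirming the identity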


The determinant of this matrix is the volume of a parallelepiped whose edges are the vectors with tails at the origin and heads at the points whose coordinates are given by the columns (or rows) of the matrix.$^{[1]}$

So does the volume of this parallelepiped equal $(x-y)(y-z)(z-x)$ in some obvious geometric way?


References

[1] Nykamp DQ, "The relationship between determinants and area or volume." From Math Insight. http://mathinsight.org/relationship_determinants_area_volume

Is this a formal definition of the maximal element of a partially ordered set?

Posted: 19 Jun 2021 10:02 PM PDT

Is this a formal definition of the maximal element of a partially ordered set $S$?

$\forall S,m:[m\in S \\\land\forall a\in S: a\leq a \\ \land \forall a,b\in S:[a\leq b ~\land~ b\leq a \implies a=b]\\ \land \forall a,b,c\in S:[a\leq b ~\land ~b\leq c \implies a\leq c]\\ \implies [Maximal(S,m) \iff \forall a\in S:a\leq m ~\land ~ \forall a\in S:[ \forall b\in S: b\leq a \implies a=m]]]$

where $Maximal(S,m)$ means the maximal element of $S$ is $m$.

Combinatorics question on telling the difference between distribution and division

Posted: 19 Jun 2021 09:35 PM PDT

Question: A double-decker bus carries $(u+l)$ passengers, $u$ in the upper deck and $l$ in the lower deck. Find the number of ways in which the $(u+l)$ passengers can be distributed between the two decks, if $r$ (less than or equal to $l$) particular passengers refuse to go in the upper deck and $s$ (less than or equal to $u$) refuse to sit in the lower deck.

My approach:

Since $s$ particular persons are fixed to sit in the upper deck and $r$ particular persons are fixed to sit in the lower deck, we need to select $(u-s)$ persons from the remaining $(u+l-r-s)$ persons, which can be done in $^{u+l-r-s}C_{u-s}$ ways, and then arrange them in $u!\cdot l!$ ways, so the answer should be $^{u+l-r-s}C_{u-s}\cdot u!\cdot l!$.

Answer given in the book: $^{u+l-r-s}C_{u-s}$

I think the word "distributed" implies arrangement, but the given answer only accounts for selection.

Proving the non-existence of a polynomial function [duplicate]

Posted: 19 Jun 2021 09:20 PM PDT

Prove: there is no polynomial function $f \in \mathbb{Z}[x]$ such that $f(7)=5$ and $f(15)=10$.

My first idea was to think of $f(7)=5$ and $f(15)=10$ as the points $(7,5)$ and $(15,10)$, but I couldn't go further from there.

The next idea is to construct two other functions $g(x)$ and $h(x)$ s.t. $g(x)=f(x)-5$, $h(x)=f(x)-10$. Then, $g(7)=0$, $h(15)=0$. Let $\displaystyle{f(x)=\sum_{k=0}^{n}a_kx^k}\;\;(\forall k\in\{0,1,2,\cdots,n\}, a_k\in \mathbb{Z}, a_n=1)$. By the rational root theorem, we have $$a_0-5\equiv0 \;\;(\operatorname{mod} 7)$$ $$a_0-10\equiv0\;\; (\operatorname{mod} 15).$$ Let $a_0=7a+5=15b+10\;(a,b\in\mathbb{Z})$. Then, $7a=15b+5$.

What am I supposed to do after this assumption? Am I going in the right direction? Is there another way to interpret this problem?

Power of a point in triangle proof

Posted: 19 Jun 2021 10:04 PM PDT

I've tried this problem for a while now, and I'm stuck on it.

Let $D$ be the foot of altitude $AD$ on side $BC$ of triangle $ABC$. Denote a point $N$ on altitude $AD$. Prove that this point $N$ is the orthocenter of triangle $ABC$ if and only if $DB \cdot DC = DN \cdot DA$.

This looks an awful lot like the power of a point, however there are no circles in the problem, which leads me to suspect there is a cyclic quadrilateral or something I am not seeing. Any suggestions?

Perron root of convex combination of nonnegative matrices

Posted: 19 Jun 2021 10:00 PM PDT

Given (irreducible) nonnegative matrices $A_i$ and a convex weighting $\hat{w}$ (i.e. a set of nonnegative reals $\{w_i\}$ s.t. $\sum_i w_i = 1$), what can we deduce about the Perron root of $\sum_i w_i A_i$ in terms of the Perron roots of the $A_i$? The classical theorems require the $A_i$ to all be symmetric (Hermitian) or diagonal.

What can we say about bounds for this Perron root? Approximations?

Is $\mathbb{Q} ({\sqrt[3] 2})$ isomorphic to $\mathbb{Q} (\omega{\sqrt[3] 2})$? Yes/No

Posted: 19 Jun 2021 09:29 PM PDT

Is $\mathbb{Q} ({\sqrt[3] 2})$ isomorphic to $\mathbb{Q} (\omega{\sqrt[3] 2})$, where $\omega =e^{2\pi i/3}$?

My attempt: I found an answer here, but I think that answer is wrong.

Here

$\mathbb{Q} ({\sqrt[3] 2})=\{a + b\sqrt[3] 2 +c(\sqrt[3] 2)^2 |a,b,c \in \mathbb{Q} \}$

$\mathbb{Q} (\omega{\sqrt[3] 2})=\{a + b(\omega\sqrt[3] 2) +c(\omega\sqrt[3] 2)^2 |a,b,c \in \mathbb{Q} \}$

Suppose $f:\mathbb{Q} ({\sqrt[3] 2})\to\mathbb{Q} (\omega{\sqrt[3] 2})$ is an isomorphism; then

$f ({\sqrt[3] 2})=a + b\omega\sqrt[3] 2 +c(\omega\sqrt[3] 2)^2\implies f(2)=(a + b\omega\sqrt[3] 2 +c(\omega\sqrt[3] 2)^2)^3$ but $f(2)= 2 f(1)=2$

$\implies 2\neq(a + b\omega\sqrt[3] 2 +c(\omega\sqrt[3] 2)^2)^3$, so our assumption leads to a contradiction.

Therefore $\mathbb{Q} ({\sqrt[3] 2})$ is not isomorphic to $\mathbb{Q} (\omega{\sqrt[3] 2})$.

Does there exist a rigorous definition of "random experiment" when defining a sample space?

Posted: 19 Jun 2021 09:39 PM PDT

I find that sometimes in probability (geared towards non-math majors), the definition of the sample space will involve things such as:

"a set of possible outcomes of a random experiment".

I guess that here "possible outcomes" and "random experiment" are not rigorously defined.

A similar one can be found on Wikipedia: (https://en.wikipedia.org/wiki/Sample_space)

"the sample space of an experiment or random trial is the set of all possible outcomes or results of that experiment."

It just seems a bit odd to me that probability theory rests upon a common-place/colloquial understanding of what a "random experiment" is.

Can this "sample space" and in particular "random experiment/trial" definition be more precise (or is this a Principia Mathematica type of scenario, i.e. not worth defining it rigorously)?

How to prove that the projection length $\frac{x^T y}{\|x\|}$ is normally distributed if $x,y\sim N(0,I_p)$?

Posted: 19 Jun 2021 09:28 PM PDT

$x,y\in\mathbb{R}^p$ are independently drawn from the standard multivariate Gaussian distribution $N(0,I_p)$. How do you prove that the projection length $z=\frac{x^T y}{\|x\|}$ is normally distributed?

If $x$ is fixed, then $z$ is just a linear combination of the entries of $y$, so it is normally distributed. And we can easily calculate the mean and variance, so $z|x\sim N(0,1)$ for any $x$. Is this enough to conclude that $z\sim N(0,1)$ unconditionally?
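
A quick numerical sanity check of that reasoning (my own sketch):

import numpy as np

rng = np.random.default_rng(0)
p, n = 5, 200_000
x = rng.standard_normal((n, p))
y = rng.standard_normal((n, p))
z = np.einsum('ij,ij->i', x, y) / np.linalg.norm(x, axis=1)

# if z ~ N(0, 1), these should be close to 0, 1 and 3 respectively
print(z.mean(), z.var(), np.mean(z ** 4))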

Can't prove a vector identity (I've attached my attempt)

Posted: 19 Jun 2021 10:14 PM PDT

Given a vector function $\mathbf{R} = x\mathbf{i}+ y\mathbf{j}+z\mathbf{k}$ and $r = |\mathbf{R}| = \sqrt{x^2 + y^2 + z^2}$ then prove that for any vector constant $\mathbf{A}$ holds $\nabla\times\Big(\frac{1}{r}(\mathbf{A}\times\mathbf{R})\Big)=\frac{1}{r}\mathbf{A}+\frac{\mathbf{A}\cdot\mathbf{R}}{r^2}\mathbf{R}$

This is my friend's answer, but I don't understand it. Is he right?

$\nabla\times\Big(\frac{1}{r}(\mathbf{A}\times\mathbf{R})\Big)$ \begin{align*} ={}&\Big(\frac{\partial}{\partial x}i+\frac{\partial}{\partial y}j+\frac{\partial}{\partial z}k\Big)\times\Big(\frac{1}{r}(A_2z-A3_y)i+(A_3x-A_1z)j\\&+(A_1y-A_2x)k\Big)\\ ={}& \left| \begin{array}{ccc} i & j & k \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ \frac{1}{r}(A_2z-A3_y) & \frac{1}{r}(A_3x-A_1z) & \frac{1}{r}(A_1y-A_2x) \end{array} \right|\\ ={}& \Big(\frac{\partial}{\partial y}\Big(\frac{1}{r}(A_1y-A_2x)\Big)-\frac{\partial}{\partial z}\Big(\frac{1}{r}(A_2z-A3_y)\Big)\Big)i + \\&\Big(\frac{\partial}{\partial z}\Big(\frac{1}{r}(A_2z-A3_y)\Big)-\frac{\partial}{\partial x}\Big(\frac{1}{r}(A_1y-A_2x)\Big)\Big)j\\& + \Big(\frac{\partial}{\partial x}\Big(\frac{1}{r}(A_3x-A_1z)\Big)-\frac{\partial}{\partial y}\Big(\frac{1}{r}(A_2z-A3_y)\Big)\Big)k\\ ={}&\Big(\frac{\partial}{\partial y}\Big(\frac{1}{\sqrt{x^2 + y^2+z^2}}(A_1y-A_2x)\Big)-\frac{\partial}{\partial z}\Big(\frac{1}{\sqrt{x^2 + y^2+z^2}}(A_2z-A3_y)\Big)\Big)i\\& +\Big(\frac{\partial}{\partial z}\Big(\frac{1}{\sqrt{x^2 + y^2+z^2}}(A_2z-A3_y)\Big)-\frac{\partial}{\partial x}\Big(\frac{1}{\sqrt{x^2 + y^2+z^2}}(A_1y-A_2x)\Big)\Big)j\\& + \Big(\frac{\partial}{\partial x}\Big(\frac{1}{\sqrt{x^2 + y^2+z^2}}(A_3x-A_1z)\Big)-\frac{\partial}{\partial y}\Big(\frac{1}{\sqrt{x^2 + y^2+z^2}}(A_2z-A3_y)\Big)\Big)k\\ ={}& \Big(\Big((A_1y-A_2x)\frac{\partial}{\partial y}\Big(\frac{1}{\sqrt{x^2 +y^2+z^2}}\Big)+\Big(\frac{1}{\sqrt{x^2+y^2+z^2}}\Big)\frac{\partial}{\partial y}(A_1y-A_2x)\Big)-\\& \Big(\Big((A_3x-A_1z)\frac{\partial}{\partial z}\Big(\frac{1}{\sqrt{x^2 +y^2+z^2}}\Big)+\Big(\frac{1}{\sqrt{x^2+y^2+z^2}}\Big)\frac{\partial}{\partial z}(A_3x-A_1z)\Big)\Big)i\\& +\Big(\Big((A_2z-A_3y)\frac{\partial}{\partial z}\Big(\frac{1}{\sqrt{x^2 +y^2+z^2}}\Big)+\Big(\frac{1}{\sqrt{x^2+y^2+z^2}}\Big)\frac{\partial}{\partial z}(A_2z-A_3y)\Big)-\\& \Big(\Big((A_1y-A_2x)\frac{\partial}{\partial x}\Big(\frac{1}{\sqrt{x^2 +y^2+z^2}}\Big)+\Big(\frac{1}{\sqrt{x^2+y^2+z^2}}\Big)\frac{\partial}{\partial x}(A_1y-A_2x)\Big)\Big)j+\\& \Big(\Big((A_3x-A_1z)\frac{\partial}{\partial z}\Big(\frac{1}{\sqrt{x^2 +y^2+z^2}}\Big)+\Big(\frac{1}{\sqrt{x^2+y^2+z^2}}\Big)\frac{\partial}{\partial z}(A_3x-A_1z)\Big)-\\& \Big(\Big((A_2z-A_3y)\frac{\partial}{\partial y}\Big(\frac{1}{\sqrt{x^2 +y^2+z^2}}\Big)+\Big(\frac{1}{\sqrt{x^2+y^2+z^2}}\Big)\frac{\partial}{\partial y}(A_2z-A_3y)\Big)\Big)k\\ ={}& \Big(\Big((A_1y-A_2x)\Big(\frac{-y}{(x^2 +y^2+z^2)^{\frac{3}{2}}}\Big)+\Big(\frac{1}{\sqrt{x^2+y^2+z^2}}\Big)A_1\Big)+\\& \Big(\Big((A_3x-A_1z)\Big(\frac{z}{(x^2 +y^2+z^2)^{\frac{3}{2}}}\Big)+\Big(\frac{1}{\sqrt{x^2+y^2+z^2}}\Big)A_1\Big)\Big)i\\& +\Big(\Big((A_2z-A_3y)\Big(\frac{-z}{(x^2 +y^2+z^2)^{\frac{3}{2}}}\Big)+\Big(\frac{1}{\sqrt{x^2+y^2+z^2}}\Big)A_2\Big)+\\& \Big(\Big((A_1y-A_2x)\Big(\frac{x}{(x^2 +y^2+z^2)^{\frac{3}{2}}}\Big)+\Big(\frac{1}{\sqrt{x^2+y^2+z^2}}\Big)A_2\Big)\Big)j\\& +\Big(\Big((A_3x-A_1z)\Big(\frac{-x}{(x^2 +y^2+z^2)^{\frac{3}{2}}}\Big)+\Big(\frac{1}{\sqrt{x^2+y^2+z^2}}\Big)A_3\Big)+\\& \Big(\Big((A_2z-A_3y)\Big(\frac{y}{(x^2 +y^2+z^2)^{\frac{3}{2}}}\Big)+\Big(\frac{1}{\sqrt{x^2+y^2+z^2}}\Big)A_3\Big)\Big)k\\ ={}& \frac{A_1}{\sqrt{x^2+y^2+z^2}}i+\frac{A_2}{\sqrt{x^2+y^2+z^2}}j+\frac{A_3}{\sqrt{x^2+y^2+z^2}}k\\& +\frac{\Big((-A_1y^2+A_2xy)+(A_3xz-A_1xz^2)\Big)}{(x^2+y^2+z^2)^\frac{3}{2}}i+\frac{A_1}{\sqrt{x^2+y^2+z^2}}i\\& 
+\frac{\Big((-A_2z^2+A_3yz)+(A_1xy-A_2x^2)\Big)}{(x^2+y^2+z^2)^\frac{3}{2}}j+\frac{A_2}{\sqrt{x^2+y^2+z^2}}j\\& +\frac{\Big((-A_3x^2+A_1xz)+(A_2yz-A_3y^2)\Big)}{(x^2+y^2+z^2)^\frac{3}{2}}k+\frac{A_3}{\sqrt{x^2+y^2+z^2}}k\\ ={}&\frac{A_1}{\sqrt{x^2+y^2+z^2}}i+\frac{A_2}{\sqrt{x^2+y^2+z^2}}j+\frac{A_3}{\sqrt{x^2+y^2+z^2}}k\\& +\frac{A_1x+A_2y+A_3z}{\sqrt{x^2+z^2+z^2}^\frac{4}{2}}(xi+yj+zk)\\ ={}&\frac{A_1i+A_2j+A_3k}{\sqrt{x^2+y^2+z^2}}+\frac{A_1x+A_2y+A_3z}{\Big(\sqrt{x^2+y^2+z^2}\Big)^2}(xi+yj+zk)\\ ={}&\frac{1}{\sqrt{x^2+y^2+z^2}}\Big(A_1i+A_2j+A_3k\Big)+\frac{(A_1i+A_2j+A_3k)(xi+yj+zk)}{\Big(\sqrt{x^2+y^2+z^2}\Big)^2}(xi+yj+zk)\\ ={}&\frac{1}{r}A+\frac{AR}{r^2}R \end{align*}
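
One way to check the computation independently (my own sketch, using sympy) is to form both sides symbolically and compare; the printed difference is the zero vector exactly when the stated identity holds:

import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
A1, A2, A3 = sp.symbols('A1 A2 A3', real=True)
R = sp.Matrix([x, y, z])
A = sp.Matrix([A1, A2, A3])
r = sp.sqrt(x**2 + y**2 + z**2)

F = A.cross(R) / r                       # the field (1/r)(A x R)

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

lhs = curl(F)
rhs = A / r + (A.dot(R)) * R / r**2      # right-hand side exactly as stated in the problem
print(sp.simplify(lhs - rhs))            # zero vector iff the stated identity holds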

Special case of Baire's Theorem - Rudin ex 2.30

Posted: 19 Jun 2021 09:29 PM PDT

The question I have is regarding Exercise 2.30 from Rudin's Principles of Mathematical Analysis:

Prove the following result: If $\mathbb{R}^k = \cup_{n=1}^\infty F_n$ where each $F_n$ is a closed subset of $\mathbb{R}^k$, then at least one $F_n$ has a nonempty interior. Equivalent statement: If $G_n$ is a dense open subset of $\mathbb{R}^k$ for $n = 1,2,3,..,$ then $\cap_{n=1}^\infty G_n$ is not empty (in fact, it is dense in $\mathbb{R}^k$).

In the book, the statements are said to be equivalent with no further explanation, but unfortunately I could not find a straightforward proof of this fact. The most obvious thing to do seemed to be to assume either result and take complements, but, for instance, given a sequence of closed sets as in the hypothesis, taking their complements does not appear to guarantee a sequence of dense open sets (if every closed set $F_n = \mathbb{R}^k$, for instance). Similarly, given a sequence of dense open subsets of $\mathbb{R}^k$, taking their complements does not appear to guarantee a sequence of closed subsets whose union is $\mathbb{R}^k$. If someone could explain how the statements are equivalent it would be a great help.

On the other hand, I think I do have a proof of the result using the statement of dense open subsets and would appreciate comments on the completeness of my argument. Let $G_n$ be a dense open subset of $\mathbb{R}^k$ for $n = 1,2,...$ and take an arbitrary point $x$ of $G_1$ to start.

$G_1$ is open, and so it has a compact neighbourhood $V_1$ contained in $G_1$. If this point is in $G_2$, we can look at $G_3$, otherwise it is a limit point of $G_2$, and so $V_1$ contains a point $y$ of $G_2$. By taking a neighbourhood small enough around $y$, we can obtain a compact neighbourhood $V_2$ such that $V_2 \subset V_1$. Proceeding in this manner we obtain a sequence of compact neighbourhoods $V_1 \supset V_2 \supset...$ and it is given in a theorem of the chapter that the intersection of such a sequence is nonempty. This intersection is a subset of $\cap_{n=1}^\infty G_n$ and proves the result.

Now, consider the intersection of $G_1$ with an arbitrary nonempty open set $O \subset \mathbb{R}^k$. If this intersection is empty, then there is an interior point of $O$, whose neighbourhood has empty intersection with $G_1$. This point is not in $G_1$ and cannot be a limit point of $G_1$ contradicting $G_1$ being dense. Therefore the intersection is nonempty.

Using this, consider an arbitrary point $z$ in $\mathbb{R}^k$. If $z$ is not in $\cap_{n=1}^\infty G_n$, every neighbourhood of $z$ has nonempty intersection with $G_1$, and we can let an arbitrary point in that nonempty intersection be our starting point $x$ above. By choosing a sufficiently small $V_1$, we can argue that this neighbourhood of $z$ has nonempty intersection with $\cap_{n=1}^\infty G_n$. It follows that $z$ is either in $\cap_{n=1}^\infty G_n$ or $z$ is a limit point of $\cap_{n=1}^\infty G_n$, and so $\cap_{n=1}^\infty G_n$ is dense in $\mathbb{R}^k$.

Derangement Question: last person has correct seat

Posted: 19 Jun 2021 09:57 PM PDT

I have encountered a derangement problem described as follows:

$n$ people take $n$ seats indexed $1,2,3,\dots,n$; they choose seats one by one, starting with person 1. Each of the first $n-1$ people is not allowed to take the seat with the same number as themselves and chooses uniformly at random among the other unoccupied seats; e.g., person 1 is not allowed to sit in seat 1 and chooses randomly from seats 2 to $n$. Then person 2 chooses under the same rule: person 2 picks an unoccupied seat other than seat 2. What is the probability that the last person, person $n$, takes seat number $n$?

For example:

$p(2) = 0$: there is no way person 2 can sit in seat 2, since person 1 has no choice but to take seat 2.

For $n = 3$, $p(3) = 1/4$, since there is only one way for person 3 to sit in seat 3: person 1 chooses seat 2 with probability $1/2$, then person 2 chooses seat 1 with probability $1/2$, which meets the desired case. However, $\frac{D_2}{D_2+D_3} = 1/3$ (where $D_k$ is the derangement number), so the two do not agree.

For $n=4$, $p(4) = \frac13\cdot\frac13\cdot\frac12 + \frac13\cdot\frac12\cdot\frac12 = \frac{5}{36}$.

There should be some trick here involving derangement numbers, or else we need to find another method.
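
Here is a small Monte Carlo sketch of the process as I read it (my own addition), which can be used to check the hand computations above:

import random

def prob_last_in_own_seat(n, trials=200_000):
    # Estimate P(person n ends in seat n): each of persons 1..n-1 picks
    # uniformly among the free seats other than their own seat number.
    hits = 0
    for _ in range(trials):
        free = set(range(1, n + 1))
        for person in range(1, n):
            choices = [s for s in free if s != person]
            free.remove(random.choice(choices))
        hits += (n in free)          # person n takes the single remaining seat
    return hits / trials

for n in (2, 3, 4, 5):
    print(n, prob_last_in_own_seat(n))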

A lemma on lifting vectors to make them orthogonal

Posted: 19 Jun 2021 09:41 PM PDT

For a paper in progress, I am trying to come up with a quick proof of the following claim:

Claim. Let $V$ be a real inner product space of dimension $n$ (if you like you may take $\mathbb{R}^n$ with its usual Euclidean inner product), and let $v_1, \dots, v_k \in V$ be an arbitrary finite list of vectors in $V$. Then there exists another real inner product space $W$ of dimension at most $n+k$ (or thereabouts), a list of orthogonal vectors $w_1, \dots, w_k \in W$, and a surjective partial isometry $T : W \to V$ such that $T w_i = v_i$ for all $i=1,\dots, k$.

Recall that surjective partial isometry means that $T : W \to V$ is a linear mapping which maps the orthogonal complement of its kernel isometrically onto $V$. That is, if $y_1, y_2 \perp \ker T$, then $\langle T y_1, T y_2 \rangle_V = \langle y_1, y_2 \rangle_W$.

Please note that there are no assumptions at all on the given vectors $v_1, \dots, v_k$. In particular, they may or may not be linearly independent. They need not be distinct, and some of them could be zero. Or, some or all of them might already be orthogonal. Ideally the proof would not have to treat these cases separately.

It's not so important that the bound on the dimension of $W$ should be exactly $n+k$; something like $n+k+1$ or $n+k+2$ would also be okay. I'd rather have a short proof than a sharp bound. Actually, for the specific application at hand, I really only need to cover $n=k=3$, and it would be all right if the dimension of $W$ is merely bounded by some constant. But I would rather prove the claim for general $n,k$.

I do have a proof of this statement, but I don't really like it. The idea is to suppose $V = \mathbb{R}^n$ and let $W = \mathbb{R}^{n+k}$, with their usual Euclidean inner products, and $T$ the orthogonal projection onto the first $n$ coordinates. Then we construct the vectors $w_i$ by setting their first $n$ coordinates equal to those of the $v_i$ (so that $T w_i = v_i$), and compute their remaining $k$ coordinates by induction on $i$ so as to make them orthogonal at each step. This requires solving a system of $i$ linear equations at the $i$th step, and a bit of care to ensure that the systems remain consistent. So it proves the statement, but it's messier than I think should be necessary, and I would prefer not to have to work in coordinates. I feel like there should be a more elegant argument.

I'd also be happy with a reference to any reasonably simple theorem that implies this one, or an explanation of why it's an obvious consequence of some well known fact.

Solve $Af =\lambda f$

Posted: 19 Jun 2021 09:27 PM PDT

If $A = e^{i \theta} V + e^{-i \theta} V^*$, where $V(f)(t) = \int_0^t f(s)\,ds$, solve $Af =\lambda f$ for $f \neq 0$ and $\lambda \neq 0$.

Simple algebras

Posted: 19 Jun 2021 09:43 PM PDT

The following is taken from Dales' Banach Algebras and Automatic Continuity:

[Screenshot of the relevant definitions and Proposition 1.3.52 from the book]

For reference, $A^\bullet=\{a\in A:a\neq 0\}$, $A^2=\{a^2:a\in A\}$, $AA=\{ab:a,b\in A\}$, $aA=\{ab:b\in A\}$, etc.

None of the algebras are assumed to be unital, unless stated otherwise.

I'm struggling to follow the proof of 1.3.52(i):

  • It's claimed that $A^2\neq A$ implies $I\neq A$. Are we using the definition of simplicity here? Should it perhaps be $A^2\neq 0$? Or should the definition of simplicity be amended to $A^2\neq A$?

  • Why does $AaA\neq 0$ imply $AaA=A$? If this is a direct appeal to simplicity, I don't see why $AaA$ is an ideal.

$p$-adic valuation in $\mathbb{Q}$ existence

Posted: 19 Jun 2021 10:03 PM PDT

For a rational number $r \in \mathbb{Q}$, the claim is that we can write $$ r = p^{e}\cdot\frac{a}{b}, \,\,\, e,a,b \in \mathbb{Z}, b \neq 0 $$ with the properties that $\gcd(a,b)=1$ and $p$ divides neither $a$ nor $b$.

My question is, how can we verify that this representation is always possible?

My approach was to write $r :=\frac{c}{d}$ and use the unique prime factorization of $c$ and $d$. Then $e$ is just $0$ if $p$ does not appear in the prime factorization of $c$ or $d$; in the other case, factor $p$ out of the prime factorizations and rewrite $\frac{a}{b}$ without this $p$. But how can we now be sure that $\gcd(a,b)=1$ and that $p$ divides neither $a$ nor $b$?
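
For a concrete instance of this approach (my own illustration, not part of the original question): take $r=\frac{12}{45}$ and $p=3$. Then $12=3\cdot 4$ and $45=3^2\cdot 5$, so $r=3^{-1}\cdot\frac{4}{5}$, i.e. $e=-1$, $a=4$, $b=5$; indeed $\gcd(4,5)=1$ and $3$ divides neither $4$ nor $5$. In general one takes $e=v_p(c)-v_p(d)$, where $v_p(m)$ denotes the exponent of $p$ in the prime factorization of $m$.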

Thank you very much in advance for any help!

Why is $\lim\limits_{x\to\infty}\frac{\sum_{i=1}^x(\sum_{j=1}^i\frac1j-\ln i-\gamma)}{\sum_{i=1}^x\frac1i}=\frac12$?

Posted: 19 Jun 2021 10:15 PM PDT

Why is $\lim\limits_{x\to\infty}\frac{\sum_{i=1}^x(\sum_{j=1}^i\frac1j-\ln i-\gamma)}{\sum_{i=1}^x\frac1i}=\frac12$?

I learned about Euler's constant $\gamma$ before, and I want to know about the sum of $H_k-\ln k-\gamma$. As Wikipedia says, $H_k=\ln k+\gamma+\varepsilon_k$, where $\varepsilon_k\sim\frac1{2k}$, but I wonder how to prove the limit. With the following Mathematica program, I can check that the limit is $\frac12$, but I cannot prove it. Why is it not something below or above $\frac12$?

Limit[Sum[Sum[1/j, {j, 1, i}] - Log[i] - EulerGamma, {i, 1, x}]/Sum[1/i, {i, 1, x}], x -> Infinity]  
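
For comparison, the same check can be done numerically (my own sketch; convergence is slow):

import math

GAMMA = 0.5772156649015329          # Euler-Mascheroni constant

def ratio(x):
    H = 0.0                         # harmonic number H_i
    num = 0.0                       # running sum of H_i - ln i - gamma
    for i in range(1, x + 1):
        H += 1.0 / i
        num += H - math.log(i) - GAMMA
    return num / H                  # the denominator is H_x = sum_{i<=x} 1/i

for x in (10**3, 10**4, 10**5, 10**6):
    print(x, ratio(x))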

$3\sum_{sym}x^2y^2z+\sum_{sym}x^4y\ge 4\sum_{sym}x^3yz$

Posted: 19 Jun 2021 09:17 PM PDT

Prove that
$$3\sum_{sym}x^2y^2z+\sum_{sym}x^4y\ge 4\sum_{sym}x^3yz,\quad\forall x,y,z>0,$$ where $\sum_{sym}$ is the symmetric sum notation.

Context: I was reading about the Muirhead inequality and I was wondering: if I take a convex combination of two symmetric sums $S_0$ and $S_1$, namely $S_{\alpha}=\alpha S_1+(1-\alpha)S_0$, where $S_1\ge S\ge S_0$ and $S$ is also a symmetric sum, is there a tactic to prove that $S_{\alpha}\le S$ or $S_{\alpha}\ge S$? I took the particular case in the problem and tested it in a program, by generating many random triples, and I didn't find a counterexample. I tried to use AM-GM or to rewrite the inequality so as to use the Schur inequality, but without progress. I believe this type of inequality is of interest to those who participate in math contests.
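
For reference, here is the kind of random test described above (my own sketch, interpreting $\sum_{sym}$ as a sum over all six permutations of $(x,y,z)$):

import random
from itertools import permutations

def sym(term, x, y, z):
    # symmetric sum: evaluate term over all 6 permutations of (x, y, z)
    return sum(term(*p) for p in permutations((x, y, z)))

def gap(x, y, z):
    s221 = sym(lambda a, b, c: a**2 * b**2 * c, x, y, z)
    s410 = sym(lambda a, b, c: a**4 * b, x, y, z)
    s311 = sym(lambda a, b, c: a**3 * b * c, x, y, z)
    return 3 * s221 + s410 - 4 * s311        # LHS minus RHS of the inequality

worst = min(gap(*(random.uniform(0.01, 10.0) for _ in range(3)))
            for _ in range(100_000))
print(worst)    # staying >= 0 (up to rounding) is consistent with the claim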

Irrationality of certain lacunary series

Posted: 19 Jun 2021 10:07 PM PDT

This question is motivated primarily by the following postings:

  1. A series of rational number converges to an irrational number
  2. The irrationality of rapidly converging series
  3. Is $\sum\limits_{n=1}^\infty\frac1{a_n}$ irrational?
    among others, which concern the irrationality of series of the form $\sum^\infty_{k=1}\frac{1}{n_k}$, where $n_k$ is a very lacunary monotone increasing sequence of positive integers.

My precise question is as follows. Suppose $(\varepsilon_k:k\in\mathbb{N})$ is a sequence taking values in $\{-1,1\}$, and $n_k$ is a lacunary monotone increasing sequence of integers. Is $$x:=\sum^\infty_{k=1}\frac{\varepsilon_k}{n_k}$$ irrational? The answer in general is no unless the gaps between the terms $n_k$ and $n_{k+1}$ satisfy some additional conditions. Probably the easiest instance of this problem is when $$\begin{align} n_k:=2^{m_k},\quad\text{where}\quad \limsup_k(m_{k+1}-m_k)=\infty\tag{0}\label{zero}\end{align}$$ in which case $x$ is irrational. Other cases of interest are when $n_k$ is such that either $$\begin{align} \liminf_k\frac{n_1\cdot\ldots\cdot n_k}{n_{k+1}}=0,\quad\text{and}\quad n_{k+1}\geq C n_k\tag{1}\label{one}\end{align}$$ or $$ \begin{align}\limsup_k\sqrt[2^k]{n_k}=\infty,\quad\text{and}\quad n_{k+1}\geq C n_k\tag{2}\label{two}\end{align}$$ for some constant $C>1$. That $x$ is irrational in these cases was recorded in notes I took a long time ago during a talk about lacunary random trigonometric series (I am sure it is a well-known fact for analytic number theorists) and just resurfaced a few days ago while I was organizing papers.


As a general method, it is enough to show that there are infinitely many pairs of integers $(p_k,q_k)$, with $q_k>0$, such that $0<|q_k x -p_k|$ and $\lim_{k\rightarrow\infty}|q_k x- p_k|=0$. (This is known to be a necessary and sufficient condition for irrationality.)

For example, under assumption \eqref{one}, if $s_k=\sum^k_{j=1}\frac{\varepsilon_j}{n_j}=\frac{p_k}{n_1\cdot\ldots\cdot n_k}=\frac{p_k}{q_k}$, then $$|q_k x - p_k|\leq q_k\sum^\infty_{j=k+1}\frac{1}{n_j}\leq \frac{n_1\cdot\ldots\cdot n_k}{n_{k+1}}\sum^\infty_{n=1}C^{-n}\xrightarrow{k\rightarrow\infty}0$$ along a subsequence of $k$. Thus, in this case $x$ is irrational.


Edit: Originally I had typed condition \eqref{two} as $$ \begin{align}\limsup_k\sqrt[k]{n_k}=\infty,\quad\text{and}\quad n_{k+1}\geq C n_k\tag{2'}\label{twop}\end{align}$$ As was pointed out by Erick Wong, that condition is not enough on its own. On reviewing my notes again, I realized that the right root was $2^k$, not $k$. Incidentally, in Erick Wong's example the lacunary sequence has the $2^k$ growth threshold.

Still, with this correction I am not able to settle case \eqref{two}. Any hints and/or solutions are welcome. If someone can even say whether the types of series described above are transcendental or not, even better!

Help with Arithmetic Progressions of Perfect Squares

Posted: 19 Jun 2021 09:39 PM PDT

Recently, I needed to generate all arithmetic progressions of three distinct perfect squares without any repeats. Mapping $\left(x, y\right)$ to such an arithmetic progression centered at $\left(x^2+y^2\right)^2$ with difference $4xy\left(y^2-x^2\right)$ for all $\left(x,y\right)\in\mathbb{X}$, where $\mathbb{X}:=\left\{\left(x,y\right)\in\mathbb{N}^2:0<x<y\right\}$, should solve this problem, as Fibonacci famously derived in his solution to the Congruum Problem. This means that for such an arithmetic progression $\left(b^2-h, b^2, b^2+h\right)$, there should be a unique solution in $\mathbb{X}$ to the system of equations $x^2+y^2=b,4xy\left(y^2-x^2\right)=h$. However, in the case of the progression $\left(205^2,425^2,565^2\right)$, for which $h=138600$, these equations have no solution in $\mathbb{X}$, as can be seen in this Desmos plot. What mistake am I making?
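
For what it's worth, the same check can be done by brute force (my own sketch, mirroring the Desmos plot): the progression is centered at $425^2$, so one searches for $(x,y)\in\mathbb{X}$ with $x^2+y^2=425$ and $4xy(y^2-x^2)=138600$:

b, h = 425, 138600
solutions = [(x, y) for x in range(1, 21) for y in range(x + 1, 21)
             if x * x + y * y == b and 4 * x * y * (y * y - x * x) == h]
print(solutions)    # empty, matching the observation in the question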

Skolem Hulls and countable elementary submodels of $H(\theta)$

Posted: 19 Jun 2021 10:15 PM PDT

Let $\theta$ be regular and uncountable. Fix a well-ordering $<$ of $H(\theta)$. Since the structure $(H(\theta),\in,<)$ has a definable well-ordering, every subset $A\subset H(\theta)$ admits a Skolem hull, namely the set of elements of $H(\theta)$ which are definable in $(H(\theta),\in,<)$ from parameters in $A$.

Let $M\prec (H(\theta),\in,<)$ be countable. Take $p\in M$ which is finite and non-empty. Obviously, the Skolem hull of $p$ is contained in $M$. But does it have to be a member of $M$? There's the obvious obstruction of the undefinability of truth, but I can't find an example where the hull is definitely not in $M$. The restriction $p\neq\emptyset$ is necessary, since $\emptyset$ is in the Skolem hull of $\emptyset$.

I'm specially interested in the case where $p$ is a finite $\in$-chain of countable elementary submodels of $(H(\theta),\in,<)$.

Laplace transform of the Gauss hypergeometric function [closed]

Posted: 19 Jun 2021 09:45 PM PDT

I want to ask about a closed form of the Laplace transform https://en.wikipedia.org/wiki/Laplace_transform of the Gauss hypergeometric function https://functions.wolfram.com/HypergeometricFunctions/Hypergeometric2F1/22/01/, i.e. of the form $$\int_{0}^{\infty} e^{-s x}{ }_{2} F_{1}(a ; b ;c; x) d x$$

Thanks for your answers and references (books and articles).

Prove that $\frac1{a(1+b)}+\frac1{b(1+c)}+\frac1{c(1+a)}\ge\frac3{1+abc}$

Posted: 19 Jun 2021 09:41 PM PDT

I tried doing it with the Cauchy-Schwarz inequality in Engel form to get $$ \frac{1}{a(1+b)}+\frac{1}{b(1+c)}+\frac{1}{c(1+a)} \geq \frac{9}{a+b+c+ a b+b c+a c}. $$ I thought that maybe I could prove that $$ \frac{1}{a+b+c+a b+b c+a c} \geq \frac{1}{3(1+a b c)}, $$ i.e. that $$ 3+3 a b c \geq a+b+c+a b+b c+a c, $$ but I don't know how.

Find the equation of a line whose distance from a point is given

Posted: 19 Jun 2021 10:06 PM PDT

The question is: if the distance of the point $(1, 4)$ from a line passing through the intersection of the lines $x-2y+3=0$ and $x-y-5=0$ is $4$ units, find the equation of that line.
