Sunday, September 5, 2021

Recent Questions - Mathematics Stack Exchange

Uniform Convergence and Subsequences

Posted: 05 Sep 2021 08:09 PM PDT

Exercise Show that a sequence of functions $\{f_n\}$ fails to converge uniformly to a function $f$ on a set $E$ if and only if there exists some positive $\varepsilon$ such that a sequence $\{x_k\}$ of points in $E$ and a subsequence $\{f_{n_k}\}$ can be found such that

$$|f_{n_k}(x_k)-f(x_k)| \geq \varepsilon$$


I am struggling to prove the $(\Leftarrow)$ direction.

For the $(\Leftarrow)$ direction, we suppose that there exist $\varepsilon > 0$, a sequence $\{x_k\} \subset E$, and a subsequence $\{f_{n_k}\} \subset \{f_n\}$ such that

$$|f_{n_k}(x_k)-f(x_k)| \geq \varepsilon$$

It is at this point that I get stuck. In the previous direction $\big[(\Rightarrow)\big]$, if you assume that the $f_n$'s fail to converge uniformly to $f$ on $E$, the negation of uniform convergence immediately produces a sequence $\{x_k\}$ and a subsequence $\{f_{n_k}\}$ satisfying the conclusion.

But working from the bottom and generalizing outward is tougher. How does one start with a non-uniformly convergent subsequence $\{f_{n_k}\}$ of a sequence of functions $\{f_n\}$ and generalize this to the sequence of functions $\{f_n\}$?

Four Lemma with E-exact Sequences

Posted: 05 Sep 2021 08:06 PM PDT

  • An e-exact sequence is a sequence of $R$-modules and $R$-module homomorphisms \begin{align*} \cdots \rightarrow M_{i-1} \xrightarrow{f_{i-1}} M_i \xrightarrow{f_i} M_{i+1} \rightarrow \cdots \end{align*} that is e-exact at each $M_i$. That is, $\text{Im}(f_{i-1}) \leq_e \ker(f_i)$ for all $i$. This means that $\text{Im}(f_{i-1})$ is a submodule of $\ker(f_i)$ such that $\text{Im}(f_{i-1}) \cap N \neq 0$ for all nonzero submodules $N$ of $\ker(f_i)$. Equivalently, $\text{Im}(f_{i-1}) \cap Rx \neq 0$ for all $x \in \ker(f_i)$ such that $x \neq 0$.
  • An $R$-module homomorphism is called a monomorphism (or monic, for short) if it is injective. An $R$-module homomorphism is called an epimorphism (or epic, for short) if it is surjective.
  • An $R$-module homomorphism $g:A \rightarrow B$ is e-epic if $\text{Im}(g) \leq_e B$. This means that $\text{Im}(g)$ is a submodule of $B$ such that $\text{Im}(g) \cap B' \neq 0$ for all nonzero submodules $B'$ of $B$. Equivalently, $\text{Im}(g) \cap Rb \neq 0$ for all $b \in B$ such that $b \neq 0$.

With the above established, I have just two (very similar) questions regarding the proof of Lemma 2.2 (which introduces a generalization of the four lemma to e-exact sequences) in this paper:

For the proof of (1), it's stated that "Since $\text{Im}(t_3) \leq_e B_3$, $\text{Im}(t_3) \cap Rg_2(b_2) \neq 0$". But, why is $g_2(b_2) \neq 0$? Since $b_2$ is an arbitrary non-zero element of $B_2$, this would suggest that the kernel of $g_2$ is trivial, but I do not see why this is true from the given assumptions. Similarly, why is $rsb_2 - t_2(a_2) \neq 0$?

Thanks!

Show that there exists an $i$ such that $(a_i ) = (a_{i+1})$

Posted: 05 Sep 2021 07:55 PM PDT

Let $R$ be a UFD. Suppose that there exist $a_1, a_2, ... \in R$ such that $(a_1) \subseteq (a_2) \subseteq \cdots $ . Show that there exists an $i$ such that $(a_i ) = (a_{i+1})$.

My attempt: Suppose that $(a_1) \subseteq(a_2) \subseteq \cdots $. Then for every positive integer $i$, $a_{i+1}\mid a_i$. So $a_i=a_{i+1}b_i$ for some nonzero nonunit $b_i \in R$. But then, in particular, for $a_1$ we have that $a_1=a_2b_1=a_3b_2b_1=\cdots $. That is, $a_1$ can be written as a product of infinitely many irreducibles. But by the definition of a UFD, the factorization of $a_1$ is finite. Thus for some $i$, we must have that $(a_i ) = (a_{i+1})$.

Here is an answer to this very exercise, but I wanted to restate it in notation a bit closer to the one I use. I must say that the idea behind the solution is clear to me, but writing the proof out has been quite difficult for me. Do I need to add more details to my proof? Is it correct to say, in the last part, that the definition of a UFD completes the proof? Are there other arguments? Thanks in advance.

Find the area of triangle ABC given area of a subtriangle

Posted: 05 Sep 2021 08:03 PM PDT

I'm stuck with this question:

Given the lines in the triangles and the intersection point, I thought there could be some implications from that information. My intuition tells me $\Delta ANP=12$ and $\Delta NBC=24$, but could that be true? I can find no theorems or properties to back that up.

I attempted Pick's theorem, but the calculation didn't turn out well. (The problem is stated in a picture that is not reproduced here.)

Left exactness of $-(U)$

Posted: 05 Sep 2021 07:58 PM PDT

A few questions about the proof given here: $\Gamma(U,\cdot)$ is a left-exact functor $\mathfrak{Ab}(X)\to\mathfrak{Ab}$

Let $0\to\mathcal{F}'\xrightarrow{\phi}\mathcal{F}\xrightarrow{\psi}\mathcal{F}''$ be a left-exact sequence. Then for any open set $U\subseteq X$, the sequence $0\to\Gamma(U,\mathcal{F}')\to\Gamma(U,\mathcal{F})\to\Gamma(U,\mathcal{F}'')$ is left-exact.

$$\begin{array}{cccccc} 0 & \rightarrow & \Gamma(U,\mathcal{F}') & \xrightarrow{\phi_U} & \Gamma(U,\mathcal{F}) & \xrightarrow{\psi_U} & \Gamma(U,\mathcal{F}'')\\ & & \downarrow & & \downarrow & & \downarrow\\ 0 & \rightarrow & \mathcal{F}'_p & \xrightarrow{\phi_p} & \mathcal{F}_p & \xrightarrow{\psi_p} & \mathcal{F}^{''}_p \end{array}$$

Now if $s\in\Gamma (U,\mathcal{F}')$, we have $\psi_U(\phi_U(s))_p=\psi_p(\phi_p(s_p))=0$ for every $p\in U$, hence $\psi_U(\phi_U(s))=0$. The inclusion left to show is $\ker(\psi_U)\subseteq\operatorname{im}(\phi_U)$.

First, I suppose vertical maps are denoted by $(-)_p$. Also, I suppose the LHS of $\psi_U(\phi_U(s))_p=\psi_p(\phi_p(s_p))$ should be $(\psi_U(\phi_U(s)))_p$ (the image of $\psi_U(\phi_U(s))$ under the vertical arrow). I don't see how $(\psi_U(\phi_U(s)))_p=0$ in $\mathcal F''_p$ implies $\psi_U(\phi_U(s))=0$ in $\mathcal F''(U)$. The former means that there is an equality of equivalence classes $[(\psi_U(\phi_U(s)), U)]=[(0,X)]$, which only means that $res_{U,W}(\psi_U(\phi_U(s)))=res_{X,W}(0)$ for some $W\subset U$. How does this imply that $\psi_U(\phi_U(s))=0$?

Next, the answer there says:

In my opinion, you need to reason why the $s'_P$ do glue to some $s'\in\mathcal{F}'(U)$. However, this is not that tough. We pick neighborhoods $V_P\subseteq U$ for each point $P$ such that $s'_P=(f_P,V_P)$ with $f_P\in\mathcal{F}'(V_P)$. For $W:=V_P\cap V_Q\ne 0$, we have $\phi_W(f_P|_W)=t|_W=\phi_W(f_Q|_W)$ and we know that $\phi_W$ is injective.

Here, I suppose, the notation $f_P|_W$ means $res_{V_P,W}(f_P)$, and similarly for $f_Q|_W$ and $t|_W$. How does $\phi_W(f_P|_W)=t|_W=\phi_W(f_Q|_W)$ follow? I tried drawing the diagram that witnesses the functoriality of $\phi$, but I didn't see how it helps. And perhaps an even more basic question: why do we need to check this? If we're given some $s_p\in\mathcal F'_p$, it has the form $[(f,U)]$ for some open $U$ containing $p$ and some $f\in \mathcal F'(U)$. Its preimage under the vertical map is just $f$.

For $a_i \in \mathbb{C}$, does $(1+a_1^k)(1+a_2^k)\cdots (1+a_n^k)=1$ for any positive integer $k$ imply $a_1=\cdots =a_n =0$?

Posted: 05 Sep 2021 07:48 PM PDT

For $a_i \in \mathbb{C}$, does $(1+a_1^k)(1+a_2^k)\cdots (1+a_n^k)=1$ for any positive integer $k$ imply $a_1=\cdots =a_n =0$?

In fact, I want to prove that $\det(I+A^k)=1$ for every positive integer $k$, where $A$ is an $n \times n$ matrix, implies that $A$ is nilpotent.
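A numerical sanity check of the matrix statement in one direction (a sketch, assuming `numpy`; the nilpotent $A$ below is an arbitrary example): when $A$ is nilpotent, $I+A^k$ is unipotent, so $\det(I+A^k)=1$ for every $k$.

```python
import numpy as np

# A strictly upper-triangular (hence nilpotent) matrix: here A^3 = 0.
A = np.array([[0.0, 2.0, 5.0],
              [0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0]])

# I + A^k is unitriangular for every k >= 1, so its determinant is 1.
dets = [np.linalg.det(np.eye(3) + np.linalg.matrix_power(A, k))
        for k in range(1, 6)]
print(dets)  # each value is (numerically) 1.0
```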

Basic statement of group theory [closed]

Posted: 05 Sep 2021 08:08 PM PDT

Let $F$ be the set of bijective functions $f : A \to A$. Does $F$ under the composition operation $\circ$ form a group (a structure similar to $\mathbb{R}$ with $+$)? I would like a proper proof.

Inserting a projection between (un)bounded operators

Posted: 05 Sep 2021 07:27 PM PDT

Consider a Hilbert space $\mathcal{H}$ and let $A$ be a bounded operator on $\mathcal{H}$, $B$ a possibly unbounded operator with domain $\mathcal{D}(B) \subset \mathcal{H}$ such that the composition $AB$ admits an extension to a bounded operator on $\mathcal{H}$. Furthermore, let $P$ be an orthogonal projection on $\mathcal{H}$. Does it follow that the composition $APB$ is also a bounded operator on $\mathcal{H}$?

The scenario I am considering is something of the following sort. Consider the composition of operators $(-\Delta + 1)^{-1}(-\Delta)$ acting on the Hilbert space $L^2(\mathbb{R}^n)$. Clearly the negative laplacian $-\Delta$ is not defined on all of $L^2(\mathbb{R}^n)$ (say I define it with domain $\mathcal{D}(-\Delta) = H^2(\mathbb{R}^n)$). However, using the Fourier transform, it is fairly easy to show that the composition $(-\Delta + 1)^{-1}(-\Delta)$ is a bounded operator (since $0 \leq \frac{\lvert{k}\rvert^2}{\lvert{k}\rvert^2 + 1} < 1$). Now consider an orthogonal projection $P$. Is it also true that the operator $(-\Delta + 1)^{-1}P(-\Delta)$ is bounded? As soon as I deal with a projection the Fourier Transform fails to be a good tool, so I'm not sure which tool to use to approach the problem.

Intuitively, I believe the general case to be true. Even though $B$ may be unbounded, as the composition $AB$ is bounded it means that $A$ can "deal" with any "roughness" that $B$ finds in the extension of $AB$ to the entire Hilbert space. If I further restrict the output of $B$ by replacing it with $PB$, I restrict the set of possible "bad" outputs that get fed into $A$, so the same should hold. If a counterexample exists, I would love to see it (I have not been able to come up with one myself). I would appreciate any discussion on this topic. Thanks for reading!

Monomorphism $\mathbb{Q} \to \mathbb{Q}/\mathbb{Z}$ is a maximal subobject of $\mathbb{Q}/\mathbb{Z}$?

Posted: 05 Sep 2021 07:23 PM PDT

A standard example of both a monomorphism and an epimorphism that is not an isomorphism is the projection $f:\mathbb{Q} \to \mathbb{Q}/\mathbb{Z}$ in the category $\mathsf{DivAb}$ of divisible abelian groups. The kernel $\ker(f)$ should be equal to $\mathbb{Z}$, but $\mathbb{Z} \notin \mathsf{DivAb}$, so it must be something smaller; the only divisible subgroup of $\mathbb{Z}$ is $0$. Therefore, $f$ is mono.

In every category, we have a notion of the subobject category. As $f$ is mono, I write $\mathbb{Q} \in \mathsf{Sub}(\mathbb{Q/Z})$. I want to know whether it is a maximal subobject, i.e. whether the following claim is true or false:

For any $X\in \mathsf{Sub}(\mathbb{Q/Z})$, if $g:\mathbb{Q} \to X$ is an inclusion of subobjects and the composition $\mathbb{Q} \to X \to \mathbb{Q/Z}$ is equal to $f$, then $g$ is an isomorphism.

I think it's false, and there should be a zoo of examples, but I cannot find any. Put more precisely, the question is about the preorder of epi-mono factorizations of $f$, as defined here. The other standard "fake isomorphisms" are beautifully explained in terms of this preorder, but I don't see it in this situation.

I tried to use some facts about divisible groups, for example that $\mathbb{Q/Z} = \bigoplus_{p \text{ - prime}} \mathbb{Z}(p^\infty)$, where $\mathbb{Z}(p^\infty)$ is a Prüfer p-group.

Does there exist a Noetherian local ring of depth $1$ whose localization at some prime ideal has higher depth?

Posted: 05 Sep 2021 08:13 PM PDT

Let $(R,\mathfrak m)$ be a Noetherian local ring of depth $1$.

Then, is it possible that depth$(R_P)\ge 2$ for some prime ideal $P$ of $R$?

Of course such an example has to be non-Cohen-Macaulay, i.e., we would need $\dim R \ge \operatorname{ht}(P)=\dim (R_P) \ge \operatorname{depth}(R_P) \ge 2>\operatorname{depth}(R)$.

Please help.

How to prove that $ a \mod ( b \mod a ) + b \mod a \le \frac { a + b } 2 $?

Posted: 05 Sep 2021 07:27 PM PDT

The idea is to prove that after two iterations of the Euclid GCD algorithm, the sum of the two arguments would be less than half the original sum. Starting at $ a $ and $ b $, where $ a < b $, after the first iteration, the arguments are $ ( b \mod a , a ) $ and after the second iteration, the arguments are $ \big( a \mod ( b \mod a ) , b \mod a \big) $.

Using the properties $ b \mod a \le a $ and $ b \mod a \le b - a $ (if $ b \ge a $), I've concluded that $ a \mod ( b \mod a ) + ( b \mod a ) \le 2 a $, but I'm completely at a loss as to how to reduce it further (a bound of $ a $ would be enough to finish the proof). Is there any $ \mod { } $ property that I'm missing to simplify this further?
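A brute-force check (a sketch, not a proof) supports the needed bound; the experiment below tests the stronger claim $a \bmod (b \bmod a) + (b \bmod a) \le a$ whenever $b \bmod a \neq 0$, which, combined with $a < b$, gives the $\frac{a+b}{2}$ bound.

```python
# Brute-force check that a mod (b mod a) + (b mod a) <= a (and hence <= (a+b)/2,
# since a < b) for all 1 <= a < b with b mod a != 0 (when b mod a == 0 the
# algorithm terminates, so that case is skipped).
violations = [(a, b)
              for b in range(2, 500)
              for a in range(1, b)
              if b % a != 0 and a % (b % a) + (b % a) > a]
print(violations)  # expect an empty list
```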

How would you simplify $\frac{\frac{6y}{y + 6}}{\;\frac{5}{7y + 42}\;}$ using LCD method?

Posted: 05 Sep 2021 07:39 PM PDT

Peace to all. While in class I was taught to solve complex fractions by an "alternative method" in which you would:

  1. Multiply the numerator and denominator by the LCD
  2. Apply the distributive property
  3. Factor and Simplify

An example: $\dfrac{4 - \dfrac 6 {x}} {\dfrac 2 {x} - \dfrac 3 {x^2}}$

  1. Multiply the numerator and denominator by the LCD ($x^2$):

$$\dfrac{x^2\left(4 - \dfrac 6 {x}\right)} {x^2\left(\dfrac 2 {x} - \dfrac 3 {x^2}\right)}$$

  2. Apply the distributive property:

$$\dfrac{x^2 × (4) - x^2 × \dfrac 6 {x}} {{x^2} × \dfrac 2x - x^2 × \dfrac 3 {x^2}}$$

  3. Factor and simplify:

$$\dfrac{4x^2 - 6x} {2x-3}$$

$$\dfrac{2x(2x - 3)}{2x-3}$$

Answer: $2x$

This example is very straightforward and simple. However, when I put it to use it becomes very difficult. For example for the equation: $\dfrac{\dfrac{6y}{(y + 6)}}{\dfrac{5}{(7y + 42)}}$. I get $y + 42$ as the LCD. I'm not too confident of this because when I begin to distribute I get very large numbers. How would one apply this to the prior equation?
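As a cross-check on the target (a sketch, assuming `sympy` is available), one can confirm what the simplified value of the expression must be, whatever LCD is used along the way; note that $7y+42$ factors as $7(y+6)$.

```python
import sympy as sp

y = sp.symbols('y')
# The complex fraction (6y/(y+6)) / (5/(7y+42)); note 7y + 42 = 7(y + 6).
expr = (6*y / (y + 6)) / (sp.Integer(5) / (7*y + 42))
print(sp.simplify(expr))  # 42*y/5
```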

How many n digit strings in a n+ digit string?

Posted: 05 Sep 2021 08:01 PM PDT

Is there a formula to find out how many times a string of N consecutive numbers occurs in a larger string of more than N numbers? So, for example, how many ways can a string of n2 numbers occur in a string of n3 numbers? And if I was searching for a string of any 2 numbers in a list of 1000 numbers, how many possible strings could I find? I think the answer here is 200.

However, what I want is a general formula for the number of potential strings of n numbers (e.g. any 4 consecutive numbers) in a much larger string of, say, 9 numbers (e.g. 1,000,000,000). Can anyone help with this question please? Thanks in advance.
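One common reading of the question (an assumption; the question may instead intend the $10^n$ possible digit values) is: how many overlapping length-$n$ windows does a string of $L$ digits contain? Under that reading the count is $L-n+1$:

```python
def window_count(L, n):
    # Number of contiguous length-n windows in a string of L digits.
    return max(L - n + 1, 0)

s = "0123456789"  # 10 digits
windows = [s[i:i + 4] for i in range(len(s) - 4 + 1)]
print(len(windows), window_count(len(s), 4))  # 7 7
print(window_count(1000, 2))                  # 999
print(window_count(9, 4))                     # 6
```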

Need help graphing an absolute value function

Posted: 05 Sep 2021 07:57 PM PDT

The function I have is $$|x-1|+|y+2|\leq2$$

So I tried graphing it the same way I would $|x|+|y|\leq1$

I know that $|x-1|$ is $x-1$ when $x>0$ and $-x+1$ when $x<0$

Similarly, I know that $|y+2|$ is $y+2$ when $y>0$ and $-y-2$ when $y<0$

So, I have four cases:

I. $x>0,y>0$ (Quadrant I)

II. $x<0,y>0$ (Quadrant II)

III. $x<0,y<0$ (Quadrant III)

IV. $x>0,y<0$ (Quadrant IV)

So for Quadrant I, we need to solve $x-1+y+2\leq2$ and get $y\leq-x+1$

Quadrant II, we solve $-x+1+y+2\leq2$ and get $y\leq x-1$

Quadrant III, we solve $-x+1-y-2\leq2$ and get $y\geq -x-3$

Quadrant IV, we solve $x-1-y-2\leq2$ and get $y\geq x-5$

When I graph these lines in their respective quadrants, I do not get the correct answer. Can someone please tell me where I went wrong in my method?
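This doesn't locate the misstep, but it gives a target to compare against: a grid estimate (a sketch, assuming `numpy`) of the region described by the inequality.

```python
import numpy as np

# Grid estimate of the region |x-1| + |y+2| <= 2 (numerical cross-check only).
xs, ys = np.meshgrid(np.linspace(-4, 6, 1001), np.linspace(-8, 4, 1201))
inside = np.abs(xs - 1) + np.abs(ys + 2) <= 2
cell_area = 0.01 * 0.01
print(inside.sum() * cell_area)              # close to 8 = 2 * 2^2, a diamond's area
print(xs[inside].mean(), ys[inside].mean())  # near the center (1, -2)
```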

Composition of quadratic forms

Posted: 05 Sep 2021 07:13 PM PDT

Consider the following two quadratic forms on $\mathbb{R}^3$:

$Q(x, y, z)=\lambda x^2+4y^2+16z^2$, $R(x, y, z)=2xy+2yz$

Determine precisely those values of $\lambda\in\mathbb{R}$ such that there's a linear transformation $T$ on $\mathbb{R}^3$ so that $R=Q\circ T$

By some direct computation, I've found out that if such a linear $T$ exists then

$\begin{bmatrix} 0 & 1 & 0\\ 1 & 0 & 1\\ 0 & 1 & 0 \end{bmatrix} =T^t\begin{bmatrix} \lambda & 0 & 0\\ 0 & 4 & 0\\ 0 & 0 & 16 \end{bmatrix}T$

But then I don't know how to proceed.

Find all real frequencies $\omega$ such that $\det(C(j\omega I-A)^{-1}B)=0$

Posted: 05 Sep 2021 08:02 PM PDT

Given $A\in\mathbb{R}^{n\times n}$,$B\in\mathbb{R}^{n\times m}$ and $C\in\mathbb{R}^{m\times n}$ such that $\det(j\omega I-A)\neq0$, find all real frequencies $\omega$ such that

$$\det(C(j\omega I-A)^{-1}B)=0.$$

Here $j=\sqrt{-1}$ and $I$ is an identity matrix of an appropriate size.


If $m=n$, then

\begin{align} \det(C(j\omega I-A)^{-1}B)=\det(BC(j\omega I-A)^{-1})=\det(BC)\det(j\omega I-A)^{-1}. \end{align} Since $\det(j\omega I-A)^{-1}\neq0$, the determinant vanishes exactly when $\det(BC)=0$, which doesn't depend on $\omega$; thus

if $\det(BC)=0$, then $$\det(C(j\omega I-A)^{-1}B)=0\text{ for any }\omega\in(-\infty,+\infty),$$

else if $\det(BC)\neq0$, then

there is no real $\omega$ with $$\det(C(j\omega I-A)^{-1}B)=0.$$

If $m\neq n$, we don't have $\det(C(j\omega I-A)^{-1}B)=\det(BC(j\omega I-A)^{-1})$, thus I couldn't find an answer.
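For $m \neq n$ the determinant chain above no longer applies, but one can at least scan the frequencies numerically for a concrete system (a sketch, assuming `numpy`; the matrices $A$, $B$, $C$ below are made-up illustrations with $m < n$):

```python
import numpy as np

# Scan det(C (jw I - A)^{-1} B) over a frequency grid.
A = np.diag([-1.0, -2.0, -3.0, -4.0])  # stable, so jwI - A is invertible for real w
B = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 1.0]])
C = np.array([[1.0, 0.0, 1.0, 0.0], [0.0, 1.0, 0.0, 1.0]])
n = A.shape[0]

def g(w):
    # The m x m determinant det(C (jw I - A)^{-1} B).
    return np.linalg.det(C @ np.linalg.solve(1j * w * np.eye(n) - A, B))

ws = np.linspace(-10.0, 10.0, 2001)
vals = np.array([g(w) for w in ws])
print(ws[np.argmin(np.abs(vals))])  # grid frequency where |g| is smallest
```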

How to prove minimax with normal distribution.

Posted: 05 Sep 2021 08:03 PM PDT

Let $X$ be a sample from $N_p(\theta,I_p)$ with unknown $\theta\in \mathcal{R}^p$.

Consider the estimation of $\theta$ under the loss function $$L(\theta,T)\,=\,\|T-\theta\|^2\,=\,\sum_{i=1}^{p}(T_i-\theta_i)^2,$$ with independent priors for the $\theta_i$'s.

How does one show that $X$ is a minimax estimator of $\theta$?

A beautiful equality

Posted: 05 Sep 2021 08:09 PM PDT

$$\int_{0}^{\infty }\frac{Li_{2}(\frac{(1-x^{4})^{2}}{(1+x^{4})^{2}})}{1+x^{4}}dx=\sqrt{2} \Re \left(\int_{0}^{1 }\frac{Li_{2}(\frac{(1+x^{4})^{2}}{(1-x^{4})^{2}})}{1+x^{2}}dx\right)$$ This question was proposed by Sujeethan Balendran in RMM (Romanian Mathematical Magazine). I proved it using complex numbers, but not by a real method using $$Li _{2}(a)=-a\int_{0}^{1}\frac{\ln x}{1-ax}dx.$$ I believe some of you know nice proofs of this; can you please share them with us?

Question related to Riemann Sums

Posted: 05 Sep 2021 07:59 PM PDT

I was reading the book of Real Analysis by H. L. Royden, which gives the definition of Riemann sums and the Riemann integral (in an image not reproduced here).

After this definition, the book gives a footnote (also an image, not reproduced here).

I do not understand what footnote 1 is saying. I think the statement of note 1 is not always true. For example, if we take the Dirichlet function (which is 0 on rationals and 1 on irrationals), then footnote 1 fails. Am I correct, or am I missing something?

(Edit:) My thoughts on why the footnote does not work for the Dirichlet function:

We define $f:[0,1]\rightarrow \mathbb R$ by $f(x)=0$ if $x\in\mathbb Q$ and $f(x)=1$ otherwise. Now consider any partition $P$ of $[0,1]$. Then for any subinterval $[x_{i-1},x_i]$ we get $m_i=0$, because each subinterval contains infinitely many rationals, and $M_i=1$, because each subinterval contains infinitely many irrationals. Hence the lower sum is 0 and the upper sum is 1.
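The two choices of sample points can be made concrete (a sketch, assuming `sympy`, whose exact arithmetic can tag each tag point as rational or irrational):

```python
import sympy as sp

# Dirichlet-type function: 0 at rationals, 1 at irrationals.
def f(x):
    return 0 if x.is_rational else 1

n = 50
dx = sp.Rational(1, n)
# Rational tags: the left endpoints i/n of each subinterval of [0, 1].
rational_tags = [sp.Rational(i, n) for i in range(n)]
# Irrational tags: shift each left endpoint by sqrt(2)/(10 n) < 1/n, staying in its cell.
irrational_tags = [sp.Rational(i, n) + sp.sqrt(2) / (10 * n) for i in range(n)]

lower = sum(f(x) for x in rational_tags) * dx
upper = sum(f(x) for x in irrational_tags) * dx
print(lower, upper)  # 0 1
```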

Is there a dictionary/encyclopedia/thesaurus of equivalent equations?

Posted: 05 Sep 2021 07:15 PM PDT

I had heard from Barry Barish that (paraphrased from Lex Fridman podcast): In 1916, Einstein noticed that if he wrote the formulas for general relativity in a particular way, they looked a lot like the formulas for electricity and magnetism. He knew that electricity and magnetism had waves. So he said if the formulas looked similar, then gravity probably has waves too.

The hope is that finding such a dictionary may lead to similar insights about whatever equation / expression of relationships one is starting with.

I want to find a dictionary that lists equations across disciplines that are isomorphic/equivalent to one another. For example:

Physics: $$F=ma$$

Electronics: Ohms law: $$V=iR$$

Physics: Hooke's law: $$F=-kx$$

And so on (preferably way more disciplines and sub disciplines and math sub disciplines)

Would love any help whatsoever!

Reals as a vector space over rationals: why infinite dimensional?

Posted: 05 Sep 2021 07:54 PM PDT

The reals can be thought of as a vector space over the rationals. The properties of a vector space are that addition and "scaling" by some scalar are well defined, and this certainly holds for the reals. There are many posts explaining this, examples here and here. I completely understand that the reals satisfy all the properties of vector spaces with the scalars defined as rational numbers. However, how do we know that this vector space is infinite-dimensional? Isn't it just 1-dimensional? And if it is infinite-dimensional, what is the first dimension? Can there not be some infinite-dimensional basis for this vector space then? In the second post, @AsrafKaragila says it's impossible to write such a basis by hand and we can only prove its existence. Why is it impossible?

Visualize the isometry of the sphere $S^2$ is $S^3/(\mathbf{Z}/2) ?$

Posted: 05 Sep 2021 07:56 PM PDT

How to visualize the rotational symmetry group or isometry of the sphere $S^2$ is $$S^3/(\mathbf{Z}/2) ?$$

I ask because the rotational symmetry group is $SO(3)=\mathbb{RP}^3=S^3/(\mathbf{Z}/2)$. I know this fact, but how can we convince ourselves, by visualization, that there is a $\mathbf{Z}/2$ being modded out?

It may be useful that $S^3/(\mathbf{Z}/2)$ is a lens space, obtained as an $S^1$-bundle over $S^2$. But how is that relevant to the isometry group of the sphere $S^2$?

P.S. I certainly know $SO(3)=\mathbb{RP}^3=S^3/(\mathbf{Z}/2)$. My point here is how to visualize that the rotation group has the $(\mathbf{Z}/2)$ quotiented out in $S^3/(\mathbf{Z}/2)$. Which loop of rotation-group elements gives the generator of $\pi_1(S^3/(\mathbf{Z}/2))=\mathbf{Z}/2$?

Complete regularity of function spaces

Posted: 05 Sep 2021 07:27 PM PDT

Let $X$ and $Y$ be topological spaces. Wikipedia states that if $Y$ is $T_{3½}$, then so is $Y^X$. Yet I couldn't find the proof anywhere. So here's my own attempt of a proof:

(I also had had a hard time proving that if $Y$ is $T_3$, then so is $Y^X$. My professor verified that proof. Thanks, Mr. Iggy.)

Let $f \in Y^X$, and let $S(C,U)$ be a subbasis element containing $f$. I shall separate $f$ from $Y^X \setminus S(C,U)$ by a continuous function $g : Y^X \to [0,1]$. There are a few steps for finding such a $g$.

Step 1. I shall find a continuous function $h : f[C] × Y → [0,1]$ that separates the diagonal $\Delta = \{(y,y) : y \in f[C]\}$ from $f[C] × (Y \setminus U)$. Since $f[C] × f[C]$ is a compact Hausdorff space which $\Delta$ is closed within, $\Delta$ is a compact subspace of the $T_{3½}$ space $f[C] × Y$. So such $h$ exists. (Munkres' Topology, Section 33, Exercise 8.)

Step 2. Now define $g(\phi) = \sup\{h(f(x),\phi(x)) : x \in C\}$.

Is this proof correct so far? It should be fairly easy to replace $S(C,U)$ by an arbitrary basis element and then by an arbitrary open set.

Orbits of $z_{n+1} = z_n ^2 - 1$

Posted: 05 Sep 2021 08:00 PM PDT

Consider the sequence $z_{n+1} = z_n ^2 - 1$ defined for an arbitrary complex number $z_0$. I am trying to determine all $z_0$ such that the sequence eventually becomes periodic.

Here is my progress so far:

If $|z_0|> \frac{1+\sqrt{5}}{2}$ the sequence is absolutely increasing since $$|z_{n+1}|=|z_n^2 - 1|\geq|z_n|^2-1 > |z_n| ,$$ which is true since $ |z_n| > \frac{1+\sqrt{5}}{2}$ holds inductively.

If $z_0=\frac{1+\sqrt{5}}{2}$ the sequence would be the constant sequence of $\frac{1+\sqrt{5}}{2}$ and hence periodic.

Similar to the first case, if $|z_0|<\frac{1}{2}$, or more precisely if $|z_0|$ is less than the root of $\alpha^3 +2\alpha -1 = 0$, the sequence $\{z_{2i} \}$ is strictly decreasing in modulus, since $$|z_{2i}| = |z_{2i-1}^2 - 1 |= |z_{2i-2}^4 - 2z_{2i-2}^2|< |z_{2i-2}| \Leftrightarrow |z_{2i-2}^3 - 2z_{2i-2}| < 1,$$ which holds because $$ |z_{2i-2}^3 - 2z_{2i-2}| \leq |z_{2i-2}^3|+| 2z_{2i-2}| <1.$$ As a result, $\{z_i\}$ cannot be eventually periodic.
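A small numerical experiment (not a proof, standard library only) consistent with the escape bound $|z_0|>\frac{1+\sqrt 5}{2}$ derived above:

```python
# Iterate z -> z^2 - 1 and test escape past the golden-ratio bound.
PHI = (1 + 5 ** 0.5) / 2

def escapes(z0, max_iter=200):
    z = complex(z0)
    for _ in range(max_iter):
        if abs(z) > PHI + 1e-9:  # strictly beyond the bound, the modulus grows
            return True
        z = z * z - 1
    return False

print(escapes(2))    # True:  |z0| > (1 + sqrt(5))/2
print(escapes(1j))   # True:  i -> -2 -> 3 -> ...
print(escapes(0))    # False: 0 -> -1 -> 0 -> ... is 2-periodic
print(escapes(0.5))  # False: the orbit is attracted toward the cycle {0, -1}
```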

On $\mathrm{\sum\limits_{n=0}^\infty \left(C(n)-\frac{\sqrt\pi}{2\sqrt2}\right)+ \sum\limits_{n=0}^\infty \left(S(n)-\frac{\sqrt\pi}{2\sqrt2}\right)}$

Posted: 05 Sep 2021 07:25 PM PDT

This question will take inspiration from

Evaluation of $\sum\limits_{n=0}^\infty \left(\operatorname{Si}(n)-\frac{\pi}{2}\right)$?

and

On $\mathrm{\sum\limits_{x=1}^\infty Ci(x)}=\frac{\ln(2)+\ln(\pi)-γ}{2}$

The problem concerns the analogous sums with the Fresnel integrals. The separate sums converge to similar values, but converge slowly due to oscillation. Note there are also hypergeometric function representations, but these are probably not useful for evaluation. $F(x)$ is the Dawson integral function. I will not list more representations, as they can easily be found in the link; the other representations are just as complicated. Here are the definitions:

$$\mathrm{C(x)=\int_0^x \cos\left(t^{2}\right)dt, \quad S(x)=\int_0^x \sin\left(t^{2}\right)dt, \quad \lim_{x\to\infty} C(x)=\lim_{x\to\infty} S(x)= \frac{\sqrt\pi}{2\sqrt2} }$$

so both functions are asymptotic to $\frac{\sqrt\pi}{2\sqrt2}$, and the following follows:

$$\mathrm{\sum_{n=0}^\infty \left(C(n)-\frac{\sqrt\pi}{2\sqrt2}\right)+\sum_{n=0}^\infty \left(S(n)-\frac{\sqrt\pi}{2\sqrt2}\right)=-\sqrt{\frac\pi2}+\sum_{n=1}^\infty \left(C(n)+S(n)-\sqrt{\frac\pi 2}\right)=\sum_{n=0}^\infty \left( \frac{(1 - i) \left(F\left( (-1 + i) n\right) - i\,F\left((1 + i) n \right) e^{2 i π n^2 }\right)}{e^{in^2}}+ \frac{(1 - i) \left(F\left((-1 + i)n\right) - i\,F\left((1 + i) n\right) e^{i π n^2}\right)}{e^{ i n^2 }}-\sqrt{\frac\pi 2}\right)=log_b\prod_{n=0}^\infty b^{C(n)+S(n)-\sqrt{\frac\pi2}}}$$

Here is what each individual summand looks like using the problem's definitions: the blue plot is the cosine version and the purple is the sine version (plot not reproduced here).


There are also sum to integral representation formulas like the Abel-Plana formulas. This one may work with the Abel-Plana formula:

$$\mathrm{\sum_{n=0}^\infty \left(C(n)-\frac{\sqrt\pi}{2\sqrt2}\right)+\sum_{n=0}^\infty \left(S(n)-\frac{\sqrt\pi}{2\sqrt2}\right)=-\frac12 \frac{\sqrt\pi}{2\sqrt2} +\int_0^\infty C(x)-\frac{\sqrt\pi}{2\sqrt2}dx +i\int_0^\infty\frac{C(ix)-\frac{\sqrt\pi}{2\sqrt2}-C(-ix)- -\frac{\sqrt\pi}{2\sqrt2}}{e^{2\pi x}-1}-\frac{\sqrt\pi}{2\sqrt2} \frac12+\int_0^\infty S(x)-\frac{\sqrt\pi}{2\sqrt2} dx+ i\int_0^\infty\frac{S(ix)-\frac{\sqrt\pi}{2\sqrt2}-S(-ix)- -\frac{\sqrt\pi}{2\sqrt2}}{e^{2\pi x}-1}= 2\int_0^\infty \frac{S(x)}{e^{2\pi x}-1}dx-2\int_0^\infty\frac{C(x)}{e^{2\pi x}-1}dx-\frac12 -\frac{\sqrt{\pi}}{2\sqrt2}}$$

Another idea is to use the main power series definition of the Fresnel Integrals seen in the bolded link:

$$\mathrm{\sum\limits_{n=0}^\infty \left(C(n)-\frac{\sqrt\pi}{2\sqrt2}\right)+ \sum\limits_{n=0}^\infty \left(S(n)-\frac{\sqrt\pi}{2\sqrt2}\right)= \sum\limits_{x=0}^\infty \left(-\sqrt\frac{\pi}{2}+\sum_{n=0}^\infty\left(\frac{(-1)^n x^{4n+3}}{(2n+1)!(4n+3)}+\frac{(-1)^nx^{4n+1}}{(2n)! (4n+1)}\right)\right)}$$

Using the Imaginary Error function, one other idea is to notice that:

$$\mathrm{(-1)^{\frac74} \frac{\sqrt{\pi} }{2}\,erfi\left(\sqrt[4]{-1} x\right)=C(x)+i\,S(x)=\int e^{ix^2}dx\implies \sum\limits_{n=0}^\infty \left(C(n)-\frac{\sqrt\pi}{2\sqrt2}\right)+ \sum\limits_{n=0}^\infty \left(S(n)-\frac{\sqrt\pi}{2\sqrt2}\right)= Re\sum_{x=0}^\infty \left((-1)^{\frac74} \frac{\sqrt{\pi} }{2}\,erfi\left(\sqrt[4]{-1} x\right)-\frac{\sqrt\pi}{2\sqrt2}(1+i)\right)+Im \sum_{x=0}^\infty \left((-1)^{\frac74} \frac{\sqrt{\pi} }{2}\,erfi\left(\sqrt[4]{-1} x\right)-\frac{\sqrt\pi}{2\sqrt2}(1+i)\right)}$$

This means that an equivalent summation to find is the following. Let's try to use the Abel-Plana formula on it:

$$\mathrm{\sum_{x=0}^\infty \left((-1)^{\frac74} \frac{\sqrt{\pi} }{2}\,erfi\left(\sqrt[4]{-1} x\right)-\frac{\sqrt\pi}{2\sqrt2}(1+i)\right)≈-.65507…+.86175…i}$$ How can I evaluate this integral or summation? Please correct me and give me feedback!
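As a numerical anchor (a sketch, assuming `scipy`; note `scipy.special.fresnel` uses the $\cos(\pi t^2/2)$ normalization, so arguments must be rescaled by $\sqrt{2/\pi}$), one can confirm the limit $\frac{\sqrt\pi}{2\sqrt2}$ and form the slowly converging partial sums directly:

```python
import numpy as np
from scipy.special import fresnel

LIMIT = np.sqrt(np.pi) / (2 * np.sqrt(2))

def CS(x):
    # C(x) = int_0^x cos(t^2) dt and S(x) = int_0^x sin(t^2) dt, obtained by
    # rescaling scipy's Fresnel integrals (normalization pi*t^2/2; returns S, C).
    s, c = fresnel(np.asarray(x, dtype=float) * np.sqrt(2 / np.pi))
    return np.sqrt(np.pi / 2) * c, np.sqrt(np.pi / 2) * s

c_big, s_big = CS(1e4)
print(c_big, s_big, LIMIT)  # both values are close to sqrt(pi)/(2*sqrt(2))

n = np.arange(0, 20000)
c_n, s_n = CS(n)
partial = np.sum(c_n - LIMIT) + np.sum(s_n - LIMIT)
print(partial)  # slowly (oscillatorily) converging partial sum
```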

Why do the partial sums of the Maclaurin series expansion of $\sin$ approximate it better than their hyperbolic counterparts approximate $\sinh$?

Posted: 05 Sep 2021 07:08 PM PDT

While exploring Taylor series with numerical and graphing tools, I noticed a very peculiar and interesting fact: the partial sums of the Maclaurin series expansion of $\sin(x)$ approximate it better than their hyperbolic counterparts approximate $\sinh(x)$. To be specific, I noticed that for every $x\in\mathbb{R}$ and $k\in\mathbb{N}$, the following inequality seems to hold (I haven't proven it):

$$\left|\sin(x)-\sum_{n=0}^{k}(-1)^{n}\frac{x^{2n+1}}{(2n+1)!}\right|\leq\left|\sinh(x)-\sum_{n=0}^{k}\frac{x^{2n+1}}{(2n+1)!}\right|$$

For all $x\neq 0$, the inequality is strict, showing that $\sum_{n=0}^{k}(-1)^{n}\frac{x^{2n+1}}{(2n+1)!}$ does indeed approximate $\sin(x)$ better than $\sum_{n=0}^{k}\frac{x^{2n+1}}{(2n+1)!}$ approximates $\sinh(x)$. But why do they provide better approximations? The remainders

$$\sin(x)-\sum_{n=0}^{k}(-1)^n\frac{x^{2n+1}}{(2n+1)!},\text{ }\sinh(x)-\sum_{n=0}^{k}\frac{x^{2n+1}}{(2n+1)!}$$

obviously can't be equal everywhere, but given the similarity between the partial sums for each function, it's unclear to me why the $\sin$ remainder should be bounded by an inequality as "uniform" as the one above. Why isn't it the other way around, with $\sinh$'s remainder being bounded by $\sin$'s? Why is the inequality true for all $x\in\mathbb{R}$, and not just some disjoint intervals? Looking at the partial sums for each series, it can be seen that the only difference is the $(-1)^n$ factor for the $\sin$'s series, which doesn't seem like the kind of thing that's capable of making the difference between

$$\left|\sin(x)-\sum_{n=0}^{k}(-1)^{n}\frac{x^{2n+1}}{(2n+1)!}\right|\leq\left|\sinh(x)-\sum_{n=0}^{k}\frac{x^{2n+1}}{(2n+1)!}\right|$$

and some other inequality relating the two remainders. Can someone give an intuitive explanation for why this happens?

Important: I'm not seeking a proof of this inequality; that's a puzzle I'll solve later. What I am seeking is intuition, some perspective that will make the truth of this inequality obvious, if that makes sense.
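A grid check of the inequality (a sketch; the small tolerance absorbs floating-point noise where both remainders are at machine precision):

```python
import math

def sin_partial(x, k):
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(k + 1))

def sinh_partial(x, k):
    return sum(x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(k + 1))

# Spot-check |sin remainder| <= |sinh remainder| on a grid (not a proof).
ok = all(
    abs(math.sin(x) - sin_partial(x, k))
    <= abs(math.sinh(x) - sinh_partial(x, k)) + 1e-12
    for k in range(6)
    for x in (j / 10 for j in range(-50, 51))
)
print(ok)  # True
```

The check mirrors one source of intuition: for $x>0$ the $\sinh$ remainder is exactly the sum of the absolute values of the terms of the $\sin$ remainder, so the triangle inequality forces the stated direction of the bound.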

Difficult implications of a detail in an(other) approach to the simultaneous Pell equations $24a^2+1=t^2$ and $48a^2+1=u^2$

Posted: 05 Sep 2021 07:16 PM PDT

In reviewing and improving my attempt to solve the question on simultaneous Pell equations (see here on MSE), I came across a detail which surely has wider implications and which I cannot yet encompass (correct term?).
The original question asks:
How many solutions (and which ones) do the simultaneous Pell equations $$ 24a^2+1=t^2 \\ 48a^2+1=u^2 \tag 1 $$ have?


My ansatz, first part: I approached this problem the following way. First look at the difference equation: $$ 24a^2=u^2-t^2 \tag 2 $$ and use the method of parametrization of the Pythagorean triples for $b^2 = c^2-a^2$ [which comes out to be $(a,b,c)=(n^2-m^2,2nm, n^2+m^2)$ (Wikipedia)] so that I arrive at $$ u=2m^2+3n^2 \\ t=2m^2-3n^2 \\ a= nm \tag 3 $$ Expansion shows that it is indeed a solution of (2):
$$ 24m^2n^2 \overset{?}= (2m^2+3n^2)^2-(2m^2-3n^2)^2 = 12m^2n^2- (-12m^2n^2) = 24m^2n^2 \tag 4 $$ But while this exhibits solutions of the difference equation (2), it is easy to see that this parametric solution for $(a,t,u)$ cannot at the same time be a solution of the first equation $$ 24 a^2+1 = t^2 \tag {5a} $$ which leads to $$ 24(n^2m^2) =(2m^2-3n^2)^2 -1. \tag {5b}$$ This has no solution, as can be seen by reducing $\pmod 5$.
$\qquad \qquad \qquad $ update2: this is corrected from the previous version after a helpful comment of user servaes; see the revision history.

So my conclusion on my scribble notes was first: no solutions, simply :-)) ...

But actually there are known solutions for the simultaneous system, namely $(a,t,u)=(0,\pm1,\pm1)$ and $(\pm1,\pm5,\pm7)$, and thus it looks as if my approach is useless here and should be discarded.


My ansatz, second part: On a second look, after some coffee, I see that this ansatz can be revived if we allow non-integer values for $(m,n)$. For instance:

  • $(m,n)=(\sqrt 3,\frac1{\sqrt 3})$ gives $(a,t,u)=(\frac {\sqrt 3}{\sqrt 3},2\cdot 3-\frac33, 2\cdot 3+ \frac33 )=(1,5,7)$ .
  • Analogously, $(m,n)=(\frac1{\sqrt 2},\sqrt 2)$ gives $(a,t,u)=(\frac {\sqrt 2}{\sqrt 2},\frac22-3\cdot2, \frac22+ 3 \cdot2)=(1,-5,7)$ .

In this example, the values $\sqrt3$ and $\sqrt 2$ are quite obvious once the equations (3) are considered. But allowing irrational numbers in that well-known and commonly used parametrization of the Pythagorean triples likely has many more implications, which I do not see here, even if we use the restriction that the irrational values have to be selected so that $(a,t,u)$ are still all integers.

This is surely important if I want to use the full logic of my small approach for other, comparable cases like $Aa^2+1=t^2 ;\ Ba^2+1=u^2$ .

Q1: Can the introduction of such irrational values be understood as a meaningful generalization of the Pythagorean-triple parametrization?

Q2: Under the assumption of quadratic irrational values for $(n,m)$, can we still prove that the set of solutions is exactly the set of the two that are already known?
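A brute-force search (a sketch, standard library only) is consistent with $(a,t,u)=(0,\pm1,\pm1)$ and $(\pm1,\pm5,\pm7)$ being the only solutions, at least for $0 \le a < 2\cdot10^5$:

```python
from math import isqrt

def is_square(n):
    # Exact perfect-square test using integer square root.
    r = isqrt(n)
    return r * r == n

# Search for a >= 0 satisfying both 24 a^2 + 1 = t^2 and 48 a^2 + 1 = u^2.
sols = [a for a in range(0, 200_000)
        if is_square(24 * a * a + 1) and is_square(48 * a * a + 1)]
print(sols)  # [0, 1]
```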

Parametrization in complex plane

Posted: 05 Sep 2021 08:04 PM PDT

Parametrize the semicircle $| z-4-5i| = 3$ clockwise from $z=4+8i$ to $z=4+2i$.
I know that $z(t)=r$ for $$ 0\leq t \leq \pi. $$ Then I have the following inequality: $$3i\leq \sqrt{3}e^{-it}\leq -3i. $$ What should I do next, or what have I done wrong?

Moment of Inertia of Rectangular Prism about one of its edges

Posted: 05 Sep 2021 07:07 PM PDT

Question: What is the moment of inertia of a rectangular prism with dimensions $l\times w\times h$, represented by $a\times b\times c$, about one of its edges? Are my bounds correct, and what is $r$?
Motivation: Imagine a book standing upright on a table. A force is applied to the top of the book so it begins to rotate about one of its axes without slipping. How much energy must be applied, knowing the dimensions and mass of the book, to tip the book over (i.e. to move the center of mass over the pivoting axis)? With the moment of inertia I think I know how to do this; if there's a better way, answer the question anyway, because I'm curious :)
Attempt: Here's what I've got. I know the equation for the moment of inertia is $$I=\iint_D \rho r^2\,dA\quad\text{or}\quad I=\iiint_R \rho r^2 \,dV.$$ I think the triple integral is more appropriate here, and filling in the bounds gives $$I=\int_0^a\int_0^b\int_0^c\rho r^2\,dz\,dy\,dx.$$ Let's assume constant density ($\rho=1$). Is $r^2$ simply $x^2+y^2+z^2$? This doesn't seem quite right. If it is, that gives $$I=\int_0^a\int_0^b\int_0^c x^2+y^2+z^2\,dz\,dy\,dx,$$ which is easily solvable, and I don't feel it should be.
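If the rotation axis is taken to be the edge of the prism along the $z$-direction (an assumption about the intended setup), the distance from a point to that axis is $\sqrt{x^2+y^2}$, not $\sqrt{x^2+y^2+z^2}$. A `sympy` sketch comparing the two integrals:

```python
import sympy as sp

x, y, z, a, b, c = sp.symbols('x y z a b c', positive=True)

# The integral as written in the attempt, with r^2 = x^2 + y^2 + z^2:
I_attempt = sp.integrate(x**2 + y**2 + z**2, (z, 0, c), (y, 0, b), (x, 0, a))

# If the rotation axis is the edge along z, then r^2 = x^2 + y^2:
I_edge = sp.integrate(x**2 + y**2, (z, 0, c), (y, 0, b), (x, 0, a))

print(sp.factor(I_edge))  # a*b*c*(a**2 + b**2)/3
```

At unit density the mass is $M=abc$, so the second result is $M(a^2+b^2)/3$, matching the standard moment of inertia of a box about an edge parallel to the side of length $c$.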

Alternative notation for exponents, logs and roots?

Posted: 05 Sep 2021 07:14 PM PDT

If we have

$$ x^y = z $$

then we know that

$$ \sqrt[y]{z} = x $$

and

$$ \log_x{z} = y .$$

As a visually-oriented person I have often been dismayed that the symbols for these three operators look nothing like one another, even though they all tell us something about the same relationship between three values.

Has anybody ever proposed a new notation that unifies the visual representation of exponents, roots, and logs to make the relationship between them more clear? If you don't know of such a proposal, feel free to answer with your own idea.

This question is out of pure curiosity and has no practical purpose, although I do think (just IMHO) that a "unified" notation would make these concepts easier to teach.

