Friday, June 3, 2022

Recent Questions - Mathematics Stack Exchange


Exploring the DIVERGENT SERIES!

Posted: 03 Jun 2022 10:42 AM PDT

My dear math friends,

All I want is to clear up these basic things: the ones I know, the ones I partially know, and the ones I have some doubts about.
All I want is to strengthen my math concepts and explore more of the math world.

But I have just completed Grade 10 and will be starting Grade 11, so please don't use high-powered math language in your explanations, dear friends, because I don't know most of the formal math notation either (except what I have read so far).
As of now, my holidays are going on, and I think it is the best time to strengthen my math concepts :)

I know the following things (and require clarifications/solutions on the bold and italicised text):

  1. $1 + 2 + 3 + \dots$ till $n$ terms $= \frac{n(n+1)}{2}$.
    This is the concept I understood in my Grade 10 chapter on Arithmetic Progressions (CBSE Board, NCERT textbook). To get this we use the formula $S_{n} = \frac{n}{2}[2a + (n-1)d]$ (or) $\frac{n}{2}[a + l]$, where $n$ is the number of terms, $a$ is the first term, $d$ is the common difference, and $l$ is the last term. So (when we use the second formula) we get:
    $\frac{n}{2}(1 + n)$ (or) $\frac{n(n+1)}{2}$
  2. $1 + 2 + 3 + 4 + \dots$ (to $\infty$) $= -1/12$.
    The sum of all natural numbers to infinity is $-1/12$. At least, that is what I have heard (from my genius math friend) and what I have read online. I found that S. Ramanujan got this by using a special method of summation, now called 'Ramanujan summation'. Some people on YouTube claim the value is $-1/8$, so I would request a clarification on that as well (see the partial-sum sketch after this list).
  3. I know how to solve problems like:
    [image: the problem statement]

So, I solve it in this way:

[images: my worked solution, in two parts]

After solving the quadratic equation, we get

[image: the two roots]

(Thanks to WolframAlpha)
Another similar example is:

[image: a similar problem]

And solving it using the same method, we get two roots: $-2$ and $3$.
Now, the doubt is: do we have to ignore the negative root (the root on the negative side of the $x$-axis)? Like in the case of the above example, do we ignore $-2$ and consider the answer as $3$ only?
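
If it helps to experiment, here is a minimal numerical sketch. It assumes the pictured problem is the infinite nested radical $\sqrt{6+\sqrt{6+\sqrt{6+\cdots}}}$, which is only a guess consistent with the quadratic $x^2 = x + 6$ and the stated roots $-2$ and $3$:

```python
# A minimal sketch (assuming the pictured expression is the nested radical
# sqrt(6 + sqrt(6 + ...)), consistent with the stated roots -2 and 3).
# Iterating x_{k+1} = sqrt(6 + x_k) shows where the expression settles.
import math

x = 0.0  # any starting value >= -6 works
for _ in range(30):
    x = math.sqrt(6 + x)
print(x)  # ~3.0: the iteration converges to the POSITIVE root
```

Since every square root in the tower is non-negative, the value of the whole expression must be non-negative, which suggests why the negative root $-2$ is rejected and $3$ is kept.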

Concluding the entire writeup, I want to explore some more divergent series stuff THAT IS APPROPRIATE TO MY LEVEL (please consider the informal part I have written at the top). THANK YOU!
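
As a first, level-appropriate experiment with divergence itself, here is a small sketch in plain Python. It uses only the Grade 10 formula, and it illustrates that the ordinary partial sums of $1+2+3+\cdots$ have no limit; the famous $-1/12$ comes from a different, regularized notion of summation (Ramanujan summation / zeta regularization), not from the limit of partial sums:

```python
# Partial sums of 1 + 2 + 3 + ... grow without bound: that is what
# "divergent" means in the ordinary sense. The value -1/12 is assigned by a
# different, regularized summation method, not by this limit.
def partial_sum(n):
    return n * (n + 1) // 2  # the Grade 10 formula S_n = n(n+1)/2

for n in (10, 100, 1000, 10**6):
    print(n, partial_sum(n))  # the sums just keep growing
```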

Uniqueness of SVD

Posted: 03 Jun 2022 10:41 AM PDT

I know that the uniqueness of such a factorization is guaranteed, up to certain conditions, when the eigenvalues are distinct. But for the question below I am unable to come to a conclusion.

In terms of yes or no: is the SVD of any 2 × 3 matrix unique?

Thanks in advance.
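
A small numerical illustration (a sketch, not a full answer): even when the singular values are distinct, the factors $U$ and $V$ are not pinned down, because matched sign flips leave the product unchanged.

```python
# Flipping the sign of a left singular vector together with the matching
# right singular vector yields a second, equally valid SVD of the same
# 2 x 3 matrix, so the factorization is not unique.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
U, s, Vt = np.linalg.svd(A, full_matrices=False)

U2, Vt2 = U.copy(), Vt.copy()
U2[:, 0] *= -1   # flip the first left singular vector ...
Vt2[0, :] *= -1  # ... and the corresponding right singular vector

print(np.allclose(A, U @ np.diag(s) @ Vt))    # True
print(np.allclose(A, U2 @ np.diag(s) @ Vt2))  # True as well
```

(The singular values themselves are unique; only the singular vectors carry this ambiguity.)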

Are mathematical definitions logical equivalences or material equivalences?

Posted: 03 Jun 2022 10:41 AM PDT

Let's consider representing logical equivalence by the symbol $\equiv$ and material equivalence by the symbol $\longleftrightarrow$.

I know that the formulas $P$ and $Q$ are logically equivalent if and only if the statement of their material equivalence $P \longleftrightarrow Q$ is a tautology.

My question is: when mathematicians define something, which of the equivalences is being used? For example, when defining the limit of a function $f$ we can write:

Let $I$ be an open interval containing $c$, and let $f$ be a function defined on $I$, except possibly at $c$. The limit of $f(x)$, as $x$ approaches $c$, is $L$, denoted by $$ \lim_{x\rightarrow c} f(x) = L, $$

means that given any $\epsilon>0$, there exists $\delta>0$ such that for all $x \neq c$, if $|x−c| < \delta $, then $|f(x)−L| < \epsilon$.

If we translate this to symbols, which one is correct?

$\lim_{x\rightarrow c} f(x) = L \longleftrightarrow \left(\forall \, \epsilon > 0, \exists \, \delta > 0 \; s.t. \;0<|x - c| < \delta \longrightarrow |f(x) - L| < \epsilon \right)$

or

$\lim_{x\rightarrow c} f(x) = L \equiv \left(\forall \, \epsilon > 0, \exists \, \delta > 0 \; s.t. \;0<|x - c| < \delta \longrightarrow |f(x) - L| < \epsilon \right)$

This question arose while reading the book Discrete Mathematics with Applications by Susanna S. Epp: the author defines both $\equiv$ and $\longleftrightarrow$, but then uses $\iff$, which is never defined in the book, when writing definitions.

$C^\infty$ of series of functions of two variables.

Posted: 03 Jun 2022 10:39 AM PDT

Let $a,b,c,d\in \mathbb R$, and $U:=[a,b]\times (c,d)\subset \mathbb R^2$.

Suppose $f_n : U\to \mathbb R$ is $C^\infty$ for each $n\in \mathbb N.$

I'm considering why the statement below is true.

If $\sum_n \left(\frac{\partial}{\partial x} \right)^r \left(\frac{\partial}{\partial y} \right)^s f_n(x,y)$ uniformly converges on $U$ for all $r,s \in \mathbb Z_{\geqq 0}$, then $f:=\sum_n f_n$ is $C^\infty$ on $U$. $\cdots (\ast)$

My book uses this theorem without describing details, and I wonder why this holds.

What I know is this theorem.

For $\{g_n\}_n$ where $g_n:[a,b]\to \mathbb R$ and $g_n$ is differentiable, if $\sum_n g_n$ uniformly converges on $[a,b]$ and $\sum_n g_n'$ uniformly converges on $[a,b]$, then $\sum_n g_n$ is differentiable and $(\sum_n g_n)'=\sum_n g_n'$.

So if each $g_n$ is $C^\infty$ and $\sum_n g_n^{(k)}$ converges uniformly on $[a,b]$ for all $k$, then $\sum_n g_n$ is differentiable arbitrarily many times, hence $C^\infty$. $\cdots (\bigstar)$

I think the theorem $(\ast)$ is related to the theorem $(\bigstar)$, but $C^\infty$ for a function of one variable is different from $C^\infty$ for a function of two variables, so I don't think I can directly apply $(\bigstar)$ to prove $(\ast)$.

I'm having difficulty finding the reason why $(\ast)$ holds.

Do you have any idea for proving $(\ast)$?

A property on the sum of $x^{\rho}/\rho$ over all non-trivial zeros

Posted: 03 Jun 2022 10:36 AM PDT

Let $\rho=\beta+\gamma i$ denote the nontrivial zeros of the Riemann zeta function. Then, due to the PNT, it should be true that $$\sum_{\rho} \frac{x^{\rho}}{\rho}=A(x)$$ with $A(x) \in \mathbb{R}^+$. This implies that somehow, although all the $\rho$ with $\gamma>0$ are complex, the sum of $\frac{x^{\rho}}{\rho}$ over such values evaluates to a real number, and even diverges as $x \to \infty$. Therefore, we have $$\sum_{\rho} \frac{x^{\rho}}{\rho}=\bigg|\sum_{\rho} \frac{x^{\rho}}{\rho}\bigg|\leq\sum_{\rho} \bigg|\frac{x^{\rho}}{\rho}\bigg|\leq\sum_{\rho} \frac{x^{\beta}}{\rho}<x\sum_{\rho} \frac{1}{\rho}=Cx,$$ for a well-known constant $C\approx0.02$, since $|x^{\rho}|=x^{\beta}$. Is this true? I am having difficulty understanding how complex values play a role in the Prime Number Theorem.

Solving an equation involving roots and powers

Posted: 03 Jun 2022 10:35 AM PDT

I'm trying to solve this equation:

$\sqrt{3}\sqrt{237x^2 + \frac{224}{x^2}}x^7 + \frac{35293}{222}x^8 + \frac{2}{999}\sqrt{3}{(\sqrt{237x^2 + \frac{224}{x^2}})}^3x^5 - \frac{44968}{111}x^4 + \frac{12544}{333}=0$.

It looks pretty complicated to me, but the computer gives me exact solutions. For example, one real solution is $\frac{2}{\sqrt{3}}$, another is $-\frac{2}{3^{\frac{3}{4}}\sqrt[4]{7}}$, and so on.

Does anyone know how to obtain the exact solutions to the above equation? Thanks in advance!
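
As a sanity check (not a derivation), the claimed root can be verified numerically; at $x=2/\sqrt3$ one finds $237x^2+224/x^2=484$, so the square root collapses to $22$ and the terms cancel exactly:

```python
# Verify numerically that x = 2/sqrt(3) is a root (a sanity check only;
# it does not explain how the exact solutions were obtained).
from mpmath import mp, mpf, sqrt

mp.dps = 40
def f(x):
    R = sqrt(237*x**2 + 224/x**2)
    return (sqrt(3)*R*x**7 + mpf(35293)/222*x**8
            + mpf(2)/999*sqrt(3)*R**3*x**5
            - mpf(44968)/111*x**4 + mpf(12544)/333)

print(f(2/sqrt(3)))  # ~1e-39, i.e. zero to working precision
```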

If $\frac 1B=A$ and $\frac 1A=B$, then $A \times B =1$?

Posted: 03 Jun 2022 10:46 AM PDT

If $\frac 1B=A$ and $\frac 1A=B$, then is $A\times B=1$?
When we take $B$ as $6.0220648$ we get $A$ as $1.66056$,
and when we take $A$ as $1.66056$ we get $B$ as $6.0220648$.
So my question comes here: when we multiply $A$ and $B$, i.e. $1.66056 \times 6.0220648$,
we get the answer as $9.999$, which does not satisfy $AB=1$.
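
For what it's worth, the algebra is immediate ($A=1/B$ gives $AB=B\cdot\frac1B=1$ exactly), and the $9.999$ comes from a misplaced decimal point: $1/6.0220648\approx0.166056$, not $1.66056$. A quick check:

```python
# With A defined as the exact reciprocal of B, A*B = 1 by definition.
# The 9.999 in the question comes from a factor-of-10 slip:
# 1/6.0220648 is about 0.166056, not 1.66056.
B = 6.0220648
A = 1 / B
print(A)            # 0.16605654...  (note: NOT 1.66056)
print(A * B)        # 1.0 (up to floating-point rounding)
print(1.66056 * B)  # 9.99999...  -- ten times too big, as in the question
```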

How is the dual frame of a vector bundle defined?

Posted: 03 Jun 2022 10:35 AM PDT

In the article Geometric wave equation by Stefan Waldmann, he has on page 7:

For a chart $(U, x)$ we consider a compact subset $K \subseteq U$ together with a collection $\left\{e_{\alpha}\right\}_{\alpha=1, \ldots, N}$ of local sections $e_{\alpha} \in \Gamma^{\infty}\left(\left.E\right|_{U}\right)$ such that $\left\{e_{\alpha}(p)\right\}_{\alpha=1, \ldots, N}$ is a basis of the fiber $E_{p}$. We always assume that $U$ is sufficiently small or e.g. contractible such that local base sections exist. The collection $\left\{e_{\alpha}\right\}_{\alpha=1, \ldots, N}$ will also be called a local frame. The dual frame will then be denoted by $\left\{e^{\alpha}\right\}_{\alpha=1, \ldots, N}$ where $e^{\alpha} \in \Gamma^{\infty}\left(\left.E^{*}\right|_{U}\right)$ are the local sections with $e^{\alpha}\left(e_{\beta}\right)=\delta_{\beta}^{\alpha}$. For $s \in \Gamma^{\infty}(E)$ we have unique functions $s^{\alpha}=e^{\alpha}(s) \in \mathcal{C}^{\infty}(U)$ such that $$ \left.s\right|_{U}=s^{\alpha} e_{\alpha} $$

Is $\delta_{\beta}^{\alpha}$ a function or a number? If it is a number, how is the operation $e^{\alpha}\left(e_{\beta}\right)$ defined?

Name for (incompressible) Stokes equation with non-symmetric bilinear form acting on velocity space?

Posted: 03 Jun 2022 10:32 AM PDT

I've seen the incompressible Stokes equation written in its weak form as: \begin{equation} 2 \left< A(\mathbf{v}), A(\mathbf{u})\right> - \left<\nabla \cdot \mathbf{v}, p \right> - \left< q, \nabla \cdot \mathbf{u} \right> = \left< \mathbf{v}, \mathbf{f} \right> \end{equation} where $\left<\cdot, \cdot \right>$ denotes the inner product appropriately defined for tensors, vectors, and scalars, $(\mathbf{u}, p)$ is the velocity/pressure solution pair, $(\mathbf{v}, q)$ is the velocity/pressure test pair, $A$ is the symmetric gradient tensor, and $\mathbf{f}$ is some forcing term for the velocity.

Suppose I have an additional term on the left side, given by: \begin{equation} \left<\Omega(\mathbf{v}), A(\mathbf{u}) Q \right> \end{equation} where $\Omega$ is the anti-symmetric gradient, and $Q$ is some symmetric tensor. This added term is still bilinear, but it has lost its symmetry in $\mathbf{v}$ and $\mathbf{u}$. Hence, the methods for numerically solving the Stokes equation no longer apply (e.g. Schur complement with Conjugate Gradient method). I'm hoping to find some references that deal with this kind of equation, and have perhaps established some convergence estimates for numerical solutions. For context, this extra term arises because the fluid in question has internal microstructure.

If $G$ is a finite group and $p$ is the smallest prime number dividing $o(G)$, and $H<G$ satisfies $o(G)/o(H)=p$, prove $H \triangleleft G$

Posted: 03 Jun 2022 10:36 AM PDT

If $G$ is a finite group and $p$ is the smallest prime number dividing $o(G)$, and $H<G$ satisfies $o(G)/o(H)=p$, prove that $H \triangleleft G$.

Verification theorem for non autonomous Hamilton-Jacobi PDE

Posted: 03 Jun 2022 10:31 AM PDT

In my course I have a verification theorem for the non autonomous Hamilton-Jacobi PDE $$ \cases{ u_t+H(D_xu,x) = 0 \\ u(x,0) =g(x) } $$ in $\mathbb R^n \times (0,T)$ and I don't understand a point in the statement. We consider the following candidate $$ v(x,t) = \inf \left\lbrace \int_0^tL(\dot w(s),w(s))ds + g(w(0)): w \in C^1([0,t];\mathbb R^n),\quad w(t) = x\right\rbrace, $$ where $L$ is the non autonomous Lagrangian, which is the Fenchel conjugate of the Hamiltonian: $$ L(\cdot,x) = H(\cdot,x)^* $$ and we assume that $H(\cdot,x)$ is convex superlinear, so that the same holds for the conjugate. Then the theorem goes as follows

Verification theorem: Assume that $u \in C^1(\mathbb R^n \times (0,T))$ solves the Cauchy problem. Then

  1. For all $(x,t) \in \mathbb R^n \times (0,T)$, $u(x,t)\leq v(x,t)$.
  2. We have equality if and only if there is a $w \in C^1([0,t];\mathbb R^n)$ such that $w(t)=x$ and $w$ solves the ODE on $(0,t)$ $$ \dot{w}(s) \cdot D_xu(w(s),s) - L(\dot w(s),w(s)) = H(D_xu(w(s),s),w(s)). $$

My problem is with the second part: if there is such an admissible curve $w$, I see how to prove that the inequality in 1 is an equality, but I don't see how to get the converse: if no admissible curve solves the ODE, then the inequality is strict.

Proof of 1: Take $w$ admissible curve and define $$ \varphi(s) = u(w(s),s) + \int_s^t L(\dot w(\tau),w(\tau)) d\tau, $$ we see that $$ \dot \varphi (s) = D_xu(w(s),s) \cdot \dot w(s) - H(D_xu(w(s),s),w(s)) - L(\dot w(s),w(s)) \leq 0 $$ so that $$ g(w(0)) + \int_0^t L(\dot w(s),w(s))ds = \varphi(0) \geq \varphi(t) = u(x,t). $$ Then taking the infimum over admissible curve yields the result. $\square$

For the converse, I suspect that even though no admissible curve solves the ODE, it might still be possible to reach $u(x,t)$ by taking the infimum. It seems like we are saying that the calculus of variations problem has a solution in $C^1$, and to my knowledge it is not automatic that a minimizer exists or is smooth.

Ordinal notations and the Church-Kleene Ordinal

Posted: 03 Jun 2022 10:28 AM PDT

According to Kleene, an $r$-system is a pair $(S, |\cdot|)$ where $S\subseteq\mathbb{N}$ and $|\cdot|$ maps $S$ to countable ordinals, such that

  1. There is a partial recursive $K(x)$, such that $K(x) = 0$ iff $|x| = 0$, $K(x) = 1$ iff $|x|$ is a successor ordinal and $K(x) = 2$ iff $|x|$ is a limit ordinal.
  2. There is a partial recursive $P(x)$, such that for all $x\in S$ with $K(x) = 1$ we have $P(x)\in S$ with $|P(x)| + 1 = |x|$.
  3. There is a partial recursive $Q(x, t)$, such that for all $x\in S$ with $K(x) = 2$ we have $Q(x, t)\in S$ for all $t$ and $|Q(x, t)|$ is strictly increasing in $t$ with limit $|x|$.

Now the well-known Church-Kleene ordinal is (by definition) the supremum of all ordinals that have a name in some $r$-system.

Say we restrict to those $r$-systems where we additionally require that the set $S$ should be recursive. Then what is the limit of all ordinals that have a name in such a restricted $r$-system?

$a|a|^{-1}$ unitary for $a$ in a unital $C^*$-algebra?

Posted: 03 Jun 2022 10:27 AM PDT

Let $A$ be a unital $C^*$-algebra and $a ∈ A$ arbitrary. I have already shown that for $a ∈ \text{Inv}(A)$, we have $|a| := \sqrt{a^*a} ∈ \text{Inv}(A)$, so we can indeed, for $a ∈ \text{Inv}(A)$, define $u := a|a|^{-1}$ so that $a = u|a|$. (I see that we are essentially 'normalising' the element $a$, insofar as that is meaningful here.)

Now we can just compute:

$$u^{-1} = |a|a^{-1} = \sqrt{a^*a}a^{-1} = (a^*)^{\frac{1}{2}}a^{\frac{1}{2}}a^{-1} = (a^*)^{\frac{1}{2}}a^{-\frac{1}{2}} = \sqrt{a^*a^{-1}} $$ and $$u^* = (a|a|^{-1})^* = (|a|^{-1})^*a^* = \left(\left(\sqrt{a^*a}\right)^{-1}\right)^*a^* = \left((a^*a)^{-\frac{1}{2}}\right)^*a^* = (a^*a)^{-\frac{1}{2}}a^* = a^{-\frac{1}{2}}(a^*)^{\frac{1}{2}} = (a^{-1})^{\frac{1}{2}}(a^*)^{\frac{1}{2}} = \sqrt{a^{-1}a^*} $$

So apparently, if $u$ is to be unitary (i.e., $u^* = u^{-1}$), we must have that $a^{-1}$ commutes with $a^*$, but I see no reason to assume that.
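
For what it's worth, a quick numerical experiment in the concrete $C^*$-algebra of $2\times 2$ complex matrices suggests that $u$ is unitary even when $a^{-1}$ and $a^*$ do not commute, which points to the factorization $\sqrt{a^*a}=(a^*)^{\frac12}a^{\frac12}$ above as the suspect step (it would require $a$ and $a^*$ to commute):

```python
# Sanity check in the C*-algebra of 2x2 complex matrices: for invertible a,
# u = a |a|^{-1} with |a| = (a* a)^{1/2} comes out unitary, even though
# a^{-1} and a* do not commute in general.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))

abs_a = sqrtm(a.conj().T @ a)      # the positive square root of a* a
u = a @ np.linalg.inv(abs_a)

print(np.allclose(u.conj().T @ u, np.eye(2)))  # True: u is unitary
print(np.allclose(np.linalg.inv(a) @ a.conj().T,
                  a.conj().T @ np.linalg.inv(a)))  # False in general
```

(Indeed, directly: $u^*u = |a|^{-1}a^*a|a|^{-1} = |a|^{-1}|a|^2|a|^{-1} = 1$, with no commutativity needed.)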

Euler-Lagrange equation minimal surface/graph

Posted: 03 Jun 2022 10:25 AM PDT

Let $u:[a,b]\to \mathbb{R}$ be differentiable, $$\cal{F}(u):=\int_a^b\sqrt{1+u'(t)^2}dt.$$ Find the $u$ with minimal graph length such that $u(a)=u_0, u(b)=u_1$.

Idea: minimizing. We have $0=\frac{d}{ds}\cal{F}(u+s\phi)|_{s=0}=\int_a^b\frac{u'(t)\, \phi'(t)}{\sqrt{1+u'(t)^2}}dt$, so integration by parts and the fundamental lemma give $$\left(\frac{u'(t)}{\sqrt{1+u'(t)^2}}\right)'=0$$ Now I need to find the $u$ which satisfies the boundary conditions. Since $\frac{u'(t)}{\sqrt{1+u'(t)^2}}$ is constant, I thought $$\frac{u'(t)}{\sqrt{1+u'(t)^2}} \equiv c \Rightarrow u'(t)=\frac{c}{\sqrt{1-c^2}}$$ Integrating gives $$u(t)=\frac{c}{\sqrt{1-c^2}}\cdot t+c_0$$ with $u(a)=\frac{ac}{\sqrt{1-c^2}}+c_0=u_0$ and $u(b)=\frac{bc}{\sqrt{1-c^2}}+c_0=u_1$ to determine $c$ and $c_0$. Correct?

What is meant by a binary operation?

Posted: 03 Jun 2022 10:48 AM PDT

What does it mean for a set $S$ which contains $a$ and $b$ to have an operation $*$ that assigns an element $a*b$?

I think it means, for example, that if $S = \{a,b,c,\ldots\}$ and the set has the operation $*$, then you can use the operation on the objects in $S$ to get $a*b$ and even $b*c$. Is this what is meant by $S\times S\to S$?
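
If a concrete model helps, here is a tiny sketch (my own illustration, using addition modulo 3 as the operation):

```python
# A binary operation on a set S is a function S x S -> S: it takes an
# ordered pair of elements of S and returns an element of S.
S = {0, 1, 2}

def star(a, b):
    return (a + b) % 3  # assigns to each pair (a, b) the element a*b

# Closure: every output lands back in S; that is what "S x S -> S" asserts.
print(all(star(a, b) in S for a in S for b in S))  # True
print(star(1, 2))  # 0, i.e. 1*2 = 0 under this operation
```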

An Incorrect Solution to Hartshorne Ex. III.5.5(c).

Posted: 03 Jun 2022 10:43 AM PDT

Let $X:=\mathbb{P}^r_k$ and $Y$ a closed subscheme of dimension $q$ which is a complete intersection. The reader is asked to show $H^i(Y,\mathcal{O}_Y(n))=0$ for all $0<i<q$ and all $n\in\mathbb{Z}$.

The following fake proof yields too much information, and my suspicion is that the initial step is incorrect. I would like an explanation of what is wrong. I have bolded the part that I suspect is faulty. Indeed, I think the cohomology of $i_*\mathcal{O}_Y(n)$ may not be the cohomology group $H^i(Y,\mathcal{O}_Y(n))$...

Look at the short exact sequence $0\rightarrow \mathscr{I}_Y\rightarrow \mathcal{O}_X\rightarrow i_*\mathcal{O}_Y\rightarrow 0$ where $i:Y\hookrightarrow X$. Now twist this by $n$ to get $0\rightarrow \mathscr{I}_Y(n)\rightarrow \mathcal{O}_X(n)\rightarrow i_*\mathcal{O}_Y(n)\rightarrow 0$. Take the long exact sequence of cohomology,

$$0\rightarrow H^0(X,\mathscr{I}_Y(n))\rightarrow H^0(X,\mathcal{O}_X(n))\rightarrow\dots\rightarrow H^i(X,\mathscr{I}_Y(n))\rightarrow H^i(X,\mathcal{O}_X(n))\rightarrow H^i(Y,\mathcal{O}_Y(n))\rightarrow H^{i+1}(X,\mathscr{I}_Y(n))\rightarrow\dots $$

Now the sheaf $\mathscr{I}_Y(n)$ is quasi-coherent since $\mathscr{I}_Y$ is quasi-coherent and $\mathcal{O}_X(n)$ is invertible. So Serre's Criterion says all higher cohomology vanishes. Since we know $H^i(X,\mathcal{O}_X(n))=0$ for all $0<i<r$, we then obtain $H^i(Y,\mathcal{O}_Y(n))=0$ for all $0<i<r$ as well.

Edit: This method would also give part (a) of the problem for free, but again, it appears faulty.

Showing that a convergent sequence $(a_n)_n$ converges to $a \in (\min_{1 \leq i \leq n} a_i, \max_{1 \leq i \leq n} a_i)$ for some $n$.

Posted: 03 Jun 2022 10:46 AM PDT

Given a real sequence $(a_n)_n$, which converges to a limit $a \in \mathbb{R}$, prove that $$a \in \left(\min_{1 \leq i \leq n} a_i, \max_{1 \leq i \leq n} a_i \right)$$ for some $n$.

In other words, the sequence does not converge to either $\lim_{n \to \infty} \min_{1 \leq i \leq n} a_i$ or $\lim_{n \to \infty} \max_{1 \leq i \leq n} a_i$. Or: the sequence is greater than its limit at least once and smaller than its limit at least once.

Are there any known sufficient conditions for this to hold? Clearly it is necessary (but not sufficient) that infinitely many term differences $a_n-a_{n-1}$ are negative and infinitely many are positive.
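
For instance, any sequence that oscillates around its limit has the property almost immediately; a quick check with $a_n=(-1)^n/n$ (limit $0$):

```python
# For a_n = (-1)^n / n, the limit 0 lies strictly between the min and max of
# the first n terms as soon as one positive and one negative term appear.
a = [(-1)**n / n for n in range(1, 50)]  # a_1 = -1, a_2 = 1/2, ...
for n in (1, 2, 5, 10):
    prefix = a[:n]
    print(n, min(prefix) < 0 < max(prefix))  # False for n=1, True for n>=2
```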

For the sake of elucidation, I'll post here the specific sequence I'm working with. Let $G \in \{2,3,\dots\}$ and $\sum_{g=1}^G \pi^{(g)} = 1$, where $0 \leq \pi^{(g)} \leq 1$ for each $g$. Let $X_1,X_2,\dots$ be iid random variables with categorical distribution $C(\pi^{(1)},\dots,\pi^{(G)})$. Define $n_g = \sum_{i=1}^n I(x_i = g)$ so that $n = \sum_{g =1}^G n_g$ and \begin{equation} \eta_{nij} = \begin{cases} 1 \mbox{ if } x_i \neq x_j \\ -\left(\frac{n-n_g}{n_g-1}\right) \mbox{ otherwise.} \end{cases} \end{equation}

I'm interested in the sequence given by terms of the form $$a_n = \left( \sum_{1 \leq i < j \leq n} \eta_{nij}^2 \right)^{-1} \sum_{I} \eta_{nij} \eta_{ni'j'}$$ where $I = \{ (i,j,i',j') \in \{1,2,\dots,n\}^4 \mid i,j,i',j' \text{ pairwise distinct}\}$.

Given $x=a^b$ and $y=b^a$, find solutions for $a$ and $b$ in terms of $x$ and $y$.

Posted: 03 Jun 2022 10:18 AM PDT

My first approach involved simply taking the product: $$xy=a^bb^a=b^{b\log_b(a)+a}$$ However, I can't really do anything to, for instance, separate the exponent from the base $b$. Additionally, division has basically the same problem, and addition and subtraction are quite messy. Therefore, I settled on exponentiation and logarithms for my second and third approaches. $$x^y=({a^b})^{b^a}=a^{b^{a+1}}$$ $$\log_x(y)=\frac{\ln(y)}{\ln(x)}=\frac{a\ln(b)}{b\ln(a)}=\frac{a}{b}\log_a(b)$$ The third approach looked especially promising in conjunction with the first approach, but there was no way to get past the $b^{(\cdot)}$ barrier in the exponent.

EDIT

I am looking for an algebraic solution that can apply to the real numbers, rather than number-theoretic solutions that apply only to the integers. Thank you.
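
Even if a closed form over the reals proves elusive, the pair $(a,b)$ can at least be recovered numerically from $(x,y)$ by solving the equivalent log system $\ln x = b\ln a$, $\ln y = a\ln b$; a sketch (purely illustrative, not the algebraic solution asked for):

```python
# Numerically invert (x, y) = (a^b, b^a) via the log system
# ln x = b ln a, ln y = a ln b. Illustrative only.
import numpy as np
from scipy.optimize import fsolve

def make_system(x, y):
    def system(v):
        a, b = v
        return [b * np.log(a) - np.log(x),
                a * np.log(b) - np.log(y)]
    return system

x, y = 2.0**3, 3.0**2               # generated from a = 2, b = 3
a, b = fsolve(make_system(x, y), x0=[1.5, 2.5])
print(a, b)                         # approximately 2.0 3.0
```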

How fast can a series of positive rationals converge if we know its limit is rational?

Posted: 03 Jun 2022 10:15 AM PDT

Question: Let $\{x_n\}_{n\ge 0}$ be a sequence of positive reals for which $\sum_{n\ge 0}x_n$ converges. Does there always exist a sequence $\{y_n\}_{n\ge 0}$ of rationals such that $0<y_n<x_n$ for all $n\ge 0$, and $\sum_{n\ge 0}y_n$ converges to a rational number?

If not, can you give a counterexample $\{x_n\}_{n\ge 0}$?


Motivation: Let $\{a_n\}_{n\ge 1}$ and $\{b_n\}_{n\ge 1}$ be sequences of positive integers, and suppose that $\alpha:=\sum_{n\ge1}\frac{a_n}{b_n}$ converges. We'd like to place some conditions on the sequences $\{a_n\}_{n\ge 1}$ and $\{b_n\}_{n\ge 1}$ and conclude that $\alpha$ is irrational by imitating the classical proof that $e$ is irrational.

Suppose that $\alpha = p/q$; then for all $m\ge 1$, we have that $$ 0 < q\cdot \operatorname{lcm}(b_1,\ldots,b_m)\cdot\left(\alpha-\sum_{n=1}^m\frac{a_n}{b_n}\right)=q\cdot\operatorname{lcm}\left(b_1,\ldots,b_m\right)\sum_{n=m+1}^{\infty}\frac{a_n}{b_n} $$ is a positive integer. So if $$ \lim_{m\to \infty}\operatorname{lcm}(b_1,\ldots,b_m)\sum_{n=m+1}^\infty\frac{a_n}{b_n}=0, $$ we obtain a contradiction for $m$ large enough. Hence, $\alpha$ must be irrational in this case.

What we need for this argument to work is for $a_n/b_n$ to decrease fast enough, where 'fast enough' is determined by how quickly $\operatorname{lcm}(b_1,\ldots,b_m)$ grows as a function of $m$. I'm asking whether we really need a second condition besides $a_n/b_n$ decreasing fast.
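
For a concrete instance of 'fast enough', take $a_n=1$ and $b_n=n!$ (so the sum is $e-1$): then $\operatorname{lcm}(b_1,\ldots,b_m)=m!$ and the weighted tails visibly go to $0$, which is the classical proof that $e$ is irrational in this notation. A short check:

```python
# For a_n = 1, b_n = n!, we have lcm(b_1, ..., b_m) = m!, and the weighted
# tail m! * sum_{n > m} 1/n! behaves like 1/(m+1) -> 0, which is exactly the
# condition used above to conclude irrationality.
from math import lcm, factorial
from fractions import Fraction

for m in range(1, 10):
    L = lcm(*(factorial(n) for n in range(1, m + 1)))  # equals m!
    tail = sum(Fraction(1, factorial(n)) for n in range(m + 1, m + 40))
    print(m, float(L * tail))  # roughly 1/(m+1), tending to 0
```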

Is the restriction of analytic functions on smooth manifolds analytic?

Posted: 03 Jun 2022 10:47 AM PDT

Let $M \subset \mathbb{R}^N$ be a smooth manifold. Let $f:\mathbb{R}^N \to \mathbb{R}$ be an analytic function.

I wonder if the restriction $f|_M$ is analytic?

My motivation for writing this question is to understand Whitney's embedding theorem, which states that a differentiable manifold can be smoothly embedded in some $\mathbb{R}^N$. Hence some people claim (without proof) that $M$ admits an analytic structure.

Is the closed form of $\int_0^1\frac{\text{Li}_{2a+1}(x)}{1+x^2}dx$ known in the literature?

Posted: 03 Jun 2022 10:30 AM PDT

Using

$$\text{Li}_{2a+1}(x)-\text{Li}_{2a+1}(1/x)=\frac{i\,\pi}{(2a)!}\ln^{2a}(x)$$ $$-2\sum_{k=0}^a \sum_{j=0}^k \frac{(-{\pi}^2)^{k-j}\eta(2a-2k)}{(2k-2j)!(2j+1)!}\ln^{2j+1}(x)\tag{1}$$

and

$$\int_0^1x^{n-1}\operatorname{Li}_a(x)\mathrm{d}x=(-1)^{a-1}\frac{H_n}{n^a}-\sum_{k=1}^{a-1}(-1)^k\frac{\zeta(a-k+1)}{n^k}\tag{2}$$

We have

$$\int_0^1\frac{\text{Li}_{2a+1}(x)}{1+x^2}dx=(2^{-4a-3}-2^{-2a-3})\pi\zeta(2a+1)+\frac{2a+1}{2}\beta(2a+2)$$ $$+\sum_{k=0}^a \sum_{j=0}^k \frac{(-{\pi}^2)^{k-j}}{(2k-2j)!}\eta(2a-2k) \beta(2j+2);\tag{3}$$


$$\int_0^1\frac{\ln^{2a}(x)\ln(1-x)}{1+x^2}dx=\frac{(2a)!}{2}\ln(2)\beta(2a+1)-\frac{(2a+1)!}{2}\beta(2a+2)$$

$$-(2a)!\sum_{k=0}^a \sum_{j=0}^k \frac{(-{\pi}^2)^{k-j}}{(2k-2j)!}\eta(2a-2k) \beta(2j+2);\tag{4}$$


$$\sum_{n=0}^\infty\frac{(-1)^n H_{2n+1}}{(2n+1)^{2a+1}}=(2^{-4a-3}-2^{-2a-3})\pi\zeta(2a+1)+\frac{2a+1}{2}\beta(2a+2)$$ $$+\sum_{k=0}^a \sum_{j=0}^k \frac{(-{\pi}^2)^{k-j}}{(2k-2j)!}\eta(2a-2k) \beta(2j+2)+\sum_{k=1}^{2a}(-1)^k\zeta(2a-k+2)\beta(k).\tag{5}$$


Proof of $(1)$:

Set $z=-x$ in

$$\text{Li}_{2a+1}(-z)-\text{Li}_{2a+1}(-1/z)=-2\sum_{k=0}^a \frac{\eta(2a-2k)}{(2k+1)!}\ln^{2k+1}(z),$$ we have $$\text{Li}_{2a+1}(x)-\text{Li}_{2a+1}(1/x)=-2\sum_{k=0}^a \frac{\eta(2a-2k)}{(2k+1)!}\ln^{2k+1}(-x)\tag{a}$$

Using the binomial theorem, we have

$$\ln^{2k+1}(-x)=\left(\ln(x)+i\pi\right)^{2k+1}=\sum_{j=0}^{2k+1}\binom{2k+1}{j}(i\pi)^{2k-j+1}\ln^{j}(x)$$ use $$\sum_{j=0}^{2k+1}f(j)=\sum_{j=0}^{k}f(2j+1)+\sum_{j=0}^{k}f(2j),$$

$$\ln^{2k+1}(-x)=i\pi\sum_{j=0}^{k}\binom{2k+1}{2j}(-\pi^2)^{k-j}\ln^{2j}(x)+\sum_{j=0}^{k}\binom{2k+1}{2j+1}(-\pi^2)^{k-j}\ln^{2j+1}(x)$$

Substitute this result in $(a)$,

$$\text{Li}_{2a+1}(x)-\text{Li}_{2a+1}(1/x)=-2i\pi\sum_{k=0}^{a}\sum_{j=0}^{k}\frac{\eta(2a-2k)}{(2k+1)!}\binom{2k+1}{2j}(-\pi^2)^{k-j}\ln^{2j}(x)$$

$$-2\sum_{k=0}^{a}\sum_{j=0}^{k}\frac{\eta(2a-2k)}{(2k+1)!}\binom{2k+1}{2j+1}(-\pi^2)^{k-j}\ln^{2j+1}(x).$$

The first double sum magically simplifies to $\frac{1}{(2a)!}\ln^{2a}(x)$ and this completes the proof.


Proof of $(2)$: Using the definition of the harmonic number,

$$H_n=\int_0^1\frac{1-x^n}{1-x}dx\overset{IBP}{=}-n\int_0^1 x^{n-1}\ln(1-x)dx$$

$$\overset{IBP}{=}n\left(\zeta(2)-\int_0^1 x^{n-1}\text{Li}_2(x)dx\right).$$

Integrating by parts repeatedly gives $(2)$.
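
A quick numerical spot-check of $(2)$ for a few small values (illustrative only):

```python
# Numerically verify identity (2) for a handful of (a, n) pairs using
# mpmath's polylog, zeta and harmonic numbers.
from mpmath import mp, mpf, polylog, quad, zeta, harmonic

mp.dps = 30

def lhs(a, n):
    return quad(lambda x: x**(n - 1) * polylog(a, x), [0, 1])

def rhs(a, n):
    s = (-1)**(a - 1) * harmonic(n) / mpf(n)**a
    s -= sum((-1)**k * zeta(a - k + 1) / mpf(n)**k for k in range(1, a))
    return s

for a in (2, 3, 4):
    for n in (1, 2, 5):
        assert abs(lhs(a, n) - rhs(a, n)) < mpf(10)**(-20)
print("identity (2) checks out numerically")
```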


Proof of $(3)$:

$$\int_0^1\frac{\text{Li}_{2a+1}(x)}{1+x^2}dx=\int_0^\infty\frac{\text{Li}_{2a+1}(x)}{1+x^2}dx-\underbrace{\int_1^\infty\frac{\text{Li}_{2a+1}(x)}{1+x^2}dx}_{x\to 1/x}$$

$$=\int_0^\infty\frac{\text{Li}_{2a+1}(x)}{1+x^2}dx-\int_0^1\frac{\text{Li}_{2a+1}(1/x)}{1+x^2}dx$$ add the integral to both sides

$$2\int_0^1\frac{\text{Li}_{2a+1}(x)}{1+x^2}dx=\int_0^\infty\frac{\text{Li}_{2a+1}(x)}{1+x^2}dx+\int_0^1\frac{\text{Li}_{2a+1}(x)-\text{Li}_{2a+1}(1/x)}{1+x^2}dx$$ $$=I_1+I_2\tag{b}$$

For the first integral, write the integral form of the polylogarithm,

$$I_1=\int_0^\infty\frac{1}{1+x^2}\left(\frac1{(2a)!}\int_0^1 \frac{x\ln^{2a}(y)}{1-xy}dy\right)dx$$

$$=\frac1{(2a)!}\int_0^1 \ln^{2a}(y)\left(\int_0^\infty\frac{x}{(1+x^2)(1-xy)}dx\right)dy$$

$$=\frac1{(2a)!}\int_0^1 \ln^{2a}(y)\left(-\frac{\pi}{2}\frac{y}{1+y^2}-\frac{i\pi}{1+y^2}-\frac{\ln(y)}{1+y^2}\right)dy$$

$$=(4^{-2a-1}-4^{-a-1})\pi\zeta(2a+1)-i\pi\beta(2a+1)+(2a+1)\beta(2a+2).$$

For the second integral, by using $(1)$, we have

$$I_2=\frac{i\,\pi}{(2a)!}\int_0^1\frac{\ln^{2a}(x)}{1+x^2}dx-2\sum_{k=0}^a \sum_{j=0}^k \frac{(-{\pi}^2)^{k-j}\eta(2a-2k)}{(2k-2j)!(2j+1)!}\int_0^1\frac{\ln^{2j+1}(x)}{1+x^2}dx$$

$$=i\,\pi\beta(2a+1)+2\sum_{k=0}^a \sum_{j=0}^k \frac{(-{\pi}^2)^{k-j}}{(2k-2j)!}\eta(2a-2k)\beta(2j+2).$$

Plugging $I_1$ and $I_2$ in $(b)$ completes the proof.


Proof of $(4)$: Again, using the integral form of the polylogarithm, we have

$$\int_0^1\frac{\text{Li}_{2a+1}(x)}{1+x^2}dx=\int_0^1\frac{1}{1+x^2}\left(\frac1{(2a)!}\int_0^1 \frac{x\ln^{2a}(y)}{1-xy}dy\right)dx$$

$$=\frac1{(2a)!}\int_0^1 \ln^{2a}(y)\left(\int_0^1\frac{x}{(1+x^2)(1-xy)}dx\right)dy$$

$$=\frac1{(2a)!}\int_0^1 \ln^{2a}(y)\left(\frac{\ln(2)}{2(1+y^2)}-\frac{\pi\,y}{4(1+y^2)}-\frac{\ln(1-y)}{1+y^2}\right)dy$$

$$=\frac12\ln(2)\beta(2a+1)+(2^{-4a-3}-2^{-2a-3})\pi\zeta(2a+1)-\frac1{(2a)!}\int_0^1\frac{\ln^{2a}(x)\ln(1-x)}{1+x^2}dx.$$

The integral on the LHS is given in $(3)$.


Proof of $(5)$:

$$\int_0^1\frac{\text{Li}_{2a+1}(x)}{1+x^2}dx=\sum_{n=0}^\infty (-1)^n\int_0^1 x^{2n} \text{Li}_{2a+1}(x) dx$$

use the result $(2)$

$$=\sum_{n=0}^\infty (-1)^n\left(\frac{H_{2n+1}}{(2n+1)^{2a+1}}-\sum_{k=1}^{2a}(-1)^k \frac{\zeta(2a-k+2)}{(2n+1)^k}\right)$$

$$=\sum_{n=0}^\infty\frac{(-1)^n H_{2n+1}}{(2n+1)^{2a+1}}-\sum_{k=1}^{2a} (-1)^k \zeta(2a-k+2)\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)^k}$$

$$=\sum_{n=0}^\infty\frac{(-1)^n H_{2n+1}}{(2n+1)^{2a+1}}-\sum_{k=1}^{2a} (-1)^k \zeta(2a-k+2)\beta(k).$$


Question: Are the results $(3)$ to $(5)$ known in the literature? If so, any reference? Thanks.

If you have a filtration in a group does it induce a filtration in a quotient?

Posted: 03 Jun 2022 10:34 AM PDT

Let us assume that a group $G$ has a filtration $\{N_i\}_{i\in \mathbb{N}}$. Let $A$ be a normal subgroup of $G$. I want to know whether it is always the case that $$\bigcap_{i\in \mathbb{N}}N_iA=A.$$

I have not been able to find a counterexample nor a proof. It seems to me that it should be true, but I am not sure how to prove it.

Any help will be appreciated.

Computing quotient group of subgroups of $\mathbb{Z}^6$

Posted: 03 Jun 2022 10:34 AM PDT

I want to compute the isomorphism type of the following abelian quotient group $G/H$:

$$G=\operatorname{span}((1,1,1,1,0,0),(-1,0,-1,0,1,0),(0,-1,1,0,0,1))$$

(by span I mean the span over the integers, since in this whole setting, I am working over the ring of integers.)

$$H=\operatorname{span}((1,1,1,1,0,0),(-1,-1,0,0,1,1)).$$

Does anybody know how to compute this? Is there a calculator for this I can find online? Intuitively I think it should be isomorphic to the integers. Thanks for any help.
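
One standard route is the Smith normal form. Here $H\subseteq G$: the second generator of $H$ is the sum of the second and third generators of $G$, so in the basis $(g_1,g_2,g_3)$ of $G$ the subgroup $H$ is generated by $g_1$ and $g_2+g_3$. Reducing that relation matrix over $\mathbb{Z}$ (a sketch; worth double-checking the basis bookkeeping):

```python
# G is free of rank 3 (its generators are independent: look at the last
# three coordinates). In the basis (g1, g2, g3), H is generated by g1 and
# g2 + g3, so the relation matrix is [[1,0,0],[0,1,1]]. Its Smith normal
# form diag(1, 1) inside a rank-3 lattice gives G/H = Z/1 + Z/1 + Z = Z.
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

rel = Matrix([[1, 0, 0],
              [0, 1, 1]])
print(smith_normal_form(rel, domain=ZZ))  # Matrix([[1, 0, 0], [0, 1, 0]])
```

So $G/H\cong\mathbb{Z}$, matching the intuition above.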

Partial derivative with a sum in the denominator

Posted: 03 Jun 2022 10:46 AM PDT

I have a problem in which my coordinates give $z=\sin \theta$ and $v_z=v_r \sin \theta$, with $v_r=v(r)$. I need to find the following derivative: $$ \frac{\partial v_z}{\partial z }$$ How can I do this? I tried to expand the numerator and the denominator ($dz$ gives a sum). At least I know that the derivative should come out to $\frac{v}{r}\sin^2 \theta + \frac{dv}{dr}\cos^2 \theta$.

Find length of edge $c$ given the lengths of $a, b$ and $r$ (the radius of circle).

Posted: 03 Jun 2022 10:24 AM PDT

I made up a problem and got stuck trying to solve it!

The diagram for the question is here:
Find the length of edge $\enspace\pmb{c}\enspace$ in terms of $\enspace\pmb{a}$,$\enspace\pmb{b},\enspace$ and $\enspace\pmb{r},\enspace$ where $\enspace r\enspace$ is the radius of the circle shown in the diagram.

I think we can find the length of the chord $\enspace c\enspace$ using certain angles and the radius of the circle. I also think $\enspace\cos\left(A+B\right)\enspace$ is the angle relation we need. I want to solve it, but it is too difficult for me.

Derivative of Kronecker product inside a Frobenius norm

Posted: 03 Jun 2022 10:22 AM PDT

I need help finding the derivative with respect to $X$ in the problem below:

$$ \min_X \Vert A - (I \otimes X) \Vert_F^2 $$

where $A $ is a complex matrix, $ I $ is the identity matrix, and $\otimes$ denotes the Kronecker product.

The problem seems to be similar to this question, but the trick of using the 'vec' operator does not work here.
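
For what it's worth, the block structure of $I\otimes X$ makes the problem separable, which suggests (my reasoning, so worth double-checking) that the minimizer is simply the average of the diagonal blocks of $A$:

```python
# ||A - I (x) X||_F^2 splits over the m x m grid of p x q blocks of A: the
# off-diagonal blocks are unaffected by I (x) X, and the diagonal blocks
# contribute sum_i ||A_ii - X||_F^2, minimized by the block average.
import numpy as np

m, p, q = 3, 2, 2
rng = np.random.default_rng(1)
A = rng.standard_normal((m*p, m*q)) + 1j * rng.standard_normal((m*p, m*q))

# candidate minimizer: average of the diagonal p x q blocks of A
X = sum(A[i*p:(i+1)*p, i*q:(i+1)*q] for i in range(m)) / m

def cost(Y):
    return np.linalg.norm(A - np.kron(np.eye(m), Y))**2

# any perturbation should increase the cost
E = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
print(cost(X) <= cost(X + 1e-3 * E))  # True
```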

Definition of map to barycentric subdivision?

Posted: 03 Jun 2022 10:22 AM PDT

I'm reading Rotman's An Introduction to Algebraic Topology, p. 247, and I'm confused by some notation.

[image: excerpt from Rotman, p. 247]

What is the map $\operatorname{Sd} : K \to \operatorname{Sd} K$? Here, $\operatorname{Sd}K$ is the barycentric subdivision.

Note that $\operatorname{Vert}(K) \subset \operatorname{Vert (Sd}K)$ (vertex sets). Let $i$ be the inclusion.

I guess that $\operatorname{Sd}$ means this inclusion. My question is: is this inclusion a simplicial map? That is, whenever $\{ p_0, \cdots , p_q \}$ spans a simplex of $K$, does $\{i(p_0),\cdots, i(p_q) \}$ span a simplex of $\operatorname{Sd}K$?

Why is the unbiased statistic in this example immediately an MVUE?

Posted: 03 Jun 2022 10:18 AM PDT

I am reading "Introduction to Mathematical Statistics" to familiarize myself with sufficient statistics. I got stuck by an example in Sec. 7.3 of the book. Below are page 427 and 428 that relate to my question.

[images: pages 427 and 428 of the text]

My question concerns the last sentence of Example 7.3.1, underlined in red in the screenshot. The preceding steps of the solution find an unbiased statistic $[(n-1)Y_2]/n$ of $\theta$; let's denote it as $Y_3$. In my understanding, next we need to follow Theorem 7.3.1 on page 427 to construct an unbiased estimator $E(Y_3|Y_1)\buildrel \Delta \over =\varphi(Y_1)$. But:

  1. The example does not make any attempt to compute $E(Y_3|Y_1)$,
  2. The theorem requires that the unbiased estimator (here $Y_3$) be "not a function of $Y_1$ alone." But $Y_3$ is a function of sufficient statistic $Y_1$. How to reconcile this violation?
  3. Even if we compute $E(Y_3|Y_1)$ according to Theorem 7.3.1, we only have that its variance is less than or equal to that of $Y_3$. We don't have, however, that this variance is less than or equal to that of any other statistics, which is the definition of MVUE (minimum-variance unbiased estimator). So, why does the last sentence claim immediately that the unbiased statistic $Y_3$ "is an MVUE of $\theta$"?

I tried my best to read the text to fill the gap but failed, so I am asking here for the reasoning behind such an immediate conclusion. If you know this text, you can infer from the fact that I am reading this book that I am not a math major and my knowledge of probability and statistics is at a naïve level, so please provide an accessible answer to my questions. Thank you.

Volume above cone $z = a\sqrt{x^2+y^2}$ and inside sphere $x^2+y^2+z^2=b^2$ [closed]

Posted: 03 Jun 2022 10:44 AM PDT

Evaluate the integral $$\iiint_R(x^2+y^2+z^2)\,dV $$ where $R$ is the region above the cone $z = a\sqrt{x^2+y^2}$ and inside the sphere $x^2+y^2+z^2=b^2$.
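
For reference, in spherical coordinates the region is $0\le\rho\le b$, $0\le\varphi\le\arctan(1/a)$ (taking $a>0$ and measuring $\varphi$ from the positive $z$-axis), so the integral can be evaluated directly; a sympy sketch:

```python
# Evaluate the integral in spherical coordinates: the cone z = a*sqrt(x^2+y^2)
# is the surface phi = atan(1/a), the sphere is rho = b, and the integrand
# x^2 + y^2 + z^2 = rho^2 picks up the Jacobian rho^2 sin(phi).
from sympy import symbols, integrate, sin, atan, pi, simplify

a, b = symbols('a b', positive=True)
rho, phi, theta = symbols('rho phi theta', positive=True)

V = integrate(rho**4 * sin(phi),
              (rho, 0, b), (phi, 0, atan(1/a)), (theta, 0, 2*pi))
print(simplify(V))  # equivalent to 2*pi*b**5/5 * (1 - a/sqrt(a**2 + 1))
```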

No diffeomorphism that takes unit circle to unit square

Posted: 03 Jun 2022 10:27 AM PDT

This is not a homework problem. I am trying to learn on my own from John M. Lee's Introduction to Smooth Manifolds.

In Chapter 3, there is Problem 3-4:

Let $C \subset \mathbb{R}^2$ be the unit circle, and let $S \subset \mathbb{R}^2$ be the boundary of the square of side 2 centred at origin: $S= \lbrace (x,y) \colon \max(|x|,|y|)=1 \rbrace.$ Show that there is a homeomorphism $F:\mathbb{R}^2 \to \mathbb{R}^2$ such that $F(C)=S$, but there is no diffeomorphism with the same property. [Hint: Consider what $F$ does to the tangent vector to a suitable curve in $C$].

I can construct a homeomorphism (by placing the circle inside the square, so that every radial line intersects the square at exactly one point). But I don't know how to do the rest of the problem or how to understand the hint.

I do not know how to write out what the tangent space should be for the square. If there were a diffeomorphism, then $F_\star$ would be an isomorphism between the corresponding tangent spaces. If I show that the tangent space at a corner of the square has dimension zero, would that solve the problem?
