Saturday, February 5, 2022

Recent Questions - Mathematics Stack Exchange



Determine whether the equation describes an ellipse.

Posted: 05 Feb 2022 08:19 AM PST

I couldn't figure out a way to solve this problem.

Suppose we have a curve in the plane, constructed in the following parametrised form:

$x = m \cos(\omega t); \quad y = n \cos(\omega t-\phi)$

Is this an ellipse for all real values of $m$, $n$, $\omega$, and $\phi$? If not, how can we prove that these equations do describe an ellipse, barring the set of values of these constants for which they don't? Many thanks.
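For context, here is one standard way to eliminate $t$ (my own sketch, not part of the original post, and it assumes $m,n\neq0$). Writing $y/n=\cos\omega t\cos\phi+\sin\omega t\sin\phi$ and substituting $\cos\omega t = x/m$ gives, for $\sin\phi\neq0$, $$\frac{x^2}{m^2} - \frac{2\cos\phi}{mn}\,xy + \frac{y^2}{n^2} = \sin^2\phi,$$ a conic whose discriminant $B^2-4AC = -4\sin^2\phi/(m^2n^2)$ is negative, hence an ellipse; when $\sin\phi=0$ the locus degenerates to a segment of a line.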

How is $xy \in (x^2,y)$?

Posted: 05 Feb 2022 08:19 AM PST

If $F$ is a field, the ideal $(x,y)$ is maximal in $F[x,y]$ and therefore prime. Furthermore $(x,y)^2 \subsetneq (x^2,xy,y^2) \subsetneq (x^2,y) \subsetneq (x,y)$.

Question: How is $(x^2,xy,y^2) \subset (x^2,y)$, i.e. how is $xy \in (x^2,y)$?
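For what it's worth, a one-line observation (mine, not from the post): the membership already follows from the second generator, since $$xy = x\cdot y \in (y) \subseteq (x^2,y).$$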

Picard iteration to a first order equation involving a maximum

Posted: 05 Feb 2022 08:19 AM PST

I am working on the following problem in ordinary differential equations:

Apply the Picard iteration to the first-order equation

\begin{align*} \dot{x} = 2t - 2\sqrt{\max(0,x)}, \hspace{1cm} x(0)=0. \end{align*}

Does it converge?

Here is my solution so far:

Picard iteration yields approximate solutions $x_n(t) = x_0 + \int_{t_0}^t f(s,x_{n-1}(s))\,ds$ for $x(t)$, where $t_0 = 0$, $x_0 = 0$, and $f(t,x) = 2t - 2\sqrt{\max(0,x)}$. With this, we have

\begin{align*} x_1(t) &= 0 + \int_0^t (2s - 2\sqrt{\max(0,0)})\,ds = 0 + t^2 = t^2 \\ x_2(t) &= 0 + \int_0^t (2s - 2\sqrt{\max(0,s^2)})\,ds = 0 + t^2 - t^2 = 0 \\ x_3(t) &= 0 + \int_0^t (2s - 2\sqrt{\max(0,0)})\,ds = 0 + t^2 = t^2 \\ &\;\;\vdots \\ x_n(t) &= \begin{cases} t^2 & \text{if $n$ is odd}, \\ 0 & \text{if $n$ is even}. \end{cases} \end{align*}

Is this correct? I'm wary because I'm not sure how to obtain the exact solution $x(t) = \lim_{n \rightarrow \infty} x_n(t)$ from here. Is it the case here that, since $\lim_{n \rightarrow \infty} x_n(t)$ does not exist, the Picard iteration does not converge, hence there is no exact solution?

Thanks!
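As a sanity check, here is a small sympy sketch of the iteration (mine; the names are illustrative). It reproduces the alternation between $t^2$ and $0$:

    import sympy as sp

    t, s = sp.symbols("t s", nonnegative=True)

    def picard_step(x_prev):
        # x_n(t) = x_0 + integral from 0 to t of f(s, x_{n-1}(s)) ds,
        # with f(t, x) = 2t - 2*sqrt(max(0, x)) and x_0 = 0.
        integrand = 2*s - 2*sp.sqrt(sp.Max(0, x_prev.subs(t, s)))
        return sp.integrate(integrand, (s, 0, t))

    x = sp.Integer(0)  # x_0
    for n in range(1, 6):
        x = sp.simplify(picard_step(x))
        print(n, x)  # prints t**2, 0, t**2, 0, ... alternately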

Transitive Lie group action on the Hantzsche-Wendt Manifold

Posted: 05 Feb 2022 08:17 AM PST

Does there exist a smooth transitive action of a (finite-dimensional) Lie group $ G $ on the Hantzsche-Wendt manifold?

In other words, does there exist a Lie group $ G $ and a closed subgroup $ H $ such that $ G/H $ is diffeomorphic to the Hantzsche-Wendt manifold?

If such a transitive action exists, my guess is that the group $ G $ is the Euclidean group $ E_3 $ or some subgroup of $ E_3 $. Note that $ G $ must be noncompact, as all 3-manifolds admitting a transitive action by a compact group are already given here:

https://math.stackexchange.com/a/4364430/758507

Also, the group must have dimension at least 4, since all manifolds which are the quotient of a three-dimensional Lie group by a cocompact lattice are given here:

https://www.sciencedirect.com/science/article/pii/0166864181900183

Finally, here is some general background:

The Hantzsche-Wendt manifold $ M $ is a compact connected flat orientable 3-manifold.

Like all compact flat manifolds, it is normally covered by a torus, in this case $ T^3 $. Moreover (like all flat manifolds) it is aspherical, so it is determined by its fundamental group, which is presented in https://arxiv.org/abs/math/0311476 as $$ \pi_1(M) \cong \langle X,Y \mid X=Y^2XY^2,\; Y=X^2YX^2\rangle, $$ where $ X,Y,Z=(XY)^{-1} $ are the generating screw motions which square to the translations $ t_1= X^2, t_2=Y^2, t_3=Z^2 $ given in Wolf, Theorem 3.5.5. Since $ M $ is compact and flat, $ \pi_1 $ is a Bieberbach group; indeed it fits into the short exact sequence $$ 1 \to \mathbb{Z}^3 \to \pi_1(M) \to C_2 \times C_2 \to 1, $$ so $ M $ has holonomy $ C_2 \times C_2 $. Abelianizing $ \pi_1 $, we can see that the first homology is $$ H_1(M,\mathbb{Z})\cong C_4 \times C_4. $$ From the short exact sequence above we can see that $ \pi_1 $ is virtually abelian and solvable.
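As a quick check of the homology claim (my computation, not from the sources cited above): abelianizing, the relations $X=Y^2XY^2$ and $Y=X^2YX^2$ become, in additive notation, $$x = x + 4y, \qquad y = y + 4x \quad\Longrightarrow\quad 4x = 4y = 0,$$ so $H_1(M,\mathbb{Z}) \cong \langle x,y \mid 4x=4y=0 \rangle \cong C_4 \times C_4$, as stated.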

Does the general definition of nondegenerate bilinear form apply to both slots?

Posted: 05 Feb 2022 08:16 AM PST

According to Wikipedia, a bilinear form $B$ on a vector space $V$ is nondegenerate if the map

$$B^\flat:V\ni x\mapsto B(x,\cdot)\in V^*$$

is an isomorphism. (This is equivalent to the more common definition only if $V$ is finite dimensional.) It seems natural to also consider whether the map

$$B^\flat{'}:V\ni y\mapsto B(\cdot,y)\in V^*$$

is an isomorphism. If $B$ is not assumed to be symmetric, are these two conditions equivalent? Please give a proof or counterexample.


We only need to prove that the first condition implies the second, since the reverse would then hold by a similar argument. I think I've managed to prove injectivity, but not surjectivity, of $B^\flat{'}$. Suppose $B^\flat{'}(y)=0$ for some $y\in V$. Then $B(x,y)=0$ for all $x\in V$, so by the surjectivity of $B^\flat$, $y$ is in the kernel of every linear functional $f\in V^*$. This implies $y=0$, as otherwise there is always some linear functional $f$ such that $f(y)=1$.
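A remark on the finite-dimensional case (mine; it does not settle the infinite-dimensional question): if $M_{ij}=B(e_i,e_j)$ is the Gram matrix in a basis $e_1,\dots,e_n$, then $B^\flat$ and $B^\flat{'}$ are represented by $M$ and $M^T$, so $$B^\flat \text{ bijective} \iff \det M \neq 0 \iff \det M^T \neq 0 \iff B^\flat{'} \text{ bijective}.$$ Any counterexample to the equivalence must therefore be infinite dimensional.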

Questions about exchange of the order of double sum

Posted: 05 Feb 2022 08:14 AM PST

I have learned about series before, but I have not built a systematic understanding of when the order of summation can be exchanged. Here is the question.

We have 4 scenarios.

a. For $\sum_{i=1}^{n} \sum_{j=1}^{m}a_{ij}$, with both sums finite, we can exchange the order of summation easily.

But how about the other scenarios? Under what circumstances or satisfy what conditions can we change the order of summation?

b. $\sum_{i=1}^{n} \sum_{j=1}^{\infty}a_{ij}$

c. $\sum_{i=1}^{\infty} \sum_{j=1}^{m}a_{ij}$

d. $\sum_{i=1}^{\infty} \sum_{j=1}^{\infty}a_{ij}$

To make it more complicated, when we have an expectation $E$ (as when calculating expectations), can we exchange the order of summation and expectation, since expectation is in essence a summation?

a. $E \sum_{j=1}^{m}a_{j}$

$E \sum_{j=1}^{\infty}a_{j}$

$\sum_{j=1}^{\infty}Ea_{j}$; these are trivial.

b. $E \sum_{i=1}^{n}\sum_{j=1}^{\infty}a_{ij}$

c. $\sum_{i=1}^{\infty} E[a_{ij}]$

d. $E[\sum_{i=1}^{\infty} \sum_{j=1}^{\infty}a_{ij}]$. I'd appreciate your helping hands.
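For reference, the sufficient conditions I would reach for (a sketch, not a complete answer): in the doubly infinite case (d), Tonelli and Fubini give $$a_{ij}\ge 0 \ \text{ for all } i,j, \quad\text{or}\quad \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}|a_{ij}|<\infty \quad\Longrightarrow\quad \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}a_{ij}=\sum_{j=1}^{\infty}\sum_{i=1}^{\infty}a_{ij}.$$ The mixed cases (b) and (c) only require moving a finite sum past a limit, which is always valid provided each inner series converges. The expectation versions are the same statements with one of the sums replaced by the integral against the underlying probability measure.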

Why is $\sin x=\sin(180^\circ-x)$? I understand this is basic, but the answers I'm seeing just don't clear it up for me

Posted: 05 Feb 2022 08:17 AM PST

Picture

So even in this drawing, which is how I see it explained, how does that explain it? Both triangles have an angle $\theta$, and so the height of the triangles is going to be $\sin\theta$. How does this explain why $\sin x=\sin(180^\circ-x)$?
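One algebraic way to see it (a standard identity, offered as an aside): by the angle-subtraction formula, $$\sin(180^\circ-x) = \sin 180^\circ \cos x - \cos 180^\circ \sin x = 0\cdot\cos x - (-1)\cdot\sin x = \sin x.$$ Geometrically, on the unit circle the angles $x$ and $180^\circ-x$ sit at mirror-image points across the vertical axis, and $\sin$ reads off the height of those points, which is the same for both.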

Is there an algebraic group in this cyclic sequence of numbers?

Posted: 05 Feb 2022 08:06 AM PST

Introduction. Take a function like $f(x) = (x^2 + 1) \bmod{1763}$. Define an initial value such as $x_0 = 0$. Iterate $f$ so that we get the sequence $0, 43, 215, 817, 430, 1376, [1634, 1591, 1032, 387]$, where the brackets denote a cycle in the sequence. Sequences such as this one must always eventually enter a cycle.
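Here is a small sketch (mine; the function name is illustrative) of how such orbits can be computed. Note that the orbit it prints for $f(x) = (x^2+1) \bmod 1763$ starting at $0$ may differ from the numbers quoted above, so the example is worth re-deriving:

    def orbit(f, x0):
        # Iterate f from x0 until a value repeats; split into pre-period and cycle.
        seen = {}
        seq = []
        x = x0
        while x not in seen:
            seen[x] = len(seq)
            seq.append(x)
            x = f(x)
        i = seen[x]  # index where the cycle begins
        return seq[:i], seq[i:]

    tail, cycle = orbit(lambda x: (x * x + 1) % 1763, 0)
    print("tail:", tail)
    print("cycle:", cycle)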

Question. Can we form a group at all under such sequences? What would the operation be?

Attempted solution. My first impression is that we cannot form a group. The typical identity $1$ for modular arithmetic is not in the cycle. Perhaps we could take any element of the cycle as an identity, but group identities are unique, and this approach would make every element of the cycle an identity. I can't see how we can take an element of the cycle and somehow make it a unique identity under any operation whatsoever, so I don't think there's any possibility of forming a (cyclic) group with this cycle. If the group were not cyclic, that would seem very strange, because the very creation of the sequence is through a cyclic process, so I'm not seeing any possibility there either. I'm looking for an approach that can produce a formal argument.

Disclaimer. This is not homework. I'm just investigating such sequences and seeing what sort of theories could teach me more about them.

If $||\nabla f_1(x,y)||\leq a$ and $||\nabla f_2(x,y)||\leq a$, then $f$ is a contraction

Posted: 05 Feb 2022 08:07 AM PST

I'm having some trouble with the following exercise:

Let $f:\mathbb R^2 \to \mathbb R^2$ be a function of class $C^1$ and suppose that there exists $a\in(0,\frac {\sqrt2} 2)$ such that $$\forall (x,y) \in \mathbb R^2, ||\nabla f_1(x,y)||\leq a, ||\nabla f_2(x,y)||\leq a$$ where $f_1$ and $f_2$ are the components of the function $f$. Prove that $f$ is a contraction.

I was able to prove that for any $X,Y\in \mathbb R^2$, then: $$|f_i(Y)-f_i(X)|\leq a||Y-X||$$ for $i=1,2$.

After this I did the following: $$||f(Y) - f(X)||=||(f_1(Y) - f_1(X), f_2(Y)-f_2(X))||$$ $$\leq |f_1(Y) - f_1(X)| + |f_2(Y) - f_2(X)|$$ $$\leq 2a||Y-X|| \leq\sqrt 2 ||Y-X||$$

And I'm not able to conclude that $f$ is a contraction. How can this be done?
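A possible fix (my suggestion): compare squared norms rather than using the triangle inequality. Since $$\|f(Y)-f(X)\|^2 = |f_1(Y)-f_1(X)|^2 + |f_2(Y)-f_2(X)|^2 \le 2a^2\|Y-X\|^2,$$ we get $\|f(Y)-f(X)\| \le \sqrt{2}\,a\,\|Y-X\|$, and $\sqrt{2}\,a<1$ precisely because $a<\frac{\sqrt2}{2}$.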

Definition of the spaces $c_{00}$ and $c_0$

Posted: 05 Feb 2022 08:19 AM PST

I am given the following chain of subspaces:

Let $S$ be a set. Then we have $c_{00}(S)\subseteq\ell^1(S)\subset c_0(S)\subset \ell^\infty(S)$. The context is reflexivity and the following theorem is proven:

There is an isometric isomorphism $\Phi:\ell^1(S)\to c_0(S)^\ast$ given by $\Phi(f)(g)=\sum_{s\in S} f(s)g(s)$ for all $f\in\ell^1(S)$ and $g\in c_0(S)$.

I have tried to find the notation in several books on functional analysis. $c_0(S)$ seems to be the set of sequences converging to $0$.

And $c_{00}(S)$ should be the set of convergent sequences.

Can you confirm this, or give a reference? I have also looked at this question: $c_0, \ell^1,\ell^\infty$ and their Dual Spaces: Rudin's RCA, Problem $5.9$

And tried to find the notation in Rudin's Book, but it is not listed in the symbol reference.

Thanks in advance.
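For comparison, the convention I have seen in functional analysis texts (offered tentatively, since notation varies by author): $$c_{00}(S)=\{f:S\to\mathbb{F} \mid f(s)\neq0 \text{ for only finitely many } s\},\qquad c_0(S)=\{f:S\to\mathbb{F} \mid \{s:|f(s)|\ge\varepsilon\} \text{ is finite for every } \varepsilon>0\}.$$ For $S=\mathbb{N}$ these reduce to the finitely supported sequences and the sequences converging to $0$, respectively, while the convergent sequences are usually denoted simply $c$.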

Spivak Chapter 6, Question 1.4

Posted: 05 Feb 2022 08:09 AM PST

Spivak's Chapter 6, Question 1 asks: for which of the following functions $f$ is there a continuous function $F$ with domain $\mathbb{R}$ such that $F(x) = f(x)$ for all $x$ in the domain of $f$? Part (iv) asks us to consider the function $f(x) = 1/q$ for rational $x = p/q$ in lowest terms.

The answer book says: No, because it would have to be that $F(x) = 0$ for irrational $x$, but then $F$ would not be continuous at rational numbers. I understand the second part (that $F$ would not be continuous at rational $x$ if $F(x) = 0$ for irrational $x$). But I don't know how to prove the first part. If there were an $F$ such that $F(x) = f(x)$ for all $x$ in the domain of $f$, why does it follow that $F(x) = 0$ for irrational $x$?
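One way to see the first part (my sketch): if $x$ is irrational and $p_n/q_n \to x$ is any sequence of rationals in lowest terms, then $q_n \to \infty$, because fractions with bounded denominator form a discrete set and so cannot converge to an irrational number. Hence any continuous extension satisfies $$F(x) = \lim_{n\to\infty} F\!\left(\frac{p_n}{q_n}\right) = \lim_{n\to\infty} \frac{1}{q_n} = 0,$$ which forces $F(x)=0$ at every irrational $x$.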

What would the negation of these two statements be?

Posted: 05 Feb 2022 08:18 AM PST

I need to negate these two statements. I believe that I have the quantifiers correct, but I'm not completely sure how to negate the mathematical statements. I think I would keep the expressions before the equals signs the same, but I'm still unsure how to negate the last parts.

Here are the statements I want to negate:

Statement 1: $$\exists x,y\in\mathbb{R}_{\geq0}, \sqrt{9x^2+y^2}\neq 3x+y.$$

Statement 2: $$\forall x\in\mathbb{R}_{\geq0},\sqrt{25x^2+9}=5x+3.$$
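For what it's worth, mechanically swapping quantifiers and negating the final predicate would give (my reading): $$\neg(\text{Statement 1}):\ \forall x,y\in\mathbb{R}_{\geq0},\ \sqrt{9x^2+y^2}= 3x+y, \qquad \neg(\text{Statement 2}):\ \exists x\in\mathbb{R}_{\geq0},\ \sqrt{25x^2+9}\neq 5x+3;$$ the expressions before the (in)equality are untouched, only the quantifiers and the relation symbol flip.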

Distribution of $(X,XY)$ with $X$ a standard Gaussian and $Y$ a Rademacher random variable

Posted: 05 Feb 2022 08:11 AM PST

Let $X$ be a standard Gaussian rv and $Y$ a Rademacher rv such that $X$ and $Y$ are independent. It is easy to show that $Z=XY$ has a standard Gaussian distribution. Find the distribution of $(X,Z)$ and deduce the distribution of $Z$ given $X$.

The obvious approach here seems to be to calculate $\mathbb P(X\le x,Z\le z)$ for $(x,z)\in\mathbb R^2$, but since $X$ and $Z$ aren't independent, I don't really see how to proceed.

As for the second part of the question, if we have the joint distribution $F_{X,Z}$ then it is quite easy to get $F_{Z|X}$, but I don't know what $F_{X,Z}$ is.
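A possible starting point (my sketch): condition on $Y$, which is independent of $X$, so that $$\mathbb P(X\le x, Z\le z)=\tfrac12\,\mathbb P(X\le x,\ X\le z)+\tfrac12\,\mathbb P(X\le x,\ -X\le z) =\tfrac12\,\Phi(\min(x,z))+\tfrac12\max\big(\Phi(x)-\Phi(-z),\,0\big),$$ where $\Phi$ is the standard normal cdf. In particular, given $X=x$, $Z$ takes the values $x$ and $-x$ with probability $\tfrac12$ each.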

Find $k>0$ such that two permutations are conjugate and differ from the identity permutation

Posted: 05 Feb 2022 08:10 AM PST

I am given two permutations in $S_{10}$: $$\alpha = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ 5 & 4 & 8 & 6 & 7 & 10 & 1 & 9 & 3 & 2 \end{pmatrix}$$

$$\beta = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ 4 & 3 & 7 & 5 & 6 & 9 & 8 & 2 & 10 & 1 \end{pmatrix}$$

and my question is: how do I find $k>0$ such that $\alpha ^k$ and $\beta ^k$ are conjugate and are not the identity permutation? I tried doing it, and I know that the cycle types should be the same; I find that $k=2$ gives us the same cycle types, and so does $k=4$, but I am not sure if I am correct or totally wrong.

I know that two permutations $\sigma,\sigma'\in S_n$ are conjugate if there exists $\tau\in S_n$ such that $\sigma'=\tau\sigma\tau^{-1}$, and that conjugation relabels cycles: $\tau(a_0\,a_1\dots a_k)\tau^{-1}=(\tau(a_0)\,\tau(a_1)\dots\tau(a_k))$.

Is there an easy way to find $k$ and $\tau$ that works in this case?
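A quick computational check (my sketch; it uses the fact that conjugacy in $S_n$ is equivalent to having the same cycle type):

    def cycle_type(perm):
        # perm maps i to perm[i-1] on {1, ..., n}; return the sorted cycle lengths.
        n = len(perm)
        seen, lengths = set(), []
        for start in range(1, n + 1):
            if start in seen:
                continue
            length, x = 0, start
            while x not in seen:
                seen.add(x)
                x = perm[x - 1]
                length += 1
            lengths.append(length)
        return sorted(lengths)

    def power(perm, k):
        # k-th power of the permutation (apply perm k times).
        result = list(range(1, len(perm) + 1))
        for _ in range(k):
            result = [perm[r - 1] for r in result]
        return result

    alpha = [5, 4, 8, 6, 7, 10, 1, 9, 3, 2]
    beta = [4, 3, 7, 5, 6, 9, 8, 2, 10, 1]
    for k in range(1, 13):
        ca, cb = cycle_type(power(alpha, k)), cycle_type(power(beta, k))
        if ca == cb and ca != [1] * 10:
            print(k, ca)  # matching non-identity cycle types

By hand: $\alpha$ has cycle type $(3,3,4)$ and $\beta$ has type $(4,6)$, so squaring turns both into type $(2,2,3,3)$, confirming that $k=2$ works.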

Permutation and Combinations Help Please

Posted: 05 Feb 2022 08:12 AM PST

You have $\{a,b,c\}$ as your character set. You need to create words of 10 characters, each chosen from the character set. Out of all the possible arrangements, how many words have exactly 6 "a"s?

My logic: We need 6 "a"s in our words. The remaining 4 characters must come from $\{b,c\}$. Final answer = $\frac{2C1 \cdot 4 \cdot 10!}{6!}$. The $6!$ comes from the 6 duplicate "a"s in our words. Is my logic fine?
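For comparison, a brute-force count (my sketch) against the standard formula $\binom{10}{6}\cdot 2^4 = 3360$, which chooses the $6$ positions for "a" and then fills each of the remaining $4$ positions from $\{b,c\}$:

    from itertools import product
    from math import comb

    brute = sum(1 for w in product("abc", repeat=10) if w.count("a") == 6)
    print(brute, comb(10, 6) * 2**4)  # both print 3360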

What is the name of this theorem about transformations from a finite to an infinite "volume"?

Posted: 05 Feb 2022 08:12 AM PST

This is a quote from the physicist Harald Lesch:

In der Mathematik gibt es einen Beweis der ist unglaublich schwierig zu erklären, aber in seinen Konsequenzen von extrem großer Bedeutung. Es geht um die Frage: Gibt es unendlich überhaupt in der Natur? Und in der Mathematik kann man sich die Frage stellen: Wenn ich ein endliches Volumen habe, also ein Volumen das klar begrenzt ist, kann sich das durch irgendeine mathematische Transformation in ein unendliches verwandeln? Die Aussage ist nein. Das bedeutet, wenn wir über Unendlichkeiten sprechen, dann waren die immer schon unendlich.

In mathematics there is a proof that is incredibly difficult to explain, but in its consequences of extremely great importance. It's about the question: Is there anything infinite at all in nature? And in mathematics you can ask the question: If I have a finite volume, i.e. a volume that is clearly delimited, can it be converted into an infinite one by some mathematical transformation? The statement is no, it cannot. This means that when we talk about infinities, they have always been infinite.

The statement is quite vague since it is addressed to a general audience. What is he referring to?

Feel free to edit the tags.

How to prove that a given linear map is a contraction

Posted: 05 Feb 2022 08:15 AM PST

In an exercise, a teacher asked me to show that a given linear transformation from $\mathbb R^2$ to itself was a contraction. The matrix of the transformation with respect to the canonical basis is the following: $$\left( \begin{matrix} \frac{1}{2}\sin(x_0) & \frac{1}{2}\\ \frac{1}{2}& 0 \end{matrix} \right)$$

for some constant $x_0$. I noticed that the determinant of this matrix is $-1/4$, which means that the transformation scales all areas by a factor of $1/4$, but this is not enough to prove that $f$ is a contraction.

I tried setting $x=(x_1,x_2)$ and $y=(y_1,y_2)$ and directly proving that $||f(x)-f(y)||\leq L||x-y||$ for some $L\in (0,1)$, but I wasn't able to do so. How can this be solved?
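One route that seems to work (my sketch): bound the operator norm by the Frobenius norm. For any matrix $A$ one has $\|Ax\| \le \|A\|_F \|x\|$ by applying Cauchy-Schwarz row by row, and here $$\|A\|_F^2 = \tfrac14\sin^2(x_0) + \tfrac14 + \tfrac14 \le \tfrac34,$$ so $\|f(x)-f(y)\| = \|A(x-y)\| \le \tfrac{\sqrt3}{2}\|x-y\|$, a contraction with constant $L=\tfrac{\sqrt3}{2}<1$.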

Every Hermitian operator with a large nullspace is the direct sum of positive and negative operators with large nullspaces

Posted: 05 Feb 2022 08:12 AM PST

In this paper titled Commutators of Operators by Paul R. Halmos, the following result is used to conclude the proof of Lemma $2$. Please help me with a (possibly elementary) proof of the same.

"It suffices to note that every Hermitian operator is the direct sum of a positive and a negative operator, and, in case the original operator has a large null space, then the direct summands can be selected so that they too have that property."

In this context, "large" means (as discussed in this post also):

A subspace $H$ of a Hilbert space is large if $H$ contains infinitely many orthogonal copies of its orthogonal complement, or, in other words, if $\dim H \ge \aleph_0 \dim (H^\perp)$.

In this post, it is established that if $A \in \mathcal B(\mathcal H)$ is Hermitian, then there exist positive operators $P_1, P_2\in \mathcal B(\mathcal H)$ such that $A=P_1-P_2$ and $P_1P_2=0$. I think this may be related; however, the notion of a direct sum of operators is different from the usual sum (see this definition). So, we want to find Hilbert subspaces $\mathcal H_1,\mathcal H_2 \subset \mathcal H$ such that $\mathcal H = \mathcal H_1\oplus \mathcal H_2$, $P\in \mathcal B(\mathcal H_1)$ is positive, $N\in \mathcal B(\mathcal H_2)$ is negative, and $A = P \oplus N$. Further, we want to show that if $\ker A$ is large, then $P,N$ can be chosen such that $\ker P \subset \mathcal H_1$ and $\ker N\subset \mathcal H_2$ are also large subspaces.

How should I proceed to prove this seemingly complicated result? Any help would be appreciated. Thanks!

Closed convex hull in the space of probability measures

Posted: 05 Feb 2022 08:18 AM PST

In the book Gradient flows: In Metric spaces and in the space of probability measures of Ambrosio, Gigli and Savaré, one can read (Remark 5.1.2) that for $\mathcal K\subset \mathcal P(X)$, one has: $$\mu\in\overline{\text{Conv}}(\mathcal K)\Leftrightarrow \int_Xfd\mu\leq\sup_{\nu\in\mathcal K}\int_Xfd\nu,\quad \forall f\in C^0_b(X),$$ where $C^0_b(X)$ is the space of bounded continuous functions $f:X\to\mathbf R$.

The authors claim that this comes from the Hahn-Banach theorem, but I do not see why.

Can you help me? For information, I understand the beginning of Remark 5.1.2, namely that narrow convergence is induced by the weak* topology of $(C_b^0(X))'$.
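A sketch of how I would expect the argument to go (please double-check): in a locally convex space, Hahn-Banach separation shows that a closed convex set is the intersection of the closed half-spaces containing it. Applying this in $(C_b^0(X))'$ with the weak* topology, whose continuous linear functionals are exactly $\nu\mapsto\int_X f\,d\nu$ for $f\in C_b^0(X)$, gives $$\mu\in\overline{\text{Conv}}(\mathcal K) \iff \int_X f\,d\mu \le \sup_{\nu\in\text{Conv}(\mathcal K)}\int_X f\,d\nu = \sup_{\nu\in\mathcal K}\int_X f\,d\nu \quad \forall f\in C_b^0(X),$$ the last equality holding because a linear functional has the same supremum over a set and over its convex hull.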

$A,B \in M_3(\mathbb{R}) $ such that $A,B$ are symmetric. Find all values of $k\in \mathbb{N}$ such that $A^k , B^k$ are congruent.

Posted: 05 Feb 2022 08:19 AM PST

$A,B \in M_3(\mathbb{R}) $ such that $A,B$ are symmetric.

$P_A(x)=x^3-x$ (the characteristic polynomial of $A$).

$P_B(y)=y^3-3y^2+2y$ (the characteristic polynomial of $B$).

Find all values of $k\in \mathbb{N}$ such that $A^k , B^k$ are congruent.

My solution :

$P_A(x)=x^3-x=0 \implies x_{1}=0,x_{2}=1, x_{3}=-1$.

$P_B(y)=y^3-3y^2+2y=0 \implies y_{1}=0,y_{2}=1, y_{3}=2$

$A,B$ are symmetric $\implies A,B$ are diagonalizable matrices.

Then, by the spectral theorem, $\exists P_1,P_2 \in M_3(\mathbb{R})$ orthogonal (in particular nonsingular, with $P_i^t=P_i^{-1}$) such that $P_1^tAP_1=diag\{0,1,-1\}, P_2^tBP_2=diag\{0,1,2\}$.

For odd $k$ we get that $(P_1^tAP_1)^k=P_1^tA^kP_1=diag\{0^k,1^k,(-1)^k\}=diag\{0,1,-1\}$ (using $P_1^t=P_1^{-1}$),

and $(P_2^tBP_2)^k=P_2^tB^kP_2=diag\{0^k,1^k,2^k\}=diag\{0,1,2^k\}$.

The signatures are different, so $A^k , B^k$ are not congruent for odd $k$.

For even $k$ we get that $(P_1^tAP_1)^k=P_1^tA^kP_1=diag\{0^k,1^k,(-1)^k\}=diag\{0,1,1\}$,

and $(P_2^tBP_2)^k=P_2^tB^kP_2=diag\{0^k,1^k,2^k\}=diag\{0,1,2^k\}$.

The signatures are equal, so $A^k, B^k$ are congruent for even $k$.

Is my solution correct?

Feedback is welcome !

Thanks !

Continuity of this function as $(x,y)$ tends to $(0,0)$

Posted: 05 Feb 2022 08:13 AM PST

Here's a function in $x$ and $y$ defined piecewise as $$f(x,y)= \left\{\begin{array}{ll} 0 & (x,y)=(0,0)\\ \frac{x^2y}{x^4+y^2} & (x,y) \neq (0,0) \\ \end{array}\right.$$ Examine its continuity as the ordered pair tends to $(0,0)$.

Okay, so I first tried this by taking $x=\frac{1}{n}$ and $y=\frac{1}{n^2}$, where $n\rightarrow \infty$. The limit of the function as $(x,y) \rightarrow (0,0)$ came out to be $\frac{1}{2}$, and since this is not equal to the value of the function at the said point, it's discontinuous at the origin.

But when I took $x=\frac{1}{n}$ and $y=\frac{1}{n}$, I got the limit zero: $$f\left(\frac{1}{n},\frac{1}{n}\right)= \frac{\frac{1}{n^3}}{\frac{1}{n^4}+ \frac{1}{n^2}} = \frac{\frac{1}{n}}{\frac{1}{n^2}+ 1}.$$

Since $x,y \rightarrow 0$, $f\left(\frac{1}{n},\frac{1}{n}\right) \rightarrow \frac{0}{0+1}=0$. Where am I going wrong?
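An aside that may resolve the worry (my comment): nothing is going wrong. A two-variable limit exists only if every path gives the same value, so a single path with a different limit is decisive. Along the parabola $y=x^2$, $$f(x,x^2)=\frac{x^2\cdot x^2}{x^4+x^4}=\frac12 \qquad (x\neq0),$$ which already rules out continuity at the origin; the value $0$ along the line $y=x$ does not contradict this, it merely shows that different paths disagree.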

Integration with assortative matching function

Posted: 05 Feb 2022 08:14 AM PST

Say we have two random variables $g$ and $k$ which are lognormally distributed, with variances of logarithms $\sigma_{g}^{2}$ and $\sigma_{k}^{2}$, and a positive assortative matching $k(g)$ that maps each $g$ to a $k$ from top to bottom.

Then I want to integrate a function of $g$: $$\alpha g^{\alpha-1} k(g)^{\beta}.$$ The answer seems to be $A g^{\left(\alpha \sigma_{g}+\beta \sigma_{k}\right) / \sigma_{g}}+C$, where $A$ is a constant and $C$ is the constant of integration. How can I derive this result?

The linear operator in $\frac{\partial u}{\partial x} = \lambda u$ is not bounded, but the linear operator in $A u = \lambda u$ is bounded. Why?

Posted: 05 Feb 2022 08:12 AM PST

Let $A: \mathbb R \rightarrow \mathbb R$ be the linear operator satisfying $A u = \lambda u$ for a fixed $\lambda \in \mathbb R$; it is bounded and has norm $\|A\| = |\lambda|$.

$\frac{\partial }{\partial x}: C^{\infty}([0,1])\rightarrow C^{\infty}([0,1])$ is also a linear operator, and its eigenfunctions satisfy $\frac{\partial u}{\partial x} = \lambda u$ with $\lambda \in \mathbb R$, yet it is not bounded. Why?

My confusion comes from Example 5.13, Example 5.14 in Hunter's applied analysis.
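The usual witness of unboundedness (my sketch, using the sup norm on $C^\infty([0,1])$): the functions $u_n(x)=\sin(n\pi x)$ satisfy $$\|u_n\|_\infty \le 1 \qquad\text{while}\qquad \left\|\frac{\partial u_n}{\partial x}\right\|_\infty = n\pi \to \infty,$$ so no constant $C$ can satisfy $\|\partial u/\partial x\|_\infty \le C\|u\|_\infty$ for all $u$. The scalar operator $Au=\lambda u$ admits no such family: it scales every vector by the same fixed $\lambda$.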

Character group of non-split torus in $GL_2$

Posted: 05 Feb 2022 08:16 AM PST

Let $E=\mathbb Q(\sqrt{-d})$ be an imaginary quadratic field and let $R_{E/\mathbb Q}(\mathbb G_m)$ be the restriction of scalars of the multiplicative group, i.e. $R_{E/\mathbb Q}(\mathbb G_m)(X) = \mathbb G_m(X \times_{\mathbb Q} E)$ for each $\mathbb Q$-scheme $X$.

Picking a basis $\langle 1, -\sqrt{-d}\rangle$ for $E$ and letting $E$ act on itself, we can embed $E$ into $M_2(\mathbb Q)$ by $(\alpha, \beta) \mapsto \begin{bmatrix}\alpha&-d\beta\\\beta&\alpha\end{bmatrix}$.

Question 1: This embedding should give rise to an embedding of algebraic groups over $\mathbb Q$, $R_{E/\mathbb Q}(\mathbb G_m) \hookrightarrow GL_2$. Is $R_{E/\mathbb Q}(\mathbb G_m)$ a maximal (non-split) torus in $GL_2$? I seem to remember that the elements of maximal non-split tori in $GL_n$ all satisfy $\det = 1$, which is not the case here. What went wrong?

Question 2: Whatever the correct definition of the maximal torus in $GL_2$ obtained from $E$ is, how can we describe its character group explicitly?
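Possibly relevant to Question 1 (my observation, offered tentatively): the determinant of the embedded matrix is $$\det\begin{bmatrix}\alpha&-d\beta\\\beta&\alpha\end{bmatrix}=\alpha^2+d\beta^2=N_{E/\mathbb Q}(\alpha+\beta\sqrt{-d}),$$ the field norm, which is certainly not identically $1$. The tori whose elements all have determinant $1$ are the norm-one tori $R^1_{E/\mathbb Q}(\mathbb G_m)=\ker N_{E/\mathbb Q}$, which sit inside $SL_2$ rather than $GL_2$.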

Interesting $\arctan$ integral

Posted: 05 Feb 2022 08:04 AM PST

In a generalization of this problem: Integral involving product of arctangent and Gaussian, I am trying to calculate the integral

$$ I(a,b) = \int_{\mathbb{R}^2} \arctan^2 {\left( \frac{y+b}{x+a} \right)} e^{- (x^2 + y^2)} d x d y , $$

for $a,b \in \mathbb{R}$ in terms of commonly used special functions. The substitution that worked in the aforementioned question, i.e.

$$ \frac{1}{(x+a) \sqrt{\pi}} \arctan{ \left( \frac{y+b}{x+a} \right) } = \int_0^{\infty} \mathrm{erf} ((y+b)s) e^{-(x+a)^2 s^2} d s , $$

where $\mathrm{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2} d t$ is the standard error function, forces us to consider the integral

$$ \int_{\mathbb{R}} \mathrm{erf} ((y+b)t) \mathrm{erf} ((y+b)s) e^{- y^2} d y . $$

In the case of $b = 0$, we may use the result

$$ \int_0^{\infty} \mathrm{erf}{(ax)} \mathrm{erf}{(bx)} e^{-c^2 x^2} d x = \frac{1}{c \sqrt{\pi}} \arctan{\left( \frac{ab}{c \sqrt{a^2 + b^2 + c^2}} \right)} $$

found in, e.g., formula (18) of page 158 of https://nvlpubs.nist.gov/nistpubs/jres/75B/jresv75Bn3-4p149_A1b.pdf. Making this substitution, we find

$$ I(a,0) = \pi \int_0^{\pi/2} \int_0^{\infty} r \arctan{ \left( \frac{r \cos{\theta}\sin{\theta}}{\sqrt{1+r^2}} \right) } \exp{ \left\lbrace - \frac{a^2 r^2}{1 + r^2} \right\rbrace} \frac{(1 + 2 a^2 + r^2)}{(1+r^2)^{5/2}} d \theta d r . $$

I would be very happy just to demonstrate a closed-form for $I(a,0)$ above. However, now I am stuck trying to compute

\begin{align} \int_0^{\pi / 2} \arctan{ \left( \frac{r \cos{\theta}\sin{\theta}}{\sqrt{1+r^2}} \right) } d \theta . \hspace{2cm} (*) \end{align}

Would anyone on this platform know how to compute ($*$)? I have tried Mathematica, but it returns some nasty expression. Maybe I am going about this wrong by trying to compute the $\theta$ integral first? Any ideas about ($*$) or the original integral $I(a,b)$ are appreciated. I asked about a related integral here: Integral involving sin, cosine, exponential, and error functions. The integral in that question is found by starting from polar coordinates with $I(a,b)$. Unfortunately, I have not been able to make progress on this integral either.

Update: Observe that

$$ \frac{d}{dr} \frac{e^{- a^2 r^2 / (1+r^2)}}{\sqrt{1+r^2}} = - r \exp{ \left\lbrace - \frac{a^2 r^2}{1 + r^2} \right\rbrace} \frac{(1 + 2 a^2 + r^2)}{(1+r^2)^{5/2}} . $$

We may therefore integrate-by-parts to find

\begin{align*} I(a,0) & = - \pi \int_0^{\pi/2} \int_0^{\infty} \arctan{ \left( \frac{r \cos{\theta}\sin{\theta}}{\sqrt{1+r^2}} \right) } \frac{d}{dr} \frac{e^{- a^2 r^2 / (1+r^2)}}{\sqrt{1+r^2}} dr d \theta \\ & = \pi \int_0^{\pi/2} \int_0^{\infty} \frac{\cos{\theta} \sin{\theta}}{(1+r^2) (1 + r^2 ( 1 + \cos^2{\theta} \sin^2{\theta} )) } e^{- a^2 r^2 / (1+r^2)} dr d \theta . \end{align*}

Hopefully this observation simplifies the problem some. Apparently, Mathematica believes

$$ \int_0^{\pi/2} \frac{\cos{\theta} \sin{\theta}}{1 + r^2 ( 1 + \cos^2{\theta} \sin^2{\theta} ) } d \theta = \frac{2 \mathrm{arctanh}{ \frac{r}{\sqrt{5 r^2 + 4}} }}{r \sqrt{ 5 r^2 + 4 } } , $$

where $\mathrm{arctanh}$ is the inverse hyperbolic tangent. I haven't been able to derive this myself though.
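For what it's worth, the identity can be derived by hand with the substitution $u=\cos2\theta$ (my computation, worth double-checking): since $\cos\theta\sin\theta\,d\theta=-\tfrac14\,du$ and $\cos^2\theta\sin^2\theta=\tfrac14(1-u^2)$, $$\int_0^{\pi/2} \frac{\cos{\theta}\sin{\theta}}{1 + r^2(1+\cos^2\theta\sin^2\theta)}\,d\theta = \int_{-1}^{1}\frac{du}{4(1+r^2)+r^2(1-u^2)} = \int_{-1}^{1}\frac{du}{(5r^2+4)-r^2u^2} = \frac{2\,\mathrm{arctanh}{\frac{r}{\sqrt{5r^2+4}}}}{r\sqrt{5r^2+4}},$$ using $\int_{-1}^{1}\frac{du}{A^2-r^2u^2}=\frac{2}{Ar}\,\mathrm{arctanh}\frac{r}{A}$ with $A=\sqrt{5r^2+4}$.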

Update 2: I was able to make a little more progress on the special case $I(a,0)$. Starting from the previous identity involving the hyperbolic $\mathrm{arctanh}$, our integral of interest becomes \begin{align*} I(a,0) = 2 \pi \int_0^{\infty} \frac{2 \mathrm{arctanh}{ \frac{r}{\sqrt{5 r^2 + 4}} }}{r (1 + r^2) \sqrt{ 5 r^2 + 4 } } e^{- a^2 r^2 / (1+r^2)} d r . \end{align*} Now, observe that \begin{align*} \frac{d }{d r} \mathrm{arctanh}^2{ \frac{r}{\sqrt{5 r^2 + 4}} } = 2 \frac{\mathrm{arctanh}{ \frac{r}{\sqrt{5 r^2 + 4}} }}{(1+r^2) \sqrt{5 r^2 + 4}} . \end{align*} Then, via integration-by-parts, \begin{align*} I(a,0) & = 2 \pi \int_0^{\infty} \frac{1}{r} e^{- a^2 r^2 / (1+r^2)} \frac{d }{d r} \mathrm{arctanh}^2{ \frac{r}{\sqrt{5 r^2 + 4}} } d r \\ & = 2 \pi \int_0^{\infty} \left( \frac{1}{r^2} + \frac{2a^2}{(1+r^2)^2} \right) e^{- a^2 r^2 / (1+r^2)} \mathrm{arctanh}^2{ \frac{r}{\sqrt{5 r^2 + 4}} } d r . \end{align*} Now we look to make the Euler substitution to simplify the integral. Let $\sqrt{ 5 r^2 + 4 } = \sqrt{5} r + t$. Then, \begin{align*} r = \frac{4 - t^2}{2 \sqrt{5} t} = \frac{\sqrt{5}}{10} \left( \frac{4}{t} - t \right). \end{align*} Consequently, \begin{align*} d r = - \frac{\sqrt{5}}{10} \left( \frac{4}{t^2} + 1 \right) d t . \end{align*} Plugging this into our integral and simplifying reveals \begin{align} I(a,0) = 4 \pi \sqrt{5} \int_{- \infty}^{\infty} (4 + t^2) \left( \frac{1}{(t^2 - 4)^2} + \frac{40 a^2 t^2}{(t^4 + 12 t^2 + 16)^2} \right) e^{- \frac{a^2 (t^2 - 4)^2}{t^4 + 12 t^2 + 16}} \mathrm{arctanh}^2{ \left( \frac{t^2 - 4}{ \sqrt{5} (t^2 + 4)} \right) } d t . \end{align}

It is starting to seem as though some "closed-form" in terms of $a$ is possible. Due to the appearance of $\mathrm{arctanh}^2$ in the previous integral, I am inclined to conjecture that $\mathrm{arctanh}$ is part of the final answer. This would be a nice "duality" with this integral and the final answer to the related one in the question linked at the very top.

Does anyone have any suggestions on how I could proceed?

Update 3: I found a mistake in my original version of ($\ast$). Please see the posted answer of mine for an update.

Geometric interpretation for the binary expansion using gradient

Posted: 05 Feb 2022 08:11 AM PST

The following question is modified from the 2021 Mathsbombe competition (now closed):

Beatrice lines up an infinitely large piece of squared graph paper with red lines going horizontally at 1cm intervals and blue lines going vertically at 1cm intervals. Beatrice then draws an infinitely long straight line with gradient $21/31$ to the horizontal, starting from the point with coordinates $(0.5,0.5)$. Every time Beatrice's line crosses a red (horizontal) line she writes down R. Every time her line crosses a blue (vertical) line she writes down B. This way she makes an infinite sequence starting BRBRBBRBR...

The first part of the picture is drawn below.

PIC

It turns out that the binary expansion of the gradient of the line, $21/31$, is $0.101011010\ldots$, which is the same as the sequence of letters BRBRBBRBR... (reading B as 1 and R as 0).

Is there a way to understand this connection?
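A quick experiment (my sketch; the tie-breaking at lattice points is my own choice and does not matter for the first few terms) that generates both strings for comparison:

    from fractions import Fraction

    def crossing_letters(m, n):
        # Line through (1/2, 1/2) with gradient m; record B when it crosses a
        # vertical line x = k and R when it crosses a horizontal line y = k.
        events = []
        for k in range(1, n + 1):
            events.append((Fraction(k), "B"))  # crosses x = k at x = k
            events.append((Fraction(1, 2) + (k - Fraction(1, 2)) / m, "R"))  # crosses y = k
        events.sort()
        return "".join(letter for _, letter in events[:n])

    def binary_letters(q, n):
        # First n binary digits of q in (0, 1), written as B for 1 and R for 0.
        out = []
        for _ in range(n):
            q *= 2
            out.append("B" if q >= 1 else "R")
            q -= int(q)
        return "".join(out)

    m = Fraction(21, 31)
    print(crossing_letters(m, 20))  # BRBRBBRBRB...
    print(binary_letters(m, 20))    # the same string, with B = 1 and R = 0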

Which 3 manifolds admit transitive action by compact group?

Posted: 05 Feb 2022 08:10 AM PST

The only connected 2-manifolds admitting a transitive action by a compact Lie group are the sphere, the projective plane, and the torus.

Let $M$ be a connected 3-manifold which admits a transitive action by a compact Lie group. Must it then be the case that either $M$ is the product of a circle with a surface admitting a transitive action by a compact Lie group, $$ T^3, S^2 \times S^1, \mathbb{R}P^2 \times S^1, $$ or $M$ is $$ SU_2/\Gamma $$ for a finite subgroup $ \Gamma $ of $SU_2$?

EDIT: No, it's not true; the list is missing the mapping torus of the antipodal map of $ S^2 $.

Why must we require the local trivialization of fiber bundles, $\varphi:\pi^{-1}(U)\to U\times F$, to satisfy $\pi={\rm proj}_1\circ\,\varphi$?

Posted: 05 Feb 2022 08:18 AM PST

The relevant Wikipedia article about Fiber bundles defines them as structures $(E,B,\pi,F)$ with $\pi:E\to B$ a continuous surjection such that

  1. For every $x\in B$, there is a neighborhood $U\subseteq B$ with $x\in U$ such that there is a homeomorphism $$\varphi:\pi^{-1}(U)\to U\times F,$$
  2. The maps $\pi$ and $\varphi$ "agree" with the projection onto the first factor, meaning that $$\pi = \operatorname{proj}_1\circ\,\varphi.$$

I don't quite understand if and why this second condition is required. More precisely, it feels like we should not need to add it as a further requirement for the definition of fiber bundle.

I imagine $\varphi$ as a map that "locally straightens out" the total space. For example, for the Mobius strip, if $U$ is a neighborhood of a point in $S^1$, then $\pi^{-1}(U)$ is the set of points of $\mathbb R^3$ that gets projected to points in $U$, that is, a set of lines pointing in different directions but all intersecting $U$ at some point. The map $\varphi$ should then, I suppose, "straighten up" all these lines, thus in some sense recognizing that all the fibers are, in fact, lines (i.e. one-dimensional vector spaces).

It would seem obvious, then, that projecting onto the first factor after applying $\varphi$ would give back the original $x\in U$. Is this not the case? If not, what is an example in which dropping this assumption yields an object we would not want to call a "fiber bundle"?
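A small example of the gap (mine): for the trivial bundle $E=\mathbb R\times\mathbb R$ with $\pi=\operatorname{proj}_1$, the shear $$\varphi(b,f)=(b+f,\,f)$$ is a perfectly good homeomorphism $\pi^{-1}(\mathbb R)\to\mathbb R\times\mathbb R$, yet $\operatorname{proj}_1\circ\varphi\,(b,f)=b+f\neq b=\pi(b,f)$, and it sends the fiber $\pi^{-1}(0)$ to the diagonal rather than to $\{0\}\times F$. Condition 1 by itself only asserts that some homeomorphism exists; condition 2 is exactly what forces the trivialization to carry each fiber $\pi^{-1}(x)$ onto $\{x\}\times F$, so that the fibers of $\pi$ really are copies of $F$.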

Trouble with problem #13 3.A Linear Algebra Done Right

Posted: 05 Feb 2022 08:02 AM PST

I am having an incredibly difficult time understanding this proof. I will break down my confusion at every step, and hopefully some of it will clear up.

Problem: Suppose $v_1,...,v_m$ is a linearly dependent list of vectors in $V$. Suppose also that $W \neq \{0\}$. Prove there exist $w_1,...,w_m \in W$ such that no $T \in \mathcal{L}(V,W)$ satisfies $Tv_k=w_k$ for each $k=1,...,m$

Proof (not my attempt):

  1. There exist scalars $a_i$ not all $0$ such that $ \sum a_iv_i=0$.
  2. Suppose $a_k \neq 0$.
  3. Pick any $w_k \neq 0$ in $W$ and let $w_i=0$ for $ i \neq k$.
  4. If there exists a linear map $T:V \to W$ such that $Tv_i=w_i$ for all $i$ then $0=T( \sum a_iv_i)= \sum a_iT(v_i)=a_kw_k$ which is a contradiction. Hence no such $T$ exists.

I have numbered all the sentences and will display my confusion.

My understanding of the problem: if $v_1,...,v_m$ are linearly dependent and $W \neq \{0\}$, we are supposed to show that there exist some $w$'s for which no such transformation exists:

we need to show there exist $w's \in W$ such that there is no transformation $T:V \rightarrow W$ such that $T(a_1v_1+\dots +a_mv_m)=a_1w_1+\dots +a_mw_m$

Now to the proof above.

Sentence 1.

This is the only part of the proof I understand. Since $v_1,...,v_m$ are linearly dependent there exist scalars $a_1,...,a_m$ not all zero such that $a_1v_1+\dots +a_mv_m=0$

Sentence 2.

"Suppose $a_k \neq 0$" To my understanding this means $a_k$ is a coefficient in this linearly combination that is nonzero. To my understanding $k \in \{1,...,m\}$

Sentence 3.

This is the part I get lost at. "Pick any $w_k \neq 0$ in $W$ and let $w_i=0$ for $ i \neq k$"

What does "Pick any $w_k \neq 0$ in $W$" mean? To my understanding, it means $w_k$ is a nonzero vector that the transformation is supposed to map $v_k$ to.

In other words $T(a_1v_1+\dots + a_mv_m)=a_kw_k$

The next sentence states

Let $w_i=0$ for $i\neq k$. To my understanding, this means that for $i \neq k$ the targets $w_i$ are chosen to be the zero vector, i.e. each such $v_i$ is supposed to be mapped to $0$.

Sentence 4.

This is where I become lost. I have no idea what is going on here:

"If there exists a linear map $T:V \to W$ such that $Tv_i=w_i$ for all $i$ then $0=T( \sum a_iv_i)= \sum a_iT(v_i)=a_kw_k$ which is a contradiction. Hence no such $T$ exists."

I have a very hard time understanding this. I see that the proof uses the linearity of the transformation, applying it to both sides of the linear dependence relation $0=a_1v_1+\dots +a_mv_m$, but I do not understand why they are doing this. Also, is the result $T(a_1v_1+\dots +a_mv_m)=a_kw_k$ due to the fact that $w_k$ was the only $w$ chosen to be nonzero? I understand that the contradiction comes from assuming $a_k,w_k \neq 0$ and then deducing that $a_kw_k=0$ from the transformation applied to the dependence relation. Any help breaking the proof down into something understandable would be much appreciated. I also think I may be misunderstanding exactly what this question is asking.
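A concrete instance that may make the proof tangible (my example, not from the book): take $V=W=\mathbb R^2$, $v_1=(1,0)$, $v_2=(2,0)$, which are dependent via $2v_1-v_2=0$, so $a_1=2\neq0$ and we may take $k=1$. Choose $w_1=(1,0)\neq0$ and $w_2=0$. If some linear $T$ had $Tv_1=w_1$ and $Tv_2=w_2$, then $$0 = T(0) = T(2v_1-v_2) = 2Tv_1 - Tv_2 = 2w_1 = (2,0) \neq 0,$$ a contradiction, so no such $T$ exists. The proof applies $T$ to the dependence relation precisely because linearity forces the left side to be $0$, while the choice of the $w_i$ forces the right side to be $a_kw_k\neq0$.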

Evaluate indefinite integral $\displaystyle\int \frac{1}{\sin x + \sec x}dx$

Posted: 05 Feb 2022 08:10 AM PST

Evaluate $$\displaystyle\int \frac{dx}{\sin x + \sec x}$$ My work is to substitute $\sin x = u$. Then $\displaystyle\sec x=\frac{1}{\sqrt{1-u^2}}$ and $\displaystyle dx = \frac{du}{\sqrt{1-u^2}}$, and I got this formula

$$\displaystyle \int \frac{du}{1+ u\sqrt{(1-u^2)}} $$

But I could not evaluate the previous integral.
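A numeric sanity check of the substitution (my sketch, using mpmath) before fighting the $u$-integral further:

    from mpmath import mp, quad, sin, sec, sqrt

    mp.dps = 20
    a, b = 0.1, 0.7  # an arbitrary interval inside (0, pi/2)
    original = quad(lambda x: 1 / (sin(x) + sec(x)), [a, b])
    substituted = quad(lambda u: 1 / (1 + u * sqrt(1 - u**2)), [sin(a), sin(b)])
    print(original, substituted)  # the two values agree, so the reduction is sound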
