Tuesday, February 1, 2022

Recent Questions - Mathematics Stack Exchange


Parallel transport preserves Lie bracket

Posted: 01 Feb 2022 12:26 AM PST

Let $(M,g)$ be a Riemannian manifold and $\sigma(t)$ a geodesic on $M$. I'll write $\Pi_{t_0}^{t_1}$ for the parallel transport from $\sigma(t_0)$ to $\sigma(t_1)$ and $[\cdot, \cdot]$ for the Lie bracket on vector fields.

  1. Is it true that $\Pi_{t_0}^{t_1}([X,Y])=[\Pi_{t_0}^{t_1}(X), \Pi_{t_0}^{t_1}(Y)]$ for $X,Y \in T_{\sigma(t_0)}M$?

  2. If not, does there exist a basis of $T_{\sigma(t_0)}M$ with that property?

For my final purposes, I am interested in verifying that if $[X,Y]$ vanishes, so does $[\Pi_{t_0}^{t_1}(X), \Pi_{t_0}^{t_1}(Y)]$, but it would be great if the property held in more general settings.

Primitive root of unity modulo $n$ question

Posted: 01 Feb 2022 12:24 AM PST

Hello, I do not understand why this is true: "Since $n = 3^a$ is a power of an odd prime, there exists a primitive root $g$ such that $g, g^2, \dots, g^{\varphi(n)}$ have different remainders modulo $n$ and also cover all the residues mod $n$ relatively prime to $n$."
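A small sanity check (my own illustration, not from the quoted text): for $n = 27 = 3^3$, find a primitive root $g$ and verify that $g, g^2, \dots, g^{\varphi(n)}$ are pairwise distinct mod $n$ and are exactly the residues coprime to $n$.

```python
n = 27
phi = 18                                   # phi(3^3) = 3^3 - 3^2
units = {a for a in range(1, n) if a % 3 != 0}

def order(g, n):
    # multiplicative order of g modulo n
    k, x = 1, g % n
    while x != 1:
        x = x * g % n
        k += 1
    return k

g = next(a for a in sorted(units) if order(a, n) == phi)   # a primitive root
powers = [pow(g, k, n) for k in range(1, phi + 1)]
assert len(set(powers)) == phi             # pairwise distinct remainders
assert set(powers) == units                # they cover all residues coprime to n
print(g)  # 2
```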

For which $k$ is $f_{k}(x)=x^{k}\sin(1/x)$ differentiable?

Posted: 01 Feb 2022 12:30 AM PST

Let $k \in \mathbb{N}$ and let $f_{k}: \mathbb{R} \rightarrow \mathbb{R}$ be the function $f_{k}(x)= \begin{cases} x^{k} \sin(\frac{1}{x}) & x\neq 0 \\ 0 & x=0 \end{cases}$. For which $k$ is the function differentiable?

Hey guys I'm preparing for exams and I'm trying to improve my proof-writing. Is this one correct?

By definition the derivative at $0$ is characterized as $\lim_{h \rightarrow 0} \frac{f_{k}(h)-f_{k}(0)}{h}$. Thus the question is equivalent to: for which $k$ does the limit $\lim_{h \rightarrow 0} h^{k-1} \sin(\frac{1}{h})$ exist?

Let $k>1$. Notice that the exponent in the expression $h^{k-1}$ is positive due to our assumption. Further notice that $-1 \leq \sin(\frac{1}{h}) \leq 1$, so $-|h|^{k-1} \leq h^{k-1} \sin(\frac{1}{h}) \leq |h|^{k-1}$. Since $\lim_{h \rightarrow 0} -|h|^{k-1}=0$ and $\lim_{h \rightarrow 0} |h|^{k-1}=0$, the squeeze theorem gives $\lim_{h \rightarrow 0} h^{k-1} \sin(\frac{1}{h})=0$. Not only have we shown the existence of the limit, we have even calculated it explicitly. Thus for $k>1$ the function is differentiable.

Let $k=1$; one can check that the limit becomes $\lim_{h \rightarrow 0} f_k(h)/h=\lim_{h \rightarrow 0} \sin(\frac{1}{h})$. Assume the limit existed, and take the two sequences $x_n:=\frac{1}{2 \pi n+ \pi/2}$ and $y_n:=\frac{1}{2 \pi n}$, with $x_n, y_n \rightarrow 0$ for $n \rightarrow \infty$. Then we find $\sin(\frac{1}{x_n})=\sin(2 \pi n+ \pi/2)=1$ but also $\sin(\frac{1}{y_n})=\sin(2 \pi n)=0$. If the limit existed, these values would have to match, but they don't. So for $k=1$ the function is not differentiable.

Let $k < 1$. Now notice that we can rewrite the limit as $\lim_{h \rightarrow 0} \frac{\sin(\frac{1}{h})}{h^{1-k}}$. Our assumption ensures that the exponent $1-k$ is positive. Assume the limit existed. Substituting $x_n$, we find $\frac{\sin(\frac{1}{x_n})}{x_n^{1-k}}=\sin(2 \pi n + \pi/2)(2 \pi n + \pi /2)^{1-k}$. But $\sin(2\pi n + \pi/2)=1$, so the expression diverges along this sequence and the limit does not exist. So for $k < 1$ the function is not differentiable.

So the function $f_k$ is differentiable if and only if $k>1$.
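A quick numerical illustration of the three cases (my own script; it only samples the difference quotient $h^{k-1}\sin(1/h)$, it is not a proof):

```python
import math

# Samples of the difference quotient h^(k-1) * sin(1/h) from the argument above.
def quotient(k, h):
    return h ** (k - 1) * math.sin(1 / h)

# k = 2 > 1: the quotient is squeezed to 0 as h -> 0.
assert abs(quotient(2, 1e-6)) <= 1e-6

# k = 1: along x_n = 1/(2*pi*n + pi/2) the quotient is ~1,
# along y_n = 1/(2*pi*n) it is ~0, so no limit exists.
n = 50
xn = 1 / (2 * math.pi * n + math.pi / 2)
yn = 1 / (2 * math.pi * n)
assert abs(quotient(1, xn) - 1) < 1e-9
assert abs(quotient(1, yn)) < 1e-9

# k = 0 < 1: along x_n the quotient blows up.
assert quotient(0, xn) > 100
```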

Evaluate $\cos 36 ^\circ - \cos 72 ^\circ$ [duplicate]

Posted: 01 Feb 2022 12:22 AM PST

Evaluate $\cos 36 ^\circ - \cos 72 ^\circ$

Here is my method:

First, $\cos 72 ^\circ$ can be written as $\cos^2 36 ^\circ - \sin^2 36 ^\circ$ via the double angle formula. Then to get rid of the $\sin^2 36^\circ$ I used the Pythagorean Identity and said $\sin^2 36 ^\circ = 1 - \cos^2 36 ^\circ$.

So our expression becomes $-2 \cos^2 36 ^\circ + \cos 36 ^\circ +1$ (after a little algebra). I noticed this was a quadratic, so I set $\cos 36 ^\circ = y$ and solved for $y$, getting the roots $\frac {1} {2}$ and $-1$. $\frac {1} {2}$ is the right answer to the problem, but since I was solving for $y$ (that is, for $\cos 36^\circ$) I believe it shouldn't be correct. Can someone help explain where I went wrong and how to solve the problem using my method?
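As a numeric sanity check of the target value (my own script, assuming degrees as in the post):

```python
import math

# The quantity asked for really is 1/2.
lhs = math.cos(math.radians(36)) - math.cos(math.radians(72))
assert abs(lhs - 0.5) < 1e-12

# Known closed forms consistent with that value:
assert abs(math.cos(math.radians(36)) - (1 + math.sqrt(5)) / 4) < 1e-12
assert abs(math.cos(math.radians(72)) - (math.sqrt(5) - 1) / 4) < 1e-12
```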

Finding sets which are not $\mu^*$-measurable

Posted: 01 Feb 2022 12:06 AM PST

Recall that the Dirac measure is defined as follows:

$\mu(A)=\left\{ \begin{array}{ll} 1 & \mbox{if $x_0 \in A$}\\ 0 & \mbox{if $x_0 \notin A$}\end{array} \right .$,

where $x_0$ can be any arbitrary point.

Now, let's define an outer measure which is similar to the Dirac measure:

$\mu^*(A)=\left\{ \begin{array}{ll} 1 & \mbox{if $x_0 \in A$}\\ 0.5 & \mbox{if $x_0 \in \bar{A}$ but $x_0 \notin A$}\\ 0 & \mbox{if $x_0 \notin \bar{A}$}\end{array} \right .$,

where $\bar{A}$ is the closure of $A$. You can check that $\mu^*$ is not a measure but it satisfies the conditions of being an outer measure.

According to this outer measure, I want to find sets in $R^2$ that are not $\mu^*$-measurable, when we take $x_0$ to be the origin.

Recall that a set $A$ is $\mu^*$-measurable in $R^2$ when for every set $E$ in $R^2$ the following equality holds:

$\mu^*(E)$=$\mu^*(E\cap A)+\mu^*(E\cap A^c)$.

In this problem, I have found that the sets whose boundary passes through the origin are not $\mu^*$-measurable. This is because, by considering $E=R^2$, we have $\mu^*(E)=1$ but $\mu^*(E\cap A)+\mu^*(E\cap A^c)=1.5$.

Could you find other non-$\mu^*$-measurable sets according to the defined outer measure? Or could you prove that there are no other non-$\mu^*$-measurable sets?

Enigma: find the ages of the father and the son

Posted: 01 Feb 2022 12:23 AM PST

A slippery enigma.

Here is a conversation between a father and his son. The son asks: Father, may I know your current age? The father answers: Ah, my son! When I was your current age, you had hair yellow like the sun. The son insists: I want to know your age, father. The father replies: Well, you want to know my age!

  • The sum of my current age and the age you had when I had your current age is 40.
  • The sum of your current age and the age I will have when you have my current age is 60 or 120.

The son answers: now I know your age. Does the son lie?

I think the son lied, because we normally find two solutions. What do you think?
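One way to formalize the riddle is a brute force over integer ages (this is my own reading, stated as assumptions in the comments); under the naive reading there are two solutions, but requiring the son to have already been born at the earlier time leaves only one:

```python
# F = father's current age, S = son's current age.
# "When I had your age" was F - S years ago, when the son was 2S - F.
# "When you have my age" is F - S years from now, when the father will be 2F - S.
solutions = []
for F in range(1, 121):
    for S in range(1, F):                         # father strictly older
        past_son = 2 * S - F                      # son's age back then
        future_father = 2 * F - S                 # father's age later on
        if F + past_son == 40 and S + future_father in (60, 120):
            solutions.append((F, S))
print(solutions)                                  # [(30, 20), (60, 20)]
born = [(F, S) for (F, S) in solutions if 2 * S - F >= 0]
print(born)                                       # [(30, 20)]
```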

Showing there does not exist a continuous function satisfying all the conditions below

Posted: 01 Feb 2022 12:22 AM PST

Let $0 \leq a \leq 1$; prove that there does not exist a continuous function $f:[0,1] \rightarrow(0, \infty)$ satisfying all the following conditions: \begin{align}\int_{0}^{1} f(x) d x &= 1 ,\\ \int_{0}^{1} x f(x) d x& = a,\\ \int_{0}^{1} x^{2} f(x) d x&=a^{2}. \end{align} My work: multiplying the second integral by $2$ and subtracting it from the sum of the other two, we get $\int_{0}^{1}(x-1)^2 f(x)\, dx = (a-1)^2$. Now I don't see how to show that no $f$ of this type exists.

Couette flow with infinite depth

Posted: 31 Jan 2022 11:59 PM PST

Consider fluid below the $x$-axis (or the $xy$-plane in 3D). Its top layer starts to move with velocity $v$ at and after time $t_0$. If the flow is not fully developed and evolves from standing water, I obtain $\partial v_x/\partial x=0$ from the continuity equation, due to zero velocity in the $y$ direction (and $z$ in the 3D case). This corresponds to fully developed flow, since the acceleration is zero.

The task itself was to calculate the velocity at a certain level $h$ below the surface after a given time $t_1$. (The pressure at the surface is atmospheric.)

Could someone advise me on what I'm doing wrong?

classical part functor in derived categories

Posted: 31 Jan 2022 11:51 PM PST

Good morning to everyone,

I am writing here because I need to understand some proofs about adjunctions in the setting of quasi-abelian categories.

In this article

http://www.numdam.org/issue/MSMF_1999_2_76__R3_0.pdf

on page 26, in Definition 1.2.26, how is the ''classical part'' functor $C: LH(E) \rightarrow E$ defined on morphisms?

I think the following: for any complex $(X, d_{X})$ on $E$, I factorize

$$d_{X}^{-1} = \ker(d_{X}^{0}) \circ \delta_{X}^{\prime, -1} \circ \operatorname{coker}(\ker(d_{X}^{-1}))$$

and define

$$C(X, d_{X}) := \operatorname{coker}(\delta_{X}^{\prime, -1})$$

But I do not know how to define it on morphisms, or how to prove that it is a well-defined functor.

If someone wants to contact me in private, please write it in the comments.

Thank you in advance

Properties of Interval Exchange Transformations

Posted: 31 Jan 2022 11:49 PM PST

I'm studying a paper from Michael Keane on Interval Exchange Transformations.

Before I ask my question I explain a little about the subject:

Let $X=[0,1)$ and $n \geq 2$ an integer. For each probability vector $\alpha=(\alpha_1, \alpha_2, \cdots, \alpha_n) $ with $\alpha_i >0$ for $1 \leq i \leq n$ we set : \begin{align} &\beta_0 =0\\ &\beta_i=\sum\limits_{j=1}^i\alpha_j\\&X_i=[\beta_{i-1},\beta_i) \end{align} Let $\tau$ be a permutation of the symbols $\{1,2,\cdots,n\}$. Then \begin{align} \alpha^{\tau}=(\alpha_{\tau^{-1}(1)},\alpha_{\tau^{-1}(2)},\cdots, \alpha_{\tau^{-1}(n)}) \end{align} is a probability vector with positive components, and we can form the corresponding $\beta_i^{\tau}$ and $X_i^{\tau}$, $1\leq i \leq n$. Now define $T:X \to X$ by setting \begin{align} Tx=x-\beta_{i-1}+\beta_{\tau(i)-1}^\tau \end{align} For each $ x \in X_i$ and each $ 1 \leq i \leq n$. $T$ maps each interval $X_i$ isometrically onto the corresponding interval $X_{\tau(i)}^\tau$.

We call $T$ an $(\alpha,\tau)$-Interval Exchange Transformation.

$T$ possesses the following properties:

1- $T$ is invertible, and its inverse is the $(\alpha^\tau, \tau^{-1})$ interval exchange transformation.

2- $T$ is continuous except at the points of the set $D=\{\beta_1,\beta_2,\cdots,\beta_{n-1}\}$ and $T$ is continuous from the right at these points.

3- $\lim\limits_{x\to \beta_i^-}Tx= \beta_{\tau(i)}^\tau$ for $1 \leq i \leq n$

4- $T\beta_i=\beta_{\tau(i+1)-1}^\tau$ for $0 \leq i \leq n-1$.

My question is to prove 3 and 4.

My try:

For 3: \begin{align} \lim\limits_{x \to \beta_i}Tx=T\beta_i &= \beta_i - \beta_{i-1}+\beta_{\tau(i)-1}^\tau\\ &= \alpha_i+ \beta_{\tau(i)-1}^\tau\\ &=\alpha_{\tau^{-1}(\tau(i))}+(\alpha_{\tau^{-1}(1)} + \cdots +\alpha_{\tau^{-1}(\tau(i)-1)})\\ &=\alpha_{\tau^{-1}(1)} + \cdots +\alpha_{\tau^{-1}(\tau(i)-1)}+\alpha_{\tau^{-1}(\tau(i))}\\ &=\beta_{\tau(i)}^\tau \end{align} Now how could I compute number 4? Where should I do a different thing in my calculation?
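A quick numerical sketch of an $(\alpha,\tau)$ interval exchange (my own code, 0-indexed, with `perm[i]` standing for $\tau(i+1)-1$), checking property 1: composing $T$ with the $(\alpha^\tau, \tau^{-1})$ interval exchange gives the identity.

```python
def make_iet(alpha, perm):
    """Build the (alpha, tau) interval exchange; perm[i] is the 0-indexed image slot."""
    n = len(alpha)
    beta = [0.0]
    for a in alpha:
        beta.append(beta[-1] + a)            # beta[i] = left endpoint of X_{i+1}
    alpha_t = [0.0] * n
    for i in range(n):
        alpha_t[perm[i]] = alpha[i]          # alpha^tau
    beta_t = [0.0]
    for a in alpha_t:
        beta_t.append(beta_t[-1] + a)
    def T(x):
        i = max(k for k in range(n) if beta[k] <= x)
        return x - beta[i] + beta_t[perm[i]] # shift X_i onto X^tau_{tau(i)}
    return T, alpha_t

alpha, perm = [0.2, 0.3, 0.5], [2, 0, 1]
T, alpha_t = make_iet(alpha, perm)
inv_perm = [perm.index(j) for j in range(3)]            # tau^{-1}
T_inv, _ = make_iet(alpha_t, inv_perm)
for k in range(1, 97):
    x = k / 97
    assert abs(T_inv(T(x)) - x) < 1e-9                  # property 1, numerically
```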

Is infinitesimal calculus rigorous?

Posted: 31 Jan 2022 11:45 PM PST

Up until now I had known that calculus had been initially formulated by Leibniz and Newton in terms of infinitesimals, but that approach led to inconsistencies, and so Cauchy and others reformulated calculus rigorously using the epsilon-delta method.

Recently I stumbled across "Elementary Calculus : An Infinitesimal Approach" by H. J. Keisler. In his preface Keisler says that the infinitesimal approach to calculus had been placed on rigorous footing in the 1960s by Abraham Robinson.

And here is my question: is it now universally accepted that Robinson's calculus is rigorous and not logically inconsistent? In essence, what is the consensus in the math community regarding Robinson's approach to infinitesimal calculus?

What mistake is the author making (if any) while finding this antiderivative?

Posted: 01 Feb 2022 12:20 AM PST

Find the following antiderivative: $\int{\frac{1}{\sqrt{(2-x)(x-1)}}}dx$

My attempt:

$$\int{\frac{1}{\sqrt{(2-x)(x-1)}}}dx$$

$$=\int{\frac{1}{\sqrt{2x-2-x^2+x}}}dx$$

$$=\int{\frac{1}{\sqrt{-2+3x-x^2}}}dx$$

$$=\int{\frac{1}{\sqrt{-(2-3x+x^2)}}}dx$$

$$=\int{\frac{1}{\sqrt{-(x^2-3x+2)}}}dx$$

$$=\int{\frac{1}{\sqrt{-\left(x^2-3x+\left(\frac{3}{2}\right)^2\right)-2+\left(\frac{3}{2}\right)^2}}}dx$$

$$=\int{\frac{1}{\sqrt{0.5^2-(x-\frac{3}{2})^2}}}dx$$

$$[\text{Let $x-\frac{3}{2}=u$}\\ \therefore du=dx]$$

$$=\int{\frac{1}{\sqrt{0.5^2-u^2}}}du$$

$$[\text{Formula:}\int{\frac{1}{\sqrt{a^2-x^2}}}dx=\arcsin\left(\frac{x}{a}\right)+C, |x|<a]$$

$$=\arcsin\left(\frac{u}{0.5}\right)+C$$

$$=\arcsin\left(\frac{x-\frac{3}{2}}{0.5}\right)+C$$

$$=\arcsin(2x-3)+C$$

My work is correct. I know this because an online integral calculator agrees with me.

However, my book did this math in a different way:

My book's attempt:

$$\int{\frac{1}{\sqrt{(2-x)(x-1)}}}dx$$

$$[\text{Let}\ x-1=u^2,dx=2udu,x=u^2+1,2-x=1-u^2]$$

$$=\int{\frac{1}{\sqrt{(1-u^2)(u^2)}}}2udu$$

$$=\int{\frac{1}{\sqrt{(1-u^2)}u}}2udu$$

$$=2\int{\frac{1}{\sqrt{(1-u^2)}}}du$$

$$[\text{Formula:}\int{\frac{1}{\sqrt{a^2-x^2}}}dx=\arcsin\left(\frac{x}{a}\right)+C, |x|<a]$$

$$=2\arcsin(u)+C$$

$$=2\arcsin(x-1)+C$$

Comments:

From the graph (not reproduced here), it doesn't look like the book's answer differs from my answer by only a constant. So, is it wrong? If it is wrong, what mistakes did it make?
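A numerical cross-check (my own script; `F_book_fix` is what the substitution $u=\sqrt{x-1}$ actually yields, so the book's final line appears to have dropped the square root):

```python
from math import asin, sqrt

f = lambda x: 1 / sqrt((2 - x) * (x - 1))       # the integrand
F_mine = lambda x: asin(2 * x - 3)              # my antiderivative
F_book_lit = lambda x: 2 * asin(x - 1)          # the book's final line, read literally
F_book_fix = lambda x: 2 * asin(sqrt(x - 1))    # what u = sqrt(x - 1) actually gives

def num_deriv(F, x, h=1e-6):                    # central difference
    return (F(x + h) - F(x - h)) / (2 * h)

for x in (1.2, 1.5, 1.8):
    assert abs(num_deriv(F_mine, x) - f(x)) < 1e-4       # my answer differentiates back
    assert abs(num_deriv(F_book_fix, x) - f(x)) < 1e-4   # so does 2*arcsin(sqrt(x-1))
    assert abs(num_deriv(F_book_lit, x) - f(x)) > 1e-2   # but 2*arcsin(x-1) does not
```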

How to construct a finite sub-cover of an arbitrary compact set in $\mathbf{R}$

Posted: 31 Jan 2022 11:59 PM PST

$\def\R{\mathbf{R}}$ $\def\N{\mathbf{N}}$ Note. When I use the term "compact", I mean specifically "closed and bounded." I also understand topological notions only inside $\R$, not in any other metric space or topology.

The proofs of the Heine-Borel Theorem (compact $\implies$ always finitely sub-coverable) I've seen are all indirect. They show the existence of a finite sub-cover but never how to find it. My end goal is to come up with a direct proof of the Heine-Borel Theorem. Here's a conjecture I came up with that I believe is true (intuitively) but cannot prove.

Conjecture 1. Let $K \subseteq \R$ be compact, and $K'$ be the set of limit points of $K$. If $O$ is an open cover for $K'$, then $K \setminus \bigcup O$ is necessarily finite.

If this conjecture is true, (which I am assuming for the moment), then this shortens our problem to finding a finite sub-cover for $K'$, since by the conjecture, of the remaining points of $K$ our sub-cover missed, we must've missed only finitely many, and can hence add back in a few more sets to cover them. Hence if $K'$ itself is finite, then it's easy to construct a finite sub-cover for $K$. The problem arises when $K'$ is infinite. I'm thinking, if $K'$ was countably infinite, then we can look at $K''$ (the set of the limit points of $K'$). If $K''$ is finite, then we can finitely sub-cover $K'$, and hence finitely sub-cover $K$. Thus if $K_0=K$ is a compact set with $K_{n+1}={K_{n}}',$ then given an open-cover of $K$, we can construct a finite sub-cover if there exists an $N \in \N$ such that $K_N$ is finite. It makes me wonder whether this is always the case for an arbitrary countable compact set, which leads me to making the following conjecture:

Conjecture 2. Let $K\subseteq\R$ be compact and countably infinite. Define $K_0 := K$, and define $K_{n+1} := {K_{n}}'$. Then there exists an $N \in \N$ such that $K_N$ is finite. In other words, as you iteratively take set of the limit points of any countable compact set, you will eventually get a finite set.

So that's good and all, but then there are compact sets where no matter how many times I iteratively take its set of limit points, it remains infinite (such as uncountable compact sets or non-empty perfect sets).

Is this line of attack worthwhile, and is it possible to extend these arguments to directly prove the Heine-Borel Theorem?

On some conjectures regarding repunits

Posted: 01 Feb 2022 12:27 AM PST

While researching the topic of Descartes numbers, I came across the following seemingly related subproblem:

PROBLEM: Determine conditions on $n$ such that $$\frac{{10}^n - 1}{9}$$ is squarefree.

MY ATTEMPT

Set $$m := \frac{{10}^n - 1}{9}.$$

(Note that $m$ is called a repunit. Searching for the keyword "squarefree" in this Wikipedia page did not return any results.)

I noticed that $m$ is squarefree for $n = 1$.

So, let $n > 1$. I also observed that, for $n \geq 2$, we actually have $$m \equiv 3 \pmod 4,$$ so that $m$ is not a square.

Next, I considered the prime factorizations of $m$ for the first dozen $n \neq 1$: $$\begin{array}{c|c|c|} \text{Value of } n &\text{Repunit } m & \text{Prime Factorization of } m \\ \hline 2 & 11 & 11 \\ \hline 3 & 111 & 3 \times 37 \\ \hline 4 & 1111 & 11 \times 101 \\ \hline 5 & 11111 & 41 \times 271 \\ \hline 6 & 111111 & 3 \times 7 \times 11 \times 13 \times 37 \\ \hline 7 &1111111 & 239 \times 4649 \\ \hline 8 & 11111111 & 11 \times 73 \times 101 \times 137 \\ \hline 9 & 111111111 & 3^2 \times 37 \times 333667 \\ \hline 10 & 1111111111 & 11 \times 41 \times 271 \times 9091 \\ \hline 11 & 11111111111 & 21649 \times 513239 \\ \hline 12 & 111111111111 & 3 \times 7 \times 11 \times 13 \times 37 \times 101 \times 9901 \\ \hline 13 & 1111111111111 & 53 \times 79 \times 265371653 \\ \hline \end{array}$$

From this initial data sample, I predict the truth of the following conjectures:

  • CONJECTURE 1: If $n \equiv 0 \pmod 6$, then $m$ is squarefree.
  • CONJECTURE 2: If $n \equiv 0 \pmod 6$, then $\bigg(3 \times 7 \times {11} \times {13} \times {37}\bigg) \mid m$.
  • CONJECTURE 3: If $n \equiv 0 \pmod 3$, then $\bigg(3 \times {37}\bigg) \mid m$.

I skimmed through OEIS sequence A002275 and did not find any references to these conjectures.

RESOLVING CONJECTURE 1

I searched for counterexamples to Conjecture 1 using Pari-GP in Sage Cell Server, and I got the following output in the range $n \leq 50$:

18: [3, 2; 7, 1; 11, 1; 13, 1; 19, 1; 37, 1; 52579, 1; 333667, 1]
36: [3, 2; 7, 1; 11, 1; 13, 1; 19, 1; 37, 1; 101, 1; 9901, 1; 52579, 1; 333667, 1; 999999000001, 1]
42: [3, 1; 7, 2; 11, 1; 13, 1; 37, 1; 43, 1; 127, 1; 239, 1; 1933, 1; 2689, 1; 4649, 1; 459691, 1; 909091, 1; 10838689, 1]

This output means that

  • $\dfrac{{10}^{18} - 1}{9}$ is divisible by $3^2$.
  • $\dfrac{{10}^{36} - 1}{9}$ is divisible by $3^2$.
  • $\dfrac{{10}^{42} - 1}{9}$ is divisible by $7^2$.

I therefore conclude that Conjecture 1 is false.

MY ATTEMPT TO RESOLVE CONJECTURE 2

I searched for counterexamples to Conjecture 2 using Pari-GP in Sage Cell Server, and I got a blank output in the range $n \leq {10}^5$.

The Pari-GP interpreter of Sage Cell Server crashes as soon as a search limit of ${10}^6$ is specified.

This gives further computational evidence for Conjecture 2.

MY ATTEMPT TO RESOLVE CONJECTURE 3

I searched for counterexamples to Conjecture 3 using Pari-GP in Sage Cell Server, and I got a blank output in the range $n \leq {10}^5$.

The Pari-GP interpreter of Sage Cell Server crashes as soon as a search limit of ${10}^6$ is specified.

This gives further computational evidence for Conjecture 3.


Alas, this is where I get stuck, as I do not currently know how to prove Conjectures 2 and 3.
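As an independent sanity check (my own script, separate from the Pari-GP sessions), note that $3 \times 7 \times 11 \times 13 \times 37 = 111111$ and $3 \times 37 = 111$ are themselves the repunits for $n=6$ and $n=3$, so Conjectures 2 and 3 say that those repunits divide $m$ whenever $6 \mid n$ and $3 \mid n$ respectively:

```python
def repunit(n):
    return (10**n - 1) // 9

# The conjectured divisors are themselves repunits.
assert 3 * 7 * 11 * 13 * 37 == repunit(6)
assert 3 * 37 == repunit(3)

for n in range(1, 2001):
    m = repunit(n)
    if n % 6 == 0:
        assert m % repunit(6) == 0       # Conjecture 2
    if n % 3 == 0:
        assert m % repunit(3) == 0       # Conjecture 3
print("no counterexamples up to n = 2000")
```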

INQUIRY

Given that Conjecture 1 is false, do you know of or can you prove a(n) (unconditional) congruence condition on $n$ which guarantees that the repunit $m$ is squarefree?

If $(^0E)^0=E$ for every closed subspace $E\subset X^*$ then $X$ is reflexive.

Posted: 31 Jan 2022 11:43 PM PST

Let $X$ be a normed vector space, $X^*$ be its dual and $X^{**}$ be its bidual.

For every $E \subset X^*$ we define the left-annihilator of $E$ as $$^0E=\{x \in X : f(x)=0 \ \forall f \in E\}.$$ And for every $F \subset X$ we define the annihilator of $F$ as $$F^0=\{f \in X^*: f(x)=0 \ \forall x \in F\}.$$ It is easy to see that $E \subset (^0E)^0$.

Suppose that for every closed subspace $E \subset X^*$ we have $$E = (^0E)^0.$$

I want to prove that it implies $X$ is reflexive.

So, we want to show that the natural isometric linear injection $J_X:X \to X^{**}$, $J_X(x)(\varphi)=\varphi(x)$, is surjective.

That is, given $\Psi \in X^{**}$ there is $x \in X$ such that for every $\varphi \in X^*$ we have $$\Psi(\varphi)=\varphi(x).$$

These are some ideas I had but I didn't know how to use them:

Define $E:=null(\Psi) \subset X^*$. I think we should consider the case $E\neq X^*$ and then pick the $x\neq 0$ in $$\bigcap_{f \in E}null(f),$$ that is, $x \in (^0E)$.

I also know that we have a natural isometric isomorphism $\Phi: X^*/E \to (^0E)^*$, $\Phi([f])=f|_{(^0E)}$.

I appreciate any ideas or comments!

Show that $\{x \mid A_0 + \sum x_i A_i \succcurlyeq 0 \}$ is convex

Posted: 01 Feb 2022 12:11 AM PST

Let $A_0, A_1,\dots,A_m$ be symmetric matrices. Let $x \in \mathbb R^m$ and define $$A(x) := A_0 + \sum_{i=1}^m x_i A_i$$ Show that the set $C := \{x \mid A(x) \text{ is positive semidefinite} \}$ is convex.


For a set $C \subseteq \mathbb R^n$, I know of a few ways to show that it is convex:

  1. Show that $\lambda x_1 + (1-\lambda)x_2 \in C$ for all $x_1, x_2 \in C$ and $\lambda \in [0, 1]$.

  2. Show that $C$ is an intersection of convex sets (for example halfspaces).

This is easy to show using the first method, but I am struggling to show that the set is convex using the second method.

My question

The second method above uses the "outer construction" of the set which I am not comfortable with. Is there some trick to applying this method? How could I show that my set $C$ is convex using this method?

Last, are there other methods for showing a set is convex other than the two I have listed above? (I know with additional assumptions it might be easier, but I am thinking about the general case)
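To illustrate the halfspace view with a toy example (matrices of my own choosing, not from the post): for each fixed $z$, the map $x \mapsto z^\top A(x) z$ is affine in $x$, so $\{x : z^\top A(x)z \ge 0\}$ is a halfspace, and $C$ is the intersection of these over all $z$:

```python
import math

# A(x) = A0 + x1*A1 + x2*A2 with sample symmetric 2x2 matrices.
A0 = [[2.0, 0.0], [0.0, 2.0]]
A1 = [[1.0, 0.0], [0.0, -1.0]]
A2 = [[0.0, 1.0], [1.0, 0.0]]

def A(x):
    return [[A0[i][j] + x[0]*A1[i][j] + x[1]*A2[i][j] for j in range(2)] for i in range(2)]

def quad_form(M, z):
    return sum(z[i]*M[i][j]*z[j] for i in range(2) for j in range(2))

def min_eig(M):  # smallest eigenvalue of a symmetric 2x2, closed form
    tr, det = M[0][0] + M[1][1], M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return tr/2 - math.sqrt(max(tr*tr/4 - det, 0.0))

x1, x2 = (1.0, 0.5), (-0.8, 0.3)                  # two points of C for these matrices
assert min_eig(A(x1)) >= 0 and min_eig(A(x2)) >= 0
z = (0.6, -0.8)
for lam in [k/10 for k in range(11)]:
    xm = (lam*x1[0] + (1-lam)*x2[0], lam*x1[1] + (1-lam)*x2[1])
    assert min_eig(A(xm)) >= -1e-9                # the convex combination stays in C
    # membership in the halfspace {x : z^T A(x) z >= 0} is an affine condition in x:
    affine = quad_form(A0, z) + xm[0]*quad_form(A1, z) + xm[1]*quad_form(A2, z)
    assert abs(affine - quad_form(A(xm), z)) < 1e-9
```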

How to prove that $J$ is prime if and only if $f_1$ is a linear polynomial [closed]

Posted: 01 Feb 2022 12:18 AM PST

Let $f_1, \dots, f_n \in \mathbb{C}[x_1]$ be polynomials in one variable, with $n \in \mathbb{N}$. Suppose $f_1$ is not a constant polynomial and consider the ideal $J = \langle f_1(x_1), x_2 - f_2(x_1), \dots, x_n - f_n(x_1)\rangle \subseteq \mathbb{C}[x_1, \dots, x_n]$. Prove that $J$ is prime if and only if $f_1 \in \mathbb{C}[x_1]$ is a linear polynomial.

I'm very confused trying to understand and prove this; any guidance would be appreciated.

Asymptotic approximation for an integral appearing in the SIR epidemiology model

Posted: 31 Jan 2022 11:51 PM PST

I am looking for some advice on how to get a simple approximation to an integral that appears in the classical SIR epidemiological model.

One question that I keep pondering, given recent news items, is: given different configurations of infectiousness and population size, how long will it take from an initial state to reach the fabled herd immunity? E.g., if all but a small age group are perfectly vaccinated, how long will it take to reach herd immunity in that population with a given transmission rate? Does a population with greater $R_{0}$ always take longer to reach herd immunity? What if the population is smaller? Etc.

Following the terminology (and solution strategy) in the wiki page Compartmental models in epidemiology: The SIR model without vital dynamics, given an initial state of an epidemic as $(S(0),I(0), R(0))=(N(1-\delta),\delta, 0)$, the time to progress to the herd-immunity state where $S(\tau_{herd})/N=1/R_{0}$ (leading to $dI/dt=0$) is given by the integral: $$ \tau_{herd}=\int_{0}^{\ln(R_{0}(1-\delta))/R_{0}} \frac{\mathrm{d}x}{1-x-(1-\delta)\exp(-R_{0}x)} $$ In the limit $\delta\rightarrow 0$ the integral diverges, as is intuitive. However, finding an approximate form valid for small $\delta$ does not seem obvious to me, due to the mixture of linear and exponential forms in the denominator. Integration by parts does not seem fruitful, and methods such as Laplace's rely on exponential forms: $\lim_{\lambda\rightarrow \infty} \int \exp(\lambda f(x)) \mathrm{d}x$. It seems hard to force the $\delta$ parameter dependence into such an exponential. A clever substitution to get the above expression into a more tractable form is not obvious.

The only approach I could think of was to expand $\exp(-R_{0}x)\simeq 1-R_{0}x +R_{0}^{2}x^{2}/2$ which makes the integral elementary and gives the (awkward) form: $$ \tau_{herd}\simeq \frac{1}{R_{0}(1-\delta)}\frac{1}{ \sqrt{ \frac{ (1+R_{0}(1-\delta))^{2}}{R_{0}^{4}(1-\delta)^{2}} - \frac{2}{R_{0}^{2}}\frac{\delta}{1-\delta} } } \ln \left[ \frac { 1+ \frac{ \ln[R_{0}(1-\delta)]/R_{0} } { \frac{1+R_{0}(1-\delta)}{R_{0}^{2}(1-\delta)} - \sqrt{ \frac{ (1+R_{0}(1-\delta))^{2}}{R_{0}^{4}(1-\delta)^{2}} - \frac{2}{R_{0}^{2}}\frac{\delta}{1-\delta} } } } { 1+ \frac{ \ln[R_{0}(1-\delta)]/R_{0} } { \frac{1+R_{0}(1-\delta)}{R_{0}^{2}(1-\delta)} + \sqrt{ \frac{ (1+R_{0}(1-\delta))^{2}}{R_{0}^{4}(1-\delta)^{2}} - \frac{2}{R_{0}^{2}}\frac{\delta}{1-\delta} } } } \right] $$ (I gather this was the approach in the original 1920s literature). This expression matches numerical integration in an OK-ish manner for small $R_{0}$ and $\delta$ but I feel there is a better approximation here that is valid for larger ranges of $R_{0}$.

Is there a more systematic theory available to find expansions for forms such as $\tau_{herd}(\delta)$?
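For reference, a direct numerical evaluation of $\tau_{herd}$ (my own composite-Simpson sketch; for $\delta > 0$ the integrand is finite on the whole closed interval, since the denominator is $\delta$ at $x=0$ and strictly positive at the upper limit):

```python
import math

def tau_herd(R0, delta, steps=20000):
    """Composite Simpson evaluation of the herd-immunity time integral."""
    upper = math.log(R0 * (1 - delta)) / R0
    def g(x):
        return 1.0 / (1.0 - x - (1 - delta) * math.exp(-R0 * x))
    h = upper / steps                      # steps must be even for Simpson
    s = g(0.0) + g(upper)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * g(k * h)
    return s * h / 3

# The time grows as the seed delta shrinks, consistent with divergence at delta -> 0.
assert 0 < tau_herd(2.0, 1e-2) < tau_herd(2.0, 1e-4)
```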

Thanks!

Some inequalities about the numerical radius

Posted: 01 Feb 2022 12:01 AM PST

Let $T$ be a bounded linear operator on a Hilbert space $H$. The set $W(T) :=\{\langle Tx,x\rangle: ||x||= 1\}$ denotes the numerical range of $T$ and $w(T) := \sup_{\lambda \in W(T)} | \lambda |$ is the numerical radius of $T$. I have some problems to solve about these and I can't do them.

  1. $W(T)$ is compact: obviously it is bounded, but how can I prove it is closed?

  2. $w(T)\le ||T||\le 2w(T)$: obviously $w(T)\le ||T||$, but my problem is the second inequality.

  3. $w(T^n)\le w(T)^n$

Any hint will help.
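A finite-dimensional sanity check for inequality 2 (my own example, taking $H=\mathbb{C}^2$ and the shift $T=\begin{pmatrix}0&1\\0&0\end{pmatrix}$, for which $||T||=1$ and $w(T)=1/2$, so the constant $2$ cannot be improved):

```python
import math

# T = [[0,1],[0,0]]: Tx = (x2, 0), so <Tx, x> = x2 * conj(x1) and
# |<Tx,x>| = |x1||x2| <= 1/2 on the unit sphere, while ||T|| = 1.
best = 0.0
for k in range(1, 10000):
    t = math.pi / 2 * k / 10000          # x = (cos t, sin t), a real unit vector
    best = max(best, math.cos(t) * math.sin(t))
assert abs(best - 0.5) < 1e-6            # w(T) = 1/2 = ||T|| / 2
```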

Show the existence of a sorting function

Posted: 01 Feb 2022 12:15 AM PST

Let $(X,\leq)$ be a totally ordered set. A sort for $f\in X^n$ is an element $g\in X^n$ satisfying

(i) $g$ is nondecreasing.

(ii) $ g=f\circ \sigma$ for some permutation $\sigma:\{1,\dots,n\}\to \{1,\dots,n\}$.

A sorting function for $X^n$ is a function $\Phi:X^n\to X^n$ such that $\Phi(f)$ is a sort for $f$, for each $f\in X^n$.

I am asked to show that there exists a sorting function for $X^n$.

My attempt:

By induction on $n$. If $n=1$ this is trivial. Suppose there exists a sorting function $\Phi:X^n\to X^n$ for some $n\geq 1$.

Define the functions $\Gamma,\Delta:X^{n+1}\to X^{n+1}$ by $$\Gamma(f)=\bigg(\Phi[f|_{\{1,\dots,n\}}],f(n+1)\bigg)$$

$$\Delta(f)=f\circ \pi$$

where $\pi:\{1,\dots,n+1\}\to \{1,\dots,n+1\}$ is the permutation defined by

$$\pi(i)=\begin{cases} i & \text{if } \quad i<i_0 \\ n+1 & \text{if } \quad i=i_0 \\ i-1 & \text{if } \quad i>i_0 \end{cases}$$

where $i_0$ is the smallest element of the set $\big\{1\leq i\leq n:f(n+1)\leq f(i)\big\}$ if this set is nonempty, and $i_0:=n+1$ otherwise.

I claim that $\Phi':=\Delta\circ \Gamma$ is a sorting function for $X^{n+1}$. Since $\Phi[f|_{\{1,\dots,n\}}]=f|_{\{1,\dots,n\}}\circ \sigma$ for some permutation $\sigma:\{1,\dots,n\}\to \{1,\dots,n\}$ we see that $\Gamma(f)=f\circ \sigma'$ for some permutation $\sigma'$. Then

$$\Phi'(f):=(\Delta\circ \Gamma) (f)=f\circ \sigma'\circ\pi$$ for the permutation $\sigma'\circ\pi$. By considering the different cases with respect to $i_0$, we see that $i\leq j$ implies $\Phi'(f)(i)\leq \Phi'(f)(j)$. Hence $\Phi'(f)$ satisfies (i) and (ii) for all $f\in X^{n+1}$, and so $\Phi'$ is indeed a sorting function for $ X^{n+1}$.

Is this correct?

Thanks a lot for your help
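For what it's worth, the construction above is essentially insertion sort; a hypothetical Python rendering (my own code, with tuples standing for elements of $X^n$ and recursion for the induction):

```python
def sort_step(f):
    if len(f) == 1:
        return f                                       # base case n = 1
    g = sort_step(f[:-1]) + (f[-1],)                   # Gamma: sort the prefix, append last entry
    candidates = [i for i in range(len(g) - 1) if g[-1] <= g[i]]
    i0 = candidates[0] if candidates else len(g) - 1   # smallest i with f(n+1) <= f(i)
    return g[:i0] + (g[-1],) + g[i0:-1]                # Delta: compose with the permutation pi

print(sort_step((3, 1, 2, 1)))  # (1, 1, 2, 3)
```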

Can one define a matrix norm invariant under $SL(2,\mathbb{C})$?

Posted: 01 Feb 2022 12:24 AM PST

I'm used to working with the Frobenius norm of a matrix $A\in M_{2,2}(\mathbb{C})$ defined as \begin{equation} \left\|A \right\|_F := \sqrt{\operatorname{tr}(AA^\dagger)} \end{equation} which is convenient to work with as it is invariant under unitary transformations.

Can one define a matrix norm similar to this which is invariant under $SL(2,\mathbb{C})$ transformations instead of unitary transformations? In other words can one define a norm such that \begin{equation} \left\| A \right\|= \left\| A S \right\|= \left\|S A \right\| \end{equation} for any $S\in SL(2,\mathbb{C})$?

Equation of a cubic Bezier curve

Posted: 01 Feb 2022 12:24 AM PST

Quadratic Bezier curve

For a quadratic Bezier Curve defined by points $A, B, C$, with point $M$ on the curve interpolated by $i$,

When points $A, M, C$ and angle $\alpha$ are given, $i$ and $B$ are:

$$i^2 = \frac{(y_A-y_M)\sin\alpha-(x_A-x_M)\cos\alpha}{(y_A-y_C)\sin\alpha-(x_A-x_C)\cos\alpha}$$ $$x_B=\frac{x_A(1-i)^2+x_Ci^2-x_M}{2i(i-1)}$$ $$y_B=\frac{y_A(1-i)^2+y_Ci^2-y_M}{2i(i-1)}$$

for a full working out: $$x_D = x_A+i(x_B-x_A)$$ $$x_E = x_B+i(x_C-x_B)$$ $$x_M = x_D+i(x_E-x_D)$$

$$x_M = x_A(1-i)^2+2ix_B(1-i)+i^2x_C$$ $$x_B=\frac{x_A(1-i)^2+i^2x_C-x_M}{-2i(1-i)} = AB\sin\alpha+x_A$$ $$y_B=\frac{y_A(1-i)^2+i^2y_C-y_M}{-2i(1-i)} = AB\cos\alpha+y_A$$ $$AB = \frac{i^2(x_A-x_C)+x_A-x_M}{2i\sin\alpha(1-i)}= \frac{i^2(y_A-y_C)+y_A-y_M}{2i\cos\alpha(1-i)}$$

hence the three equations above.
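A quick check (my own helper functions, not from the post) that the de Casteljau steps above reproduce the Bernstein form $x_M = x_A(1-i)^2 + 2i\,x_B(1-i) + i^2 x_C$:

```python
def de_casteljau(xA, xB, xC, i):
    # the three interpolation steps from the working above
    xD = xA + i * (xB - xA)
    xE = xB + i * (xC - xB)
    return xD + i * (xE - xD)            # x_M

def bernstein(xA, xB, xC, i):
    return xA * (1 - i)**2 + 2 * i * xB * (1 - i) + i**2 * xC

for i in (0.0, 0.25, 0.5, 0.9, 1.0):
    assert abs(de_casteljau(1.0, 4.0, 2.0, i) - bernstein(1.0, 4.0, 2.0, i)) < 1e-12
```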

Cubic Bezier curve

Now consider a cubic Bezier curve defined by points $A, B, C, D$

$1)$ With given points $A, D$, $M$ interpolated by $i$, and angles $\alpha, \beta$, can $B, C$ and $i$ be derived?

$2)$ If not, with given points $A, D$, $M$ (interpolated by $i$), $N$ (interpolated by $j$), and angles $\alpha, \beta$, what are the equations for $B, C$, $i$ and $j$?

Multiple distinct objects inserted together into distinct bins

Posted: 01 Feb 2022 12:05 AM PST

Suppose there are $9$ distinct apples, $7$ distinct bananas and $6$ identical bins. How many possible configurations are there in which exactly one bin holds 5 apples, exactly one bin holds 3 bananas, and the remaining 4 bins each hold 1 apple and 1 banana (regardless of the bin order)? Items can be inserted into the bins in any combination (for example, the bin holding 5 apples can have inside it apples number $1, 2, 4, 7, 9$; the bin holding 3 bananas can have inside it bananas number $1, 5, 6$).

Are there any generalization formulas for this type of problem?
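One plausible count under my reading of the setup (identical bins, so the four apple-banana pairs form an unordered matching; this is my interpretation, not an official answer):

```python
from math import comb, factorial

# Choose the 5 apples for the apple-only bin, the 3 bananas for the banana-only bin,
# then match the remaining 4 apples with the 4 bananas; since the bins are identical,
# each matching is an unordered set of pairs, counted once by 4!.
count = comb(9, 5) * comb(7, 3) * factorial(4)
print(count)  # 126 * 35 * 24 = 105840
```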

Determining all possible values of $n$ in terms of $x$ for the following tree.

Posted: 01 Feb 2022 12:28 AM PST

Let $T$ be a max-heap holding the integers $1$ to $n$ as its nodes, without duplicates. Let $x$ be a child of the root. What are the possible values that $n$ could take, in terms of $x$?

Clearly $x$ is not the maximum, so $n$ is greater than $x$. But I'm struggling to find a more concrete description of all the possible values $n$ could take in terms of $x$.

Deducing the adjoint of an operator from its effect on a linear combination [closed]

Posted: 31 Jan 2022 11:45 PM PST

The idea for this task comes from $\text{Linear Algebra Done Right (3rd edition)}$.

Suppose $V$ is an inner product space, $\text{dim } V \geq 2$ and $S \in \mathcal{L}(V)$. Furthermore, $(e_1, \dots , e_n)$ is an orthonormal basis of $V$. Now $S$ is defined by

$$ S(a_1e_1 + \dots + a_ne_n) = a_2e_1 - a_1e_2$$

From this knowledge, how would I derive the adjoint $S^\ast$?

$\textbf{Idea:}$ Write $S$ as a matrix and then take the conjugate transpose to obtain $S^\ast$. However, I'm unsure how to translate $S \in \mathcal{L}(V)$ into a matrix.
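A sketch of that idea (my own code, assuming $\dim V = 4$ and identifying the orthonormal basis with the standard basis of $\mathbb{C}^4$): the matrix of $S$ has column $j$ equal to $S e_j$, the matrix of $S^\ast$ is its conjugate transpose, and here that gives $S^\ast(a_1e_1+\dots+a_ne_n) = -a_2e_1 + a_1e_2$:

```python
n = 4
S = [[0j] * n for _ in range(n)]
S[0][1] = 1 + 0j     # S(e2) contributes +e1
S[1][0] = -1 + 0j    # S(e1) contributes -e2
adj = [[S[j][i].conjugate() for j in range(n)] for i in range(n)]   # conjugate transpose

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def inner(u, v):     # <u, v> = sum_i u_i * conj(v_i)
    return sum(a * b.conjugate() for a, b in zip(u, v))

# Defining property of the adjoint: <Sv, w> = <v, S* w>.
v = [1 + 2j, 3 - 1j, 0.5 + 0j, -2 + 1j]
w = [2 - 1j, 1 + 1j, -1 + 0j, 3j]
assert abs(inner(apply(S, v), w) - inner(v, apply(adj, w))) < 1e-12
print(apply(adj, [1, 2, 0, 0]))   # [(-2+0j), (1+0j), 0j, 0j]
```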

Collision probability by density integration

Posted: 01 Feb 2022 12:23 AM PST

$\newcommand{\icol}[1]{% inline column vector \left(\begin{smallmatrix}#1\end{smallmatrix}\right)% }$

Let's consider a road segment of length $l$ on which there is always a single car circulating at a constant speed $V_{car_1}$ (this means that cars are always spaced by the distance $l$). Now imagine that a second car is crossing the road at a constant speed $V_{car_2}$ with an intersection angle $\theta$.

Both cars are assumed to have a square shape of length $\lambda$

Scenario Description

The probability of a collision is approximated by the following formula: $$P_{collision} = \frac{2 \lambda}{l}\left(1+ \frac{|V_{car_1}-V_{car_2}\cos\theta|}{V_{car_2}\sin\theta}\right)$$

We have tried to model the same probability using a different method. We obtained the probability density function $f$ describing the occupancy of car 1: $f(x,y)$ gives the probability of the center of car 1 being at coordinates $(x , y)$.

Let's consider $C_2 = \icol{x_2\\y_2}$ the coordinates of the car 2, we have: $$ C_2(t) = \begin{pmatrix} x_{2_0}+V_2 cos(\theta) t\\ y_{2_0}+V_2 sin(\theta) t \end{pmatrix} $$

I thought that the probability of collision would be the following: $$ \int\limits_{0}^{+\infty} \int\limits_{y_2(t)-\lambda}^{y_2(t)+\lambda} \int\limits_{x_2(t)-\lambda}^{x_2(t)+\lambda} {f\left(x, y\right) \,dx\,dy\,dt} $$ But somehow the obtained results are different.

For example, considering both cars of size $\lambda=3$ running at $V_1=V_2=10\ m/s$, crossing at $\theta = 90^\circ$, and taking $l=100\ m$, gives: $$ \\ P_{collision\_method\_1} = 0.12 \\ P_{collision\_method\_2} = 0.036 $$

Where is my method wrong and what should I do to obtain the correct result with the method 2?

The code is available as a python notebook: https://colab.research.google.com/drive/1Ofd7zlYNtbmNjwAHp6dGIkG0sI7o7aNK?usp=sharing

Update:

When $\theta = 90^\circ$, we have: $$ P_{collision\_method\_1} = \frac{(V_1+V_2) V_2}{2V_1\lambda} P_{collision\_method\_2} $$ This does not work well for other intersection angles. My best guess is that a $\sin \theta$ or $\cos \theta$ should also appear in the scaling factor.

Structure of the group of points of order $m$ on an elliptic curve

Posted: 01 Feb 2022 12:14 AM PST

I am reading the book An Introduction to Mathematical Cryptography and in the chapter about Elliptic Curves and Cryptography there is the proposition below.

It is about the structure of the group $E(\mathbb{C})[m]$ of points of order $m$ and its counterpart for finite fields:

Proposition

That proposition seems quite unexpected and is not proven in the book. I can't figure out how we might have such a result.

Would it be possible to give the idea / intuition of a proof in the simplest terms?

If not, what would be a roadmap to be able to fully understand the reasons for this result?

How to prove irrotationality from Laplace's equation

Posted: 01 Feb 2022 12:03 AM PST

Prove that if a scalar field $\phi(x,y,z)$ satisfies Laplace's equation:

$\nabla^2 \phi = 0$

$\nabla \cdot \nabla \phi = 0$

then:

$\vec{v} = \nabla \phi$ is irrotational.


A vector field is irrotational when its curl is zero:

$\nabla \times \vec{v} = 0$
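In fact Laplace's equation is not even needed here: the curl of any gradient field vanishes, provided $\phi$ is twice continuously differentiable so that mixed partials commute. Componentwise, e.g. for the $x$-component:

$$(\nabla \times \nabla \phi)_x = \partial_y(\partial_z \phi) - \partial_z(\partial_y \phi) = \phi_{zy} - \phi_{yz} = 0 \quad \text{(Clairaut's theorem)},$$

and similarly for the $y$- and $z$-components, so $\nabla \times \vec{v} = \nabla \times \nabla \phi = 0$.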

For non-diagonalizable matrices, the dimension of centralizer can be different from $\sum\limits_{j=1}^k d_j^2$

Posted: 01 Feb 2022 12:06 AM PST

It is known that if a square matrix $A$ is diagonalizable then the subspace $$C(A)=\{X\in M_{n,n}; AX=XA\}$$ has the dimension $\sum\limits_{j=1}^k d_j^2$, where $d_j$ denotes the geometric multiplicity of the $j$-th eigenvalue. There are several posts on this site related to this fact.1

I suppose that this is no longer true if we do not assume that $A$ is diagonalizable. What are some counterexamples showing that this is no longer true?

1For example, Finding the dimension of $S = \{B \in M_n \,|\, AB = BA\}$, where $A$ is a diagonalizable matrix, To prove that the dimension of $V$ is $d_1^2 + \ldots + d_k^2$, Show $\dim(U) = d_1^2 + d_2^2 + \cdots + d_k^2$, where $U$ is the set of matrices that commute with a diagonalizable matrix $A$, Let $D$ be an $n \times n$ diagonal matrix whose distinct diagonal entries are $d_1,\ldots, d_k$, and where $d_i$ occurs exactly $n_i$ times., Hoffman Exercise, Linear Algebra, Dimension of centralizer of a diagonalizable matrix., The Dimension of Vector Space.

My motivation for asking this is that questions about commuting matrices, and specifically about this claim for diagonalizable matrices, are posted on this site quite often. So it might be useful to have somewhere on this site a counterexample showing that this is no longer true after omitting this condition.

I have also posted an answer with a counterexample which seems relatively simple to me. Naturally, it would still be interesting if somebody could provide other answers with different approaches or other useful insights.
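A standard counterexample can be checked mechanically (my own sketch, which only verifies membership, not the full dimension count): the nilpotent Jordan block $A=\begin{pmatrix}0&1\\0&0\end{pmatrix}$ has a single eigenvalue $0$ with geometric multiplicity $d_1=1$, so $\sum_j d_j^2 = 1$, yet the matrices commuting with $A$ are exactly those of the form $\begin{pmatrix}a&b\\0&a\end{pmatrix}$, a $2$-dimensional space.

```python
def commutes(A, X):
    AX = [[sum(A[i][k] * X[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    XA = [[sum(X[i][k] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    return AX == XA

A = [[0, 1], [0, 0]]
basis = ([[1, 0], [0, 1]], [[0, 1], [0, 0]])   # spans {[[a, b], [0, a]]}
assert all(commutes(A, X) for X in basis)       # both basis elements commute with A
assert not commutes(A, [[1, 0], [0, 0]])        # a matrix outside that 2-dim space
```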
