Thursday, May 19, 2022

Recent Questions - Mathematics Stack Exchange



I have similar triangles sharing a common point, with common tangents. Some values are given.

Posted: 19 May 2022 04:23 AM PDT

From the following, I can come up with a host of equations, given the triangles are similar, but I am struggling to solve 3 unknowns in 3 equations, with the intention of ultimately finding x.
Textbook excerpt.

This is a textbook question, in a section about similar triangles.

Let the circle centred at $B$ have radius $r_2$, and the circle centred at $E$ have radius $r_1$.

$\triangle ABC \sim \triangle AED \sim \triangle EBF$ (AAA)

  1. $\frac{AE}{AD}=\frac{AB}{AC}=\frac{EB}{EF}$

$\frac{x+r_1}{5} = \frac{x+2r_1+r_2}{5+12.8}= \frac{r_1+r_2}{12.8}$

  2. Pythagoras on $\triangle$ABC: $(x+2r_1+r_2)^2=(5+12.8)^2+(r_2)^2$
  3. Pythagoras on $\triangle$AED: $(x+r_1)^2=(5)^2+(r_1)^2$
  4. Pythagoras on $\triangle$EBF: $(r_1+r_2)^2=(12.8)^2+(r_2-r_1)^2$

And this is the part where it gets tricky for me. When I try to solve 3 unknowns in 3 equations, I start to get confused about where to start.

I'll show you what I would try, and if you could point me in the right direction, that will be helpful.

From 4): $(r_1+r_2)^2=(12.8)^2+(r_2-r_1)^2$
$r_1^2+2r_1r_2+r_2^2=12.8^2+r_2^2-2r_1r_2+r_1^2$
$4r_1r_2=12.8^2$
$r_1=\frac{40.96}{r_2}$

I get to this point and am not sure if I'm heading in the right direction. My textbook uses the elimination method to solve for 3 unknowns, but I can't identify a way to eliminate any of these equations, and I've tried the substitution method, but it ends up messy and incorrect.

Which finite-field square matrices are zero as quadratic form?

Posted: 19 May 2022 04:19 AM PDT

Consider an $\mathbb{F}_p$-valued $n\times n$ matrix $A$. For which $A$ do we have $$x^T A x=0$$ for all $x\in (\mathbb{F}_p)^n$? Obviously, this is the case if $A$ is anti-symmetric, since then $A=B-B^T$ for some $B$. Is this an iff?

I think for $\mathbb{R}$ instead of $\mathbb{F}_p$, this is true since matrices of the form $xx^T$ span the whole space of symmetric matrices. Does that still hold for $\mathbb{F}_p$?
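For small $p$ and $n$ the "iff" can at least be tested exhaustively. A brute-force sketch of my own (here $p=3$, $n=2$; the helper names are mine):

```python
from itertools import product

p, n = 3, 2  # small case; exhaustive check

def vanishes(A):
    """True if x^T A x == 0 (mod p) for every x in (F_p)^n."""
    return all(
        sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n)) % p == 0
        for x in product(range(p), repeat=n)
    )

def antisymmetric(A):
    """True if A + A^T == 0 (mod p)."""
    return all((A[i][j] + A[j][i]) % p == 0 for i in range(n) for j in range(n))

matrices = [((a, b), (c, d)) for a, b, c, d in product(range(p), repeat=4)]
van = {A for A in matrices if vanishes(A)}
anti = {A for A in matrices if antisymmetric(A)}
print(van == anti)  # for this odd p the two sets coincide
```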

Linear maps, linear applications, linear algebra

Posted: 19 May 2022 04:13 AM PDT

Exercise image: https://i.stack.imgur.com/YYNu8.jpg

Hi there, I'm trying to solve this exercise on linear maps; my result is similar to the book's one, but I cannot see where the mistake is. Thank you.

Clarification about field extension and its degree

Posted: 19 May 2022 04:11 AM PDT

I know there are some posts about this, but I'm still confused regarding this specific question. It is said that the dimension of any field extension $\mathbb{Q}(w)$ is the degree of the irreducible polynomial of $w$.

Now for $w=\sqrt{5+\sqrt{2}}$, the irreducible polynomial is $$p(x)=x^4-10x^2+23$$ which has roots $$\pm \sqrt{5+\sqrt{2}}, \pm \sqrt{5-\sqrt{2}}$$ and evidently only $\sqrt{5+\sqrt{2}}, \sqrt{5-\sqrt{2}}$ are linearly independent. Thus as a first guess a basis could be $$\left\{1,\sqrt{5+\sqrt{2}}, \sqrt{5-\sqrt{2}}\right\} \,,$$ which has only 3 elements however.

Since we must have $$\sqrt{5+\sqrt{2}}^2 = 5+\sqrt{2} \in \mathbb{Q}(w) \, $$ we can pick $\sqrt{2}$ as another basis vector. Additionally $$\sqrt{5+\sqrt{2}} \cdot \sqrt{5-\sqrt{2}}=\sqrt{23} \in \mathbb{Q}(w) \, ,$$ so $\sqrt{23}$ is another basis vector? But that would mean I have 5 basis vectors and not 4? What am I thinking wrong?
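A quick sanity check on the degree (my own sketch with sympy; it does not resolve the basis question, but it confirms the minimal polynomial and shows that $\sqrt{2}$ already lies in the span of $\{1, w, w^2, w^3\}$ via $\sqrt{2} = w^2 - 5$):

```python
from sympy import sqrt, symbols, simplify, minimal_polynomial

x = symbols('x')
w = sqrt(5 + sqrt(2))

# Minimal polynomial of w over Q: its degree equals [Q(w):Q].
p = minimal_polynomial(w, x)
print(p)  # x**4 - 10*x**2 + 23

# sqrt(2) is a polynomial in w, so it costs no extra basis vector.
print(simplify(w**2 - 5))  # sqrt(2)
```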

Prove that $x \equiv 257 \pmod{1000}$ (I think the book is wrong)

Posted: 19 May 2022 04:20 AM PDT

If you know that $x \equiv 1 \pmod 8$ and $x \equiv 7 \pmod{125}$,

prove that $x \equiv 257 \pmod{1000}$.

The mistake in the book is that it did this:

$8x \equiv 56 \pmod{1000}$

$125x \equiv 125 \pmod{1000}$

Here it multiplied the first congruence by $16$, which resulted in:

$128x \equiv 896 \pmod{1000}$, then solved it and completed the proof. But that step should be impossible, because $16$ and $1000$ are not coprime. No?
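Whatever one thinks of the book's intermediate step, the claimed conclusion itself is easy to check by brute force (a quick sketch of mine):

```python
# The unique residue mod 1000 satisfying both congruences should be 257
# (uniqueness by CRT, since 8 and 125 are coprime and 8 * 125 = 1000).
sols = [x for x in range(1000) if x % 8 == 1 and x % 125 == 7]
print(sols)  # [257]
```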

What is a homothetic change of metric? How does it mean that using a homothetic change of metric we can set the volume equal to 1.

Posted: 19 May 2022 04:07 AM PDT

I'm reading Aubin's work on the Yamabe problem (the book Some Nonlinear Problems in Riemannian Geometry, page 150).

He wrote that by a homothetic change of metric we can set the volume equal to one. So henceforth, without loss of generality, we suppose the volume equal to one.

What does he mean? Can we set the volume to be 1 for any problem, or is there some condition? It seems that this can simplify many estimates.

Equivalent condition for $Tor_{n+1}^R(G,N)=0$

Posted: 19 May 2022 03:57 AM PDT

I am reading Relative Homological Algebra by Enochs. In the proof of Lemma 9.1.4, he says '$0 \to S \otimes N \to P_n \otimes N$ is exact since $Tor_{n+1}(G,N)=0$'. Besides, he says '$0 \to S \otimes M \to P_n \otimes M$ is exact and so $Tor_{n+1}(G,M)=0$'. How can I prove this? I know $$ Tor_{n+1}(G,M)=0 \Leftrightarrow Ker(d^{n+1}\otimes 1_M)=Im(d^{n+2}\otimes 1_M), $$ but I am stuck here. I think $S$ means $Ker(d^n)$ here; the so-called partial projective resolution is not a projective resolution. Thanks in advance.

Matrix powers up to multiplicative factor

Posted: 19 May 2022 03:55 AM PDT

Let $A$ be a real $n\times n$ matrix, $A_n = A^n$, and

$$ \bar A_n = \lbrace\alpha A_n, \alpha\in \mathbb{R}\rbrace.$$

I am interested in characterizing the behavior of $\bar A_n$ as $n\rightarrow \infty$. My initial lead is to start with a Jordan decomposition of $A$. But I guess this is a standard question that has already been studied, so I am looking for references. I have seen several documents addressing the convergence of the sequence $A_n$ to $0$ or $\infty$. Can someone point me to references for the 'projective version' of the problem?

Thank you.
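As a concrete illustration of what the "projective" behavior looks like (my own sketch; it assumes a simple dominant eigenvalue, in which case a normalized representative $A^n/\|A^n\|$ of $\bar A_n$ converges to a rank-one matrix):

```python
import numpy as np

# Symmetric example with positive entries, so the dominant eigenvalue is
# simple and positive; the normalized powers then settle to v1 v1^T.
A = np.array([[2.0, 1.0], [1.0, 1.0]])

def normalized_power(A, n):
    """A representative of the ray {alpha * A^n}, scaled to unit Frobenius norm."""
    P = np.linalg.matrix_power(A, n)
    return P / np.linalg.norm(P)

B20 = normalized_power(A, 20)
B21 = normalized_power(A, 21)
print(np.linalg.norm(B21 - B20))  # tiny: the direction has essentially converged
```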

Generalizing Ramanujan cubic denesting formula to higher powers

Posted: 19 May 2022 03:54 AM PDT

We have the following theorems for denesting radicals of degrees 2 and 3:

Denesting theorem for degree 2:

If $\alpha, \beta$ are the roots of the equation,

\begin{equation} x^2-ax+b = 0 \end{equation}

then

\begin{equation} \sqrt{\alpha} + \sqrt{\beta} = \sqrt{a + 2\sqrt{b}} \end{equation}

This theorem can easily be proved by using Vieta's formulas and squaring both sides. It tells us that such nested radicals can be denested when the discriminant is a perfect square ($a^2-4b = n^2$).

Denesting theorem for degree 3 (from Ramanujan):

If $\alpha, \beta,\gamma$ are the roots of the equation,

\begin{equation} x^3-ax^2+bx - 1 = 0 \end{equation}

then

\begin{align} \sqrt[3]{\alpha} + \sqrt[3]{\beta}+ \sqrt[3]{\gamma} &= \sqrt[3]{a + 6 + 3t} \\ \sqrt[3]{\alpha\beta}+\sqrt[3]{\beta\gamma}+\sqrt[3]{\gamma\alpha}&=\sqrt[3]{b+6+3t} \end{align}

where $t$ satisfies the equation

\begin{equation} t^3-3(a+b+3)t-(ab+6(a+b)+9)=0 \end{equation}

This theorem is proved well here, and allows us to produce many nice identities, such as $$\sqrt[3]{\cos\tfrac {2\pi}7}+\sqrt[3]{\cos\tfrac {4\pi}7}+\sqrt[3]{\cos\tfrac {8\pi}7}=\sqrt[3]{\tfrac 12\left(5-3\sqrt[3]7\right)}$$

Questions:

  1. How can such denesting theorems be generalized to higher degrees?
  2. What nice identities can we generate?

Let's take the case of degree 4. Let $\alpha, \beta,\gamma,\delta$ be the roots of the equation

\begin{equation} x^4-ax^3+bx^2 -cx + 1 = 0 \end{equation}

then

\begin{equation} \sqrt[4]{\alpha} + \sqrt[4]{\beta}+ \sqrt[4]{\gamma} + \sqrt[4]{\delta} = \sqrt[4]{a + 4t + 6u + 12v} \end{equation}

where $t,u,v$ are

\begin{align} t &= \sum_{perm} \sqrt[4]{\alpha}^3\sqrt[4]{\beta} \\ u &= \sum_{perm} \sqrt[4]{\alpha}^2\sqrt[4]{\beta}^2 \\ v &= \sum_{perm} \sqrt[4]{\alpha}^2\sqrt[4]{\beta}\sqrt[4]{\gamma} \end{align}

which we can find by raising the LHS to the power 4 and using the multinomial theorem. It would be very nice if $t,u,v$ were also to satisfy a polynomial equation of degree 4 (do they?). If so, then we would have a theorem similar to the cubic one and we could generate new identities.

However, verifying whether $t,u,v$ satisfy a quartic by raising them to the power 4 is algebraically heavy, and I have not been able to do it. I tried to use symmetric polynomials to simplify the computations.
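As a quick numerical confirmation of the quoted cubic identity (my own check; `cbrt` is a helper for the real signed cube root, needed because two of the cosines are negative):

```python
import math

def cbrt(x):
    """Real (signed) cube root."""
    return math.copysign(abs(x) ** (1.0 / 3.0), x)

lhs = sum(cbrt(math.cos(k * math.pi / 7)) for k in (2, 4, 8))
rhs = cbrt(0.5 * (5 - 3 * cbrt(7)))
print(abs(lhs - rhs))  # agrees to machine precision
```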

Product of ergodic system with rotation of cyclic group of finite order.

Posted: 19 May 2022 04:15 AM PDT

Let $(X,\mu,T)$ be an ergodic system and $(G,\nu,R)$ be a system with $G=\langle g\rangle$ a cyclic group of finite order, say $k\in \mathbb{N}$, $R(g^m)=g^{m+1}$ and $\nu(g^m)=\frac{1}{k}$, for $m\in \{0,1,...,k-1\}$. Isn't the system $(X\times G, \mu \times \nu, T\times R)$ then ergodic as well?

I can see that for any subset of $X\times G$ of the form $A\times B$, where $A$ is a subset of $X$ and $B$ a subset of $G$, we can only have $(T\times R)^{-1}(A\times B)=A\times B\ $ if $\mu\times \nu(A\times B)=0\ \text{or}\ 1$, but is that enough?

Convergence of $ \sum_{n=1}^{\infty} \frac{(1+nx)^n}{n!} , x>0$

Posted: 19 May 2022 04:04 AM PDT

I have been trying this question

$\frac{(1+nx)^n}{n!} > \frac{(nx)^n}{n!}$

Since $\sum_{n=1}^{\infty}\frac{(nx)^n}{n!}$ is divergent when $x \geq \frac{1}{e}$, it follows that $\sum_{n=1}^{\infty} \frac{(1+nx)^n}{n!}$ is divergent for those $x$.

I don't know how to prove that $\sum_{n=1}^{\infty} \frac{(1+nx)^n}{n!}$ is convergent for $0<x < \frac{1}{e}$.

Help me out. Thanks in advance
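A numerical observation that may help orient a proof (my own sketch): the ratio of consecutive terms appears to tend to $ex$, which is consistent with convergence for $x<\frac1e$ and divergence for $x>\frac1e$ via the ratio test.

```python
import math

# Ratio of consecutive terms t_n = (1+nx)^n / n!, computed in logarithms
# to avoid overflow (lgamma(n+1) = log n!).
def log_term(n, x):
    return n * math.log1p(n * x) - math.lgamma(n + 1)

x = 0.3  # a sample value below 1/e
n = 10**5
ratio = math.exp(log_term(n + 1, x) - log_term(n, x))
print(ratio, math.e * x)  # the two values are close
```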

Triple integral set up using cylindrical coordinates

Posted: 19 May 2022 03:51 AM PDT

Set up an integral in cylindrical coordinates to evaluate $\iiint_{E} x y d V$ where $E$ is the region enclosed by the cone $z=2-\sqrt{x^{2}+y^{2}}$, the cylinder $x^{2}+y^{2}=1$, and the $x y$ plane.

My try:

The region $E$ lies inside the cylinder, below the cone, and above the $xy$-plane.

So the triple integral in cylindrical coordinates is:

$$\int_{\theta=0}^{2 \pi} \int_{r=0}^{1} \int_{z=0}^{2-r}(r \cos \theta)(r \sin \theta) r d z d r d \theta$$

Is this set up correct?
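The setup can be sanity-checked symbolically (a sketch of mine with sympy; note the integrand $xy\cdot r = r^3\cos\theta\sin\theta$ integrates to $0$ over a full turn in $\theta$, so the value comes out $0$ regardless of the $z$ and $r$ bounds):

```python
from sympy import symbols, integrate, cos, sin, pi

r, theta, z = symbols('r theta z')

# Integrand x*y times the Jacobian r, with the bounds as set up above.
I = integrate(r**3 * cos(theta) * sin(theta),
              (z, 0, 2 - r), (r, 0, 1), (theta, 0, 2 * pi))
print(I)  # 0
```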

Sum over exponentiated bilinear form in finite-field vector space

Posted: 19 May 2022 03:48 AM PDT

Let $A$ be a linear map over the finite-field vector space $(F_2)^n$, i.e., an $F_2$-valued $n\times n$ matrix, not necessarily symmetric. I'm interested in the sum $$\sum_{X\in F_2^n} (-1)^{X^T A X}\;,$$ with (obviously) $(-1)^0=1\in \mathbb{Z}$, $(-1)^1=-1\in \mathbb{Z}$.

Is there a way to efficiently compute this sum? Can one say anything interesting about when this sum is going to be $0$?
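For small $n$ the sum can be computed by brute force, which is handy for testing conjectures (my own sketch; the helper name is mine):

```python
from itertools import product

def sign_sum(A):
    """Sum over X in (F_2)^n of (-1)^(X^T A X), by direct enumeration."""
    n = len(A)
    total = 0
    for X in product((0, 1), repeat=n):
        q = sum(X[i] * A[i][j] * X[j] for i in range(n) for j in range(n)) % 2
        total += -1 if q else 1
    return total

print(sign_sum([[0, 0], [0, 0]]))  # 4: the zero form gives all signs +1
print(sign_sum([[1, 0], [0, 1]]))  # 0
```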

Meaning of measurableness

Posted: 19 May 2022 03:46 AM PDT

I've done courses on measure theory, advanced stochastic processes, etc.--years and years of using the notion of 'measurableness'. I've now come to the conclusion that, in fact, I do not understand this notion.

Yes, I 'understand' the "mathematical" meaning (i.e., the definition), it's very simple: The inverse images of (Borel-)measurable sets need to be in the sigma algebra. In fact mathematically there isn't anything to understand, it's just a definition. Yet there's a difference between 'understanding' a definition mathematically, and really understanding the semantics on the deepest level (which I'd call the 'true' meaning).

Concretely, let's focus on probability spaces. Let $(\Omega, \mathcal{F}, P)$ be a probability space. Let $\mathcal G \subseteq \mathcal F$ be a sub-sigma-algebra.

(1) What is the meaning of "the information contained in $\mathcal G$"?

Let $X \colon \Omega \to \mathbb{R}$ be a function. Then $X$ is called a random variable iff $X$ is $\mathcal F$-measurable. This means that, for any Borel set $B$, we have that $(X \in B)$ lies in $\mathcal F$. The reason behind this definition is that, for any such Borel set $B$, we want to be able to compute the probability that $X \in B$ (therefore we must demand all such to be part of the sigma-algebra). Let's suppose that $\sigma(X) \subseteq \mathcal G$, so that $X$ is even measurable w.r.t. the smaller sigma algebra.

(2) How can the "information" in $\mathcal G$ determine the value of $X$?

$\mathcal G$ is some smaller sigma algebra. In which sense does the information in $\mathcal G$ determine the value of $X$? I don't see that at all. In fact, in which sense does the information in $\mathcal F$ determine the value of $X$!?

Despite years of thinking and searching for answers online (including numerous MSE posts), the following question is pertinent:

(3) What is the link between sigma algebras and information?

(NB: I know e.g. What does it mean by saying 'a random variable $\mathit X$ is $\mathcal G$-measurable'? and Problem with intuition regarding sigma algebras and information and many others are similar/a duplicate of this, but it didn't help me solve my problems--all questions there were left unanswered, completely.)

TL;DR: I do not understand the link between sigma algebras and "information". I would appreciate any help. Right now I'm totally stuck. If I can't clear this up I basically have to drop out of the course and find a different career path.

Euclidean projection on convex set of positive semidefinite matrices

Posted: 19 May 2022 03:52 AM PDT

Define the Euclidean projection for a convex set $C$ as follows

$$\pi_C(y) := \operatorname*{arg\,min}_{x \in C} \| y - x \|_2^2$$

How would we find the projection map when $C$ is the cone of positive semidefinite matrices, i.e., $\displaystyle C := \{ M : M \succeq 0 \}$?


I'm not really sure how to proceed, since I can't get a handle on how to think about how 'close' a non-positive-semidefinite matrix is to a positive semidefinite one.
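For the Frobenius norm, the projection is classically computed from an eigendecomposition: symmetrize the input if necessary, then clip the negative eigenvalues to zero. A sketch of mine with numpy:

```python
import numpy as np

def project_psd(Y):
    """Frobenius-norm projection of a symmetric matrix onto the PSD cone:
    diagonalize and clip the negative eigenvalues at zero."""
    w, V = np.linalg.eigh(Y)
    return (V * np.maximum(w, 0)) @ V.T

Y = np.array([[1.0, 0.0], [0.0, -1.0]])
print(project_psd(Y))  # [[1, 0], [0, 0]]
```

Intuitively, "closeness" to the cone is measured entirely by the negative part of the spectrum: the squared distance is $\sum_{\lambda_i<0}\lambda_i^2$.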

Induction to prove the following congruence? [duplicate]

Posted: 19 May 2022 04:13 AM PDT

I'm pretty much stuck and I'm obviously missing some hint to come up with a proof for the following problem:

Given the sequence

$$\ a_0 =1$$ $$\ a_n =4(a_{n-1}+1)$$

I want to show that $a_n\equiv 1\pmod 7 $ for all $n$

I calculated these:

$\ a_0 =1, a_1=8, a_2=36, a_3 =148$

The base case is clear: taking $n = 1$ we get $$\ a_1 = 4(a_{n-1}+1) = 4(a_{1-1}+1)=4(a_{0}+1)=4(1+1)=8\equiv 1 \pmod 7$$

Afterwards i continued with:

$n \to n+1$

$$a_{n+1} = 4(a_{n}+1)$$

We can replace $a_n$ with $4(a_{n-1}+1)$

$$a_{n+1} = 4(4(a_{n-1}+1)+1)$$ or $$a_{n+1} = 16(a_{n-1})+20$$

How should I proceed from here on? Is there another way to prove this problem? I see that it is recursive - I'm just kind of stuck.

Thanks in advance for any help :)
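A quick empirical check of the claim for the first few hundred terms (my own sketch; it also suggests that it is enough to track $a_n$ modulo $7$):

```python
# Verify a_n = 4*(a_{n-1} + 1) stays congruent to 1 mod 7.
a = 1  # a_0
for n in range(1, 300):
    a = 4 * (a + 1)
    assert a % 7 == 1
print("a_n % 7 == 1 holds for n = 0..299")
```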

Monotonicity of implicitly defined function

Posted: 19 May 2022 04:01 AM PDT

Let $f(x,y):\mathbb{R}^2\rightarrow\mathbb{R}$ and $g(y):\mathbb{R}^2\rightarrow\mathbb{R}$ be $C^2$-differentiable functions. Let $f(x,y)$ and $g(y)$ be strictly decreasing in $y$, and let $f(x,y)$ be also strictly decreasing in $x$. Now let $x(y)$ be implicitly defined by equation $$f(x,y)+g(y)=0.$$ How can I prove that $x(y)$ is strictly decreasing or increasing in $g(y)$? I already have that $x(y)$ is strictly decreasing in $y$ (via implicit differentiation), but I am stuck with the dependence on $g(y)$.

Combinatorial Laplacian for homology with $Z_2$ coefficients

Posted: 19 May 2022 04:24 AM PDT

Suppose I have boundary operators $\partial_1$, $\partial_2$ with $\partial_1 \circ \partial_2 = 0$. Then, if interested in $\text{ker}\,\partial_1 / \text{im}\,\partial_2$, one can study $\text{ker}\,(\partial_1^T\partial_1 + \partial_2\partial_2^T)$. However, this is only true if I can define a non-degenerate inner product in order to relate $(\text{im})^{\perp}$ and $\text{ker}$. But can I somehow generalise this if I have $Z_2$ coefficients?

How many different whole numbers are factors of number $2 \times 3 \times 5 \times 7 \times 11 \times 13$?

Posted: 19 May 2022 03:56 AM PDT

The question is:

How many different whole numbers are factors of number $2 \times 3 \times 5 \times 7 \times 11 \times 13$?

My answer to this question is $63$, but the right answer is $64$. I don't know why it is $64$; I need some assistance.
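A brute-force count can settle the total (my own sketch); note that $1$ and the number itself both count as factors.

```python
# Brute-force divisor count for N = 2*3*5*7*11*13 = 30030.
N = 2 * 3 * 5 * 7 * 11 * 13
count = sum(1 for d in range(1, N + 1) if N % d == 0)
print(count)  # 64: each of the six primes is either in or out of a factor, 2^6
```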

Prove that $\lim_{(x,y) \to (0,0)} \frac{xy(x-y)}{x^3 + y^3}$ does not exist

Posted: 19 May 2022 04:08 AM PDT

I am trying to prove that the following limit does not exist.

$$\lim\limits_{(x,y) \to (0,0)} \frac{xy(x-y)}{x^3 + y^3}$$

I have tried several paths such as:

  • $\operatorname{\gamma}(t) = (t,0)$

  • $\operatorname{\gamma}(t) = (0,t)$

  • $\operatorname{\gamma}(t) = (t,t)$

  • $\operatorname{\gamma}(t) = (t,t^2)$

  • $\operatorname{\gamma}(t) = (t^2,t)$

but along all these paths the limit equals $0$. I started to think that the limit is in fact $0$, and I spent quite a long time trying to prove it by the squeeze theorem; then I gave up, checked WolframAlpha, and learned that the limit does not exist. I do not know how to prove it.

Can someone, please:

  1. Show, for this particular limit, a path along which the limit differs from $0$?

  2. Explain the thought process I should apply to this kind of problem? How does one get the feeling that this limit does not exist after trying so many paths?
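One candidate path (my own suggestion, verified only numerically here): try curves approaching the line $y=-x$, where the denominator vanishes. Along $y = x^2 - x$, for instance, the numerator behaves like $-2t^3$ while the denominator $x^3+y^3 = t^3\left(1-(1-t)^3\right)$ behaves like $3t^4$, so the quotient blows up like $-\frac{2}{3t}$ as $t\to 0^+$:

```python
def f(x, y):
    return x * y * (x - y) / (x**3 + y**3)

# Evaluate along the path (t, t^2 - t); the values grow without bound.
for t in (1e-1, 1e-2, 1e-3):
    print(t, f(t, t**2 - t))
```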

How do I prove $F_*Z=(Z^i\circ F^{-1})\partial_i'$, where Z is a field, and $(F(U),x\circ F^{-1})$ a chart with coordinate fields $\partial_i'$?

Posted: 19 May 2022 04:02 AM PDT

I am teaching myself differential geometry on manifolds with some notes a professor gave me. As an initial calculation to prove that the Levi-Civita connection is invariant under isometries, the notes state the following result \eqref{push}, leaving it as an exercise, which I have been scratching my head over:

Let $F:M \to M'$ be a diffeomorphism and $Z$ be a vector field over $M$. The field $F_*Z$ over $M'$ is defined putting $(F_*Z)_{F(p)} = F_{*p}Z_p$. For example, if $(U,x)$ is a chart of $M$ we have that $x_* \partial_i = D_i$, partial derivative with respect to the variable. More in general, if the transformed chart $(F(U),x\circ F^{-1})$ of $M'$ has coordinate fields $\partial_i'$ we have that $F_*\partial_i=\partial_i'$ and \begin{equation}\label{push} \tag{$\sharp$} F_*Z=F_*(Z^i\partial_i)=(Z^i\circ F^{-1})\partial_i'. \end{equation}

Therefore $Z$ is differentiable, because $F_*Z$ is differentiable.

Note: $x_*$ is the pushforward map

My try:

I know that for a differentiable function $f$ between manifolds, the pushforward $f_*$ is supposed to map tangent vectors of one space to tangent vectors of the other, and it is defined as $(f_*\nu)h = \nu (h \circ f)$ when $\nu$ is a derivation and $h$ a smooth function. I don't know if that is useful, but it took me nowhere, so I was trying to do something like:

$(F_*Z)h =Z (h \circ F)=Z^i \partial_i (h \circ F) $

and then I was hoping to find "(the expression I want)$h$", so I have no idea where the $F^{-1}$ in \eqref{push} comes from.

Can someone please show me the proof of \eqref{push}? Please be as detailed as possible; I am new at manipulating the pushforward map.

My teacher said that this is not necessary in the line integral, but why?

Posted: 19 May 2022 03:47 AM PDT

Question: Calculate the scalar line integral $$\int_C \left(x\,dx - y\,dy\right)$$ where $C$ is the segment traversed in the direction from $(1,1)$ to $(2,3)$.

I started by solving this question by parameterizing the points.

Parameterization formula: $$(x,y)=B\,t+(1-t)\,A$$ $$(x,y)=(2,3)\,t+(1-t)\,(1,1)$$ $$(x,y)=(2t,3t)+(1-t,1-t)$$ $$(x,y)=(1+t,1+2t)$$ Our parameterization requires $$0 \le t \le 1$$

So our function $$r(t)=(1+t)\hat{i}+(1+2t)\hat{j}$$ $$r'(t)=(1,2)$$ $$ |r'(t) |=\sqrt{1^{2}+2^{2}}=\sqrt{5}$$

So our scalar line integral will be $$\int_C 𝑥\,𝑑𝑥 − 𝑦\,𝑑𝑦 =\int_0^1 [(1+t)\,1 - (1+2t)\,2 ]\,|r'(t) | dt$$ $$=\int_0^1 [1+t - 2-4t ]\,\sqrt{5}dt $$ $$=\sqrt{5} \int_0^1 -3t - 1 dt$$ $$\therefore\sqrt{5}\,(\frac{-5}{2})$$

The thing is, my professor said that it is not necessary to have put the

$$|r'(t) |$$

in the scalar integral formula, because that was another case, but he didn't explain... was he right?
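The professor's point can be made concrete (my own sketch): $\int_C x\,dx - y\,dy$ is the integral of a differential form, computed by direct substitution of the parameterization; the $|r'(t)|$ factor belongs to arc-length integrals $\int_C f\,ds$, which is "the other case".

```python
from sympy import symbols, integrate, Rational

t = symbols('t')

# x = 1+t, y = 1+2t, so dx = dt and dy = 2 dt; no |r'(t)| factor appears.
I = integrate((1 + t) * 1 - (1 + 2 * t) * 2, (t, 0, 1))
print(I)  # -5/2
```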

Derivative of the inverse of a symmetric matrix w.r.t itself

Posted: 19 May 2022 04:12 AM PDT

I'm trying to take the derivative of the inverse of a symmetric matrix $\mathbf{C}$ with respect to the matrix itself. $$ \begin{equation} \frac{\partial \mathbf{C}^{-1}}{\partial \mathbf{C}} \end{equation} $$

Using indicial notation, the above equation can be rewritten as follows $$ \begin{equation} \frac{\partial C_{ij}^{-1}}{\partial C_{kl}} \end{equation} $$

At first I've used the following formula, $$ \begin{equation} \frac{\partial C_{ij}^{-1}}{\partial C_{kl}} = -C^{-1}_{ik}C^{-1}_{lj} \end{equation} $$

But I quickly realized that we've lost the symmetry of the problem now.

I read The Matrix Cookbook and the other posts about the same problem but unfortunately, I couldn't understand the things they've done.

For example in this article, at Eq.(100) authors have used the property below when taking the derivative of Eq.(99) $$ \begin{equation} \frac{\partial \mathbf{C}^{-1}}{\partial \mathbf{C}} = -\mathbf{C}^{-1} \boxtimes \mathbf{C}^{-T} \mathbf{I}_s \end{equation} $$ Where $\boxtimes$ is the square product, $\mathbf{I}_s$ is the symmetric fourth-order identity tensor and they are defined as follows $$ \begin{align} (\mathbf{A} \boxtimes \mathbf{B})_{ijkl} &= \mathbf{A}_{ik}\mathbf{B}_{jl} \\ (\mathbf{I}_s)_{ijkl} &= \frac{1}{2}(\delta_{ik}\delta_{jl}+\delta_{il}\delta_{jk}) \end{align} $$

I couldn't understand how they achieved this result, or how I can derive it myself.
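One way to build confidence before tackling the symmetrized version is to verify the unconstrained formula numerically (my own sketch; it perturbs a single entry $C_{kl}$ independently, i.e., without enforcing the symmetry constraint):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.standard_normal((3, 3)) + 3 * np.eye(3)  # well-conditioned test matrix

# Finite-difference check of d(C^{-1})_ij / dC_kl = -(C^{-1})_ik (C^{-1})_lj,
# which follows from d(C^{-1}) = -C^{-1} dC C^{-1} with dC = h * E_kl.
k, l, h = 1, 2, 1e-6
E = np.zeros((3, 3))
E[k, l] = 1.0
fd = (np.linalg.inv(C + h * E) - np.linalg.inv(C)) / h
Cinv = np.linalg.inv(C)
analytic = -np.outer(Cinv[:, k], Cinv[l, :])
print(np.max(np.abs(fd - analytic)))  # small
```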

Conjecture about areas of circular segment and polygon with equal perimeter sharing a side

Posted: 19 May 2022 04:22 AM PDT

I was playing around with shapes and have formed a conjecture.

Segment and polygon

Length of the red circular arc $=$ total length of the $n$ green line segments

Conjecture: $$\sup{\left(\frac{\text{Area}_1}{\text{Area}_2}\right)}=\sqrt{1-\frac{1}{n^2}}$$

Is my conjecture true, and if so, is it a known thing?

I have proven the case $n=2$ algebraically, and (I think) I have verified the cases $n=3$ and $n=4$ using Desmos.

The case with $n=2$:

The ratio of the areas is maximized when the two green segments are the same length (this can be seen by considering the ellipse whose focal points are the ends of the black line, passing through the point where the two green segments meet).

I let the central angle of the arc be $2x$, and got
$$\frac{\text{Area}_1}{\text{Area}_2}=\frac{2(\sin{x})\sqrt{x^2-\sin^2{x}}}{2x-\sin{2x}}$$

Basic calculus shows that this function is decreasing in $0<x<\pi$, and the limit as $x\to0^+$ is indeed $\frac{\sqrt{3}}{2}$.

Applying a similar method to larger $n$ seems quite daunting.

(My background: high school math teacher.)
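The $n=2$ endpoint value is easy to confirm numerically (my own check of the displayed ratio against the conjectured supremum $\sqrt{1-\frac14}=\frac{\sqrt3}{2}$ as $x\to 0^+$):

```python
import math

# Ratio 2 sin(x) sqrt(x^2 - sin^2 x) / (2x - sin 2x) from the n = 2 case.
def ratio(x):
    return (2 * math.sin(x) * math.sqrt(x**2 - math.sin(x)**2)
            / (2 * x - math.sin(2 * x)))

print(ratio(0.01), math.sqrt(3) / 2)  # close for small x
```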

Eigenvectors & eigenvalues of "nearby" matrices

Posted: 19 May 2022 04:19 AM PDT

Suppose that I have a square matrix $A$ and another square matrix $B$ whose entries differ by at most $\varepsilon>0$. Is there any way to bound the differences in their eigenvalues and eigenvectors?

The idea is that if $\varepsilon$ is small, the differences shouldn't be huge.

Scale intersecting circles fixed at pivots so that they have only one point in common

Posted: 19 May 2022 04:20 AM PDT

Given two points, A and B, and two circles having 2 points in common, I1 and I2:

  • one circle at center C1, with radius r1, with the point A on to it
  • and another circle at center C2, with radius r2, with the point B on it.


Find the centers and radii of the two circles which still touch the points A and B in the same manner but have only one point in common, located "mid-way" between the original circles.

Edit: I'm not sure if the phrasing of the question is entirely correct; I'm also not sure if I'm missing some corner cases. Basically, the question revolves around detecting the collision of two circles "glued" to the points A and B. In the solution, the center of the first circle should lie on the line (A, C1), and the center of the second on (B, C2).

The Maclaurin series of $1-(1-\frac{x^2}{2} + \frac{x^4}{24})^{2/3}$ has all coefficients positive

Posted: 19 May 2022 04:04 AM PDT

It was shown in a previous post that the Maclaurin series of $1 - \cos^{2/3} x$ has positive coefficients. There, @Dr. Wolfgang Hintze noticed that the truncation $1- \frac{x^2}{2} + \frac{x^4}{24}$ can be substituted for $\cos x$ (this seems to be true for all the truncations). The proof is escaping me. Thank you for your attention!

$\bf{Added:}$ Thomas Laffey in this paper points to a proof of the fact that if $a_1$, $\ldots$, $a_n\ge 0$ then $\alpha = \frac{1}{n}$ makes the following series have positive coefficients:

$$1- (\prod_{i=1}^n (1- a_i x))^{\alpha}$$

Numerical testing suggests that $\alpha = \frac{\sum a_i^2}{(\sum a_i)^2} \ge \frac{1}{n}$ works as well ( see the case $n=2$ tested here). So in our case, instead of $\alpha = \frac{1}{2}$ we can take $\alpha = \frac{2}{3}$. Clearly, this would then be the optimal value. This would be a test case for $n=2$. The result for $\cos x$ used the special properties of the function ( solution of a certain differential equation of second order). Maybe $1- x/2 + x^2/24$ is as general as any quadratic with two positive (distinct) roots.
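The claim for the truncation can at least be tested to moderate order with a computer algebra system (my own sketch; interestingly, the $x^4$ coefficient comes out exactly $0$, so at low order "positive" should be read as "nonnegative"):

```python
from sympy import symbols, series, Rational

x = symbols('x')
f = 1 - (1 - x**2 * Rational(1, 2) + x**4 * Rational(1, 24)) ** Rational(2, 3)

# Exact Maclaurin coefficients up to x^11.
s = series(f, x, 0, 12).removeO()
coeffs = [s.coeff(x, k) for k in range(12)]
print(coeffs)  # the x^2 coefficient is 1/3; none of them is negative
```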

Non-linearity of Einstein's field equations

Posted: 19 May 2022 03:45 AM PDT

How does one show that, for two Schwarzschild metrics, the Ricci tensors of the two metric tensors do not sum linearly:

$R_{\mu\nu} (g_1+g_2) \neq R_{\mu\nu} (g_1)+R_{\mu\nu} (g_2)$

while

the Ricci tensor is given by $$ R_{\mu\nu} = \partial_\lambda \Gamma_{\mu\nu}^\lambda - \partial_\mu \Gamma^\lambda_{\nu\lambda}+ \Gamma^\lambda_{\lambda\tau}\Gamma^\tau_{\mu\nu} - \Gamma^\lambda_{\tau\mu}\Gamma^\tau_{\nu\lambda} $$ where $\partial_\mu \equiv \frac{\partial}{\partial x^\mu}$ and sum over repeated indices is implied ($a_\mu b^\mu \equiv \sum\limits_{\mu=0}^3 a_\mu b^\mu$).

The Christoffel symbols are further given by $$ \Gamma^\lambda_{\mu\nu} = \frac{1}{2} g^{\lambda \tau} ( \partial_\mu g_{\nu\tau} + \partial_\nu g_{\mu\tau} - \partial_\tau g_{\mu\nu} ) . $$ $g^{\mu\nu}$ is the metric inverse of $g_{\mu\nu}$, and the Schwarzschild-metric is given by $$ ds^2 = g_{\mu\nu}dx^\mu dx^\nu= - c^2(1-\frac{r_s}{r})dt^2 + \frac{1}{1-\frac{r_s}{r}} dr^2 + r^2(d\theta^2 + \sin^2\theta d\phi^2)$$

At first, I'm not sure how to write down $g_1$ and $g_2$ in detail. OK, $g_1$ may stay as-is (the metric of the Schwarzschild solution). But then $g_2$ uses the same coordinates $r$, $\theta$, $\phi$ as $g_1$. There is more than one case:

Case 1 (the simplest): At the beginning, there are two universes with one black hole, respectively. BH1 is happy in its universe (as the only thing in the universe), and BH2 emerges out-of-nothing (from another universe) at the same $r$, $\theta$, $\phi$.

Case 2: At the beginning, again, two universes with one black hole in each. Then BH2 emerges at an arbitrary point in the universe of BH1. In the end, the two are moving around the joint center of gravity: $R_{\mu\nu}(g_1+g_2)$. That's quite complicated, I suppose (but more realistic than case 1).

Supposedly it's sufficient, at least to start with, to calculate case 1, because that's easier.

The point is, I do not see why the two Ricci tensors should not just add up linearly. It's a complicated thing and I don't see it. Sorry for that, I'm still learning. If one could write down the calculation (or provide a link to where it is done) - I could hopefully see and understand.

To explain my confusion in a bit more detail: the Ricci tensor can be interpreted as the volume change due to curvature in comparison to flat spacetime. By adding an additional black hole (from another universe) to an existing one, shouldn't the volumes simply add up? You see, I am confused. A proper calculation of the difference between $R_{\mu\nu}(g_1 + g_2)$ and $R_{\mu\nu} (g_1)$ + $R_{\mu\nu} (g_2)$ would help tremendously.

Question on the distortion of a metric embedding and Lipschitz maps

Posted: 19 May 2022 03:59 AM PDT

This is a bit of mild confusion off of Matoušek's lecture notes on metric embeddings (available at https://kam.mff.cuni.cz/~matousek/ba-a4.pdf).

An injection between metric spaces $f : X \rightarrow Y$ is a $D$-embedding for some $D \geq 1$ if there is $r > 0$ so that:

$$rd_X(x,y) \leq d_Y(f(x),f(y)) \leq Drd_X(x,y)$$

We define $\text{distortion}(f) = \inf\{D\,|\,\text{$f$ is a $D$-embedding}\}$.

We also define the Lipschitz norm for $f$ Lipschitz: $$\|f\| = \sup \left\{\frac{d_Y(f(x),f(y))}{d_X(x,y)} : x \neq y \in X\right\}$$

which is the least $C$ for which $f$ is $C$-Lipschitz.

The notes claim the following:

If $f$ is a bi-Lipschitz bijection, then $\text{distortion}(f) = \|f\|\|f^{-1}\|$.

This is what I am confused about. Here is my reasoning so far.

If $f$ is a bi-Lipschitz bijection, then $\|f^{-1}\| = \|f\|^{-1}$. After all:

\begin{align*} \|f\|^{-1} &= \sup \left\{\frac{d_Y(f(x),f(y))}{d_X(x,y)} : x \neq y \in X\right\}^{-1}\\ &= \inf\left\{\frac{d_X(x,y)}{d_Y(f(x),f(y))} : x \neq y \in X \right\} \tag{since $\sup$ exists and is positive}\\ &= \inf\left\{\frac{d_X(f^{-1}(x'),f^{-1}(y'))}{d_Y(x',y')} : x' \neq y' \in Y \right\}\\ &= \|f^{-1}\| \end{align*}

Then in fact $\|f\|\|f^{-1}\| = 1$, so bi-Lipschitz bijective $f$ has distortion $1$. This surprises me and seems wrong. Can someone point out where I went astray?

Variance validation

Posted: 19 May 2022 04:00 AM PDT

The scores on a placement test given to college freshmen for the past five years are approximately normally distributed with a mean $μ = 74$ and a variance $σ^2 = 8$. Would you still consider $σ^2 = 8$ to be a valid value of the variance if a random sample of $20$ students who take the placement test this year obtain a value of $s^2 = 20$?

I think I'm supposed to use the F-distribution somehow in this problem; however, I've only been able to find out how to compare two sample variances, not how to compare a sample variance with a population variance. Any tips on how to solve this?
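For what it's worth, the standard tool here is the chi-square test for a single variance rather than the F-test (an assumption on my part about what the exercise intends): under normality, $(n-1)S^2/\sigma_0^2 \sim \chi^2_{n-1}$ under $H_0:\sigma^2=\sigma_0^2$. A sketch:

```python
# Chi-square statistic for testing a single variance (assumes normality).
n, s2, sigma2 = 20, 20.0, 8.0
stat = (n - 1) * s2 / sigma2
print(stat)  # 47.5
# Compared with chi-square critical values for 19 df (approximate, from
# tables: the 0.975 quantile is about 32.85), 47.5 sits far in the upper
# tail, which casts doubt on sigma^2 = 8.
```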
