Monday, February 14, 2022

Recent Questions - Mathematics Stack Exchange

$x^4+y^4+z^4 = 81$. What is the maximum length of a rod within this volume?

Posted: 14 Feb 2022 01:34 AM PST

If $x^4+y^4+z^4 = 81$, find the maximum length of a rod that can be placed inside this volume.

Relationship between Heegner numbers and Lucas numbers?

Posted: 14 Feb 2022 01:32 AM PST

This question was inspired by Relatives of Heegner numbers?.

While investigating Heegner numbers (i.e. $1, 2, 3, 7, 11, 19, 43, 67, 163$) in relation to Fibonacci and Lucas numbers I noticed the following formula:

$Lucas_{n} = Fibonacci_{n-1} + Fibonacci_{n+1}$

which provides a way to convert from Fibonacci numbers to Lucas numbers.

I haphazardly decided to use a similar equation $Heegner_{a-1} + Heegner_{a+1}$, which gave me the following:

4, 9, 14, 26, 54, 86, 206

I subtracted 7 from the above sequence and noticed that the values:

-3, 2, 7, 19, 47, 79, 199

were pretty close to a subset of the Lucas numbers (with some values skipped):

... −4, 3, −1, 2, 1, 3, 4, 7, 11, 18, 29, 47, 76, 123, 199 ...

Is there any relationship between Heegner numbers and Lucas numbers that might explain this? The example is a tad contrived, so it could be mathematical coincidence.

Meaning of the equation $\xi(\frac{1}{2}+it)=\bigl[e^{\Re\log\Gamma(s/2)}\,\pi^{-1/4}\,\frac{-t^2-1/4}{2}\bigr]\times\bigl[e^{i\Im\log\Gamma(s/2)}\,\pi^{-it/2}\,\zeta(\frac{1}{2}+it)\bigr]$

Posted: 14 Feb 2022 01:30 AM PST

I was reading a fiction book and encountered the equation $$\xi(\tfrac{1}{2}+it)=\left[e^{\Re\log\Gamma(s/2)}\,\pi^{-1/4}\,\frac{-t^{2}-\frac{1}{4}}{2}\right]\times\left[e^{i\Im\log\Gamma(s/2)}\,\pi^{-it/2}\,\zeta(\tfrac{1}{2}+it)\right],$$ where $s=\tfrac{1}{2}+it$. It has something to do with prime numbers. Please explain what it represents. Thanks in advance.

Using Parseval's Identity in finding summations

Posted: 14 Feb 2022 01:27 AM PST

Apply Parseval's identity to $$\frac{x}{2} = \sum_{n=1}^{\infty} \frac{(-1)^{n+1}}{n}\sin(nx), \qquad -\pi \le x \le \pi,$$

and use your result to find the value of the sum $\sum_{k=1}^{\infty} \frac{1}{(2k)^2}$.

I managed to obtain

$$\frac{\pi^2}{6} = \sum_{n=1}^{\infty} \frac{1}{n^2}$$

I am failing to see how to get the $(2k)^2$ required by the second part.
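
For the second part, a standard reduction (for reference): the even-index sum is just a rescaling of the full sum,

$$\sum_{k=1}^{\infty} \frac{1}{(2k)^2} = \frac{1}{4}\sum_{k=1}^{\infty}\frac{1}{k^2} = \frac{1}{4}\cdot\frac{\pi^2}{6} = \frac{\pi^2}{24}.$$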

For which $\alpha$ does $\int_E \frac{x^\alpha}{\sqrt{x^4+y^2}}$ converge?

Posted: 14 Feb 2022 01:15 AM PST

Here $E$ is the region inside the circle $x^2+(y-1)^2=1$ minus the region inside the second circle $x^2+(y-0.5)^2=0.5^2$.

How to compute the derivatives of multivariate Gaussian expectations with respect to mean and covariance using characteristic functions?

Posted: 14 Feb 2022 01:09 AM PST

Let $q(\boldsymbol{x}) = \mathcal{N}(\boldsymbol{x}; \boldsymbol{\mu}, \boldsymbol{\Sigma})$. I need to compute the derivatives of $\mathbb{E}_q[V(\boldsymbol{x})]$ with respect to $\boldsymbol{\mu}$ and $\boldsymbol{\Sigma}$. The expected results are $\nabla_{\boldsymbol{\mu}} \mathbb{E}_q[V(\boldsymbol{x})] = \mathbb{E}_q[\nabla_{\boldsymbol{x}} V(\boldsymbol{x})]$ and $\nabla_{\boldsymbol{\Sigma}} \mathbb{E}_q[V(\boldsymbol{x})] = \frac{1}{2} \mathbb{E}_q[\nabla_{\boldsymbol{x}} \nabla_{\boldsymbol{x}} V(\boldsymbol{x})]$, but I cannot figure out how to derive them.

The Appendix A in the paper (http://www0.cs.ucl.ac.uk/staff/c.archambeau/publ/neco_mo09_web.pdf) provides a solution using characteristic functions, but the intermediate steps are missing. Could anyone give me some help? Thanks in advance!:)
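
These two identities can at least be sanity-checked numerically. Below is a 1-D sketch in Python, not the paper's derivation: the test function $V(x)=x^4$, the parameter values, and the quadrature grid are all my own choices.

```python
import math

# 1-D check: with q = N(mu, sigma^2) and V(x) = x^4,
#   d/dmu E_q[V]       should equal E_q[V'],
#   d/d(sigma^2) E_q[V] should equal (1/2) E_q[V''].

def expect(f, mu, sigma, lo=-10.0, hi=10.0, steps=4000):
    # trapezoidal quadrature of f(x) * N(x; mu, sigma^2)
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        x = lo + k * h
        w = 0.5 if k in (0, steps) else 1.0
        pdf = math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))
        total += w * f(x) * pdf * h
    return total

V = lambda x: x**4
dV = lambda x: 4 * x**3        # V'
d2V = lambda x: 12 * x**2      # V''

mu, sigma = 0.3, 0.7
eps = 1e-5

# gradient w.r.t. the mean, by central finite differences
lhs_mu = (expect(V, mu + eps, sigma) - expect(V, mu - eps, sigma)) / (2 * eps)
rhs_mu = expect(dV, mu, sigma)

# gradient w.r.t. the variance sigma^2
var = sigma**2
lhs_var = (expect(V, mu, math.sqrt(var + eps)) - expect(V, mu, math.sqrt(var - eps))) / (2 * eps)
rhs_var = 0.5 * expect(d2V, mu, sigma)

print(lhs_mu, rhs_mu)    # both ≈ 4*mu**3 + 12*mu*sigma**2 = 1.872
print(lhs_var, rhs_var)  # both ≈ 6*(mu**2 + sigma**2) = 3.48
```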

Why isn't the range of the square root function all of $\mathbb{R}$?

Posted: 14 Feb 2022 01:21 AM PST

When we write $x^2=4$, it means $x=+2$ or $x=-2$.

So why is the range of the function $f(x) = \sqrt{x^2+4}$ equal to $[2,\infty)$ and not $(-\infty, -2] \cup [2,\infty)$?

Please give me a hint, where am I going wrong?
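
For reference, the convention at issue (a standard fact, not specific to this problem): the radical sign denotes the principal, i.e. nonnegative, square root,

$$\sqrt{t^2} = |t|, \qquad \sqrt{4} = 2 \ (\text{not } \pm 2).$$

Solving $x^2=4$ asks for all solutions of an equation, while $x \mapsto \sqrt{x^2+4}$ is a single-valued function whose outputs are nonnegative by definition, so its range is $[2,\infty)$.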

Which first-order predicate calculus formula expresses the following English statement?

Posted: 14 Feb 2022 01:16 AM PST

I have an exercise in discrete math. Please explain why "tigers and lions" converts to tiger(x) ∨ lion(x) and not tiger(x) ∧ lion(x). This image is my exercise: https://i.stack.imgur.com/94JwL.png

How did the 5 change to 0 (or something else) in this question?

Posted: 14 Feb 2022 01:22 AM PST

Question:-

Question Image 1

Question Image 2

My question is in image 2: I have circled the 5 and added an arrow to it. I want to ask how that 5 became zero afterwards. This is a question about homogeneous linear systems.

Prove that $T_2\circ T_1$ isn't injective, given their ranks

Posted: 14 Feb 2022 01:01 AM PST

Let $T_1 : U \rightarrow V$ and $T_2 : V\rightarrow W$ be linear maps such that $rkT_1 > rkT_2$. Prove that $T_2\circ T_1$ isn't injective.

I proved that $\operatorname{null} T_2>0$, and from that I can see $T_2$ isn't injective (I thought it was enough, but I was wrong). So how can I show how this impacts the nullity of $T_2\circ T_1$?

Thank you for the help.
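
One standard route, sketched via rank-nullity (using $\operatorname{rk}(T_2\circ T_1)\le\operatorname{rk}T_2$ and $\operatorname{rk}T_1\le\dim U$):

$$\operatorname{null}(T_2\circ T_1)=\dim U-\operatorname{rk}(T_2\circ T_1)\ \ge\ \dim U-\operatorname{rk}T_2\ >\ \dim U-\operatorname{rk}T_1\ \ge\ 0,$$

so $T_2\circ T_1$ has a nontrivial kernel and cannot be injective.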

Suppose $M$ and $N$ are smooth manifolds and $F:M\to N$ a smooth map. Assume that $dF_p:T_pM \to T_{F(p)}N$ is the zero map. Show that $F$ is constant

Posted: 14 Feb 2022 12:47 AM PST

Suppose $M$ and $N$ are smooth manifolds and $F:M\to N$ a smooth map. Assume that $dF_p:T_pM \to T_{F(p)}N$ is the zero map for each $p \in M$. Show that $F$ is constant.

Letting $(U,\varphi)$ be a chart on $M$ and $(V, \psi)$ be a chart on $N$, I have the coordinate representation $\hat{F}:=\psi \circ F \circ \varphi^{-1}:\varphi(U \cap F^{-1}(V)) \to \psi(V)$.

Lee then suggests that $dF_p\left( \frac{\partial}{\partial x^i} \bigg|_p\right)=\frac{\partial \hat{F}^j}{\partial x^i} (\hat{p})\frac{\partial}{\partial y^j} \bigg|_{F(p)}$ (summing over $j$), but if the differential is the zero map, then $$dF_p\left( \frac{\partial}{\partial x^i} \bigg|_p\right)=\frac{\partial \hat{F}^j}{\partial x^i} (\hat{p})\frac{\partial}{\partial y^j} \bigg|_{F(p)}=0.$$

How does this imply that $\hat{F}$ is constant? I'm considering an arbitrary basis vector under the differential, but what if I considered something other than a basis vector?
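
A sketch of the standard argument (note that $M$ must be assumed connected; otherwise $F$ is only locally constant): since the coefficients in the displayed formula are exactly the partial derivatives of the components of $\hat F$,

$$dF_p = 0 \ \Longrightarrow\ \frac{\partial \hat F^j}{\partial x^i}(\hat p) = 0 \ \text{ for all } i, j \ \Longrightarrow\ \hat F \text{ is constant on each connected chart domain},$$

so $F$ is locally constant, and a locally constant continuous map on a connected space is constant (the preimage of any value is both open and closed).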

Reasoning about the PDF of fair dice rolls from the definition $P(X \in A) = \int_\mathbb{R}1_Af_Xd\Lambda$

Posted: 14 Feb 2022 12:54 AM PST

Unfortunately, I haven't yet had the time to dig deeper into pure measure theory, so I get easily baffled by simple questions like these. For example, we know that if $X$ is an r.v. representing rolls of a fair six-sided die, then $P(X = z) = \frac{1}{6}$ for any $z \in \{1,2,\dots,6\}$. Given this information, can we then conclude that $X$ has a probability density function $f_X$ in the sense that for all Borel sets $A \subset \mathbb{R}$ it holds that $P(X \in A) = \int_{\mathbb{R}}1_Af_Xd\Lambda$, where $\Lambda$ is the Lebesgue measure? What confuses me in this setting is that the Lebesgue measure of any countable set is zero, so if, say, $A = \{1\}$, then $\int_{\mathbb{R}}1_Af_Xd\Lambda = \int_{\{1\}}f_Xd\Lambda = 0$, but $P(X = 1) = \frac{1}{6}$.

I am asking because one would usually be given a summation instead of an integral when the r.v. is discrete, yet in my reading material the density function is defined by the integral above with the Lebesgue measure.
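
For reference, the usual resolution (under the standard definitions): a discrete law admits no density with respect to Lebesgue measure, precisely because of the observation above; instead it has a density with respect to counting measure $\mu_{\#}$ on $\{1,\dots,6\}$:

$$P(X\in A)=\int_{\mathbb{R}}1_A f_X\, d\mu_{\#}, \qquad f_X\equiv\frac{1}{6},$$

and the integral against $\mu_{\#}$ is exactly the familiar sum. Against $\Lambda$ no such $f_X$ can exist, since $P(X=1)=\frac{1}{6}>0$ while $\Lambda(\{1\})=0$.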

Why are these statements about separability equivalent?

Posted: 14 Feb 2022 01:31 AM PST

I am reading something about separable extensions and passed by the following definitions:

(Separable degree) Let $E$ be an algebraic extension of a field $F$, and let $\sigma :F\rightarrow L$ be an embedding of $F$ in an algebraically closed field $L$. Let $S_\sigma$ be the set of extensions of $\sigma$ to an embedding of $E$ in $L$. We then define $$\operatorname{card}(S_\sigma):=[E:F]_S$$ and call it the separable degree of $E$ over $F$.

(Separable extension) Let $E$ be a finite extension of $k$. We shall say that $E$ is separable over $k$ if $$[E:k]_S=[E:k]$$

(Separable element) An element $\alpha$ algebraic over $k$ is said to be separable over $k$ if $k(\alpha)$ is separable over $k$. This is equivalent to saying that the minimal polynomial of $\alpha$ has no multiple roots.

I somehow don't see why we have the equivalence in the third definition, i.e. why the first statement is equivalent to the condition on the minimal polynomial.

Can someone maybe explain this to me?

Thanks for your help

How to find a general formula for the area of a regular $n$-point star inscribed in a circle of radius $r$ (picture below)?

Posted: 14 Feb 2022 12:52 AM PST

How to find a general formula for the area of a regular $n$-point star enclosed in a $r$-radius circle?

Example Figure

Does $(I-BB^\dagger)C=0$ hold for a symmetric positive semidefinite matrix $G = \begin{pmatrix} A & B \\ B^T &C \end{pmatrix}$ with $C \preceq B$?

Posted: 14 Feb 2022 12:56 AM PST

Given a symmetric positive semidefinite matrix $ G = \begin{pmatrix} A & B \\ B^T &C \end{pmatrix}$ where $A$, $B$ and $C$ are not invertible, and $C\preceq B$, I am wondering if the following equality holds: $$(I-BB^\dagger)C = 0 $$ or equivalently $$C=BB^\dagger C$$ where $B^\dagger$ is the Moore-Penrose pseudoinverse of $B$.

I already know from here (Theorem 4.3) that we have $(I-AA^\dagger)B=0$ and $(I-CC^\dagger)B^T=0$ (we have it even without $C \preceq B$).

Edit: user1551 proposed a direct counterexample to my proposition; I realized that it was missing an assumption from my application: $C \preceq B$.

How to integrate exponential $x^2$ with $1/x^2$?

Posted: 14 Feb 2022 01:29 AM PST

I am trying to calculate the integral $$\int_{-\infty}^{\infty} \frac { e^{-ax^2}}{x^2+\frac{1}{2a}}dx.$$ I have no idea how to start, and I have tried Wolfram Alpha and similar sites; the calculation time exceeds the usual limit.

I am thinking of differentiating the exponential and integrating the $1/(\cdot)$ term, but that would introduce logarithms and make things more difficult; if instead I integrate the exponential, I would get a $1/(\cdot)^2$ term in the next step, which would again be difficult. I really don't know how to proceed.

I think even a starting step on how to go about this would be a great help.
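
For what it's worth, this integral has a known closed form in terms of the complementary error function: $\int_{-\infty}^{\infty} \frac{e^{-ax^2}}{x^2+b^2}\,dx=\frac{\pi}{b}\,e^{ab^2}\operatorname{erfc}(b\sqrt{a})$, which with $b^2=\frac{1}{2a}$ gives $\pi\sqrt{2a}\,e^{1/2}\operatorname{erfc}(1/\sqrt{2})$. A quick numerical cross-check of that reference formula (the quadrature grid is my own choice):

```python
import math

# Compare direct quadrature against the closed form
#   integral = pi * sqrt(2a) * e^(1/2) * erfc(1/sqrt(2)).

def integral_quad(a, lo=-12.0, hi=12.0, steps=48_000):
    # trapezoidal rule; the integrand decays like e^(-a x^2)
    h = (hi - lo) / steps
    total = 0.0
    for k in range(steps + 1):
        x = lo + k * h
        w = 0.5 if k in (0, steps) else 1.0
        total += w * math.exp(-a * x * x) / (x * x + 1 / (2 * a)) * h
    return total

def integral_closed(a):
    return math.pi * math.sqrt(2 * a) * math.exp(0.5) * math.erfc(1 / math.sqrt(2))

a = 1.0
print(integral_quad(a), integral_closed(a))  # both ≈ 2.324 for a = 1
```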

Bounded derivative of diffeomorphism from implicit function theorem

Posted: 14 Feb 2022 01:20 AM PST

Let us define $f$ and a set of functions $\{g_n\}$ as follows: $$f:[0,T]\to\mathbb{R}\textrm{ s.t. } f(t)>0,\ \forall t \in[0,T]$$ $$\{g_n:\mathbb{R}\to\mathbb{R}^+\}\textrm{ a set of Lipschitz functions such that }\exists K>0 \textrm{ s.t. } \forall n, \forall t\in[0,T],\ 0 < g_n(t) \leq K <+\infty $$

Moreover, $\forall n$ and $\forall t\in [0,T]$, $f(t)$ and $g_n(t)$ are solutions of

$$\phi(f(t), g_n(t))=0$$ with $\phi\in C^{\infty}(\mathbb{R}\times\mathbb{R};\mathbb{R})$ and where, $\forall n, \forall t\in[0,T]$, $\frac{\partial \phi}{\partial x_1} >0$ and $\frac{\partial \phi}{\partial x_2} >0$.

From the implicit function theorem, $\forall n$, $\forall t\in[0,T]$ there exists a diffeomorphism $\varphi_{t,n}$ such that $$x = \varphi_{t,n}(y)$$ $\forall x \in \mathcal{V}(f(t))$ and $\forall y \in \mathcal{V}(g_n(t))$, where $\mathcal{V}(.)$ stands for a neighborhood.

With this setting, is $\varphi'_{t,n}$, the derivative of $\varphi_{t,n}$, bounded? That is, does the following hold: $$\forall n, \forall t\in[0,T], \quad \parallel \varphi'_{t,n}\parallel_\infty <+\infty\ ?$$

Solving for range of c values in Fixed Point Iteration

Posted: 14 Feb 2022 01:34 AM PST

For $x^3-2x^2-13x+30 = 0$ with root $r = 3$, I am supposed to add $cx$ to both sides of the equation and then divide by $c$ to obtain the fixed point equation $$g(x)=x$$ where $$g(x) = \frac{1}{c}x^3-\frac{2}{c}x^2-\frac{13-c}{c}x+\frac{30}{c}.$$ I was then told to find the values of $c$ such that the fixed point iteration is locally convergent to $r = 3$.

My idea is to differentiate $g(x)$ to get $g'(x) = \frac{3}{c}x^2-\frac{4}{c}x+\frac{c-13}{c}$. For fixed point iteration to converge locally, $|g'(x)| < 1$. However, when trying to solve $|g'(x)| < 1$, I realise both $x$ and $c$ are present. Am I supposed to use the discriminant $b^2 - 4ac = 0$ to solve this problem?
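
As a numerical illustration (a sketch; the choice $c=-4$ and the starting point are mine): the local-convergence criterion is $|g'(r)|<1$ evaluated at the root $r=3$, and here $g'(3) = \frac{27-12+c-13}{c} = 1+\frac{2}{c}$, so e.g. $c=-4$ gives $|g'(3)|=\frac12$ and the iteration contracts near $r$:

```python
# Fixed-point iteration x_{k+1} = g(x_k) for x^3 - 2x^2 - 13x + 30 = 0,
# rearranged (as in the question) by adding c*x to both sides:
#   g(x) = (x**3 - 2*x**2 + (c - 13)*x + 30) / c
# Local convergence at r = 3 needs |g'(3)| = |1 + 2/c| < 1; c = -4 gives 1/2.

def g(x, c):
    return (x**3 - 2 * x**2 + (c - 13) * x + 30) / c

def iterate(x0, c, n=60):
    x = x0
    for _ in range(n):
        x = g(x, c)
    return x

root = iterate(3.2, c=-4)
print(root)  # ≈ 3.0
```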

Calculate $\int_{-\infty}^{\infty}{\ \frac{dx}{\sqrt{1 + x^{4}}}\ x\ \sin [n\pi\phi(x)] \ }$; $\phi(x)\propto\int_{0}^{x}\frac{dy}{\sqrt{1+ y^{4}}} $

Posted: 14 Feb 2022 01:19 AM PST

Here's the integral: $$I(n) = \frac{1}{\alpha_{0}}\int_{-\infty}^{\infty}{\ \frac{dx}{\sqrt{1 + x^{4}}}\ \ x\ \ \sin \lbrack n\pi\phi(x) \rbrack \ }$$ where $n$ is an integer and $$\phi(x) \equiv \frac{1}{\alpha_{0}} \int_{0}^{x}\frac{dy}{\sqrt{1 + y^{4}}}\ \ \ \ ;\ \ \ \ \alpha_{0} \equiv \int_{0}^{\infty}\frac{dy}{\sqrt{1 + y^{4}}} = \frac{\Gamma\left( 1\text{/}4 \right)^{2}}{4\sqrt{\pi}} \ \ . $$ The integral converges because $\ \phi(x\to\infty)\to 1-O(1/x)\ \Rightarrow\ \sin( n\pi \phi )\to O(1/x)$.

It can be written more compactly as below, but that requires the inverse $x(\phi)$: $$\int_{- 1}^{1}{d\phi}\ \ x(\phi)\ \sin\lbrack n\pi \phi \rbrack$$ What is that $n$ dependence, exactly? Very curious to know. (Is there a large-$n$ expansion that is useful?)

(Also curious: what's the simplest derivation of the identity for $\alpha_{0}$ ?)

Probability that second of $20$ watches is broken given $2$ are broken and after discarding first based on $80\%$-correct expert advice?

Posted: 14 Feb 2022 01:06 AM PST

The problem goes this way: there are $20$ watches, $2$ of which are broken, and there is an expert who can say whether a watch is broken. But his verdict is only correct with probability $0.8$, in both cases: if the watch is broken and if it is not. So I choose one watch and show it to the expert, and he says: it's broken. I then choose another watch of these $20$. What is the probability that this second watch is broken? Thanks in advance.

PS. I don't really understand how this information can help, but let it be. My math background: a PhD, but that was 20 years ago, so I'm not an active mathematician anymore. The source of the problem: a student whom I know. What I tried: naturally, Bayes' theorem. But I still cannot solve it.

PPS. I found the solution. It goes without Bayes but uses the law of total probability. It's rather technical.
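
For reference, a small exact computation along the total-probability lines mentioned in the PPS. It assumes the flagged watch is set aside and the second watch is drawn uniformly from the remaining 19 (one reading of the problem):

```python
from fractions import Fraction

# Total-probability sketch, exact rational arithmetic.
p_broken = Fraction(2, 20)          # prior: the first pick is broken
p_say = Fraction(4, 5) * p_broken + Fraction(1, 5) * (1 - p_broken)
# posterior that the first watch is broken, given the expert says "broken"
p_b_given_say = Fraction(4, 5) * p_broken / p_say      # = 4/13

# second draw: 1 broken left among 19 if the first was broken, else 2 among 19
p_second = p_b_given_say * Fraction(1, 19) + (1 - p_b_given_say) * Fraction(2, 19)
print(p_second)  # 22/247
```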

Poisson Races: What is the Probability of a Tie? Is it monotone in the means?

Posted: 14 Feb 2022 01:13 AM PST

Consider a race between two Poisson distributed random variables. Conjecture: The race is more likely to be tied when the one behind gets faster.

If $N_1$ and $N_2$ are independent Poisson-distributed random variables with means $\mu_1$ and $\mu_2$, then $K=N_1-N_2$ follows the Skellam distribution. The probability that $N_1$ and $N_2$ are equal ("the race ends in a tie") is $p(K=0;\mu_1,\mu_2)$.

$\textbf{Claim.}$ $\;$ If $\,$ $0<\mu_1' < \mu_1'' < \mu_2-1,$ then $\,$ $p(0;\mu_1',\mu_2) < p(0;\mu_1'',\mu_2)$.

But: Is this claim true?
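
A quick numerical probe of the claim (not a proof), using the closed form $p(0;\mu_1,\mu_2)=e^{-(\mu_1+\mu_2)}I_0(2\sqrt{\mu_1\mu_2})$ for the Skellam pmf at zero, with a series evaluation of the modified Bessel function $I_0$ (pure standard library):

```python
import math

def bessel_i0(x, terms=60):
    # I0(x) = sum_{k>=0} (x/2)^(2k) / (k!)^2
    total, term = 1.0, 1.0
    for k in range(1, terms):
        term *= (x / 2) ** 2 / k**2
        total += term
    return total

def p_tie(mu1, mu2):
    # P(N1 - N2 = 0) for independent Poissons with means mu1, mu2
    return math.exp(-(mu1 + mu2)) * bessel_i0(2 * math.sqrt(mu1 * mu2))

# spot-check monotonicity in mu1 for mu1 < mu2 - 1, e.g. mu2 = 5
ties = [p_tie(mu1, 5.0) for mu1 in (0.5, 1.0, 2.0, 3.0)]
print(ties)
```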

How to get all possible rotations of the cube, represented by matrix with data?

Posted: 14 Feb 2022 01:17 AM PST

Let's say I have a cube consisting of 3x3x3 smaller cubes, like a Rubik's cube, but each smaller cube has a solid color (well, it's not a Rubik's cube because the rotation of individual elements doesn't matter, but let's call it a Rubik's cube anyway):

(image)

Each of the cubes has a number 0-26, starting from the bottom:

(image)

Let's say that you have a [0-26] matrix with all 27 colors.

So I've made a special pattern:

 2,  5,  8,  1,  4,  7,  0,  3,  6,  11, 14, 17, 10, 13, 16,  9, 12, 15,  20, 23, 26, 19, 22, 25, 18, 21, 24  

On each iteration, I take the color of the cube whose number is given by this pattern. For example, for cube #0 I take the first entry of the pattern, which is 2, so cube #0 gets the color of cube #2. This pattern thus applies the same transformation as rotating the Rubik's cube itself by 90 degrees. If I apply this operation 4 times, the Rubik's cube rotates 360 degrees and I get back to the starting point.

To make it clear, this is how it looks in Python:

map = [ 2,  5,  8,  1,  4,  7,  0,  3,  6,
       11, 14, 17, 10, 13, 16,  9, 12, 15,
       20, 23, 26, 19, 22, 25, 18, 21, 24]
for i in range(0, 27):
    blocks[i] = blocks_orig[map[i]]
blocks_orig = blocks[:]

So my question is: is it possible to make patterns that can rotate the Rubik's cube into every possible orientation (are there 24 of them)? If not, what is the minimum number of patterns I have to make? Or maybe you know an easier solution for this task?
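
A sketch in plain Python suggesting the answer is yes with just two patterns: assuming the flat index is $i = 9z + 3y + x$ (this choice reproduces the question's pattern exactly), closing two 90-degree generators under composition yields all 24 orientations:

```python
# Two 90-degree "patterns" (one per axis) generate every whole-cube orientation.

def flat(x, y, z):
    return 9 * z + 3 * y + x

coords = [(x, y, z) for z in range(3) for y in range(3) for x in range(3)]

def rot_z(p):          # 90-degree rotation of the whole cube about the z-axis
    x, y, z = p
    return (2 - y, x, z)

def rot_x(p):          # 90-degree rotation about the x-axis
    x, y, z = p
    return (x, 2 - z, y)

def perm_of(rot):
    # permutation in the question's convention: blocks[i] = blocks_orig[perm[i]]
    return tuple(flat(*rot(c)) for c in coords)

def compose(p, q):
    # apply p first, then q
    return tuple(p[q[i]] for i in range(27))

# closure of the two generators under composition (breadth-first)
gens = [perm_of(rot_z), perm_of(rot_x)]
group = set(gens)
frontier = list(gens)
while frontier:
    new = []
    for p in frontier:
        for h in gens:
            q = compose(p, h)
            if q not in group:
                group.add(q)
                new.append(q)
    frontier = new

print(len(group))  # 24 orientations, so two patterns suffice as generators
```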

Is this difference of surface integrals zero? $\oint_S\bar{\psi}\nabla(x\cdot\nabla\psi)\cdot n dS-\oint_S(x\cdot\nabla\psi)\nabla\bar{\psi}\cdot ndS$

Posted: 14 Feb 2022 12:47 AM PST

This is the follow-up question of When does this integral vanish, which appears in the derivation of the quantum virial theorem? and building on this answer.

Does the following difference of surface integrals vanish, if all we know is $\nabla|\psi|^2\cdot\mathbf{n}=0$, i.e. the gradient vector field of $|\psi|^2=\bar\psi\psi$ is of zero local flux on the surface. $$ \oint_S\bar{\psi}\nabla\left(\mathbf{x}\cdot\nabla\psi\right)\cdot\mathbf{n}\mathrm{d}S-\oint_S(\mathbf{x}\cdot\nabla\psi)\nabla\bar{\psi}\cdot\mathbf{n}\mathrm{d}S $$ First of all, let us break $\nabla|\psi|^2$ apart: $$ \nabla|\psi|^2=\nabla(\bar\psi\psi)=\psi\nabla\bar\psi + \bar\psi\nabla\psi $$ This does not really help us, but for real $\psi$, we can continue. Since $\nabla|\psi|^2=2\psi\nabla\psi$, we know, that $\psi\nabla\psi\cdot\mathbf{n}=0$ at the surface of our zero-flux domains. This still does not really help, so let us assume, that $\nabla\psi\cdot\mathbf{n}=0$ (even if $\psi=0$). Finally, the second term disappears, which leaves us with: $$ =\oint_S\psi\nabla\left(\mathbf{x}\cdot\nabla\psi\right)\cdot\mathbf{n}\mathrm{d}S $$ Now, we can continue with the gradient of the scalar product. Here, $\nabla\mathbf{A}$ is the transposed of the Jacobian of $\mathbf{A}$. It does not really matter, that it is transposed, since the Jacobians should be symmetric here. $$ \begin{aligned} =&\oint_S\psi\nabla\left(\mathbf{x}\cdot\nabla\psi\right)\cdot\mathbf{n}\mathrm{d}S\\ =&\oint_S\psi\left[\left(\nabla\nabla\psi\right)\cdot\mathbf{x}\right]\cdot\mathbf{n}\mathrm{d}S +\oint_S\psi\left[\left(\nabla\mathbf{x}\right)\cdot\nabla\psi\right]\cdot\mathbf{n}\mathrm{d}S \end{aligned} $$ Now the Jacobi matrix of $\mathbf{x}$ is the unit matrix, so that: $$ \begin{aligned} =&\oint_S\psi\left[\left(\nabla\nabla\psi\right)\cdot\mathbf{x}\right]\cdot\mathbf{n}\mathrm{d}S +\oint_S\psi\nabla\psi\cdot\mathbf{n}\mathrm{d}S\\ =&\oint_S\psi\left[\left(\nabla\nabla\psi\right)\cdot\mathbf{x}\right]\cdot\mathbf{n}\mathrm{d}S \end{aligned} $$ From here, I don't really know how to continue. I can write everything as sums again, but that does not help:

$$ =\oint_S\psi\sum_jn_j\sum_ix_i \frac{\partial^2\psi}{\partial x_i\partial x_j}\mathrm{d}S $$

Can the initial difference be shown to be zero for the given conditions for a) any function, b) a real function?

PS: the function should be in $L^2$, i.e. it should be square integrable and vanish for any $x_i\rightarrow\pm\infty$. It should be continuous and almost everywhere differentiable.

Show that there is an isometry $f:\mathbb{R}^n\to \mathbb{R}^n$ such that $f|_X$ is $g$ where $g :X \to X$ is isometric on subset $X$.

Posted: 14 Feb 2022 01:25 AM PST

Here's the problem.

If $X \subset \mathbb{R}^n$ is any subset and $g:X\to X$ is an isometric map, show that there is a unique isometry $f:\mathbb{R}^n\to \mathbb{R}^n$ such that $f|_X$ is $g$.

I wrote my answer as follows.

Since $g$ is an isometry, $g(x)=Ax+a$ for some $A \in \text{O}(n)$ and some $a \in \mathbb{R}^n$.

Define $f: \mathbb{R}^n\to \mathbb{R}^n$ s.t $f(y)=Ay+a$ for all $y \in \mathbb{R}^n$.

Then, it is clear to see that $f$ is an isometry because $A \in \text{O}(n)$ and $a \in \mathbb{R}^n$.

Also, clearly $f|_X=g$.

Is this the right approach? I have some doubts because I am not sure if I could let $g(x)=Ax+a$ (which we usually do when $g : \mathbb{R}^n\to \mathbb{R}^n$).

Theorems that might be useful:

  • An isometry $f : \mathbb{R}^n\to \mathbb{R}^n$ is uniquely determined by the images $fa_0,fa_1,...,fa_n$ of a set $a_0,a_1,...,a_n$ of $(n+1)$ independent points.

  • If $\{a_0,a_1,...,a_n\}$ and $\{b_0,b_1,...,b_n\}$ are two sets of independent points in $\mathbb{R}^n$, then there is an isometry $f:\mathbb{R}^n\to \mathbb{R}^n$ with $fa_i=b_i$ for $0 \leq i \leq n$

Seeking help to find a formula for $\int_{0}^{\pi} \frac{d x}{(a-\cos x)^{n}}$, where $a>1$

Posted: 14 Feb 2022 01:20 AM PST

When tackling the question, I found that for any $a>1$,

$$ I_1(a)=\int_{0}^{\pi} \frac{d x}{a-\cos x}=\frac{\pi}{\sqrt{a^{2}-1}}. $$ Then I started to think whether there is a formula for the integral

$$ I_n(a)=\int_{0}^{\pi} \frac{d x}{(a-\cos x)^{n}}, $$ where $n\in \mathbb{N}$.

After trying some substitutions and integration by parts, I still failed and had no idea how to reduce the power $n$. After two days, the Leibniz rule for higher derivatives came to my mind.

Differentiating $I_1(a)$ w.r.t. $a$ $(n-1)$ times yields

$$ \displaystyle \begin{array}{l} \displaystyle \int_{0}^{\pi} \frac{(-1)^{n-1}(n-1) !}{(a-\cos x)^{n}} d x=\frac{d^{n-1}}{d a^{n-1}}\left(\frac{\pi}{\sqrt{a^{2}-1}}\right) \\ \displaystyle \int_{0}^{\pi} \frac{d x}{(a-\cos x)^{n}}=\frac{(-1)^{n-1} \pi}{(n-1) !} \frac{d^{n-1}}{d a^{n-1}}\left(\frac{1}{\sqrt{a^{2}-1}}\right) \tag{*}\label{star} \end{array} $$

I am glad to see that the integration problem turns out to be merely a differentiation problem.

Now I am going to find the $(n-1)^{\text{th}}$ derivative by the Leibniz rule.

First of all, letting $y=\frac{1}{\sqrt{a^{2}-1}}$ and differentiating w.r.t. $a$ yields $$ \left(a^{2}-1\right) \frac{d y}{d a}+a y=0 \tag{1}\label{diffeq} $$ Differentiating \eqref{diffeq} w.r.t. $a$ another $(n-1)$ times by the Leibniz rule gives $$ \left(a^{2}-1\right) \frac{d^{n} y}{d a^{n}}+\binom{n-1}{1}(2 a) \frac{d^{n-1} y}{d a^{n-1}}+2\binom{n-1}{2} \frac{d^{n-2} y}{d a^{n-2}}+a \frac{d^{n-1} y}{d a^{n-1}}+(n-1) \frac{d^{n-2} y}{d a^{n-2}}=0 $$ Simplifying, $$ \left(a^{2}-1\right) y^{(n)}+(2 n-1) ay^{(n-1)}+(n-1)^{2} y^{(n-2)}=0 \tag{2}\label{diffrec} $$

Initially, we have $ \displaystyle y^{(0)}=\frac{1}{\sqrt{a^{2}-1}}$ and $ \displaystyle y^{(1)}=-\frac{a}{\left(a^{2}-1\right)^{\frac{3}{2}}}.$

By \eqref{diffrec}, we get $$ y^{(2)}=\frac{2 a^{2}+1}{\left(a^{2}-1\right)^{\frac{5}{2}}} $$ and $$ \displaystyle y^{(3)}=-\frac{3 a\left(2 a^{2}+3\right)}{\left(a^{2}-1\right)^{\frac{7}{2}}} $$ Plugging into \eqref{star} yields $$ \begin{aligned} \int_{0}^{\pi} \frac{d x}{(a-\cos x)^{3}} &=\frac{\pi}{2} y^{(2)}=\frac{\pi\left(2 a^{2}+1\right)}{2\left(a^{2}-1\right)^{\frac{5}{2}}} \\ \int_{0}^{\pi} \frac{d x}{(a-\cos x)^{4}} &=-\frac{\pi}{6}\, y^{(3)} =\frac{\pi a\left(2 a^{2}+3\right)}{2\left(a^{2}-1\right)^{\frac{7}{2}}} \end{aligned} $$ Theoretically, we can proceed to find $I_n(a)$ for any $n\in \mathbb{N}$ by the recurrence relation in $(2)$.

By Mathematical Induction, we can further prove that the formula is $$ \int_{0}^{\pi} \frac{d x}{(a-\cos x)^{n}}=\frac{\pi P(a)}{\left(a^{2}-1\right)^{\frac{2 n-1}{2}}} $$ for some polynomial $P(a)$ of degree $n-1$.

Last but not least, how to find the formula for $P(a)$? Would you help me?
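
For what it's worth, formula $(*)$ together with recurrence $(2)$ can be checked numerically (a sketch; the value $a=2$ and the quadrature are my own choices):

```python
import math

# Compute y^(n-1) for y = (a^2-1)^(-1/2) via recurrence (2), plug into (*),
# and compare against direct quadrature of the integral.

def I_quad(a, n, steps=100_000):
    # trapezoidal rule for the integral of 1/(a - cos x)^n over [0, pi]
    h = math.pi / steps
    total = 0.5 * (1 / (a - 1) ** n + 1 / (a + 1) ** n)
    for k in range(1, steps):
        total += 1 / (a - math.cos(k * h)) ** n
    return total * h

def I_formula(a, n):
    y = [(a * a - 1) ** -0.5, -a * (a * a - 1) ** -1.5]  # y^(0), y^(1)
    for m in range(2, n):
        # (a^2-1) y^(m) + (2m-1) a y^(m-1) + (m-1)^2 y^(m-2) = 0
        y.append((-(2 * m - 1) * a * y[m - 1] - (m - 1) ** 2 * y[m - 2]) / (a * a - 1))
    return (-1) ** (n - 1) * math.pi / math.factorial(n - 1) * y[n - 1]

a = 2.0
for n in (1, 2, 3, 4):
    print(n, I_formula(a, n), I_quad(a, n))
```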

Holomorphic $n$-th covering map between annuli must be $z^n$, essentially.

Posted: 14 Feb 2022 01:22 AM PST

Let $A_R$ denote the annulus $\{ z\in \mathbb{C}:1<|z|<R \}$. I guess, and would like to prove, the following (which may not be true):

If $f:A_r\to A_R$ is a holomorphic covering map of degree $n$, then $R=r^n$ and $f$ must have the form $$f=\phi\circ z^n\circ \psi,$$ where $\psi\in \text{Aut}\,A_r$ and $\phi \in \text{Aut}\,A_{R}$.

Is this true? If not, does $R=r^n$ still hold? (I strongly believe it does.)


This question arises from this post, which says that $A_r$ and $A_R$ are conformally equivalent iff $R=r$ (they have the same modulus). So what about covering maps? I think one should first prove $$\text{Mod}(A_R)=n\,\text{Mod}(A_r).$$ Here $\text{Mod}$ denotes the modulus. I want to use $$\text{Mod}(A_r)=\frac{1}{\lambda(\Gamma_r)},$$ where $\lambda$ is the extremal length. Given a metric on $A_R$, pulling it back by $f$ to obtain a metric on $A_r$, I can prove $$\lambda(\Gamma_r)\geq n \lambda(\Gamma_R),$$ hence $$\text{Mod}(A_R)\geq n\, \text{Mod}(A_r).$$ But for the other direction, I don't know how to continue.

Find the maximum value of $|\arg (\frac{1}{1-z})|$ for $|z|=1$

Posted: 14 Feb 2022 01:10 AM PST

$$\arg \left(\frac{1}{1-z}\right)$$ $$=\arg (1) - \arg (1-z)$$ $$=-\arg (1-z)$$ Placing the modulus gives $$|\arg (1-z)|$$

Since it's a circle, one point is $(1,0)$; the point farthest away is $(-1,0)$, so the arg should be $\pi$. The correct answer is $\frac{\pi}{2}$. Where is this wrong?

I think I went wrong in using $\arg (1-z)$ when it should be $\arg (z-1)$. I am not sure if that changes things, but that's a possible flaw I noticed.
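
For reference, a geometric way to see $\frac{\pi}{2}$ (a sketch): if $|z|=1$ and $w=1-z$, then $|w-1|=|-z|=1$, so $w$ lies on the circle of radius $1$ centered at $1$, which passes through the origin and sits in the closed right half-plane. Writing $z=e^{i\theta}$,

$$w=1-e^{i\theta}=2\sin\tfrac{\theta}{2}\, e^{i(\theta-\pi)/2}, \qquad \arg w=\frac{\theta-\pi}{2}\in\left(-\frac{\pi}{2},\frac{\pi}{2}\right) \text{ for } \theta\in(0,2\pi),$$

hence $\left|\arg\frac{1}{1-z}\right|=|\arg(1-z)|$ has supremum $\frac{\pi}{2}$.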

Minimum of a sum of positive functions is the sum of the minimums of the functions

Posted: 14 Feb 2022 01:06 AM PST

Let $f_1, \dots, f_n$ be positive functions from $\mathbb R^m \rightarrow \mathbb R$.

How do we show that $$\min_x \sum_{i=1}^n f_i(x) = \sum_{i=1}^n \min_x f_i(x)$$

Actually, I am not sure this is true. Maybe adding convexity of the functions helps?
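
Indeed the claim fails in general; a quick counterexample (with convex functions, so convexity alone does not rescue it):

$$f_1(x)=x^2,\quad f_2(x)=(x-1)^2:\qquad \min_x\,\bigl(f_1(x)+f_2(x)\bigr)=\tfrac12\ \ne\ 0=\min_x f_1(x)+\min_x f_2(x).$$

In general only the inequality $\min_x\sum_i f_i(x)\ \ge\ \sum_i\min_x f_i(x)$ holds, with equality when the $f_i$ admit a common minimizer.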

Minimum Number of Rectangles Needed to Cover a Set of Squares in an $\text{N} \times \text{N}$ Grid

Posted: 14 Feb 2022 01:05 AM PST

There is an $\text{N} \times \text{N}$ grid of squares. Some arbitrary subset of those squares must be covered by some number of rectangles (a given square must be covered entirely by one rectangle, and rectangles can only cover these squares, not anything else). How can I find the set of rectangles that covers all of a given set of squares with the smallest possible number of rectangles?

What is the maximum number of rectangles needed to cover any subset of an $\text{N} \times \text{N}$ grid?

Area of an equilateral triangle divided by three lines

Posted: 14 Feb 2022 12:53 AM PST

An equilateral triangle is divided by three straight lines into seven regions whose areas are shown in the image below. Find the area of the triangle. (image)

How to solve this problem?
