Sunday, July 4, 2021

Recent Questions - Mathematics Stack Exchange


How to decompose a matrix as the sum of Kronecker products?

Posted: 04 Jul 2021 08:29 PM PDT

I encountered a problem related to the Kronecker product (KP).

I want to decompose $A=\sum^r_{i=1}B_i\otimes C_i$, where $A\in \mathbb{R}^{8\times4}, B_i\in \mathbb{R}^{4\times2}, C_i\in \mathbb{R}^{2\times1}.$

I note that some matrices can be decomposed as a single KP $(r=1)$.

For example, $A_1=\begin{bmatrix}1& 0\\1& 0\\0& 1\\0& 1\\1& 0\\1& 0\\0& 1\\0& 1\end{bmatrix}=\begin{bmatrix}1& 0\\0& 1\\1& 0\\0& 1\end{bmatrix}\otimes\begin{bmatrix}1\\1\end{bmatrix}$.

However, many matrices require a sum of more than one KP $(r>1)$.

For example, $A_2=\begin{bmatrix}1& 0\\0& 0\\0& 1\\0& 0\\1& 0\\1& 0\\0& 1\\0& 1\end{bmatrix}=\begin{bmatrix}1& 0\\0& 1\\0& 0\\0& 0\end{bmatrix}\otimes\begin{bmatrix}1\\0\end{bmatrix}+\begin{bmatrix}0& 0\\0& 0\\1& 0\\0& 1\end{bmatrix}\otimes\begin{bmatrix}1\\1\end{bmatrix}$.

I know the above examples can also be decomposed as sums with more terms, but my question is how to decompose a matrix as a sum with the fewest KPs (minimum $r$).
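
A standard way to compute a minimal-$r$ decomposition (the Van Loan–Pitsianis rearrangement; the function names below are mine) is to reshape $A$ so that each $2\times1$ block becomes a row of a matrix $R$; then $A=\sum_i B_i\otimes C_i$ exactly when $R=\sum_i \operatorname{vec}(B_i)\operatorname{vec}(C_i)^T$, so the minimum $r$ is $\operatorname{rank}(R)$ and a truncated SVD supplies the factors. A sketch:

```python
import numpy as np

def rearrange(A, bshape, cshape):
    """Stack vec() of each (m2 x n2) block of A as a row, in column-major
    block order, so that A = sum_i kron(B_i, C_i) becomes a rank decomposition."""
    m1, n1 = bshape
    m2, n2 = cshape
    rows = []
    for j in range(n1):               # column-major over the block grid
        for i in range(m1):
            block = A[i*m2:(i+1)*m2, j*n2:(j+1)*n2]
            rows.append(block.flatten(order="F"))
    return np.array(rows)

def kron_sum_decompose(A, bshape, cshape, tol=1e-10):
    """Return lists (Bs, Cs) of minimal length r with A = sum_i kron(B_i, C_i)."""
    U, s, Vt = np.linalg.svd(rearrange(A, bshape, cshape))
    r = int(np.sum(s > tol))          # minimal r = rank of the rearrangement
    Bs = [(np.sqrt(s[i]) * U[:, i]).reshape(bshape, order="F") for i in range(r)]
    Cs = [(np.sqrt(s[i]) * Vt[i, :]).reshape(cshape, order="F") for i in range(r)]
    return Bs, Cs

# the two examples from the question
A1 = np.array([[1,0],[1,0],[0,1],[0,1],[1,0],[1,0],[0,1],[0,1]], dtype=float)
A2 = np.array([[1,0],[0,0],[0,1],[0,0],[1,0],[1,0],[0,1],[0,1]], dtype=float)
for A, expected_r in [(A1, 1), (A2, 2)]:
    Bs, Cs = kron_sum_decompose(A, (4, 2), (2, 1))
    assert len(Bs) == expected_r
    assert np.allclose(sum(np.kron(B, C) for B, C in zip(Bs, Cs)), A)
```

By the Eckart–Young theorem, no decomposition with fewer than $\operatorname{rank}(R)$ terms can reproduce $A$, so the $r$ found this way is minimal.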

Is composition of two metrics a metric?

Posted: 04 Jul 2021 08:23 PM PDT

Let $d_1$ and $d_2$ be two metrics on a non-empty set $X$. Prove or disprove: the composition of these metrics is again a metric.

My Attempt:

Let $d_1 : X \to \Bbb R$ and $d_2 : X \to \Bbb R$; then $d_2 \circ d_1$ is defined only if $X = \Bbb R$. But I have no idea how to proceed further. Please help me.

An integral from MIT Integration Bee

Posted: 04 Jul 2021 08:20 PM PDT

Show that

$I = \int_{0}^{2\pi}\cos(x)\cos(2x)\cos(3x)\,dx = \pi/2$.

This integral appeared in the 2019 paper. Below is my own solution:

$I = \int_{0}^{2\pi}\cos(x)\cos(2x)(\cos x\cos 2x-\sin x\sin 2x)\,dx= \int_{0}^{2\pi}\cos^2(x)\cos^2(2x)\,dx-\int_{0}^{2\pi}\cos(x)\cos(2x)\sin(x)\sin(2x)\,dx.$

Replacing $\cos^2(x)= \frac{1+\cos(2x)}{2}$ in the first integral and $\sin(x)\cos(x)= \sin(2x)/2$ in the second, we get

$\int_{0}^{2\pi}\frac{1+\cos(2x)}{2}\cos^2(2x)\,dx-\frac{1}{2}\int_{0}^{2\pi}\cos(2x)\sin^2(2x)\,dx = \frac{1}{2}\left(\int_{0}^{2\pi}\cos^2(2x)\,dx + \int_{0}^{2\pi}\cos(2x)\cos(4x)\,dx \right)$

= $\frac{1}{2} \left( \pi + 0\right)$ by the orthogonality of $\cos(mx)$

= $\frac{\pi}{2}$

This solution is rather awkward, and I'm sure there's a better and faster approach to this integral. Could anyone provide a more elegant solution (or a sketch of one)? Thanks.
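
Not the elegant solution being asked for, but the claimed value is easy to sanity-check numerically with a midpoint rule over the full period; a sketch:

```python
import math

def f(x):
    return math.cos(x) * math.cos(2 * x) * math.cos(3 * x)

# composite midpoint rule over [0, 2*pi]; for smooth periodic integrands
# this converges very quickly
n = 100_000
h = 2 * math.pi / n
approx = h * sum(f((k + 0.5) * h) for k in range(n))

assert abs(approx - math.pi / 2) < 1e-6
```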

How can we prove that $n$ divides $4^n - 2^n + 1$ if $n$ is a power of 3?

Posted: 04 Jul 2021 08:05 PM PDT

My current idea to solve this problem is using some sort of modulo argument.

$4^n$ is equivalent to $1$ mod 3, and $2^n$ is equivalent to $(-1)^n$ mod 3, so we can simplify $4^n - 2^n + 1$ to $1 - (-1)^n + 1 = 2 - (-1)^n$, which implies that $n$ must be odd in order for $4^n - 2^n + 1$ to be divisible by $3$.

But my question is: how can we prove that this is divisible by $3^n$, and not just by $3$? I have no idea what to do at this point, so I would greatly appreciate any hints towards solving this problem.

Why does $\omega$ have the same cardinality as $\Bbb{N}$?

Posted: 04 Jul 2021 08:04 PM PDT

My question is basically as stated in the title. I'm teaching myself logic, so my understanding is quite basic. Could someone show me a proof that $\omega$ has the same cardinality as $\Bbb{N}$? Additionally, it would help if you could explain why $\omega$ also has the same cardinality as something like $\omega^\omega$.

Do the phrases "arbitrary but FIXED" and "arbitrary" mean the same in the context of a proof?

Posted: 04 Jul 2021 08:03 PM PDT

I am confused, but I'd think the answer is no. For if they meant the same, then, for instance, in an induction proof I could suppose $p(k)$ for any (an arbitrary) $k$ (let's say a natural number), and then, if I found another natural number like $k+1$, I could conclude for $k+1$ what I assumed for $k$, because after all that's a property of an arbitrary object. I mean: if I know that "for all objects with a certain property, something happens," then whenever I find a particular object with that property I can claim that "the something happens," right? Yet I've given proofs arguing this way, because I chose $k$ to be ARBITRARY, and apparently, with good reason, those proofs are wrong, given that they were based on such a property of arbitrary objects.

If they don't mean the same, when should I use each? And what does "arbitrary but fixed" mean in the context of a proof by induction?

Dividing positive integer in half and half until indivisible, then sum over the series (formula for OEIS 225381)

Posted: 04 Jul 2021 07:59 PM PDT

Taking integer x as input:

If x is even, then return x/2 as a series item, and continue the loop with x/2 as the new input;

Else return ceil(x/2) as the final series item and terminate the loop.

Finally we sum over the produced series.

Call this function f(x).

f(1) = ceil(1/2) = 1
f(2) = 2/2 + ceil(1/2) = 2
f(3) = ceil(3/2) = 2
f(4) = 4/2 + 2/2 + ceil(1/2) = 4
f(5) = ceil(5/2) = 3
f(6) = 6/2 + ceil(3/2) = 5
f(7) = ceil(7/2) = 4
f(8) = 8/2 + 4/2 + 2/2 + ceil(1/2) = 8
f(9) = ceil(9/2) = 5
f(10) = 10/2 + ceil(5/2) = 8
f(11) = ceil(11/2) = 6
f(12) = 12/2 + 6/2 + ceil(3/2) = 11
f(13) = ceil(13/2) = 7
f(14) = 14/2 + ceil(7/2) = 11
f(15) = ceil(15/2) = 8
f(16) = 16/2 + 8/2 + 4/2 + 2/2 + ceil(1/2) = 16
f(17) = ceil(17/2) = 9
f(18) = 18/2 + ceil(9/2) = 14

Note that f(x) is OEIS A225381:

1,2,2,4,3,5,4,8,5,8,6,11,7,11,8,16,9,14   

The question is: is there a simple closed-form formula to calculate f(x), i.e., to represent OEIS A225381?
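
For experimenting, the procedure can be sketched in a few lines (assuming the odd step rounds up, which is what the worked values such as f(1) = 1 and f(3) = 2 use):

```python
def f(x):
    """Sum the halving series: add x/2 while x is even, then add ceil(x/2)."""
    total = 0
    while x % 2 == 0:
        x //= 2
        total += x            # the x/2 just produced
    total += (x + 1) // 2     # ceil(x/2) for the final odd term
    return total

expected = [1, 2, 2, 4, 3, 5, 4, 8, 5, 8, 6, 11, 7, 11, 8, 16, 9, 14]
assert [f(n) for n in range(1, 19)] == expected
```

Note the pattern visible in the worked values: f of an odd x is just ceil(x/2), and f of a power of two is that power of two itself.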

Mathematics Book Recommendation

Posted: 04 Jul 2021 07:46 PM PDT

I am looking for a book that is regarded as the 'classics' of the field.

For example,

  • Algebraic Number theory - Cassels Frohlich
  • Complex analysis - Ahlfors
  • Category Theory - MacLane
  • Topology - Munkres

What are the classical books of other fields, such as Fourier analysis, set theory, or differential equations?

I am making a comprehensive list of mathematical classics, so books from any field are welcome.

Question on a construction of the Klein-Beltrami model

Posted: 04 Jul 2021 07:58 PM PDT

I was reading N. Hitchin's lecture notes Chapter 4 The Klein programme. At the beginning of page 67, Hitchin wrote that

We consider the subset $D\subset P^2(\mathbf{R})$ defined by $$ D = \{[a,b,c]\in P^2(\mathbf{R}): b^2-4ac<0\}. $$ If $a+c=0$, then $b^2-4ac>0$ so $D$ lies in the complement of the line $a+c=0$ in $P^2(\mathbf{R})$ and we can regard $D$ as the interior of the circle $(b/(a+c))^2+((a-c)/(a+c))^2 = 1$ in $\mathbf{R}^2$. The group $PGL(2,\mathbf{R})$, and its subgroup $PSL(2,\mathbf{R})$, acts naturally on this space of quadrics.

Hitchin said that historically, $D$ is the Klein-Beltrami model.

Question: Could someone explain how the group $PSL(2,\mathbf{R})$ acts naturally on $D$? I am thinking that maybe Hitchin means that $PSL(2,\mathbf{R})$ preserves the interior of the unit circle in $\mathbf{R}^2$. But this is clearly not true.

Line $y = mx$ through the origin that divides the area between the parabola $y = x-x^2$ and the x axis into two equal regions.

Posted: 04 Jul 2021 08:30 PM PDT

There is a line $y = mx$ through the origin that divides the area between the parabola $y = x-x^2$ and the x axis into two equal regions. Find m.

My solution:

[images of my handwritten work omitted]

When I compute my answer, I get $1-\frac{1}{\sqrt[3]{2}}$ (as seen in Wolfram below)

Solution in Wolfram:

https://www.wolframalpha.com/input/?i=%281-x%29%5E2%28-2x%2B2%29%3D1

Solution in Geogebra:

https://www.geogebra.org/calculator/kshvr8mw

My question: is there a way to find this solution without a GDC/Geogebra? In other words, a method which would lead to a simpler equation in the end?

Thank you.
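
For what it's worth, the value $m = 1-\frac{1}{\sqrt[3]{2}}$ can be checked numerically with nothing more than a Riemann sum; the constants below just restate the setup (total area $\tfrac16$, line and parabola meeting at $x = 1-m$):

```python
import math

m = 1 - 2 ** (-1.0 / 3.0)        # candidate slope from Wolfram/GeoGebra

# total area between y = x - x^2 and the x-axis (0 <= x <= 1) is 1/6
total = 1.0 / 6.0

def gap(x):
    """Vertical distance between the parabola and the line y = m*x."""
    return (x - x * x) - m * x

# midpoint rule for the region trapped between parabola and line,
# which lies over 0 <= x <= 1 - m
n = 200_000
h = (1 - m) / n
upper = h * sum(gap((k + 0.5) * h) for k in range(n))

# the line halves the region
assert abs(upper - total / 2) < 1e-9
```

Symbolically, the same Riemann sum evaluates to $(1-m)^3/6$, so setting it equal to $\tfrac{1}{12}$ gives exactly the equation $(1-m)^3 = \tfrac12$ encoded in the Wolfram link.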

Is $\{(0,0)\}\cup\{(\frac1n,\sin\frac1n)\,|\,n=1,2,3,\dots\}$ a compact set?

Posted: 04 Jul 2021 07:47 PM PDT

I saw this problem on an old test paper that didn't come with an answer, and I couldn't find one by searching the internet. (I'm quite a newbie here; tell me if I shouldn't post such questions.)

I think it is compact, but I can't come up with a proper proof.

To prove it, I tried to show that it is bounded and closed. It is obviously bounded, but I failed to prove that it is closed.

Could anyone help me?

Apparent contradiction in convexity of compositions of functions

Posted: 04 Jul 2021 08:11 PM PDT

I offer a proposition with both a proof and a counterexample. Thus, either the proof is incorrect, or the counterexample is not actually a counterexample, or both. Which is it?

Proposition. Given a function $h(x)$ which is twice differentiable, strictly convex, and strictly decreasing, there does not exist a strictly increasing, twice differentiable function $g(y)$ such that $f(x) \equiv (g \circ h)(x)$ is concave.

Proof. Suppose $g(y)$ exists. By the properties of concave functions and the chain rule,

$$0 \geq f''(x) = (g \circ h)''(x) = [g'(h(x)) h'(x)]' = g'(h(x)) \underbrace{h''(x)}_{\gt 0} + {g''(h(x))} \underbrace{[h'(x)]^2}_{\gt 0}$$

For the statement to hold, we need $g'(h(x)) \leq 0 $ and $g''(h(x)) \leq 0$. Thus $g'$ must be weakly decreasing (and concave), a contradiction.

Counterexample. Consider $h(x) = \exp (-x)$ and $g(y) = \log y$. $h$ is twice differentiable, convex, and strictly decreasing. $g$ is strictly increasing and twice differentiable. Finally, the function $f = (g \circ h)(x) = x$ is linear, and therefore concave.

What's going on?

No inclusion between $L^\infty$ and $L^p$.

Posted: 04 Jul 2021 08:23 PM PDT

Is there a function that is in $L^p$ but not in $L^\infty$ when the measure of our underlying set is infinite?

For the finite case this is simple: let $f(x)=1/\sqrt{x}$ with domain $(0,1)$. Clearly $f$ is integrable, but not essentially bounded.

Is there a simple function for the infinite case?

"Jensen's Theorem" from a mathematical physics textbook

Posted: 04 Jul 2021 07:46 PM PDT

I am working through Arfken, Weber, and Harris's "Mathematical Methods for Physicists", $7^{th}$ edition. Example 12.7.1 is:

"Prove Jensen's theorem (that $\left| F(z) \right|^2$ can have no extremum in the interior of a region in which $F$ is analytic) by showing that the mean value of $|F|^2$ on a circle about any point $z_0$ is equal to $|F(z_0)|^2$. Explain why you can then conclude that there cannot be an extremum of $|F|$ at $z_0$."

I am utterly confused. I thought the only condition for analyticity is that $F$ is expressible as $F(z)=F(x+iy)$ and that its partial derivatives $\frac{\partial F}{\partial x}$ and $\frac{\partial F}{\partial y}$ exist and are continuous. This implies that $F$ is expressible as $F(x,y)=u(x,y)+iv(x,y)$, leading to the Cauchy-Riemann equations:

$$ \frac{\partial u}{\partial x}=\frac{\partial v}{\partial y} \ \ , \ \ \frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}\to \frac{\partial ^2u}{\partial x^2}=-\frac{\partial ^2u}{\partial y^2} \ \ , \ \ \frac{\partial ^2v}{\partial x^2}=-\frac{\partial ^2v}{\partial y^2} $$

And so it is clear that $F$ has no extremum, since at any point $x+iy$, if $u(x_0,y)$ is concave up then $u(x,y_0)$ is concave down (similarly for $v$), so one can only be at a saddle and never at an extremum. Also, the average of $F$ about any $z_0$:

$$ \frac{1}{2 \pi }\int_0^{2 \pi } F\left(Re^{i\theta }+z_0\right) \, d\theta=\frac{1}{2 \pi }\int_0^{2 \pi } \left(F\left(z_0\right)+\frac{Re^{i\theta} }{1!}\left(\frac{dF}{dz}\right)_{z=z_0}+\frac{R^2 e^{2i\theta}}{2!}\left(\frac{d^2F}{dz^2}\right)_{z=z_0}+\cdots\right) \, d\theta =F\left(z_0\right)+0+0+\cdots $$

is certainly equal to $F(z_0)$.

However, when we consider $|F|^2=u^2+v^2$, one can certainly think of an example where the value of $F$ at the saddle point is zero (example: $F(z)=z-iz=x+y-ix+iy \ \ , \ \ (x_0,y_0)=(0,0)$ ). The value of $u$ and $v$ either increases or decreases as you move away from $z_0$ and so $|F|^2$ only gets bigger (reminder, we assume, as in the given example, that $F(z_0)=0$). So the average on the circle also cannot be equal to zero. Considering the example, $|F|^2=2(x^2+y^2)$ clearly has a minimum at $(0,0)$ and the average of $|F|^2$ around a circle is not zero.

My question is: why is the book correct / what am I missing? Also, when I search for Jensen's theorem, why do none of the three theorems I find resemble the one stated in the book?

How to find the coefficient of $x^{20}$ in the expansion of $(x^{3}+x^{4}+\cdots )^{3}$?

Posted: 04 Jul 2021 07:56 PM PDT

How to find the coefficient of $x^{20}$ in the expansion of $(x^{3}+x^{4}+\cdots )^{3}$?

Here is what I started: $(x^{3}+x^{4}+x^{5}+\cdots )^{3}= \left(x^{3}(1+x+x^{2}+ \cdots )\right)^{3} = x^{9}\left(\frac{1}{1-x}\right)^{3} = x^{9}(1-x)^{-3}$

I cannot finish it, and I'm confused by this problem. Please help me with it.

Thank you for any help anent this problem.
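
As a cross-check of the generating-function route: by the binomial series, $(1-x)^{-3}=\sum_{k\ge0}\binom{k+2}{2}x^k$, so the sought coefficient is the coefficient of $x^{11}$ in $(1-x)^{-3}$. A brute-force count of the triples of exponents agrees:

```python
from math import comb

# coefficient of x^11 in (1 - x)^(-3), by the binomial series
closed_form = comb(11 + 2, 2)

# brute force: count triples (i, j, k) with i, j, k >= 3 and i + j + k = 20
brute = sum(1 for i in range(3, 21) for j in range(3, 21) if 20 - i - j >= 3)

assert brute == closed_form == 78
```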

What does it mean "arbitrary but fixed" in a proof?

Posted: 04 Jul 2021 07:42 PM PDT

I've read related questions, but I couldn't find a clear explanation of, for example, when one should use "...an arbitrary but fixed...". Does it mean that my variable has a certain property, so it's not just any variable? But most importantly, when should one use it?

Circumscribed sphere about a tetrahedron

Posted: 04 Jul 2021 08:08 PM PDT

$ABCD$ is a tetrahedron such that $AB=2\sqrt{3}$, $BC=\sqrt{7}$, $AC=1$ and $AD=BD$. Plane $ABD$ is perpendicular to plane $ABC$. Find the shortest distance between lines $BD$ and $AC$ if the radius of the circumscribed sphere about the tetrahedron is equal to $2\sqrt{2}$.

Work Problem Solving

Posted: 04 Jul 2021 08:04 PM PDT

Can someone please help me with this problem?

A bakery makes 1200 cakes in 9 days using 10 ovens. How many cakes can be made in 12 days using 5 ovens?

Thank you so much for those who will reply!!
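
Assuming the output scales linearly with oven-days (the usual reading of such problems), the arithmetic can be checked in two lines:

```python
# 1200 cakes took 9 days x 10 ovens = 90 oven-days
rate = 1200 / (9 * 10)          # cakes per oven-day
cakes = rate * (12 * 5)         # 12 days x 5 ovens = 60 oven-days

assert abs(cakes - 800) < 1e-9  # 800 cakes
```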

How to compute $\tilde a (t)$?

Posted: 04 Jul 2021 08:06 PM PDT

I am reading Tapp's book on differential geometry and am trying to understand the following: How are they computing $\tilde a^{\perp}$ here?

[image omitted]

I've seen this previously in the book:

[image omitted]

From here, I know I might have to compute $\tilde a (t)= \tilde a (t)^{||}+\tilde a (t)^{\perp}$ and I know $\tilde a (t)$.

[image omitted]

But here, I am extremely confused about how to compute $\tilde a^{||}$. Can you help?

To check differentiability of $f(z)=z |z|$ at $(0,0).$

Posted: 04 Jul 2021 08:13 PM PDT

Here $f(z)=z |z|=(x+iy) \sqrt{x^2+y^2}$. The CR equations are not satisfied in general. To check differentiability at $(0,0)$, I computed the partial derivatives $u_x,u_y,v_x,v_y$ at $(0,0)$ and found that the CR equations are satisfied there ($u_x,u_y,v_x,v_y$ are all zero at $(0,0)$). Hence the given function is differentiable only at $(0,0)$.

Is there any improvement to my answer, and is there another way to solve this?

Calculating probability from closed form function?

Posted: 04 Jul 2021 07:55 PM PDT

I am looking for an analytical way to calculate the probability $$P(\underline{y}\leq y =f(x) \leq \overline{y})$$

where $x$ has a known probability distribution, and $f(x)$ is integrable and differentiable over the domain of $x$.

  1. Can we calculate the probability for any general class of $f(x)$, say polynomials, algebraic sums of squared-exponential terms, etc.? Is there a way to first construct the distribution of $y$ by some method and then calculate the probability?

  2. If not for a general class, then what should $f(x)$ look like for an efficient analytical method to exist?

PS: By analytical here I mean a method without numerical simulation such as Monte Carlo.
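
For strictly monotone $f$, one standard analytical route is $P(\underline y \le f(x)\le\overline y)=F_X(f^{-1}(\overline y))-F_X(f^{-1}(\underline y))$ (for increasing $f$), where $F_X$ is the CDF of $x$. A sketch with assumed choices, $x\sim N(0,1)$ and $f(x)=x^3$, both of which are illustrations of mine rather than from the question:

```python
import math

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def f_inverse(y):
    """Inverse of the assumed, strictly increasing example f(x) = x**3."""
    return math.copysign(abs(y) ** (1.0 / 3.0), y)

def prob_between(y_lo, y_hi):
    # P(y_lo <= f(x) <= y_hi) = F_X(f^{-1}(y_hi)) - F_X(f^{-1}(y_lo))
    return Phi(f_inverse(y_hi)) - Phi(f_inverse(y_lo))

# P(-1 <= x^3 <= 1) = P(-1 <= x <= 1) = erf(1/sqrt(2)) ~ 0.6827
p = prob_between(-1.0, 1.0)
assert abs(p - 0.6826894921370859) < 1e-12
```

Non-monotone $f$ can sometimes be handled piecewise, summing this identity over the monotone branches.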

Integral of function depending on positive definite matrix

Posted: 04 Jul 2021 08:16 PM PDT

Consider a positive definite matrix $M$, and consider the integral $$\int_{-\infty}^{\infty}dx(x^2I+M^2)^{-1}$$ If $v_n$ is an eigenvector of $M$ with eigenvalue $\lambda_n$, then the integral when applied to $v_n$ is $$\int_{-\infty}^{\infty}dx(x^2I+M^2)^{-1}v_n=\int_{-\infty}^{\infty}dx(x^2+\lambda_n^2)^{-1}v_n=\pi\lambda_n^{-1}v_n$$ And since this is true for all $v_n$, I'd like to claim that this integral is equal to $\pi M^{-1}$. Is this true? Can this be done with other integrals?
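
The eigenvector argument can be made rigorous via the spectral theorem (for symmetric positive definite $M$, both sides act diagonally in a common eigenbasis, and they agree on every eigenvalue). A numerical sketch with an example matrix of my choosing, comparing a midpoint-rule quadrature under the substitution $x=\tan t$ against $\pi M^{-1}$:

```python
import numpy as np

M = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # a sample symmetric positive definite matrix
Msq = M @ M
I2 = np.eye(2)

# substitute x = tan(t): the infinite range becomes (-pi/2, pi/2),
# dx = dt / cos(t)^2, and the integrand stays bounded at the endpoints
n = 20_000
dt = np.pi / n
acc = np.zeros((2, 2))
for k in range(n):
    t = -np.pi / 2 + (k + 0.5) * dt
    x = np.tan(t)
    acc += np.linalg.inv(x * x * I2 + Msq) / np.cos(t) ** 2 * dt

assert np.allclose(acc, np.pi * np.linalg.inv(M), atol=1e-4)
```

The same eigenbasis argument works for any integral (or function) built from scalar identities that hold for every eigenvalue, e.g. $\int_{-\infty}^{\infty} e^{-x^2 M}\,dx$ acting as $\sqrt{\pi}\,M^{-1/2}$.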

Modified Fibonacci sequence at nth term formula?

Posted: 04 Jul 2021 07:38 PM PDT

Fibonacci numbers: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, that is, f(n+2) = f(n+1) + f(n)

Modified Fibonacci numbers: 1, 1, 4, 25, 841, ..., that is, f(n+2) = ( f(n+1) + f(n) )^2

The closed form of the Fibonacci numbers at the nth term (Binet's formula) is

$$f(n)=\frac{1}{\sqrt{5}}\left(\left(\frac{1+\sqrt{5}}{2}\right)^{n}-\left(\frac{1-\sqrt{5}}{2}\right)^{n}\right)$$

Hi readers, how do we derive the closed form of this modified Fibonacci sequence at the nth term, that is, 1, 1, 4, 25, 841, ..., with f(n+2) = ( f(n+1) + f(n) )^2? It would be helpful to post the derivation below :)
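
The terms are easy to generate for experiments (a minimal sketch; note that $\log f(n)$ roughly doubles each step, so the growth is doubly exponential and any closed form will be of a $c^{2^n}$ flavour rather than Binet-style):

```python
def mod_fib(count):
    """First `count` terms of f(1) = f(2) = 1, f(n+2) = (f(n+1) + f(n))**2."""
    seq = [1, 1]
    while len(seq) < count:
        seq.append((seq[-1] + seq[-2]) ** 2)
    return seq[:count]

assert mod_fib(6) == [1, 1, 4, 25, 841, 749956]
```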

Approximate with an absolute error less than $0.01$s, the necessary time for the object to reach the ground

Posted: 04 Jul 2021 08:27 PM PDT

An object that falls freely through the air is subject not only to the gravitational force but also to (force of) air resistance. If the object has mass $m$ and is let to fall from height $h_0$, after $t$ seconds, the height of the object is:
$$h(t)=h_0-\frac{mg}{k}t+\frac{m^2g}{k^2}(1-e^\frac{-kt}{m})$$

where $g = 9.8\frac{m}{s^2}$ is the gravitational acceleration and $k$ is the air resistance coefficient.
$h_0 = 300$ m
$m = 0.2$ kg
$k = 0.05 \frac{kg}{s}$
Approximate with an absolute error less than $0.01$s, the necessary time for the object to reach the ground. For the solution, use a suitable numerical algorithm to implement in Maple. Then solve the problem using the predefined function corresponding from Maple and compare the results.

To be honest, I don't even know how to begin. This problem looks like physics, but I got it from my Numerical Analysis course. The problem is translated, so sorry if some things are poorly worded; I tried my best.

Thanks in advance
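
One suitable algorithm here is bisection: $h$ is continuous and strictly decreasing for $t>0$ (its derivative is $\frac{mg}{k}(e^{-kt/m}-1)<0$), so halving a bracketing interval until its length is below $0.01$ guarantees the required accuracy. A Python sketch of what the Maple code would do (variable names mine):

```python
import math

g, m, k, h0 = 9.8, 0.2, 0.05, 300.0

def h(t):
    return h0 - (m * g / k) * t + (m * m * g / (k * k)) * (1 - math.exp(-k * t / m))

# h(0) = h0 > 0; grow the upper end until it brackets the root
lo, hi = 0.0, 1.0
while h(hi) > 0:
    hi *= 2

# bisect until the interval is shorter than the 0.01 s tolerance
while hi - lo > 0.01:
    mid = (lo + hi) / 2
    if h(mid) > 0:
        lo = mid
    else:
        hi = mid

t = (lo + hi) / 2   # within 0.005 s of the true landing time
```

With these constants the landing time comes out near $11.42$ s; Maple's `fsolve` applied to `h(t) = 0` should agree to within the tolerance.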

Passing Alternate Numbers of Coins Around a Table

Posted: 04 Jul 2021 07:52 PM PDT

I am working on a puzzle which I originally got from Rustan Leino's website (see here: http://leino.science/puzzles/passing-alternating-numbers-of-coins-around/)

The puzzle is as follows:

A game is played as follows. N people are sitting around a table, each with one penny. One person begins the game, takes one of his pennies (at this time, he happens to have exactly one penny) and passes it to the person to his left. That second person then takes two pennies and passes them to the next person on the left. The third person passes one penny, the fourth passes two, and so on, alternating passing one and two pennies to the next person. Whenever a person runs out of pennies, he is out of the game and has to leave the table. The game then continues with the remaining people.

A game is terminating if and only if it ends with just one person sitting at the table (holding all N pennies). Show that there exists an infinite set of numbers for which the game is terminating.

I have an idea of how to approach this question, although I don't quite have a formal proof. I was hoping I could get some people to chime in and let me know I'm on the right track (without disclosing anything about how to prove this).

My thought is if $N-\left\lfloor\frac{N}{2}\right\rfloor-1$ is a power of $2$ (i.e. $N-\left\lfloor\frac{N}{2}\right\rfloor-1=2^k$ for some $k\in\mathbb{N}_0$), then $N$ is terminating.

The informal, not completely developed idea is that, if you view this game as having "rounds" (where a "round" ends after everybody who is still in the game has passed on a coin to the person on their left), then, after the first "round", every player in an even-numbered position gets kicked out as well as the player in the first position. EDIT: All of these remaining players will have the same number of coins after the first "round" (that is, if you subtract the pennies the first person in the second "round" gets from the last player in the first "round").

After the first "round", if you have an odd number of people remaining, I think the game must go on infinitely. That's because, after the first "round", every player that was in an odd-numbered position in "round" 1 will have at least two pennies and, before it's their turn to pass coins to their neighbour, will receive at least one penny. So no player will get kicked out in the second "round". But every player who gave up two pennies in the second "round" will get two pennies in the third "round", meaning after the third "round" everybody will be in the same position as they were after the first round. So you get a cycle and the game doesn't end.

If you have an even number of people remaining after the first "round", each person remaining who gives up two pennies but receives only one penny will lose a penny each "round" versus the number of pennies they started off with during that round. Further, until players are eliminated, the people giving up two pennies but receiving two pennies will be the same each "round". Eventually, there will be a "round" where all of the players receiving one penny and giving up two pennies will be knocked out. This will halve the number of players. EDIT: I may have missed making this clear, but I want to clarify that I'm pretty sure players will all be knocked out in the same "round". In order for the players to be knocked out in different rounds, I think it would have to be the case that there is a round in which these players start off by having a different number of pennies (minus any pennies that might have been transferred over by the last player in the last round). But this shouldn't be the case.

So reasoning like this, if you eventually get a situation where you have an odd number of players remaining, the game must go on infinitely. But, on the flip side, if after the first "round" you have a power of 2 number of players remaining, then the game should eventually terminate.
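
A brute-force simulator is a handy way to probe conjectures like this (everything below is my own sketch: it declares a game non-terminating when the full state repeats, and assumes a passer always holds enough coins, which held in the small cases I traced by hand):

```python
def terminating(n):
    """Simulate the coin game with n players; True iff one player ends with all coins.
    A repeated full state (coins, seating, whose turn, pass size) means a cycle."""
    pennies = [1] * n
    alive = list(range(n))
    i, amount = 0, 1          # current passer's index in `alive`; coins to pass
    seen = set()
    while len(alive) > 1:
        state = (tuple(pennies[p] for p in alive), tuple(alive), i, amount)
        if state in seen:
            return False      # cycle: the game never ends
        seen.add(state)
        j = (i + 1) % len(alive)
        passer, receiver = alive[i], alive[j]
        pennies[passer] -= amount
        pennies[receiver] += amount
        amount = 3 - amount   # alternate passing 1 and 2
        if pennies[passer] <= 0:
            alive.pop(i)      # out of the game
        i = alive.index(receiver)
    return True

# small cases checkable by hand: N = 2, 3, 5 all end with one player
assert terminating(2) and terminating(3) and terminating(5)
```

Running `terminating` over a range of N and comparing against the $N-\lfloor N/2\rfloor-1=2^k$ conjecture would be a quick way to validate (or refute) it before attempting the formal proof.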

Convergence and divergence of some improper integrals

Posted: 04 Jul 2021 08:02 PM PDT

(A) $\int_a^{\infty} \frac{1}{x^{\alpha}}dx$ if $a \gt 0$ and $ \alpha \in \mathbb R$

(B) $\int_1^{+ \infty}\frac{x}{1-e ^ x}dx$

(C) $\int_0^{\infty} \frac{\cos(x)}{x^2}\,dx$

My attempt

(A)

If $\alpha \neq 1$ you have $$\int_a^A \frac{dx}{x^{\alpha}}= \frac{A^{1-\alpha} - a^{1-\alpha}}{1 - \alpha}$$ so $$\int_a^{+\infty} \frac{dx}{x^{\alpha}}= \frac{a^{1-\alpha}}{\alpha - 1}$$ converges if $\alpha \gt 1$. On the other hand, if $\alpha \le 1$, $\int_a^{+\infty} \frac{dx}{x^{\alpha}}$ diverges; this contrasts with the integral of the same function over the interval $(0,a]$.

(B)

$$\int_1^{\infty} \frac{xdx}{(1- e^x)} = -\int_1^{\infty} \frac{xdx}{(e^x - 1)}$$

then we study the convergence of

$$\int_1^{\infty} \frac{xdx}{(e^x - 1)}$$

which is the integral of a positive function on $[1, \infty)$; for sufficiently large $x$ we have

$$\frac{x}{(e^x - 1)} \lt \frac{x^2}{e^x}$$

since for $x \gt 1$ this is equivalent to $e^x \gt \frac{x}{x-1}$; but $\int_1^{\infty}\frac{x^2}{e^x}dx$ converges, and hence $\int_1^{\infty } \frac{x\,dx}{e^x - 1}$ also converges by comparison.

(C)

$\int_1^{\infty} \left|\frac{\cos x}{x^2}\right| dx \le \int_1^{\infty} \frac{1}{x^2}\, dx =\left[-\frac{1}{x}\right]_1^{\infty}= 0+\frac{1}{1} = 1$, so the integral converges absolutely on $[1,\infty)$.

I don't know how to handle the part from $0$ to $1$ in (C). Are (A) and (B) acceptable?

Thanks in advance for any help.

Existence of weak limits of a given set.

Posted: 04 Jul 2021 08:27 PM PDT

Consider the Banach algebra of Hilbert Schmidt operators $A=S_2(H)$ on a Hilbert space $H$. Let $X= A\otimes^\gamma A$ be the Banach algebra obtained by projective tensor product. Now consider the set $\{f_{e_{mn}\otimes e_{mn}}\}_{m,n\in \mathbb N}$ in $X^*$, where $f\in X^*$ is fixed.

Does this set have any weak limit points (w.r.t weak topology on $X^*$)?

Can there exist a non-principal ultrafilter $U$ on $\mathbb N\times \mathbb N$ (independent of the choice of $f$) such that $\lim_U f_{e_{mn}\otimes e_{mn}}$ exists (w.r.t. the weak topology on $X^*$)?

($\{e_{mn}\}$ is the standard basis for the Hilbert space $S_2(H)$, and $f_x$ is defined by $f_x(y)=f(x\cdot y)$.)

Proving an identity involving Legendre polynomial

Posted: 04 Jul 2021 08:32 PM PDT

I came across an equation in this paper and I am having difficulty trying to prove it: $$P_l \left(1-2\frac{(u-u')(v-v')}{(v-u)(v'-u')}\right) = \frac{(-1)^l}{l!} \frac{(v-u)^{l+1}}{(v'-u')^l} \frac{\partial^l}{\partial u^l} \left(\frac{(u-u')^l (u-v')^l}{(v-u)^{l+1}}\right)$$ where $P_l$ is the Legendre polynomial and all the variables are real numbers. If I let $$a=\frac{(u-u')(v-v')}{(v-u)(v'-u')}$$ and write out an explicit expression for the Legendre polynomial $$P_l (1-2a) = \sum_{k=0}^l {l\choose k} {l+k \choose k} \left( \frac{1-2a-1}{2}\right)^k =\sum_{k=0}^l {l\choose k} {l+k \choose k} (-1)^k a^k$$ I get a sum of terms containing only $(u-u')^k(v-u)^{-k}$. However, expanding the derivative on the right-hand side $$\frac{\partial^l}{\partial u^l} \left(\frac{(u-u')^l (u-v')^l}{(v-u)^{l+1}}\right) = \sum_{k_1+k_2+k_3=l} \frac{l!}{k_1! k_2! k_3!} \frac{\partial^{k_1}}{\partial u^{k_1}} (u-u')^l \,\frac{\partial^{k_2}}{\partial u^{k_2}} (u-v')^l \,\frac{\partial^{k_3}}{\partial u^{k_3}} \frac{1}{(v-u)^{l+1}}$$ clearly shows that there are many more terms with other powers that do not fit this form. How do I prove this equation?

Closure of $A = \{(x,y) \in \mathbb{R^2} : x^2 - 4 < y \le7\}$

Posted: 04 Jul 2021 08:19 PM PDT

I'm trying to calculate the closure of:

$$ A = \{(x,y) \in \mathbb{R^2} : x^2 - 4 < y \le7\} $$

I've cheated a bit and plotted the set to get an idea of ​​what the closure might be, so by intuition it can be seen that:

$$ \overline A = B = \{(x,y)\in \mathbb{R^2} : x^2-4\le y \le 7 \} $$

To calculate it, the reasoning that I'm following is to find the set $A'$ of its accumulation points, since

$$ \overline A = A' \cup A $$

Therefore:

$$ (x,y) \in B \Rightarrow \begin{cases} x^2-4\le y \lt 7 & \boldsymbol{(1)}\\ x^2-4\le y = 7 & \boldsymbol{(2)} \end{cases} $$

$$ \boldsymbol{(1)} \Rightarrow \begin{cases} x \ge 0 \Rightarrow \exists \epsilon_0 > 0: (x-\epsilon_0 ,y)\in B((x,y),\epsilon),\ (x-\epsilon_0,y) \in A \text{ since } (x-\epsilon_0)^2-4<y \\ x \lt 0 \Rightarrow \exists \epsilon_1 >0:(x+\epsilon_1,y)\in B((x,y),\epsilon),\ (x+\epsilon_1,y) \in A \text{ since } (x+\epsilon_1)^2-4<y \end{cases} $$

$$ \boldsymbol{(2)} \Rightarrow \exists \epsilon_2 \gt 0:(x, y - \epsilon_2) \in B((x,y), \epsilon),\ (x, y - \epsilon_2) \in A \text{ since } x^2-4 \lt y - \epsilon_2 $$

From this it is concluded that:

$$ \forall \epsilon > 0: \left(B((x,y),\epsilon)\setminus \{(x,y)\}\right) \cap A \ne \emptyset \Rightarrow (x,y)\in A' $$

Which means that $B \subset A'$

However, I'm not entirely sure if this is enough, or if I should prove that there are no more accumulation points of $A$ and thus obtain that $A' \subset B$. And if so, how could I do it? I've been thinking about it and haven't come up with an easy contradiction. Any hints or suggestions would be much appreciated.

From 1D gaussian to 2D gaussian

Posted: 04 Jul 2021 08:08 PM PDT

I read this:

The Gaussian kernel for dimensions higher than one, say N, can be described as a regular product of N one-dimensional kernels. Example:

g2D(x,y,$\sigma_1^2 + \sigma_2^2$) = g1D(x,$\sigma_1^2$)g1D(y,$\sigma_2^2$)

saying that the product of two one-dimensional Gaussian functions with variances $\sigma_1^2$ and $\sigma_2^2$ is equal to a two-dimensional Gaussian function with the sum of the two variances as its new variance.

I tried to deduce this by using:

g1D(x,$\sigma_1^2$)g1D(y,$\sigma_2^2$) = $\frac{1}{\sqrt{2\pi}\sigma_1}e^{\frac{-x^2}{2\sigma_1^2}}\frac{1}{\sqrt{2\pi}\sigma_2}e^{\frac{-y^2}{2\sigma_2^2}}$ = $\frac{1}{2\pi\sigma_1\sigma_2}e^{-(\frac{x^2}{2\sigma_1^2}+\frac{y^2}{2\sigma_2^2})}$

but I fail to obtain

$\frac{1}{2\pi(\sigma_1^2 + \sigma_2^2)}e^{\frac{-(x^2+y^2)}{2\sigma_1^2 + 2\sigma_2^2}}$

which is equal to g2D(x,y,$\sigma_1^2 + \sigma_2^2$).

Does anyone know how to get there?
